arXiv:2309.09065 | http://arxiv.org/abs/2309.09065v1
Authors: Alex T. Grigas, Aliza Fisher, Mark D. Shattuck, Corey S. O'Hern
Published: 2023-09-16T18:19:56Z
# The connection between polymer collapse and the onset of jamming

###### Abstract

Previous studies have shown that the interiors of proteins are densely packed, reaching packing fractions that are as large as those found for static packings of individual amino-acid-shaped particles. How can the interiors of proteins take on such high packing fractions given that amino acids are connected by peptide bonds and many amino acids are hydrophobic with attractive interactions? We investigate this question by comparing the structural and mechanical properties of collapsed attractive disk-shaped bead-spring polymers to those of three reference systems: static packings of repulsive disks, of attractive disks, and of repulsive disk-shaped bead-spring polymers. We show that attractive systems quenched to temperatures below the glass transition \(T\ll T_{g}\) and static packings of both repulsive disks and bead-spring polymers possess similar interior packing fractions. Previous studies have shown that static packings of repulsive disks are isostatic at jamming onset, i.e. the number of contacts \(N_{c}\) matches the number of degrees of freedom, which strongly influences their mechanical properties. We find that repulsive polymers are hypostatic at jamming onset, but effectively isostatic when including quartic modes. While attractive disk and polymer packings are hyperstatic, we identify a definition for interparticle contacts for which they can also be considered as effectively isostatic. As a result, we show that the mechanical properties (e.g. scaling of the potential energy with excess contact number and low-frequency contribution to the density of vibrational modes) of weakly attractive disk and polymer packings are similar to those of _isostatic_ repulsive disk and polymer packings. Our results demonstrate that static packings generated via attractive collapse or compression of repulsive particles possess similar structural and mechanical properties.

## I Introduction

It has long been appreciated since the first atomic-resolution x-ray crystal structures of proteins were solved that their interior, solvent-inaccessible, or core, regions are densely packed, regardless of the differences in their overall folds [1; 2; 3; 4; 5; 6]. Other experimental atomic-scale structural characterization methods, such as NMR spectroscopy, provide all-atom structures of proteins in solution and at room temperature, and have shown that high-quality NMR structures also possess densely packed interiors with packing fractions similar to those of x-ray crystal structures [7]. Additionally, perturbing the dense packing of the solvent-inaccessible hydrophobic interior of proteins via mutation has been shown to significantly affect protein structure and stability [8; 9; 10; 11].

Prior analyses of protein x-ray crystal structures that allowed unphysical atomic overlaps suggested that the interiors of proteins possess packing fractions as large as \(\phi\sim 0.7\)-\(0.75\) [1; 5]. However, more recent studies that account for the non-spherical shapes of amino acids and do not allow atomic overlaps have shown that the average packing fraction of solvent-inaccessible amino acids is \(\phi\approx 0.55\pm 0.02\) [12; 13; 14]. Why do the core regions of all experimentally determined protein structures, regardless of the overall fold, possess this value for the packing fraction?
Previously, we have shown that jammed packings of rigid amino-acid-shaped particles with purely repulsive interactions under periodic boundary conditions possess similar packing fraction distributions as those for experimentally determined protein cores [6]. Despite this agreement, these prior simulations lacked important features of protein structure: the amino acids were rigid, with no backbone dihedral angle degrees of freedom, and they were _disconnected_, lacking peptide bonds; the packings were generated by compression, not by hydrophobic polymer collapse; and the packings were generated using periodic boundary conditions instead of being fully solvated. In addition, when thermal fluctuations are included in the amino-acid-shaped particle-packing generation protocol, we find that the onset of jamming occurs over a range of packing fractions, \(0.55\lesssim\phi_{J}\lesssim 0.62\), where \(\phi_{J}\) increases as the rate at which thermal energy is removed from the system decreases [15; 16]. To date, the only high-resolution experimentally determined protein cores that possess \(\phi\gtrsim 0.55\) were solved using x-ray crystallography at extremely high pressures [17].

Does the correspondence between the packing fraction of jammed packings of repulsive, disconnected amino-acid-shaped particles generated via rapid compression and the cores of experimentally determined proteins indicate a deep connection between the two systems, or is it fortuitous? More generally, to isolate the essential features of the problem, we can ask, for connected and disconnected spherical particles: what is the relationship between the thermal collapse of sticky, spherical bead-spring polymers, or the aggregation of sticky spherical particles, and the onset of jamming of purely repulsive spherical particles under athermal, quasi-static compression? Here, we focus specifically on disk-shaped particles versus disk-shaped bead-spring polymers and on purely repulsive potentials versus potentials with both short-range repulsive and longer-range attractive interactions in two dimensions (2D).

Mechanically stable (or jammed) packings of repulsive spherical particles are isostatic, i.e. the number of constraints arising from interparticle and particle-boundary contacts matches the number of degrees of freedom, which strongly influences their structural and mechanical properties [18]. Prior studies have shown that isostatic sphere packings at jamming onset can occur over a range of packing fractions (known as the J-line), starting from a lower bound similar to values quoted for random close packing and increasing as the compression rate and rate of energy relaxation decrease [19; 20; 21]. Isostatic jammed sphere packings also possess an excess low-frequency contribution to the vibrational density of states \(D(\omega)\), which is quantified by a characteristic frequency \(\omega^{*}\) that increases as the packings are compressed above jamming onset. Further, the shear modulus and \(\omega^{*}\) obey power-law scaling relations with the deviation \(\Delta z\) of the coordination number from that at jamming onset. Previous work has also suggested that repulsive spherical bead-spring polymers compressed to jamming onset are nearly isostatic, even though they possess fixed constraints through the polymer backbone [22; 23; 24; 25; 26].
As found for jammed sphere packings, jammed repulsive polymer packings occur over a range of packing fractions when they are generated using different protocols, but it is unclear whether this range of packing fractions is the same as that for jammed sphere packings. Further, it has been suggested that the elastic moduli of jammed repulsive polymer packings are similar to those of jammed sphere packings [26].

Collections of spherical monomers with attractive interactions are generally not isostatic. For example, attractive, spherical particles can form sparse, yet solid-like gels at extremely low packing fractions with on average two contacts per particle. They can also form dense, attractive glasses, where each particle possesses close to an isostatic number of nearest-neighbor contacts and many more longer-range interactions [27; 28; 29]. Spherical bead-spring polymers with attractive interactions collapse into dense liquid globules at sufficiently low temperatures [30]. Further decreasing the temperature generates collapsed _glassy_ globules with a wide range of structural and mechanical properties [31; 32]. Despite this fact, we have found in previous studies that the interiors of folded proteins (which possess both short-range repulsive and longer-range attractive interactions) appear to share properties with jammed packings of disconnected, repulsive amino-acid-shaped particles generated via athermal, quasi-static compression.

Here, to understand the connection between the thermal collapse of sticky polymers and the jamming of repulsive particles under athermal compression, we compare the interior packing fractions of static packings of single disk-shaped bead-spring polymers and static packings of disconnected disks, with either attractive or repulsive interactions, as shown in Fig. 1. For systems with non-bonded attractive interactions, we study the interior packing fraction as the system is cooled below the glass transition temperature at varying rates. For systems with purely repulsive non-bonded interactions, we develop an open-boundary "jamming" protocol where the system undergoes athermal, quasi-static compression until reaching a mechanically stable state using an externally applied radial force.

We find several important results. First, for a collapsed polymer with attractive non-bonded interactions to obtain interior packing fractions \(\phi\) similar to those found for jammed packings of purely repulsive disks, it must be quenched well below the glass transition temperature. Additionally, we find that the attractive systems (both monomeric and polymeric) quenched to zero temperature and the repulsive systems (both disks and polymers) compressed to jamming onset with open boundary conditions possess similar interior packing fractions for all system sizes, damping parameters, and initial temperatures studied. We show that packings of attractive disks and polymers possess excess low-frequency vibrational modes in the limit of small attractive strength. As expected, we find that repulsive disks compressed to jamming onset are isostatic. In contrast to prior work, we find that packings of polymers with non-bonded repulsive interactions are hypostatic at jamming onset. However, the number of missing contacts matches the number of quartic modes, and thus packings of repulsive polymers are effectively isostatic.
While packings of attractive monomers and polymers are hyperstatic when counting contacts using the full interaction potential, they can also be considered to be effectively isostatic if we appropriately re-define the interparticle contact network. By varying the attractive strength, we observe the same scaling of the low-frequency modes of \(D(\omega)\) and of the excess number of contacts \(\Delta N\) above the isostatic number versus the potential energy as found for repulsive disk packings compressed above jamming onset.

This article is organized into three additional sections and two appendices. In Sec. II, we describe the numerical models for the disk-shaped bead-spring polymers and disk-shaped monomers with non-bonded attractive and repulsive interactions, the packing-generation protocols, and how we identify surface versus core disks for the calculation of the interior packing fraction. In Sec. III, we present the results for the interior packing fraction, the characteristic plateau frequency of the distribution of vibrational modes \(D(\omega)\), and the contact number for each system. Finally, in Sec. IV, we discuss the implications of the results for understanding the dynamics of polymer collapse and protein folding and propose future work on athermal compression of all-atom models of proteins to jamming onset. In Appendix A, we describe methods to avoid size segregation when applying a radial force to generate jammed packings of repulsive monomers and polymers in open boundary conditions, and in Appendix B, we provide additional details of the algorithm for identifying interior versus surface particles.

## II Methods

### Model systems

We study four types of systems: single disk-shaped bead-spring polymers with attractive non-bonded interactions, attractive disks (or monomers), single disk-shaped bead-spring polymers with repulsive non-bonded interactions, and repulsive disks (or monomers), as shown in Fig. 1. The non-bonded, repulsive interactions are modeled by the repulsive linear spring potential,
\[\frac{V^{mb}(r_{ij})}{\epsilon}=\frac{1}{2}\left(1-\frac{r_{ij}}{\sigma_{ij}}\right)^{2}\Theta\left(1-\frac{r_{ij}}{\sigma_{ij}}\right), \tag{1}\]
where \(r_{ij}\) is the center-to-center distance between disks \(i\) and \(j\), \(\sigma_{ij}\) is their average diameter, \(\epsilon\) is the energy scale of the repulsive interaction, and \(\Theta\left(x\right)\) is the Heaviside step function. For the \(N-1\) bonded interactions between disks \(i\) and \(j=i+1\) in the bead-spring polymer, the repulsive linear spring potential is extended into a double-sided linear spring potential:
\[\frac{V^{b}(r_{ij})}{\epsilon}=\frac{1}{2}\left(1-\frac{r_{ij}}{\sigma_{ij}}\right)^{2}. \tag{2}\]

We parameterize the non-bonded attractive interactions by the attractive cutoff distance \(\alpha\) and depth \(\beta\). Previous work on jamming of spherical particles with short-ranged attractive interactions used a single parameter to characterize the attractive interactions [27; 28; 29]. Here, we separate the attractive range and depth to allow the model to capture both short-ranged, sticky disks and molecular liquids with weak, but long-range, attractive interactions. For the non-bonded attractive interactions, we extend the potential in Eq. 1 to \(r_{\beta}>\sigma_{ij}\) and cut off the interactions at \(r_{\alpha}=(1+\alpha)\sigma_{ij}>r_{\beta}\):
\[\frac{V^{amb}(r_{ij})}{\epsilon}=\begin{cases}\frac{1}{2}\left(1-\frac{r_{ij}}{\sigma_{ij}}\right)^{2}-V_{c}/\epsilon&\text{for }r_{ij}\leq r_{\beta}\\ -\frac{k}{2\epsilon}\left(\frac{r_{ij}}{r_{\alpha}}-1\right)^{2}&\text{for }r_{\beta}<r_{ij}\leq r_{\alpha}\\ 0&\text{for }r_{ij}>r_{\alpha},\end{cases} \tag{3}\]
where \(V_{c}/\epsilon=(k/\epsilon)\left(r_{\beta}/r_{\alpha}-1\right)^{2}/2+\left(1-r_{\beta}/\sigma_{ij}\right)^{2}/2\). The pair potential energy for attractive polymers (Fig. 1 (a)) is \(V(r_{ij})=V^{b}(r_{ij})+V^{amb}(r_{ij})\). For repulsive polymers (Fig. 1 (b)), \(V(r_{ij})=V^{b}(r_{ij})+V^{mb}(r_{ij})\). For attractive disks (Fig. 1 (c)), \(V(r_{ij})=V^{amb}(r_{ij})\), and for repulsive disks (Fig. 1 (d)), \(V(r_{ij})=V^{mb}(r_{ij})\). The total potential energy and interparticle forces for each system are given by \(V=\sum_{i>j}V(r_{ij})\) and \(\vec{F}_{ij}=-(dV/dr_{ij})\hat{r}_{ij}\). Note that we set \(F_{ij}(r_{\beta})=-\epsilon\beta/\sigma_{ij}\) and \(k/\epsilon=(\beta r_{\alpha}/\sigma_{ij})\left(1-r_{\beta}/r_{\alpha}\right)^{-1}\) to ensure that the non-bonded forces are continuous, as shown in Fig. 1 (e). Below, we consider dimensionless forces \(F_{ij}\sigma_{s}/\epsilon\), potential energies \(V/\epsilon\), frequencies \(\omega\sigma_{s}\sqrt{m/\epsilon}\), and temperatures \(k_{b}T/\epsilon\), where \(k_{b}=1\) is the Boltzmann constant, \(m\) is the mass of each disk, and \(\sigma_{s}\) is the diameter of the smallest disk.

To prevent crystallization [21] during the packing-generation process, the disk diameters are selected randomly from a power-law size distribution, \(P(\sigma_{i})=A\sigma_{i}^{-3}\), with minimum and maximum diameters \(\sigma_{s}\) and \(\sigma_{\text{max}}=2.2\sigma_{s}\) and polydispersity \(D=(\langle\sigma_{i}^{2}\rangle-\langle\sigma_{i}\rangle^{2})/\langle\sigma_{i}\rangle^{2}\sim 0.23\). For each system size of \(N\) disks, we average over 100 different sets of diameters \(\{\sigma_{i}\}\) that were randomly selected from \(P(\sigma_{i})\).

Figure 1: Example static packings for a single disk-shaped bead-spring polymer (a) with and (b) without attractive interactions and for disk-shaped monomers (c) with and (d) without attractive interactions. The disk diameters are polydisperse, obeying an inverse power-law distribution; the color shading indicates the particle size from large to small (light green to blue). The cyan shading in (a) and (c) indicates the range of the attractive interactions with \(\alpha=1.5\) (Eq. 3). The black solid lines connecting adjacent disks indicate the polymer backbone. (e) Force magnitude \(F_{ij}\sigma_{ij}/\epsilon\) between disks \(i\) and \(j\) plotted versus their separation \(r_{ij}\) normalized by their average diameter \(\sigma_{ij}=(\sigma_{i}+\sigma_{j})/2\). For repulsive non-bonded interactions, the disks interact only when they overlap and are repelled by a repulsive linear spring force for \(r_{ij}<\sigma_{ij}\) (vertical black dashed line). Repulsive polymers include the same repulsive interactions and extend the interaction for \(r_{ij}>\sigma_{ij}\) to a double-sided linear spring for bonded disks (red thin solid line). Non-bonded attractive interactions are specified by an attractive range \(\alpha\) and strength \(\beta\); in this case, the non-bonded force is extended to \(F_{ij}(r_{\beta})\sigma_{ij}/\epsilon=-\beta\), where \(r_{\beta}/\sigma_{ij}=1+\beta\) (vertical red dot-dashed line), after which the force linearly returns to zero at \(r_{\alpha}/\sigma_{ij}=1+\alpha\) (vertical grey dotted line).
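To make the model concrete, the sketch below evaluates the non-bonded pair potential and force of Eq. 3; it is a minimal illustration rather than the authors' simulation code, and the function and parameter names are ours. The stiffness \(k\) and shift \(V_{c}\) follow from the continuity conditions stated above.

```python
import numpy as np

def attractive_pair(r, sigma, alpha=1.5, beta=1e-5, eps=1.0):
    """Sketch of the non-bonded attractive potential V^{amb} of Eq. 3.

    r is the center-to-center distance, sigma the average diameter
    sigma_ij; returns (V, F) with F = -dV/dr.
    """
    r_beta = (1.0 + beta) * sigma       # most attractive force: F(r_beta) = -eps*beta/sigma
    r_alpha = (1.0 + alpha) * sigma     # attractive cutoff
    # outer-branch stiffness chosen so the force is continuous at r_beta
    k = eps * beta * r_alpha / (sigma * (1.0 - r_beta / r_alpha))
    # energy shift V_c makes the potential continuous at r_beta
    Vc = 0.5 * k * (r_beta / r_alpha - 1.0) ** 2 + 0.5 * eps * (1.0 - r_beta / sigma) ** 2
    if r <= r_beta:                     # shifted repulsive linear spring (Eq. 1)
        return 0.5 * eps * (1.0 - r / sigma) ** 2 - Vc, (eps / sigma) * (1.0 - r / sigma)
    if r <= r_alpha:                    # weak attractive tail
        return -0.5 * k * (r / r_alpha - 1.0) ** 2, (k / r_alpha) * (r / r_alpha - 1.0)
    return 0.0, 0.0
```

Setting \(\beta\to 0\) recovers the purely repulsive spring of Eq. 1, and dropping the Heaviside cutoff in Eq. 1 gives the double-sided bonded spring of Eq. 2.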
### Packing-generation protocol

Without thermal noise, each initial configuration of disks can be uniquely mapped to a given jammed packing after specifying the packing-generation protocol [18]. Therefore, in this study, we consider similar sets of initial configurations for all four systems: attractive and repulsive bead-spring polymers and attractive and repulsive disks. To generate these initial states, we first create liquid globule configurations of attractive bead-spring polymers. The initial disk configurations can then be obtained from the liquid globules by replacing the bonded interactions with non-bonded interactions, and the purely repulsive configurations can be obtained from the liquid globules by replacing the non-bonded attractive interactions with purely repulsive interactions. Packings at jamming onset for all four systems can then be generated through potential energy minimization using the appropriate potential energy functions described in Sec. II.1.

#### ii.2.1 Preparing initial configurations via polymer collapse

To generate initial configurations, we simulate bead-spring polymers with non-bonded attractive interactions over a range of temperatures using a Langevin thermostat. We integrate Newton's equations of motion for each monomer position \(\vec{r}_{j}\) using a modified velocity-Verlet integration scheme with timestep \(\Delta t=0.01\) [33]. We characterize the temperature-dependent polymer configurations using the normalized radius of gyration:
\[\widetilde{R}_{g}=\frac{R_{g}-R_{g}^{\text{min}}}{R_{g}^{\text{max}}-R_{g}^{\text{min}}}, \tag{4}\]
where \(R_{g}^{\text{max}}\) and \(R_{g}^{\text{min}}\) are the maximum and minimum radii of gyration.

As shown in Fig. 2 (a) for \(N=256\), averaged over 100 different initial conditions, polymers with attractive non-bonded interactions undergo two distinct transitions as they are cooled from high to low temperatures. At high temperatures, the polymer samples an excluded-volume random walk. As the temperature is lowered, the attractive interactions overcome thermal fluctuations, and the polymer collapses into a condensed droplet, signaling the coil-to-globule transition. We can fit a sigmoidal curve to the normalized radius of gyration,
\[\widetilde{R}_{g}(T)=\frac{1}{1+e^{\kappa(T-T_{m})}}, \tag{5}\]
to identify the melting temperature \(T_{m}\) [30], at which \(\widetilde{R}_{g}(T_{m})=1/2\), where \(\kappa\) gives the transition width.
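The fit of Eq. 5 reduces to a two-parameter least-squares problem; a hedged sketch using scipy (the initial guesses are arbitrary assumptions, not values from the paper) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_melting_temperature(T, Rg_tilde):
    """Fit the normalized radius of gyration to Eq. 5; returns (kappa, T_m)."""
    sigmoid = lambda x, kappa, Tm: 1.0 / (1.0 + np.exp(kappa * (x - Tm)))
    (kappa, Tm), _ = curve_fit(sigmoid, T, Rg_tilde, p0=[10.0, np.median(T)])
    return kappa, Tm  # R~_g(T_m) = 1/2 by construction
```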
By cooling the polymer below \(T_{m}\), we can induce a glass transition, where the structural relaxation time \(\tau_{r}\) of the globule diverges. We determine \(\tau_{r}\) by calculating the self-part of the intermediate scattering function,
\[F_{s}(q,t)=\frac{1}{N}\left\langle\sum_{j=1}^{N}e^{i\vec{q}\cdot(\vec{r}_{j}(t_{0}+t)-\vec{r}_{j}(t_{0}))}\right\rangle, \tag{6}\]
as a function of time \(t\). The angle brackets indicate an average over time origins \(t_{0}\) and over directions of the wavevector with magnitude \(q=2\pi/\sigma_{\rm max}\). As shown in Fig. 2 (b), at short times \(F_{s}(q,t)\sim 1\), since the monomer positions are similar to what they were at the time origin. \(F_{s}(q,t)\) decays to zero when the configuration at time \(t\) is uncorrelated with the initial configuration. We define the structural relaxation time \(\tau_{r}\) using \(F_{s}(q,\tau_{r})=1/e\), which increases rapidly as the temperature decreases. We can estimate the glass transition temperature \(T_{g}\), at which \(\tau_{r}\to\infty\), using a power-law form, \(\tau_{r}\propto(T-T_{g})^{-\lambda}\) (with \(\lambda\approx 2\)), or a super-Arrhenius form, \(\tau_{r}\propto e^{A/(T-T_{g})}\) (with \(A\approx 10\)). Both forms give \(T_{g}/T_{m}\approx 0.14\).

The results in Fig. 2 are shown for an interparticle potential with attractive range \(\alpha=1.5\) and depth \(\beta=10^{-5}\). We find qualitatively similar results for a range of \(\alpha\) and \(\beta\). Increasing \(\beta\) shifts the melting curve to larger values of temperature, while increasing \(\alpha\) broadens the coil-to-globule transition [31].

Figure 2: (a) Normalized radius of gyration \(\widetilde{R}_{g}\) plotted versus temperature \(T\) normalized by the melting temperature \(T_{m}\) (vertical solid black line). The dot-dashed line gives the fit of \(\widetilde{R}_{g}\) to Eq. 5. (b) The self-part of the intermediate scattering function \(F_{s}(q,t)\) at \(q=2\pi/\sigma_{\text{max}}\), averaged over all particles and time origins, for several \(T/T_{m}\). The filled circles indicate the structural relaxation times \(\tau_{r}\), at which \(F_{s}(q,\tau_{r})=1/e\). The colors from red to blue indicate high to low \(T/T_{m}\). The vertical dashed line in (a) indicates \(T_{g}\), below which \(\tau_{r}\rightarrow\infty\). (c) The average core packing fraction \(\left\langle\phi\right\rangle\) plotted versus \(T-T_{g}\). The dashed line gives \(\left\langle\phi\right\rangle_{g}-\left\langle\phi\right\rangle\sim(T-T_{g})^{\gamma}\), where \(\left\langle\phi\right\rangle_{g}\approx 0.796\) (dotted line) and \(\gamma\approx 0.9\). The horizontal solid line at \(\left\langle\phi\right\rangle\approx 0.835\) indicates the average packing fraction at jamming onset for repulsive monomers under periodic boundary conditions. In all panels, the data are for attractive polymers, and the angle brackets indicate averages over at least \(10^{2}\) configurations generated via different initial conditions.
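For reference, a sketch of the \(F_{s}(q,t)\) measurement of Eq. 6 is given below. The trajectory layout and the random sampling of 2D wavevector directions are our assumptions; the paper specifies only the averages over particles, time origins, and directions at \(q=2\pi/\sigma_{\rm max}\).

```python
import numpy as np

def self_intermediate_scattering(traj, q_mag, n_dirs=32, rng=None):
    """Estimate F_s(q,t) (Eq. 6) from a trajectory of shape (n_frames, N, 2),
    averaging over particles, time origins, and wavevector directions."""
    rng = np.random.default_rng() if rng is None else rng
    n_frames = traj.shape[0]
    angles = rng.uniform(0.0, 2.0 * np.pi, n_dirs)
    qvecs = q_mag * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    Fs, counts = np.zeros(n_frames), np.zeros(n_frames)
    for t0 in range(n_frames):
        dr = traj[t0:] - traj[t0]                    # displacements from this origin
        phase = np.einsum('tnd,qd->tnq', dr, qvecs)  # q . dr for every direction
        Fs[:n_frames - t0] += np.cos(phase).mean(axis=(1, 2))
        counts[:n_frames - t0] += 1.0
    return Fs / counts
```

The relaxation time \(\tau_{r}\) then follows from the first time at which the returned curve crosses \(1/e\).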
We first generate extended polymer configurations at high temperature \(T\gg T_{m}\). We then slowly cool the polymers to temperatures \(T_{0}\) below \(T_{m}\), i.e. \(T_{0}/T_{m}=0.43\), \(0.32\), and \(0.27\), but above \(T_{g}\), as shown in Fig. 2 (a). We collect between \(10^{2}\) and \(10^{3}\) distinct sets of positions and velocities of the polymers at each \(T_{0}\), with each set separated by \(10\tau_{r}\). We consider \(N=64\), \(128\), \(256\), \(512\), and \(1024\) to assess system-size effects. After generating the collapsed polymer configurations, we follow the protocols below to generate zero-temperature packings of polymers with non-bonded attractive interactions, disk packings with attractive interactions, packings of polymers with only non-bonded repulsive interactions, and disk packings with only repulsive interactions.

#### ii.2.2 Packing-generation protocol for attractive disks and polymers

To generate static packings of attractive polymers, we cool liquid globules at \(T_{0}\) to zero temperature using damped molecular dynamics (MD) simulations, where we solve Newton's equations of motion,
\[m\vec{a}_{j}=-\partial V/\partial\vec{r}_{j}-b\vec{v}_{j}, \tag{7}\]
with dissipative forces proportional to the disk velocities \(\vec{v}_{j}\), potential energy \(V=V^{b}+V^{amb}\), disk mass \(m\), and acceleration \(\vec{a}_{j}\), where \(j=1,\ldots,N\) labels the disks. For computational efficiency, each system is cooled using the reported damping parameter \(b\) until the total force magnitude in the system reaches \(F_{\rm tol}=\sum_{j=1}^{N}|\vec{F}_{j}|<10^{-7}\), after which \(b\) is increased to \(0.1\), in the overdamped limit. The simulations are terminated when \(F_{\rm tol}<10^{-15}\). The damped MD simulations can also be performed on attractive disks (as well as attractive polymers) to investigate the effect of the polymer backbone on the zero-temperature packings. To generate static packings of attractive disks, we initialize the system with the positions and velocities of the collapsed globules at \(T_{0}\) and then use damped MD simulations (Eq. 7) to minimize the total potential energy, except now with \(V=V^{amb}\).

#### ii.2.3 Packing-generation protocol for purely repulsive disks and polymers

For systems with attractive interactions, we employ open boundary conditions. Since static packings of purely repulsive particles possess non-zero pressures at jamming onset, they must be confined to form jammed packings, e.g. using periodic or fixed boundary conditions. To generate jammed packings of purely repulsive particles in _open_ boundary conditions, we include a linear spring potential that connects each particle to the center of mass of the packing, which is the origin of the coordinate system,
\[\frac{V^{c}(r_{i})}{\epsilon}=\frac{k_{c}}{2\epsilon}r_{i}^{2}\left(\sigma_{i}/\sigma_{\rm max}\right)^{\nu}, \tag{8}\]
where \(k_{c}\sigma_{s}^{2}\ll\epsilon\) is the compressive energy scale. (See Appendix A for a discussion of how the results depend on \(k_{c}/\epsilon\).) To generate zero-temperature packings of purely repulsive particles, we initialize the system with the positions and velocities from the collapsed globules at \(T_{0}\). We then run damped MD simulations with \(V=V^{b}+V^{mb}+V^{c}\) for purely repulsive polymers or \(V=V^{mb}+V^{c}\) for purely repulsive monomers until force balance is achieved. The radial spring is then removed, and the packings are again energy minimized until \(F_{\rm tol}<10^{-15}\). For small damping coefficients, packings of repulsive disks with similar sizes segregate and crystallize. We thus include a factor of \(\left(\sigma_{i}/\sigma_{\rm max}\right)^{\nu}\) with \(\nu=2\) in Eq. 8 to prevent size segregation. (See Appendix A.)
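In sketch form, the quench of Eq. 7 is damped Newtonian dynamics with a force-based stopping criterion. We use a semi-implicit Euler update below for brevity (the paper uses a modified velocity-Verlet scheme), and `force_fn` is an assumed helper that returns the conservative forces for whichever potential \(V\) applies.

```python
import numpy as np

def damped_quench(pos, vel, force_fn, m=1.0, b=1e-5, dt=0.01,
                  f_switch=1e-7, f_tol=1e-15, b_over=0.1, max_steps=10**9):
    """Damped-MD quench (Eq. 7): integrate m*a = F - b*v until the total
    force magnitude falls below f_tol, switching to the overdamped damping
    b_over once it falls below f_switch, as described in the text."""
    for _ in range(max_steps):
        F = force_fn(pos)
        f_mag = np.sum(np.linalg.norm(F, axis=1))
        if f_mag < f_tol:
            break
        if f_mag < f_switch:
            b = b_over                  # overdamped limit for the final relaxation
        vel += (F - b * vel) / m * dt   # Eq. 7
        pos += vel * dt
    return pos, vel
```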
To calculate the structural and mechanical properties of the packings as a function of the packing fraction above jamming onset, we add a repulsive circular boundary with radius \(R\) via the repulsive linear spring potential,
\[\frac{V^{w}(r_{i})}{\epsilon}=\frac{1}{2}\left(1-\frac{R-r_{i}}{\sigma_{i}}\right)^{2}\Theta\left(1-\frac{R-r_{i}}{\sigma_{i}}\right). \tag{9}\]
\(R\) is initialized so that there are no disk-wall contacts. The system is then successively compressed by scaling the wall and particle positions such that \(r_{i}^{\prime}=r_{i}(1-\Delta\phi/2\phi)\), with each compression step \(\Delta\phi=10^{-3}\) followed by energy minimization using damped MD simulations with \(b=0.1\). The system is compressed until it reaches a target total potential energy per particle, \(V_{0}<V/N<2V_{0}\). If the system is compressed above \(V/N>2V_{0}\), the previous particle positions and boundary radius are re-initialized, the system is compressed by \(\Delta\phi/2\), and the energy is minimized again. The static packings were prepared over a wide range of potential energies per particle, \(10^{-13}\lesssim V_{0}\lesssim 10^{-2}\).

### Core packing fraction

To analyze the structural properties of the interiors of static packings, their surfaces must first be identified. To do this, we adapt and apply an algorithm first proposed by Lee and Richards for finding the surfaces of proteins in solvent. We first place a probe disk of diameter \(\sigma_{p}\) on the surface of the disk or polymer packing. It is then rolled over the surface of the packing until it returns to its initial location. In this study, we consider any disk touched by the probe as a 'surface' disk. The size of the probe disk affects which disks are considered surface disks. We set \(\sigma_{p}/\sigma_{s}=0.1\), which is similar to the ratio of the diameter of a water molecule to the diameter of alanine. The variation of the average core packing fraction in static packings with \(\sigma_{p}/\sigma_{s}\) is investigated in Appendix B. After identifying the surface disks of a given configuration, a radical Voronoi tessellation is performed on the disk centers within a square box with an edge length exceeding the largest extent of each packing. The core packing fraction for a particular configuration is defined as
\[\phi=\frac{\sum_{i=1}^{N_{c}}\pi r_{i}^{2}}{\sum_{i=1}^{N_{c}}a_{i}}, \tag{10}\]
where \(N_{c}\) is the number of core disks, \(r_{i}\) is the radius of the \(i\)th core disk, and \(a_{i}\) is the area of the Voronoi polygon surrounding it. Due to the small probe radius, all of the core disks have closed Voronoi cells, and so their areas do not depend on the enclosing box size.
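A sketch of the Eq. 10 measurement is given below. scipy's ordinary Voronoi tessellation stands in for the radical (Laguerre) tessellation used in the paper, which it matches only for equal-sized disks, and the surface/core classification is assumed to have been done beforehand.

```python
import numpy as np
from scipy.spatial import Voronoi

def core_packing_fraction(centers, diameters, core_idx):
    """Sketch of Eq. 10: ratio of core-disk area to core Voronoi-cell area.

    core_idx lists the core disks, whose cells are assumed to be closed."""
    vor = Voronoi(centers)
    disk_area = cell_area = 0.0
    for i in core_idx:
        region = vor.regions[vor.point_region[i]]
        assert -1 not in region, "core cells should be closed"
        poly = vor.vertices[region]
        ang = np.arctan2(*(poly - poly.mean(axis=0)).T[::-1])
        poly = poly[np.argsort(ang)]        # order vertices around the (convex) cell
        x, y = poly[:, 0], poly[:, 1]
        cell_area += 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        disk_area += 0.25 * np.pi * diameters[i] ** 2
    return disk_area / cell_area
```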
## III Results

In this section, we describe the structural and mechanical properties of static packings of disks and disk-shaped bead-spring polymers with purely repulsive, as well as attractive, interactions. In Sec. III.1, we first show that when attractive disk-shaped bead-spring polymers are cooled toward the glass transition temperature \(T_{g}\), the average packing fraction of the interior (or core region) is well below values given for random close packing for disordered packings of repulsive disks. Therefore, in Sec. III.2 we study the core packing fraction of attractive polymers as they are cooled from \(T_{0}>T_{g}\) to zero temperature using damped MD simulations. We find that attractive disk-shaped bead-spring polymers, as well as attractive disks, when cooled to zero temperature, possess similar core packing fractions as found for static packings of repulsive disks and disk-shaped bead-spring polymers over a wide range of initial temperatures \(T_{0}\), damping parameters \(b\), and system sizes \(N\). In Sec. III.3, we show that attractive disks and disk-shaped bead-spring polymers quenched to zero temperature possess an excess number of low-frequency modes in the density of vibrational states (similar to jammed packings of repulsive disks). We further show that slowly increasing the depth \(\beta\) of the attractive interparticle potential causes the attractive packings to lose low-frequency modes in a way that is similar to compression of repulsive disk packings above jamming onset. In Sec. III.4, we find that, contrary to previous studies, static packings of repulsive disk-shaped bead-spring polymers are hypostatic at jamming onset, but the number of missing contacts relative to the isostatic number matches the number of quartic modes that arise from the polymer backbone constraints. When we account for the quartic modes, the excess number of contacts above isostaticity (for packings of repulsive polymers) scales as \(\Delta N\sim\left(V_{r}N^{3}\right)^{\zeta}\), where \(V_{r}\) is the total repulsive potential energy of the packing, \(\zeta=1/2\) at small \(\Delta N\), and the exponent crosses over to \(\zeta=1/4\) in the large-\(\Delta N\) limit. Finally, in Sec. III.5 we show that zero-temperature attractive disks and disk-shaped bead-spring polymers are also effectively isostatic if contacts are defined as \(r_{ij}<r_{\beta}\), and they obey the same scaling of the excess number of contacts with the repulsive energy, \(\Delta N\sim\left(V_{r}N^{3}\right)^{\zeta}\), as found for static packings of repulsive disks and disk-shaped bead-spring polymers.

### Core packing fraction for collapsed polymers near \(T_{g}\) is well below random close packing for repulsive disks

What is the core packing fraction of an attractive disk-shaped bead-spring polymer as it is cooled toward the glass transition temperature \(T_{g}\)? In Fig. 2 (c), we plot the average core packing fraction \(\left\langle\phi\right\rangle\) versus \(T-T_{g}\) for \(N=256\), averaged over 100 polymers with different initial conditions. The core packing fraction increases with decreasing temperature, \(\left\langle\phi\right\rangle_{g}-\left\langle\phi\right\rangle\sim(T-T_{g})^{\gamma}\), approaching the plateau value of \(\left\langle\phi\right\rangle_{g}\approx 0.796\) as \(T\to T_{g}\) (with \(\gamma\approx 0.9\)). \(\left\langle\phi\right\rangle_{g}\) is similar to values reported for the packing fraction near the glass transition in experimental, computational, and theoretical studies of hard spheres. In contrast, static packings of \(N=256\) purely repulsive polydisperse disks, without a polymer backbone, possess a much larger packing fraction, \(\left\langle\phi\right\rangle\approx 0.835\), at jamming onset. The core packing fraction for collapsed attractive polymers near \(T_{g}\) is thus far below that for static packings of purely repulsive disks at jamming onset. This result indicates that for the core packing fraction of collapsed attractive polymers to reach those of jammed, disconnected, repulsive disks, they must be cooled to temperatures much below the glass transition temperature.

Figure 3: The average core packing fraction \(\left\langle\phi\right\rangle\) from damped MD simulations plotted versus the damping parameter \(b\) for attractive disk-shaped bead-spring polymers (circles with solid lines), attractive disks (squares with solid lines), repulsive disk-shaped bead-spring polymers (circles with dashed lines), and repulsive disks (squares with dashed lines), prepared from initial temperatures \(T_{0}/T_{m}=0.43\) (red), \(0.32\) (yellow), and \(0.27\) (blue) for \(N=512\).
### Core packing fraction for collapsed polymers with \(T\ll T_{g}\) matches that for jammed repulsive disk packings

To study the core packing fraction of collapsed, attractive polymers below the glass transition temperature \(T_{g}\), we performed damped MD simulations that take attractive polymers from initial temperatures \(T_{m}>T_{0}>T_{g}\) to zero temperature using a wide range of damping parameters. In Fig. 3, we show that the core packing fraction of collapsed, attractive polymers increases with decreasing damping parameter from roughly 0.83-0.84 to 0.85 (circles with solid lines) for \(N=512\). For large damping parameters, larger initial temperatures \(T_{0}\) give rise to the lowest values of the core packing fraction. However, for low damping parameters, the results for the core packing fraction of collapsed, attractive polymers are the same for all \(T_{0}\). To study the effects of the polymer backbone constraint on the core packing fraction, we repeat these simulations for disconnected, attractive disks (squares with solid lines). The dependence of \(\langle\phi\rangle\) on the damping parameter \(b\) and initial temperature \(T_{0}\) is similar to that for collapsed, attractive polymers; however, the packing fraction is shifted to larger values by \(\approx 0.01\) for all \(b\) and \(T_{0}\).

To compare the core packing fraction of collapsed, attractive polymers to the packing fraction of jammed repulsive systems, we developed a novel compression protocol to generate jammed repulsive systems in open boundary conditions. (See Sec. II.2.3.) We start with the same attractive polymer configurations prepared at \(T_{0}\) for both polymers and disconnected disks. We then replace the non-bonded attractive interactions (\(V^{amb}\)) with non-bonded repulsive interactions (\(V^{mb}\)) and compress the system isotropically by attaching each disk to a radial linear spring anchored to the origin. In Fig. 3, we show the core packing fraction for jammed packings of repulsive disk-shaped bead-spring polymers (circles with dashed lines) and repulsive disks (squares with dashed lines). For these purely repulsive systems, the core packing fraction does not depend strongly on \(T_{0}\). Further, for small \(T_{0}\), the collapsed, attractive polymers and jammed repulsive polymers possess similar core packing fractions for all damping parameters \(b\). In addition, there is qualitative agreement for the core packing fraction of packings of disconnected attractive and repulsive disks for all \(b\). These results emphasize that the attractive interactions do not strongly influence the core packing fraction, i.e. structures that collapse due to attractive interactions are similar to those that form due to mechanical compression with weak thermal fluctuations.

As discussed above, the core packing fraction for collapsed, attractive polymers is lowest for large damping parameters \(b\) and high initial temperatures \(T_{0}\). We find that these collapsed structures possess large void regions surrounded by regions that are densely packed. To identify the void regions, we test each interior disk to determine whether a probe disk of diameter \(\sigma_{p}\) can be placed at its edge without causing any overlaps. If the probe can be placed without causing overlaps, we remove that disk from the list of core disks.
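Since the paper does not spell out how candidate probe placements are searched, the following sketch of the void test scans a fixed set of tangent positions around each interior disk; the angular resolution is an assumption.

```python
import numpy as np

def borders_void(i, centers, diameters, sigma_p, n_angles=64):
    """Test whether a probe disk of diameter sigma_p fits tangent to disk i
    without overlapping any other disk (i.e., disk i borders a void)."""
    contact = 0.5 * (diameters + sigma_p)          # center distances at contact
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        probe = centers[i] + contact[i] * np.array([np.cos(theta), np.sin(theta)])
        gaps = np.linalg.norm(centers - probe, axis=1) - contact
        gaps[i] = 0.0                              # tangent to disk i by construction
        if np.all(gaps >= -1e-12):                 # no overlaps: probe fits
            return True
    return False
```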
In Fig. 4, we show that when we remove core disks that are near void regions (by choosing \(\sigma_{p}/\sigma_{s}=1\)), the core packing fraction \(\langle\phi\rangle\) is no longer strongly dependent on \(T_{0}\) for large damping parameters. Since the collapsed structures in the low-damping limit do not possess void regions, \(\langle\phi\rangle\) does not depend on \(T_{0}\) or \(\sigma_{p}\) for small \(b\). Thus, aside from void regions, the initial temperature has only a minor effect on the packing fraction of dense core regions of collapsed, attractive polymers.

Figure 4: The average core packing fraction \(\langle\phi\rangle\) from damped MD simulations of attractive polymers initialized at \(T_{0}/T_{m}=0.43\) (circles with dashed lines) and \(0.27\) (squares with dashed lines) plotted versus the damping parameter \(b\) when void regions are identified using probe diameters \(0.1\lesssim\sigma_{p}/\sigma_{s}\lesssim 2.2\) (where purple to yellow indicates increasing size), for \(N=512\). Core disks adjacent to void regions are not included in the calculation of \(\langle\phi\rangle\).

In Fig. 5, we present the results for the core packing fraction (averaged over all \(T_{0}\) and excluding void regions) plotted versus the system size \(N\) and damping parameter \(b\) for (a) disk-shaped bead-spring polymers and (b) disconnected disks. In general, when we do not consider void regions, the core packing fraction for collapsed, attractive polymers matches that for jammed, repulsive polymers, and the core packing fraction for packings of attractive disks matches that for jammed repulsive disks for all \(b\) and \(N\). These results suggest that the structural properties of systems with attractive interactions that are cooled to zero temperature are similar to those of repulsive systems that are compressed to jamming onset. In addition, we find that the average core packing fraction _decreases_ with increasing system size \(N\), whereas packing-generation protocols that start from low-density configurations typically yield \(\langle\phi\rangle\) that _increases_ with \(N\) [18]. For polymers, \(\langle\phi\rangle\) varies between 0.84-0.85 in the large-\(N\) limit. For disks, \(\langle\phi\rangle\approx 0.85\)-0.86 for large \(N\).

Figure 5: The core packing fraction \(\langle\phi\rangle\) from damped MD simulations averaged over all initial temperatures \(T_{0}\) and plotted versus the system size \(N\) and damping parameter \(b\) (increasing from purple to yellow). We show results for (a) collapsed, attractive polymers (circles with solid lines) and jammed repulsive polymers (circles with dashed lines) and (b) attractive disks (squares with solid lines) and jammed repulsive disks (squares with dashed lines). Void regions are identified using probe size \(\sigma_{p}/\sigma_{s}=1\), and core disks adjacent to void regions are not included in the calculation of \(\langle\phi\rangle\).

To better understand the system-size dependence of \(\langle\phi\rangle\), we also calculate the local core packing fraction \(\phi_{l}\) as a function of the distance to the surface of the packing. For small packings, a relatively large fraction of the disks are located near the curved boundaries. As \(N\) increases, a larger number of disks are considered bulk, far from the curved boundaries. In Fig. 6 (a), we plot the local core packing fraction \(\phi_{l}\) versus the number of Voronoi cells \(N_{\nu}\) between a given disk and the closest surface disk for collapsed, attractive polymers and jammed, repulsive polymers. (\(N_{\nu}=0\) indicates that a core disk is adjacent to a surface disk.) We find that the core packing fraction for both attractive and repulsive polymers is largest for small systems and near surface disks. As \(N_{\nu}\) increases, \(\langle\phi_{l}\rangle\) decreases and converges in the large-system limit. In addition, \(\langle\phi_{l}\rangle\) is more uniform for jammed, repulsive polymer packings.

We also calculated the local hexatic order parameter associated with each core disk,
\[|\psi_{6}|=\frac{1}{n_{k}}\left|\sum_{j=1}^{n_{k}}e^{6i\theta_{jk}}\right|, \tag{11}\]
where \(\theta_{jk}\) is the angle between a central core disk \(k\) and its Voronoi neighbors \(j=1,\ldots,n_{k}\), to determine whether increases in the core packing fraction are correlated with increases in positional order. In Fig. 6 (b), we show that \(\langle|\psi_{6}|\rangle\sim 0.5\) is independent of \(N_{\nu}\) and comparable to values for amorphous jammed disk packings [38].

Figure 6: (a) The local packing fraction \(\langle\phi_{l}\rangle\) and (b) hexatic order parameter \(\langle|\psi_{6}|\rangle\) for each disk plotted versus the number of Voronoi cells \(N_{\nu}\) between each disk and the closest surface disk for collapsed, attractive polymers (solid lines) and jammed, repulsive polymers (dashed lines) for several system sizes, \(N=64\) (circles), \(128\) (squares), \(256\) (upward triangles), \(512\) (downward triangles), and \(1024\) (stars).
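Eq. 11 for a single core disk reduces to a few lines; the Voronoi neighbor list is assumed to be precomputed:

```python
import numpy as np

def local_psi6(center, neighbor_positions):
    """Sketch of Eq. 11: |psi_6| for one core disk k, given the positions of
    its n_k Voronoi neighbors. Returns 1 for a perfect hexagonal environment."""
    d = np.asarray(neighbor_positions) - np.asarray(center)
    theta = np.arctan2(d[:, 1], d[:, 0])   # bond angles theta_jk
    return np.abs(np.mean(np.exp(6j * theta)))
```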
### Low-frequency contribution to the density of vibrational modes

Above, we showed that the core packing fractions for collapsed, attractive polymers and packings of attractive disks are similar to those of jammed repulsive polymers and repulsive disks. Do these disparate systems also share the other structural and mechanical properties of jammed packings of repulsive disks? We first consider the vibrational density of states \(D(\omega)\), which is obtained by calculating the dynamical matrix,
\[M_{kl}=\frac{\partial^{2}V}{\partial\vec{r}_{k}\partial\vec{r}_{l}}, \tag{12}\]
where \(k\) and \(l\) label the \(2N\) coordinates of the disks. The eigenvectors \(\bar{\xi}^{i}=\{e_{1x}^{i},e_{1y}^{i},\ldots,e_{Nx}^{i},e_{Ny}^{i}\}\) represent an orthogonal set of \(2N\) normal modes whose eigenvalues \(e^{i}\) correspond to the normal mode frequencies \(\omega^{i}=\sqrt{e^{i}}\). \(D(\omega)\) does not depend strongly on the initial temperature \(T_{0}\) or the damping parameter \(b\) used to generate the packings, and we focus on packings prepared using \(T_{0}/T_{m}=0.27\) and \(b=10^{-5}\).

To generate mechanically stable repulsive packings, we jammed the repulsive disks and polymers under circular boundary conditions. Specifically, we initialize the repulsive packings analyzed in Sec. III.2 and then apply sequential affine compressions of \(\Delta\phi=10^{-3}\), followed by overdamped energy minimization, until reaching a target potential energy \(V_{r}/N=10^{-14}\), where \(V_{r}=V^{mb}+V^{w}\) for repulsive disks and \(V_{r}=V^{mb}+V^{b}+V^{w}\) for repulsive polymers. Additionally, under-constrained disks associated with zero modes are removed: rattlers in the case of repulsive disks and flippers in the case of repulsive polymers. (See Sec. III.4 for further details.)
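In practice \(M\) can be assembled analytically from the pair potentials; the finite-difference sketch below (with an assumed scalar energy function and unit masses, consistent with the dimensionless units of Sec. II.1) is a compact, if slow, route to the same spectrum.

```python
import numpy as np

def vibrational_frequencies(pos, energy_fn, h=1e-6):
    """Sketch of Eq. 12: build the dynamical matrix by central finite
    differences of energy_fn(pos) (pos has shape (N, 2)) and return the
    normal-mode frequencies omega^i = sqrt(e^i), assuming unit masses."""
    x = pos.ravel().astype(float)
    n = x.size
    M = np.zeros((n, n))
    for k in range(n):
        for l in range(k, n):
            V = np.zeros((2, 2))
            for a, sa in enumerate((+h, -h)):
                for b, sb in enumerate((+h, -h)):
                    xp = x.copy()
                    xp[k] += sa
                    xp[l] += sb
                    V[a, b] = energy_fn(xp.reshape(pos.shape))
            # mixed second derivative d^2V / dx_k dx_l
            M[k, l] = M[l, k] = (V[0, 0] - V[0, 1] - V[1, 0] + V[1, 1]) / (4 * h * h)
    evals = np.linalg.eigvalsh(M)
    return np.sqrt(np.clip(evals, 0.0, None))   # clip numerical noise near zero modes
```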
In Fig. 7 (a) and (b), we show the density of vibrational states \(D(\omega)\) for packings of repulsive disks and packings of repulsive polymers, respectively. As expected, \(D(\omega)\) for jammed packings of repulsive disks possesses an anomalous plateau at low frequencies rather than Debye behavior [18]. Similarly, packings of repulsive polymers also display a low-frequency plateau for \(10^{-2}<\omega<10^{-1}\) in Fig. 7 (b). However, there are further excess vibrational modes in packings of repulsive polymers for \(\omega<10^{-2}\), which indicate the presence of quartic modes that are discussed below in Sec. III.4. When the attractive interactions are weak, i.e. \(\beta=10^{-5}\) as discussed in Sec. III.2, attractive disk and polymer packings possess only small disk overlaps, \(V_{r}/N\lesssim 10^{-14}\), where \(V_{r}=V^{mb}\) for attractive disks and \(V_{r}=V^{mb}+V^{b}\) for attractive polymers. We find that \(D(\omega)\) for attractive disk and attractive polymer packings with \(V_{r}/N\lesssim 10^{-14}\) possesses no non-trivial zero modes and a broad low-frequency plateau, similar to that obtained for jammed, repulsive disk packings prepared with comparable values of \(V_{r}\), as shown in Fig. 7 (c) and (d). The small peak at the lowest frequencies in packings of attractive polymers indicates the presence of quartic modes.

When we compress repulsive disk and polymer packings above jamming onset by increasing \(\phi\) and thus \(V_{r}\) (from purple to yellow), the plateau in \(D(\omega)\) at low frequencies decreases, as shown in Fig. 7 (a) and (b) [39; 40]. Effective compression of attractive packings can be obtained by increasing the attractive depth \(\beta\). In Fig. 7 (c) and (d), we vary the attractive depth by successively multiplying \(\beta\) by a factor of \(\sim 1.12\) in the range \(10^{-8}<\beta<10^{-1}\), followed by overdamped energy minimization after each change in \(\beta\). Increasing \(\beta\) gives rise to concomitant increases in \(V_{r}\) and a loss of the low-frequency plateau.

We quantify the anomalous low-frequency plateau in \(D(\omega)\) by identifying a characteristic frequency \(\omega^{*}\) at which \(D(\omega^{*})\) falls below a small threshold. Here, we use \(D(\omega^{*})=10^{-1}\), but the results are similar over a range of thresholds. In Fig. 8 (a), we show \(\omega^{*}\) as a function of \(V_{r}\) for packings of repulsive disks compressed under circular boundary conditions for several system sizes, \(N=64\), \(128\), \(256\), \(512\), and \(1024\). Previous work has shown that under periodic boundary conditions the characteristic plateau frequency scales as \(\omega^{*}N\sim\left(PN^{2}\right)^{1/2}\) at high pressures \(P\) [39; 40; 41]. Attractive packings with no boundaries are at zero pressure, and thus we plot their low-frequency response against \(V_{r}\) instead of \(P\). Potential energy \(V\) and pressure \(P\) in repulsive systems obey a known scaling relation, \(P\sim\left(V/N\right)^{1/2}\) [18]. Combining these two scaling relations gives \(\omega^{*}N\sim\left(VN^{3}\right)^{1/4}\), which is plotted as a black dashed line in Fig. 8 (a) [42]. Additionally, we show in Fig. 8 (b) that compressing repulsive polymer packings above jamming onset gives nearly identical results for \(\omega^{*}N\) versus \(V_{r}N^{3}\) as found for repulsive disk packings, when quartic modes are removed. This result indicates that, at least in the harmonic approximation, double-sided polymer bonds do not strongly affect the low-frequency mechanical response.
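Operationally, \(\omega^{*}\) can be read off a histogram estimate of \(D(\omega)\); the binning below is an illustrative assumption, and the threshold matches the value \(D(\omega^{*})=10^{-1}\) quoted above.

```python
import numpy as np

def plateau_frequency(omegas, threshold=0.1, bins=400):
    """Return the lowest frequency at which the normalized density of states
    D(omega) first reaches the threshold (zero and quartic modes are assumed
    to have been removed from omegas beforehand)."""
    D, edges = np.histogram(omegas, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = np.nonzero(D >= threshold)[0]
    return centers[above[0]] if above.size else np.nan
```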
Does the power-law scaling of \(\omega^{*}\) versus \(V_{r}\) still hold for attractive packings as we increase \(\beta\) and thus \(V_{r}\)? In Fig. 8 (c) and (d), we show that increasing the attraction depth is similar to overcompression of a repulsive disk packing, i.e. both lead to a decrease in the low-frequency plateau in \(D(\omega)\) and give rise to \(\omega^{*}N\sim(V_{r}N^{3})^{1/4}\) for the finite-size scaling of the plateau frequency. In Fig. 8, we achieved an effective compression of attractive packings by increasing the attractive depth \(\beta\), while fixing the attractive interaction range at \(\alpha=1.5\). In Sec. III.5, we address varying \(\alpha\) as well as \(\beta\) and find similar results.

Figure 7: The vibrational density of states \(D(\omega)\) for (a) jammed repulsive disks, (b) jammed repulsive polymers, (c) attractive disks, and (d) attractive polymers, colored by \(V_{r}/N\) (increasing from purple to yellow) for \(N=128\). The black dashed line defines the characteristic frequency \(\omega^{*}\), where \(D(\omega^{*})=10^{-1}\). Note the large low-frequency peak for packings of repulsive and attractive polymers in (b) and (d), which arises due to quartic modes. Quartic modes are removed from \(D(\omega)\) when calculating \(\omega^{*}\). (See Sec. III.4.)

Figure 8: Characteristic plateau frequency of the vibrational density of states \(\omega^{*}N\) versus potential energy \(V_{r}N^{3}\) for packings of (a) repulsive disks, (b) repulsive polymers, (c) attractive disks, and (d) attractive polymers as a function of system size, \(N=64\) (circles), \(128\) (squares), \(256\) (upward triangles), \(512\) (downward triangles), and \(1024\) (stars), colored from blue to red with increasing system size. The dashed line has a slope of \(0.25\).

### Repulsive polymer packings are hypostatic, but effectively isostatic

Jammed packings of repulsive disks are known to be isostatic, i.e. the onset of rigidity occurs when the number of constraints (arising from interparticle and particle-wall contacts) equals the number of degrees of freedom. For isostatic packings, the number of contacts at jamming onset satisfies \(N_{c}^{\rm iso}=2(N-N_{r})+f(d)+1\), where \(N_{r}\) is the number of unconstrained rattler particles, \(f(d)\) indicates the number of unconstrained degrees of freedom from the boundary conditions (e.g. \(f(d)=1\) for circular fixed boundaries in \(d=2\)), and the \(+1\) corresponds to the particle size degree of freedom [18; 43]. Rattler particles for packings of repulsive disks correspond to particles with fewer than three contacts or particles whose contacts all occur on a semicircle. Rattler particles are identified and removed iteratively.

Previous studies have shown that compressing jammed packings gives rise to an increase in interparticle contacts, which in turn increases the characteristic plateau frequency \(\omega^{*}\). In Fig. 9 (a), we plot \(\Delta N=N_{c}+N_{w}-N_{c}^{\rm iso}\) versus \(V_{r}N^{3}\), where \(N_{c}\) is the number of interparticle contacts and \(N_{w}\) is the number of particle-wall contacts. We show that \(\Delta N\) obeys power-law scaling with \(V_{r}N^{3}\): \(\Delta N\sim(V_{r}N^{3})^{\zeta}\), where \(\zeta=0.5\) for \(V_{r}N^{3}\lesssim 1\) and \(\zeta=0.25\) for \(V_{r}N^{3}\gtrsim 1\). These results match those for the finite-size scaling of the pressure dependence of \(\Delta N\) and shear modulus \(G\) for jammed packings of repulsive disks and spheres [41; 44], i.e. \(\Delta N\sim G\sim(pN^{2})^{\lambda}\), where \(\lambda=1\) for \(pN^{2}\lesssim 1\) and \(\lambda=0.5\) for \(pN^{2}\gtrsim 1\).

Previous studies have suggested that jammed packings of repulsive polymers are isostatic [23; 26]. However, one must carefully identify "flipper" particles that have too few contacts to be fully constrained, as well as quartic modes. We find that jammed packings of repulsive polymers are in fact _hypostatic_, but are effectively isostatic when accounting for flippers and quartic modes.
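In sketch form, the \(\Delta N\) measurement for repulsive disks in a circular wall counts overlaps against the isostatic value; the iterative rattler removal is assumed to have been performed beforehand.

```python
import numpy as np

def excess_contacts(pos, sigma, R_wall, N_rattler):
    """Sketch of Delta N for repulsive disks in a circular wall. Contacts:
    r_ij < sigma_ij (Eq. 1) and R - r_i < sigma_i (Eq. 9); the isostatic
    value is N_c^iso = 2(N - N_r) + f(d) + 1 with f(d) = 1 in 2D."""
    N = len(sigma)
    rij = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    sij = 0.5 * (sigma[:, None] + sigma[None, :])
    iu = np.triu_indices(N, k=1)
    Nc = int(np.sum(rij[iu] < sij[iu]))                             # pair contacts
    Nw = int(np.sum(R_wall - np.linalg.norm(pos, axis=1) < sigma))  # wall contacts
    Nc_iso = 2 * (N - N_rattler) + 1 + 1                            # f(d) + size DOF
    return Nc + Nw - Nc_iso
```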
Previous work identified flipper particles as those with no non-bonded interactions [22; 26]. Here, we use (non-rotational) zero modes of the dynamical matrix \(\bar{\xi}^{i}\) to identify underconstrained flipper particles in repulsive polymer packings. We successively remove the largest contribution \(\{e^{i}_{jx},e^{i}_{jy}\}\) to \(\bar{\xi}^{i}\) until it is no longer a zero mode. Each particle \(j\) with the largest contribution to the zero mode is identified as a flipper particle. In Fig. 10 (a), the yellow-shaded particles are flippers since they only have bonded contacts, one of their neighbors only has bonded contacts, and they can collectively rotate without changing the length of the bonds and without making additional contacts. The red and cyan particles have no non-bonded contacts, but their bonded neighbors have at least one non-bonded contact, and so they are not flipper particles.

The grey arrows in Fig. 10 (a) indicate a quartic mode in a repulsive polymer packing. The cyan particle has the largest contribution to the quartic mode, and its motion is perpendicular to the approximately \(180^{\circ}\) bond angle. When we perturb a packing by an amplitude \(\delta\) along a typical eigenvector \(\bar{\xi}^{i}\) of the dynamical matrix, the change in potential energy \(\Delta V_{r}\sim\delta^{2}\) scales quadratically with the amplitude, as shown in Fig. 10 (b). However, hypostatic packings contain quartic modes, such that the change in energy \(\Delta V_{r}\) for perturbations with amplitude \(\delta\) along a quartic mode scales as \(\Delta V_{r}\sim\delta^{4}\) [45]. In Fig. 10 (b), we show the quartic scaling for \(\delta\gtrsim\delta_{q}\), where \(\delta_{q}\sim P\) varies linearly with pressure, for perturbations along the quartic mode shown in Fig. 10 (a).

Since the change in potential energy for perturbations along "quartic" modes scales quadratically with the amplitude of the perturbation for \(\delta\lesssim\delta_{q}\), it can be challenging to identify quartic modes. To count the number of quartic modes, we decompose the dynamical matrix into two components, the stiffness matrix \(H\) and the stress matrix \(S\), where \(M=H+S\) [45; 46]. The stiffness matrix only depends on the geometry of the system (not the interaction potential or pressure),
\[H_{kl}=\sum_{i>j}\frac{\partial^{2}V}{\partial(r_{ij}/\sigma_{ij})^{2}}\frac{\partial(r_{ij}/\sigma_{ij})}{\partial\vec{r}_{k}}\frac{\partial(r_{ij}/\sigma_{ij})}{\partial\vec{r}_{l}}, \tag{13}\]
where \(k\) and \(l\) run over all \(2N\) particle coordinates. Previous work has shown that quartic modes \(\bar{\xi}^{i}\) of \(M\) have non-zero eigenvalues \(e^{i}\) at non-zero pressure; however, the same eigenmode yields \(H\bar{\xi}^{i}=h^{i}\bar{\xi}^{i}\), where \(h^{i}=0\) [45]. Therefore, for each repulsive polymer packing, we calculate the number of quartic modes \(N_{q}=H_{0}-M_{0}\), where \(M_{0}\) and \(H_{0}\) are the number of zero modes in the dynamical matrix and stiffness matrix, respectively.
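Counting quartic modes then amounts to comparing zero-mode counts of \(H\) and \(M\); the numerical zero tolerance below is an assumption.

```python
import numpy as np

def count_quartic_modes(M, H, tol=1e-10):
    """N_q = H_0 - M_0: the number of zero modes of the stiffness matrix H
    minus that of the dynamical matrix M = H + S (Sec. III.4)."""
    M0 = int(np.sum(np.abs(np.linalg.eigvalsh(M)) < tol))
    H0 = int(np.sum(np.abs(np.linalg.eigvalsh(H)) < tol))
    return H0 - M0
```

For attractive polymers, the same count is applied with \(H\) built from the \(r_{ij}<r_{\beta}\) contact network, as described in Sec. III.5 below.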
We find that packings of repulsive polymers are hypostatic at jamming onset, with \(N_{c}+N_{w}+N_{b}<N_{c}^{\rm iso}\), where \(N_{b}\) is the number of polymer bonds. However, the number of missing contacts \(N_{m}=N_{c}^{\rm iso}-N_{c}-N_{w}-N_{b}\) equals the number of quartic modes, \(N_{m}=N_{q}\), for each repulsive polymer packing. As shown in Fig. 9 (b), we find identical finite-size scaling and collapse of the excess number of contacts \(\Delta N\) versus \(V_{r}N^{3}\) for packings of repulsive polymers and packings of repulsive disks, where \(\Delta N=N_{c}+N_{w}+N_{b}+N_{q}-(2(N-N_{f})+f(d)+1)\) for packings of repulsive polymers and \(N_{f}\) is the number of flipper particles.

### Attractive disk and polymer packings are hyperstatic, but effectively isostatic

Above, we showed that repulsive packings are isostatic at jamming onset and obey power-law scaling relations for \(\omega^{*}\) and \(\Delta N\) versus \(V_{r}N^{3}\). In addition, we find that attractive monomer and polymer packings not only possess similar core packing fractions as their repulsive counterparts, but also follow the same power-law scaling relation for \(\omega^{*}\) versus \(V_{r}N^{3}\). Can attractive disk and polymer packings be viewed as effectively isostatic as well?

Typical contact-counting analyses consider a constraint as the onset of any non-zero interaction between particles or between a particle and a wall. Thus, for attractive systems in Eq. 3, a contact could be defined as an interparticle separation that satisfies \(r_{ij}/\sigma_{ij}<1+\alpha\). With this definition, packings of attractive monomers and polymers are highly hyperstatic. However, previous studies have suggested that weak long-range attractions are relatively unimportant for determining the mechanical properties of attractive solids [47]. Remarkably, using the attractive potential in Eq. 3, we find that if we count contacts as those with interparticle separations \(r_{ij}<r_{\beta}\), packings of attractive monomers are effectively isostatic for small \(V_{r}\), i.e. \(N_{c}(r_{ij}<r_{\beta})=N_{c}^{\rm iso}\), where \(N_{c}^{\rm iso}=2N-f(d)\) and \(f(d)=3\) for the two uniform translations and the single rotation that have no energy cost for attractive packings with open boundary conditions. In Eq. 3, \(r_{\beta}\) marks a change in the interaction stiffness: for \(r_{ij}<r_{\beta}\), \(|\partial^{2}V/\partial r_{ij}^{2}|\sim\epsilon\), whereas for \(r_{\beta}<r_{ij}\leq r_{\alpha}\), \(|\partial^{2}V/\partial r_{ij}^{2}|\sim k\), and \(k/\epsilon\sim\beta\) tends to zero as \(\beta\to 0\). In Fig. 9 (c), we show that \(\Delta N=N_{c}(r_{ij}<r_{\beta})-N_{c}^{\rm iso}\) obeys the same power-law scaling with \(V_{r}N^{3}\) as found for packings of repulsive disks and polymers.

We have shown that if we define contacts for packings of attractive disks as those with \(r_{ij}<r_{\beta}\), attractive disk packings are effectively isostatic (for \(V_{r}N^{3}\ll 1\)), and \(\Delta N\) versus \(V_{r}N^{3}\) obeys similar power-law scaling as that found for isostatic repulsive packings. However, do attractive packings with contacts defined by \(r_{ij}<r_{\beta}\) possess any zero-energy modes? To address this question, we construct the stiffness matrix from contacts defined by \(r_{ij}<r_{\beta}\) in attractive disk packings. We then calculate the stiffness matrix eigenvalues \(h^{i}(r_{ij}<r_{\beta})\) and compare them to the eigenvalues \(h^{i}(r_{ij}<r_{\alpha})\) of the stiffness matrix using contacts defined by the full attractive potential.
We not only find that attractive disk packings with contact networks defined by \(r_{ij}<r_{\beta}\) are effectively isostatic, but also that \(H(r_{ij}<r_{\beta})\) has no non-trivial zero-energy modes, i.e. \(h^{i}(r_{ij}<r_{\beta})>0\). We further show in Fig. 11 (a) that for attractive disks the eigenvalues \(h^{i}(r_{ij}<r_{\beta})\) are nearly identical to the eigenvalues \(h^{i}(r_{ij}<r_{\alpha})\). Are packings of attractive polymers effectively isostatic using the same definition of interparticle contacts as packings of attractive disks? When defining contacts as \(r_{ij}/r_{\beta}<1\), some attractive polymer packings appear to be hypostatic with \(N_{c}(r_{ij}<r_{\beta})+N_{b}<N_{c}^{\rm iso}\). For example, in Fig. 12 (a), we show an attractive polymer packing with \(N_{c}(r_{ij}<r_{\beta})+N_{b}=124\) and \(N_{c}^{\rm iso}=2N-3=125\), and therefore this packing is missing a single contact. We find that the lowest non-trivial eigenmode of the dynamical matrix \(M\) is very similar to a quartic mode in a jammed repulsive polymer packing, where the largest contribution to the mode is perpendicular to a \(\sim 180^{\circ}\) bond angle. For repulsive polymer packings, the number of quartic modes satisfies \(N_{q}=H_{0}-M_{0}\). In attractive polymer packings with missing contacts, \(H_{0}=M_{0}\) and \(N_{q}\) appears to be \(0\). However, we show in Fig. 12 (b) that when we perturb the attractive polymer packing in Fig. 12 (a) along the possible quartic mode of \(M\), the change in the total potential energy \(V=V^{nb}+V^{b}\) versus the perturbation amplitude \(\delta\) scales as \(\Delta V\sim\delta^{4}\) for \(\delta>\delta_{q}\sim\beta^{2}\). When we consider \(H(r_{ij}<r_{\alpha})\) and \(M(r_{ij}<r_{\alpha})\), we find that \(N_{q}=H_{0}-M_{0}=0\) even for attractive polymer packings that are hypostatic. However, we find that \(H_{0}(r_{ij}<r_{\beta})>H_{0}(r_{ij}<r_{\alpha})\) for attractive polymer packings with missing contacts. Therefore, for attractive polymer packings, we count the number of quartic modes \(N_{q}\) as the number of non-trivial zero modes in \(H(r_{ij}<r_{\beta})\). When including these \(N_{q}\) quartic modes, we find that \(\Delta N=N_{c}(r_{ij}<r_{\beta})+N_{b}+N_{q}-N_{c}^{\rm iso}\) versus \(V_{r}N^{3}\) obeys the same power-law scaling and finite-size collapse as packings of repulsive disks, repulsive polymers, and attractive disks (see Fig. 9 (d)).

Figure 10: (a) Jammed repulsive polymer packing with \(N=64\), showing the quartic mode in (b) with grey arrows. Red lines indicate interparticle and particle-wall contacts. Black lines indicate the polymer backbone. The large black circle that encloses the polymer indicates the circular wall. Non-flipper disks are colored white. The pair of yellow disks are underconstrained flippers. The cyan disk has no non-bonded contacts and participates most directly in the quartic mode. The red disk also has no non-bonded contacts, but does not lead to a quartic mode. (b) Change in potential energy \(\Delta V_{r}/N\) following a perturbation with amplitude \(\delta\) applied along an eigenvector of the dynamical matrix for a jammed repulsive polymer packing corresponding to a quadratic (grey solid line) and quartic mode (black solid line). Grey dot-dashed and black dashed lines indicate slopes of \(2\) and \(4\).
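The \(\Delta V\sim\delta^{2}\to\delta^{4}\) crossover used in Figs. 10 (b) and 12 (b) to expose hidden quartic modes can be probed numerically. A self-contained toy sketch (the quadratic stabilization scale `p` stands in for the pressure- or \(\beta\)-dependent crossover scale \(\delta_{q}\) discussed above; for a packing one would substitute its potential energy for `V` and the candidate mode for `xi`):

```python
import numpy as np

def local_slope(V, x0, xi, deltas):
    """Log-log slope of Delta V(delta) = V(x0 + delta*xi) - V(x0) along mode xi."""
    dV = np.array([V(x0 + d * xi) - V(x0) for d in deltas])
    return np.gradient(np.log(dV), np.log(deltas))

# Toy energy with one quadratic and one quartic direction. Along the quartic
# direction the slope crosses over from 2 to 4 near delta_q = sqrt(p); in a
# packing, delta_q instead tracks the pressure (or beta) as described above.
p = 1e-6
V = lambda x: 0.5 * x[0] ** 2 + 0.5 * p * x[1] ** 2 + 0.25 * x[1] ** 4
deltas = np.logspace(-5, -1, 9)
print(np.round(local_slope(V, np.zeros(2), np.array([0.0, 1.0]), deltas), 2))
```

Reading off the local slope on a logarithmic grid, as here, avoids fitting a single power law across the crossover.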
Figure 11: The eigenvalues \(h^{i}(r_{ij}<r_{\beta})\) of the stiffness matrix \(H(r_{ij}<r_{\beta})\) for attractive packings with contacts defined by \(r_{ij}<r_{\beta}\) plotted versus the eigenvalues \(h^{i}(r_{ij}<r_{\alpha})\) for \(H(r_{ij}<r_{\alpha})\) with contacts defined using the full attractive potential for attractive (a) disks and (b) polymers as a function of system size, \(N=64\) (circles), \(128\) (squares), \(256\) (upward triangles), \(512\) (downward triangles), and \(1024\) (stars) colored from blue to red with increasing system size. The black dashed line indicates \(h^{i}(r_{ij}<r_{\beta})=h^{i}(r_{ij}<r_{\alpha})\).

While packings of attractive polymers are effectively isostatic, we also find that the low-frequency eigenvalues of the stiffness matrix \(h^{i}(r_{ij}<r_{\beta})\) deviate from those \(h^{i}(r_{ij}<r_{\alpha})\) defined using the full attractive potential (Fig. 11 (b)). This result indicates that quartic modes in attractive polymer packings are more sensitive (compared to the low-frequency stiffness matrix eigenvalues of packings of attractive disks) to the addition of the weak long-range attractions of the full attractive potential.

Are attractive disks and polymers still effectively isostatic when varying the range of the attractive interaction \(\alpha\)? We change the attractive range in small steps, \(\alpha=\alpha_{0}\pm\Delta\alpha\), where \(\alpha_{0}=1.5\) and \(\Delta\alpha=0.01\), with each \(\alpha\) increment followed by energy minimization. In Fig. 13 (a) and (b), we show the scaling of \(\omega^{*}N\) versus \(V_{r}N^{3}/\alpha\) for \(0.1\leq\alpha\leq 2\) for packings of attractive disks and polymers and find that \(\omega^{*}N\sim(V_{r}N^{3}/\alpha)^{1/4}\) collapses the data for all values of \(\alpha\). In Fig. 13 (c) and (d), we show that packings of attractive disks and polymers are also effectively isostatic when defining contacts according to \(r_{ij}<r_{\beta}\) for all \(\alpha\). For all packings of attractive disks and polymers, \(\Delta N>0\) and \(\Delta N\) versus \(V_{r}N^{3}/\alpha\) obeys the same scaling relation as that found for isostatic packings of repulsive disks and polymers.

## IV Conclusions and future directions

In this work, we studied the connection between the collapse of attractive disk-shaped bead-spring polymers and the onset of jamming in packings of repulsive disks and polymers. This work was motivated by the fact that protein cores possess similar packing fractions to those of jammed packings of purely repulsive, disconnected amino-acid-shaped particles. Is there a deep connection between attractive polymer collapse and compression-induced jamming, or is the similarity fortuitous? First, we showed that for packings of attractive disk-shaped bead-spring polymers to possess interior packing fractions similar to those in jammed repulsive disk packings, they must be quenched to temperatures much below the glass transition. To compare packings of attractive and repulsive disks and polymers, we developed a method to compress repulsive systems under open boundary conditions. We find that the average core packing fraction of repulsive disk and polymer packings under this protocol is similar to that generated by thermally quenching attractive disks and polymers.

Figure 12: (a) Illustration of an attractive polymer packing with \(N=64\) and \(\beta=10^{-5}\). We highlight the quartic mode in (b) with grey arrows. The red lines indicate contacts that satisfy \(r_{ij}<r_{\beta}\) and the black lines indicate the polymer backbone. The packing has \(N_{c}(r_{ij}<r_{\beta})+N_{b}=124\) and \(N_{c}^{\rm iso}=2N-3=125\) and is therefore missing a single contact. The cyan-shaded particle has no non-bonded contacts with \(r_{ij}<r_{\beta}\) and has the largest contribution to the quartic mode. (b) Change in the total potential energy \(\Delta V/N\) following a perturbation with amplitude \(\delta\) applied along the quartic mode of the dynamical matrix in (a) for increasing attractive strength \(\beta\) (curves shaded from blue to red). The grey dot-dashed and black dashed lines indicate slopes of 2 and 4.
Figure 13: Characteristic plateau frequency of the vibrational density of states \(\omega^{*}\) plotted versus \(V_{r}N^{3}/\alpha\) for attractive (a) disk and (b) polymer packings, and the excess contacts \(\Delta N\) plotted versus \(V_{r}N^{3}/\alpha\) for attractive (c) disk \((\Delta N=N_{c}(r_{ij}<r_{\beta})-N_{c}^{\rm iso})\) and (d) polymer \((\Delta N=N_{c}(r_{ij}<r_{\beta})+N_{b}+N_{q}-N_{c}^{\rm iso})\) packings, with varying attractive ranges, \(\alpha=0.1\) (circles), \(0.5\) (squares), \(1.0\) (upward triangles), \(1.5\) (downward triangles), and \(2.0\) (stars) colored purple to yellow with increasing \(\alpha\) for \(N=256\). In (a) and (b), the dashed lines indicate slopes of \(0.25\), and in (c) and (d) the dashed and solid lines indicate slopes of \(0.25\) and \(0.5\), respectively.

Previous studies have shown that repulsive disk packings at jamming onset are isostatic and possess an excess of low-frequency modes in the vibrational density of states, with a characteristic plateau frequency \(\omega^{*}\sim\Delta N\sim(V_{r}N^{3})^{1/4}\), where \(\Delta N=N_{c}+N_{w}-N_{c}^{\rm iso}\) is the excess contact number, \(V_{r}\) is the repulsive contribution to the potential energy, \(N_{c}\) is the number of interparticle contacts, \(N_{w}\) is the number of particle-wall contacts, and \(N_{c}^{\rm iso}=2(N-N_{f})+f(d)+1\). While repulsive polymer packings are typically hypostatic at jamming onset, the number of missing contacts equals the number of quartic modes \(N_{q}\), and we find that repulsive polymers are effectively isostatic such that the excess contacts \(\Delta N=N_{c}+N_{w}+N_{b}+N_{q}-N_{c}^{\rm iso}\) versus \(V_{r}N^{3}\) obeys the same scaling form as that found for packings of repulsive disks, where \(N_{b}\) is the number of polymer bonds and \(N_{c}^{\rm iso}=2(N-N_{f})+f(d)+1\). In overconstrained systems, the vibrational density of states \(D(\omega)\to 0\) in the low-frequency limit [18]. Here, we show that even though attractive disk and polymer packings are highly hyperstatic due to longer-range attractive interactions, they possess a plateau in the low-frequency region of \(D(\omega)\) and that \(\omega^{*}\sim(V_{r}N^{3})^{1/4}\). Since this power-law scaling behavior for \(\omega^{*}\) versus \(V_{r}N^{3}\) is similar to that for packings of repulsive disks and polymers near jamming onset, it suggests that packings of attractive monomers and polymers with weak attractions are effectively isostatic. We find that if we define contacts as non-bonded pairs with \(r_{ij}<r_{\beta}\), packings of attractive monomers and polymers are effectively isostatic with \(\Delta N=N_{c}(r_{ij}<r_{\beta})+N_{q}-N_{c}^{\rm iso}\sim(V_{r}N^{3})^{1/4}\), where \(N_{c}^{\rm iso}=2N-f(d)\).
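For reference, a crude sketch of how the quantities in this summary are read off the dynamical matrix (our own illustration; published analyses use the system sizes quoted above and a more careful identification of the plateau than this median heuristic):

```python
import numpy as np

def density_of_states(M, n_zero=3, bins=60):
    """Histogram estimate of D(omega) from the dynamical matrix M (unit
    masses assumed), discarding the n_zero trivial zero modes."""
    omega = np.sqrt(np.clip(np.linalg.eigvalsh(M), 0.0, None))[n_zero:]
    hist, edges = np.histogram(omega, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist

def omega_star(centers, hist, frac=0.5):
    """Crude plateau-onset estimate: the lowest frequency at which D(omega)
    first reaches a fraction frac of its typical (median) plateau value."""
    plateau = np.median(hist[hist > 0])
    above = np.nonzero(hist >= frac * plateau)[0]
    return centers[above[0]] if above.size else np.nan

# Usage: centers, hist = density_of_states(M); w_star = omega_star(centers, hist)
```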
These results indicate that longer-range attractions provide an average compression force, but that the mechanical properties are controlled by the stronger short-range repulsive interactions. Note that scattering experiments on protein crystal structures have shown that they also possess a plateau in \(D(\omega)\) at low frequencies, which suggests that proteins may also be viewed as effectively isostatic [48]. Overall, we find that there is a deep connection between the interior packing fraction, low-frequency regions of the vibrational density of states, and isostaticity in all four systems: jammed packings of repulsive disks and polymers and thermally quenched, collapsed attractive disks and polymers. Note that we considered an interparticle potential with a discontinuous jump in its second derivative, and the location of the discontinuity corresponded to the definition of interparticle contacts that yields effective isostaticity. In future work, we will study interaction potentials where we can vary the magnitude of the change in the second derivative and the range over which it changes to understand the parameters that control whether attractive packings can be considered as effectively isostatic. Here, we established that for thermally quenched attractive disk-shaped bead-spring polymers to obtain interior packing fractions near values found for jammed packings of repulsive disks and polymers, they must be cooled below the glass transition temperature. Thus, the collapsed polymers we considered are glassy and the interior packing fraction can be increased by decreasing the cooling rate [49]. Similarly, we have already shown that the packing fraction at jamming onset for packings of repulsive amino-acid-shaped particles spans the range \(0.55<\phi<0.62\), where the average core packing fraction for protein x-ray crystal structures (\(\langle\phi\rangle\sim 0.55\)) is only obtained in the limit of rapid compression and energy minimization [15]. In contrast, the current view of the protein energy landscape emphasizes that proteins fold in equilibrium to the global energy minimum [50; 51; 52; 53]. Our work suggests that experimentally determined protein cores can in principle reach packing fractions of \(\phi=0.62\) and yet, we find that they always possess the rapid thermal quench value of \(\phi\sim 0.55\). In future work, we will generate packings using an all-atom hard-sphere model for proteins with stereochemical constraints (including constraints on the bond lengths, bond angles, and peptide bond dihedral angles \(\omega\)) using compression or thermal collapse with short-range attractive interactions, to verify that the cores in these model proteins can possess a range of packing fractions, \(0.55<\phi<0.62\). These single protein packings will obey the geometric criteria of high-quality protein x-ray crystal structures (i.e. no non-bonded overlaps and bond lengths, bond angles, and backbone and side-chain dihedral angles will obey the statistics found for protein structures in the Protein Data Bank) and possess core packing fractions with \(0.55<\phi<0.62\), but will not take on their native folds [7; 54]. To investigate whether proteins in their native conformations can possess a range of core packing fractions, we will initialize these simulations with a given protein x-ray crystal structure, add short-range attractive, non-bonded atomic interactions with different strengths, thermally quench the system over a range of cooling rates, and measure the core packing fraction. 
Additionally, varying the attractive depth of the atomic interactions can be used to capture the range of hydrophobic interactions for different amino acids.

###### Acknowledgements.

The authors acknowledge support from NIH Training Grant No. T32GM145452 and the High Performance Computing facilities operated by Yale's Center for Research Computing.

## Appendix A Generating repulsive disk and polymer packings in open boundary conditions

To generate static packings of repulsive disks and polymers under open boundary conditions, we apply an external central potential \(V^{c}\) in Eq. 8 to all disks in the packing. With this central potential and in the limit of large damping parameters, repulsive disk and polymer packings are highly disordered. However, with low damping parameters, thermal fluctuations can induce size segregation in packings of repulsive disks, with small disks slipping past large disks, which leaves only large disks on the surface and gives rise to crystallization. Therefore, we add a bias factor \(\left(\sigma_{i}/\sigma_{\rm max}\right)^{\nu}\) to the compression force, such that larger disks feel larger compression forces. The exponent \(\nu\) controls the strength of the bias factor. As shown in Fig. 6 (b), attractive disk and polymer packings do not size segregate, and therefore we can calibrate the value of \(\nu\) by comparing the structural properties of repulsive disk packings to those of attractive disk and polymer packings. In Fig. 14, we plot the average hexatic order parameter \(\langle|\psi_{6}|\rangle\) versus the number \(N_{\nu}\) of Voronoi cells between a disk and the surface as a function of \(\nu\) for packings of repulsive disks. As \(\nu\) increases, the hexatic order decreases strongly for all values of \(N_{\nu}\). However, the similarity between the repulsive and attractive disk packings decreases when \(\nu\gtrsim 2.5\). Therefore, we use \(\nu=2\) for preparing all repulsive disk packings in these studies. We also studied the influence of the spring constant \(k_{c}/\epsilon\) on the core packing fraction in packings of repulsive disks. The spring constant \(k_{c}\) controls the effective rate of compression, which is known to influence the structural properties of jammed packings [18]. In Fig. 15, we plot the average core packing fraction \(\langle\phi\rangle\) for 100 repulsive disk packings with \(N=256\) and \(b=0.1\) versus \(k_{c}/\epsilon\). When compressing with large \(k_{c}/\epsilon\), the repulsive disk packings tend to be less densely packed, and the packing fraction reaches a plateau for \(k_{c}/\epsilon\lesssim 10^{-4}\). Therefore, we selected \(k_{c}/\epsilon=10^{-4}\) to generate all repulsive disk packings.

## Appendix B Identification of core disks

To examine the packing fraction of the interior of disk and polymer packings in open boundaries, we must first quantitatively define which disks are considered as "core" versus "non-core". Here, we implement an algorithm first proposed by Lee and Richards [34] that is frequently used to measure the solvent-accessible surface area in proteins. In the case of disk and polymer packings in open boundaries, we place a probe disk of diameter \(\sigma_{p}\) on the "anchor" disk that is furthest from the center of mass of the packing. We rotate the probe around the anchor disk in angle increments of \(\Delta\theta=0.1\) radians and check for overlaps with neighboring disks. If a new contact is made with the probe disk, the new contacting disk becomes the anchor disk.
This process is repeated until the probe disk returns to the initial anchor disk. In proteins, \(\sigma_{p}\) is given by the size of a water molecule so that the surface area swept out by the probe reflects the solvent-accessible surface area. The size of the probe will determine which disks are labeled as core and thus affect the average core packing fraction \(\langle\phi\rangle\). In Fig. 16, we plot \(\langle\phi\rangle\) versus \(\sigma_{p}\) for \(N=256\) attractive polymer packings. For large probe sizes, similar in size to the largest disk in the system, the core packing fraction decreases significantly as more surface-like (non-core) particles are included in the average. The core packing fraction plateaus for \(\sigma_{p}/\sigma_{s}\lesssim 0.4\). The typical probe size used to study proteins is the diameter of a water molecule, \(\sigma_{p}\sim 2.8\) Å, whereas the maximum diameter of an Alanine residue is 6.6 Å, which yields the ratio \(\sigma_{p}/\sigma_{s}\sim 0.43\). In the studies in the main text, we chose a similar ratio, \(\sigma_{p}/\sigma_{s}=0.1\).

Figure 14: The average hexatic order parameter \(\langle|\psi_{6}|\rangle\) plotted versus the number of Voronoi cells \(N_{\nu}\) between each disk and the closest surface disk for varying exponents \(\nu\) (increasing from purple to yellow) that control the strength of the bias factor of the compression force for packings of repulsive disks with \(N=256\) prepared using \(b=10^{-5}\). As a comparison, we also show results for packings of attractive disks prepared at the same value of \(b\) (grey squares).

Figure 15: The average core packing fraction of packings of repulsive disks plotted as a function of \(k_{c}/\epsilon\) using \(N=256\) and \(b=0.1\).

Figure 16: The average core packing fraction \(\langle\phi\rangle\) plotted versus the ratio of the surface probe diameter to the smallest disk diameter \(\sigma_{p}/\sigma_{s}\) for packings of attractive polymers with \(N=256\), \(b=10^{-5}\), and \(T_{0}/T_{m}=0.27\). The vertical dashed line indicates \(\sigma_{p}/\sigma_{s}\sim 0.43\), which is the ratio of the diameter of a water molecule to an Alanine residue.
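A compact 2D sketch of this rolling-probe construction (our own illustration, not the authors' code; it assumes a dense, connected packing, takes the fixed increment \(\Delta\theta=0.1\) quoted above, and omits refinements such as neighbor lists and contact tolerances that a production version would need):

```python
import numpy as np

def surface_disks(pos, sigma, sigma_p, dtheta=0.1, max_steps=200000):
    """Roll a probe disk of diameter sigma_p around the packing.
    Disks touched by the probe are 'surface'; the rest are 'core'."""
    com = pos.mean(axis=0)
    start = anchor = int(np.argmax(np.linalg.norm(pos - com, axis=1)))
    surface, theta, left_start = {start}, 0.0, False
    for _ in range(max_steps):                       # safety bound on steps
        theta += dtheta
        d = 0.5 * (sigma[anchor] + sigma_p)          # probe center distance
        probe = pos[anchor] + d * np.array([np.cos(theta), np.sin(theta)])
        r = np.linalg.norm(pos - probe, axis=1)
        r[anchor] = np.inf                           # ignore current anchor
        j = int(np.argmin(r))
        if r[j] < 0.5 * (sigma[j] + sigma_p):        # probe touches disk j
            v = probe - pos[j]
            theta = np.arctan2(v[1], v[0])           # resume sweep around j
            anchor = j
            surface.add(j)
            # a robust version would back off slightly after each switch
            if anchor == start and left_start:
                break                                # probe came full circle
            if anchor != start:
                left_start = True
    core = [i for i in range(len(pos)) if i not in surface]
    return sorted(surface), core
```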
2309.07048
The Fourier transform on valuations is the Fourier transform
Alesker has proved the existence of a remarkable isomorphism of the space of translation-invariant smooth valuations that has the same functorial properties as the classical Fourier transform. In this paper, we show how to directly describe this isomorphism in terms of the Fourier transform on functions. As a consequence, we obtain simple proofs of the main properties of the Alesker--Fourier transform. One of these properties was previously only conjectured by Alesker and is proved here for the first time.
Dmitry Faifman, Thomas Wannerer
2023-09-13T15:58:39Z
http://arxiv.org/abs/2309.07048v1
# The Fourier transform on valuations is the Fourier transform

###### Abstract.

Alesker has proved the existence of a remarkable isomorphism of the space of translation-invariant smooth valuations that has the same functorial properties as the classical Fourier transform. In this paper, we show how to directly describe this isomorphism in terms of the Fourier transform on functions. As a consequence, we obtain simple proofs of the main properties of the Alesker-Fourier transform. One of these properties was previously only conjectured by Alesker and is proved here for the first time.

2020 Mathematics Subject Classification: 52B45, 53C65, 42B10, 43A32. DF was supported by the Israel Science Foundation grant No. 1750/20. TW was supported by DFG grant WA 3510/3-1.

Injective linear maps of vector spaces induce a pullback \(f^{*}\colon\operatorname{Val}^{\infty}(W)\to\operatorname{Val}^{\infty}(V)\), which respects the Alesker product, while surjective linear maps induce a pushforward

\[f_{*}\colon\operatorname{Val}^{\infty}(V)\otimes\operatorname{Dens}(V^{*})\to\operatorname{Val}^{\infty}(W)\otimes\operatorname{Dens}(W^{*}),\]

which respects the Bernig-Fu convolution, see [5]. Here \(\operatorname{Dens}(V)\) denotes the one-dimensional space of densities on \(V\); see below for a precise definition. Product and convolution admit a common description in terms of the exterior product of valuations

\[\operatorname{Val}^{\infty}(V)\times\operatorname{Val}^{\infty}(W)\to\operatorname{Val}(V\times W),\]

denoted by \(\phi\boxtimes\psi\). Namely, using the operations of pullback and pushforward,

\[\phi\cdot\psi=\Delta^{*}(\phi\boxtimes\psi)\quad\text{and}\quad\phi*\psi=a_{*}(\phi\boxtimes\psi),\]

where \(\Delta\colon V\to V\times V\) is the diagonal embedding, and \(a\colon V\times V\to V\) is the addition in the vector space \(V\). The Alesker-Fourier transform, or simply the Fourier transform on valuations, enriches this picture even further. First defined by Alesker for even valuations in [2] and only later constructed in full generality in [5], it is an isomorphism

\[\mathbb{F}:\operatorname{Val}^{\infty}(V)\to\operatorname{Val}^{\infty}(V^{*})\otimes\operatorname{Dens}(V)\]

which interchanges the product with convolution, pullback with pushforward, and satisfies the inversion formula: applying the Alesker-Fourier transform twice equals the pullback by the antipodal map \(x\mapsto-x\). Its construction in the general case rested on Alesker's highly non-trivial irreducibility theorem, as well as sophisticated methods from the infinite-dimensional representation theory of \(\operatorname{GL}_{n}(\mathbb{R})\). The name "Fourier" was attached to \(\mathbb{F}\) due to its functorial properties, which are strongly reminiscent of the classical Fourier transform on functions. More recently, however, a special case of the Alesker-Fourier transform was observed to be indeed linked to the Fourier transform of functions [23, Corollary 3.9].

### Our results

In this work, we give a description of the Alesker-Fourier transform that directly derives it from the Fourier transform on functions, or rather differential forms, in all cases, and directly deduces its key properties from the corresponding properties of the Fourier transform. This solves a problem posed by Alesker [9, p. 17]. Moreover, our approach allows us to establish an additional property of the Fourier transform, which was previously only conjectured by Alesker [9, p. 17].
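For orientation, these functorial properties mirror standard identities for the classical Fourier transform of Schwartz functions, with the normalization \(\hat{f}(\xi)=\int_{\mathbb{R}^{n}}f(x)e^{-2\pi\mathfrak{i}\langle x,\xi\rangle}\,dx\):

\[\widehat{f\cdot g}=\hat{f}*\hat{g},\qquad\widehat{f*g}=\hat{f}\cdot\hat{g},\qquad\hat{\hat{f}}(x)=f(-x),\]

in exact parallel with \(\mathbb{F}(\phi\cdot\psi)=\mathbb{F}\phi*\mathbb{F}\psi\), its convolution counterpart, and the inversion formula above.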
In particular, we make no use of the irreducibility theorem of Alesker [1], assuming only the definition of smooth valuations through integration over the normal cycle: A valuation \(\phi\in\operatorname{Val}(V)\) is smooth if there exists a smooth differential form \(\omega\) on \(\mathbb{P}_{+}(V^{*})\), the oriented projectivization of \(V^{*}\), with values in \(\wedge^{\bullet}V^{*}\otimes\operatorname{or}(V)\), and a density \(\theta\) on \(V\) such that

\[\phi(K)=\theta(K)+\int_{\operatorname{nc}(K)}\omega \tag{1}\]

for all convex bodies \(K\) in \(V\). Here \(\operatorname{nc}(K)\) denotes the normal cycle of \(K\), and \(\operatorname{or}(V)\) denotes the one-dimensional space of orientation functions on \(V\). In the first step of our construction of the Fourier transform on valuations, which seems to be also of independent interest, we associate to every smooth valuation \(\phi\in\operatorname{Val}^{\infty}(V)\) a unique \(0\)-homogeneous current \(\tau=\tau(\phi)\) on \(V^{*}\), which is smooth outside of the origin. More precisely, if \(\phi\in\mathrm{Val}_{k}^{\infty}(V)\) is given by (1), then we define a generalized form \(\tau(\phi)\in\Omega_{-\infty}^{n-k}(V^{*},\wedge^{k}V^{*}\otimes\mathrm{or}(V))\) by

\[\tau(\phi)=\phi(\{0\})\cdot\delta_{0}+r^{0}((-1)^{n-k}a^{*}D\omega+\theta).\]

Here \(D\) is the Rumin differential of \(\omega\), a natural second-order differential operator arising in contact geometry, \(a\) is the antipodal map, and \(r^{0}\) denotes the \(0\)-homogeneous extension from \(\mathbb{P}_{+}(V^{*})\) to \(V^{*}\). For more details, see Section 4.1. A closely related description of valuations in terms of a pair of currents was introduced in [8], building on earlier work of Bernig and Bröcker [14]. The main point of this construction is that we are able to characterize all generalized forms arising in this fashion by a short list of inevitable properties (Proposition 4.2). Moreover, the operations of pullback, pushforward, and exterior product of valuations admit simple descriptions in terms of the \(0\)-homogeneous current \(\tau(\phi)\), see Propositions 4.5, 4.6, and 4.7. Furthermore, the \(0\)-homogeneous current of a valuation allows an explicit description of the \(0\)-homogeneous component of the pushforward of a valuation by an epimorphism, which is not readily available when using the standard presentation by a pair of currents.

Let \(F\) be a finite-dimensional real vector space, and let \(\Omega_{\mathcal{S}}^{k}(V,F)\subset\Omega^{k}(V,F)\) denote the Schwartz space of differential \(k\)-forms on \(V\) with values in \(F\) and rapidly decreasing coefficients. The Fourier transform of functions on \(V\) extends naturally to differential forms on \(V\).

**Definition 1.1**.: The Fourier transform

\[\mathcal{F}:\Omega_{\mathcal{S}}^{k}(V,F)\to\Omega_{\mathcal{S}}^{n-k}(V^{*},\mathrm{or}(V)\otimes F)\]

is defined as follows. For \(\omega\in\Omega_{\mathcal{S}}^{k}(V,F)\), the Hodge star isomorphism

\[*:\wedge^{k}V^{*}\xrightarrow{\sim}\wedge^{n-k}V\otimes\wedge^{n}V^{*}\simeq\wedge^{n-k}V\otimes\mathrm{Dens}(V)\otimes\mathrm{or}(V)\]

allows us to consider \(\omega\) as a map \(\widetilde{\omega}:V\to\wedge^{n-k}V\otimes\mathrm{Dens}(V)\otimes\mathrm{or}(V)\otimes F\).
Hence we may define for \(\xi\in V^{*}\)

\[\mathcal{F}(\omega)(\xi)=\int_{V}e^{2\pi\mathfrak{i}\langle x,\xi\rangle}\widetilde{\omega}(x)\in\wedge^{n-k}V\otimes\mathrm{or}(V)\otimes F.\]

We remark that while different normalizations exist for the Fourier transform, an involutive Fourier transform on \(0\)-homogeneous forms is unique up to sign. We extend this definition also to differential forms with tempered distributions as coefficients. In applications to valuations, the vector space \(F\) is \(\wedge^{\bullet}V^{*}\otimes\mathrm{or}(V)\). If we denote the composition of \(\mathcal{F}\) with the canonical isomorphism

\[F\otimes\mathrm{or}(V)\simeq\wedge^{\bullet}V\otimes\mathrm{Dens}(V)\otimes\mathrm{or}(V)\]

by \(\mathcal{F}^{0}\), then

\[\mathcal{F}^{0}\tau(\phi)\in\Omega_{\mathcal{S}^{\prime}}(V,\wedge^{\bullet}V\otimes\mathrm{or}(V))\otimes\mathrm{Dens}(V).\]

Our first theorem states that this form defines a smooth valuation.

**Theorem 1.2**.: Let \(\phi\in\mathrm{Val}^{\infty}(V)\) be a smooth valuation, and let \(\tau(\phi)\) be its \(0\)-homogeneous current. Then the Fourier transform \(\mathcal{F}^{0}\tau(\phi)\) is the \(0\)-homogeneous current of a unique smooth valuation.

This theorem allows us to make the following definition.

**Definition 1.3**.: If \(\phi\in\operatorname{Val}^{\infty}(V)\) is a smooth valuation, then its \(\mathcal{F}\)-transform is the unique smooth valuation \(\mathcal{F}\phi\in\operatorname{Val}^{\infty}(V^{*})\otimes\operatorname{Dens}(V)\) satisfying

\[\tau(\mathcal{F}\phi)=\mathcal{F}^{0}\tau(\phi).\]

The same equation also defines the \(\mathcal{F}\)-transform of generalized valuations \(\phi\in\operatorname{Val}^{-\infty}(V)\). It will be seen in Theorem 1.5 that the \(\mathcal{F}\)-transform coincides with the Alesker-Fourier transform. With this definition and the description of the algebraic operations of valuations in terms of the \(0\)-homogeneous current, the main properties of the Alesker-Fourier transform are for the \(\mathcal{F}\)-transform a direct consequence of the corresponding properties of the Fourier transform of functions. We write \(\mathcal{F}_{V}\) for the \(\mathcal{F}\)-transform of valuations on \(V\) in case we consider several vector spaces at the same time.

**Theorem 1.4**.: The \(\mathcal{F}\)-transform of valuations has the following properties:

1. \(\mathcal{F}\) commutes with the natural action of \(\operatorname{GL}(V)\).
2. Inversion formula: \((\mathcal{F}_{V^{*}}\otimes\operatorname{id})\circ\mathcal{F}_{V}\phi=(-\operatorname{id})^{*}\phi\).
3. Let \(i:V\hookrightarrow W\) be an injective linear map, and \(p=i^{\vee}:W^{*}\to V^{*}\). Then for \(\phi\in\operatorname{Val}^{\infty}(W)\), \(\mathcal{F}_{V}(i^{*}\phi)=p_{*}(\mathcal{F}_{W}\phi)\).
4. \(\mathcal{F}(\phi\boxtimes\psi)=\mathcal{F}\phi\boxtimes\mathcal{F}\psi\) for \(\phi\in\operatorname{Val}^{\infty}(V)\), \(\psi\in\operatorname{Val}^{\infty}(W)\).
5. \(\mathcal{F}\) intertwines product and convolution: for \(\phi,\psi\in\operatorname{Val}^{\infty}(V)\), \(\mathcal{F}(\phi\cdot\psi)=\mathcal{F}\phi*\mathcal{F}\psi\).
6. Self-adjointness: \(\langle\mathcal{F}u,\theta\rangle=\langle u,\mathcal{F}\theta\rangle\) for \(u\in\operatorname{Val}^{-\infty}(V)\), \(\theta\in\operatorname{Val}^{\infty}(V^{*})\).
7. Let \(p:V\to W\) be a surjective linear map, and \(i=p^{\vee}:W^{*}\to V^{*}\). Then for \(\psi\in\operatorname{Val}^{-\infty}(W)\), \(\mathcal{F}_{V}(p^{*}\psi)=i_{*}(\mathcal{F}_{W}\psi)\).
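To make Definition 1.1 concrete, here is a coordinate computation of ours on \(V=\mathbb{R}^{n}\) with the standard basis, orientation, and Lebesgue density (the overall sign depends on the convention chosen for \(*\)): for \(\omega=f\,dx_{1}\wedge\cdots\wedge dx_{k}\) with \(f\) Schwartz,

\[\mathcal{F}(\omega)(\xi)=\pm\Big(\int_{\mathbb{R}^{n}}f(x)e^{2\pi\mathfrak{i}\langle x,\xi\rangle}\,dx\Big)\,e_{k+1}\wedge\cdots\wedge e_{n},\]

so that, under the identifications above, the coefficient of \(\mathcal{F}(\omega)\) is the (suitably normalized) Fourier transform of the coefficient of \(\omega\), attached to the complementary multivector.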
Using the above properties of the \(\mathcal{F}\)-transform we finally prove

**Theorem 1.5**.: The \(\mathcal{F}\)-transform and the Alesker-Fourier transform coincide: \(\mathcal{F}=\mathbb{F}\).

Taken together, the properties (3) and (7) imply the identity

\[\mathcal{F}\circ f^{*}=(f^{\vee})_{*}\circ\mathcal{F}\]

on smooth valuations for every linear map \(f\) with dual map \(f^{\vee}\). The self-adjointness (6) of the Alesker-Fourier transform was already established in the PhD thesis of the first-named author [22], using Alesker's original definition, and employed to extend the Alesker-Fourier transform to the space of generalized valuations. This extension gives formal meaning to properties (4) and (7), which were previously conjectured by Alesker [9, p. 17]. While properties (3) and (7) are easily seen to be equivalent via self-adjointness and the inversion formula, property (4) is proved here for the first time.

### Relation to other work

The relevance of the classical Fourier transform for questions about volumes of sections and projections of convex bodies was realized in the 1990s, when it led to a complete solution of the Busemann-Petty problem. The application of Fourier-analytic techniques to problems in convex geometry has developed into a rich area of research; see the papers [11, 21, 28, 38, 40, 47], the lecture notes by Ryabogin and Zvavitch [48], and Koldobsky's monograph [39] for more information. As already mentioned above, the observation that the Alesker-Fourier transform can be linked to the Fourier transform on functions was first made by Dorrek and Schuster [23]. This special case motivated us to look for such a connection in general. The Alesker-Fourier transform has important applications in integral geometry. Kinematic formulas are a centerpiece of integral geometry, first studied for the Euclidean group by Blaschke, Chern, and Santaló; see the books by Santaló [49] and Klain and Rota [37]. For convex bodies \(A,B\) in \(\mathbb{R}^{n}\) they come in two flavors, namely intersectional

\[\int_{G\times\mathbb{R}^{n}}\chi(A\cap gB)\,dg=\sum_{i,j}c_{ij}\mu_{i}(A)\mu_{j}(B)\]

and additive

\[\int_{G}\operatorname{vol}(A+gB)\,dg=\sum_{i,j}d_{ij}\mu_{i}(A)\mu_{j}(B),\]

where \(G\) is a compact Lie group acting transitively on the unit sphere, integration is with respect to the Haar measure, the \(\mu_{i}\) are a basis of \(\operatorname{Val}^{G}\), the space of \(G\)-invariant valuations, and the constants \(c_{ij},d_{ij}\) are independent of \(A\) and \(B\). More generally, the Euler characteristic and volume may be replaced by any of the \(\mu_{i}\). Explicitly determining the constants in the kinematic formulas is a challenge. A key insight due to Fu [26] is that they are the structure constants of the algebras of \(G\)-invariant valuations for product and convolution. Since the Alesker-Fourier transform intertwines product and convolution, it dually intertwines intersectional and additive kinematic formulas. Since the convolution is typically simpler to compute than the Alesker product, this allows, at least in principle, to deduce both types of kinematic formulas from the convolution table of the invariant valuations. For the unitary groups \(\operatorname{U}(n)\) and \(\operatorname{SU}(n)\), this program has been successfully completed by Bernig and Fu [13, 16].
While determining the Alesker-Fourier transform is relatively easy in these cases, in the works of Bernig and Solanes [19, 20] on the integral geometry of the quaternionic plane and Bernig and Hug [18] on the integral geometry of tensor valuations, this step relies on a sizable and difficult argument. We have not attempted to do this, but it seems plausible that our explicit description of the Alesker-Fourier transform of valuations could simplify these arguments. The Alesker-Fourier transform can also be used to construct the Holmes-Thompson intrinsic volumes in a normed space. Crofton formulas in normed spaces were first constructed by Schneider and Wieacker [52] and Álvarez-Paiva and Fernandes [10]; these were then used by Bernig [12] to construct a natural family of valuations on all normed spaces, now called intrinsic volumes, extending the Holmes-Thompson volume on all flats. As observed by Fu [27, Theorem 2.3.22], the simplest definition of those valuations is in fact as the (inverse) Alesker-Fourier transform of the mixed volume with the polar of the unit ball, explicitly

\[V_{k}^{\mathrm{HT}}=\frac{1}{\omega_{k}}\binom{n}{k}\mathbb{F}^{-1}V(B^{\diamond}[k],\bullet[n-k]),\]

where \(\omega_{k}\) denotes the volume of the \(k\)-dimensional Euclidean unit ball. Moreover, for non-symmetric normed spaces the latter is the only known way to define the intrinsic volumes so that they are generated from the first one under the Alesker product, see [25]. Let us finally point out that the Alesker-Fourier transform was used to deduce new inequalities for mixed volumes of convex bodies from the recently established Hodge-Riemann relations on the space of smooth valuations [7, 41, 42].

### Acknowledgements

We are grateful to Semyon Alesker for his valuable comments on an earlier draft.

## 2. Preliminaries

If \(V\) is an \(n\)-dimensional real vector space, we denote by \(\wedge^{\bullet}V=\bigoplus_{k=0}^{n}\wedge^{k}V\) the exterior algebra of \(V\). A density on \(V\) is a function \(\mu\colon\wedge^{n}V\to\mathbb{C}\) with the property that \(\mu(t\omega)=|t|\mu(\omega)\) holds for all \(t\in\mathbb{R}\). The \(1\)-dimensional space of densities is denoted by \(\operatorname{Dens}(V)\), and its nonzero elements are naturally identified with the complex-valued Lebesgue measures on \(V\). An orientation function on \(V\) is a function \(\epsilon\colon\wedge^{n}V\setminus\{0\}\to\mathbb{C}\) satisfying \(\epsilon(t\omega)=\operatorname{sgn}(t)\epsilon(\omega)\) for all \(t\in\mathbb{R}\setminus\{0\}\). The \(1\)-dimensional space of orientation functions is denoted by \(\operatorname{or}(V)\). We will make use of the following canonical isomorphisms:

\[\operatorname{Dens}(V)\otimes\operatorname{or}(V)\simeq\wedge^{n}V^{*},\qquad\operatorname{or}(V)\simeq\operatorname{or}(V^{*}),\qquad\wedge^{k}V\simeq\wedge^{n-k}V^{*}\otimes\wedge^{n}V.\]

Moreover, if \(0\to U\to V\to W\to 0\) is exact, then

\[\operatorname{Dens}(U)\otimes\operatorname{Dens}(W)\simeq\operatorname{Dens}(V)\quad\text{and}\quad\operatorname{or}(U)\otimes\operatorname{or}(W)\simeq\operatorname{or}(V).\]

The Euler vector field \(E(x)=x\) is defined on any vector space \(V\).

### Generalized forms

The theory of distributions on a smooth manifold is a bit more subtle than that of distributions on open subsets of \(\mathbb{R}^{n}\). We recall in this section the relevant concepts and refer the reader to [33, Chapter VI] for more information.
By \(\operatorname{Dens}(M)\) we denote the line bundle over \(M\) with fiber \(\operatorname{Dens}(T_{x}M)\) over \(x\in M\). Its smooth sections are naturally identified with the smooth measures on \(M\). The orientation bundle \(\operatorname{or}(M)\) of \(M\) is defined analogously. The space of generalized sections of a vector bundle \(\mathcal{E}\to M\) is defined by \(C^{-\infty}(M,\mathcal{E})=(C^{\infty}_{c}(M,\mathcal{E}^{*}\otimes\operatorname{Dens}(M)))^{*}\). The space of generalized differential \(k\)-forms on a manifold \(M\) is therefore

\[\Omega^{k}_{-\infty}(M)=\Omega^{n-k}_{c}(M,\operatorname{or}(M))^{*}=C^{\infty}_{c}(M,\wedge^{n-k}T^{*}M\otimes\operatorname{or}(M))^{*}.\]

It contains the dense subspace of smooth forms, given by

\[\langle\omega,\psi\rangle=\int_{M}\omega\wedge\psi,\quad\omega\in\Omega^{k}(M),\ \psi\in\Omega^{n-k}_{c}(M,\operatorname{or}(M)).\]

Most operations on smooth forms extend to generalized forms, including the pullback under a smooth proper submersion \(f\colon M\to N\), the exterior derivative \(d\), and the contraction \(i_{X}\) with a vector field \(X\). For \(\omega\in\Omega^{k}_{-\infty}(M)\), test forms \(\phi\in\Omega_{c}^{n-k}(M,\operatorname{or}(M))\), \(\eta\in\Omega_{c}^{n-k+1}(M,\operatorname{or}(M))\), \(\psi\in\Omega_{c}^{n-k-1}(M,\operatorname{or}(M))\), and a vector field \(X\), we have

\[\langle f^{*}\omega,\phi\rangle=\langle\omega,f_{*}\phi\rangle,\qquad\langle i_{X}\omega,\eta\rangle=(-1)^{k+1}\langle\omega,i_{X}\eta\rangle,\qquad\langle d\omega,\psi\rangle=(-1)^{k+1}\langle\omega,d_{\nabla}\psi\rangle,\]

where \(f_{*}\) denotes integration along the fibers and \(\nabla\) is the canonical connection on \(\operatorname{or}(M)\). Let \(V,F\) be finite-dimensional real vector spaces. By a tempered generalized differential \(k\)-form on \(V\) with values in \(F\) we understand a continuous linear functional on \(\Omega_{\mathcal{S}}^{n-k}(V,F^{*}\otimes\operatorname{or}(V))\). We denote the space of such forms by \(\Omega_{\mathcal{S}^{\prime}}^{k}(V,F)=\mathcal{S}^{\prime}(V,\wedge^{k}V^{*}\otimes F)\). Equivalently, a tempered generalized form is a generalized form with tempered distributions as coefficients with respect to a basis. Thus

\[\Omega_{\mathcal{S}^{\prime}}^{k}(V,F)\subset\Omega_{-\infty}^{k}(V,F).\]

In the following we will sometimes have to replace the vector space \(F\) by a canonically isomorphic vector space \(G\). Let us be clear that this means that if \(\omega\in\Omega_{\mathcal{S}^{\prime}}^{k}(V,F)\) is a generalized form, \(f\colon F\to G\) is a linear isomorphism, and \(\eta\in\Omega_{\mathcal{S}}^{n-k}(V,G^{*}\otimes\operatorname{or}(V))\) is a test form, then we define

\[\langle f\circ\omega,\eta\rangle=\langle\omega,f^{\vee}\circ\eta\rangle, \tag{2}\]

where \(f^{\vee}\colon G^{*}\to F^{*}\) denotes the dual map. A generalized form \(\omega\in\Omega_{-\infty}(V,F)\) is \(r\)-homogeneous if \(m_{\lambda}^{*}\omega=\lambda^{r}\omega\) for all \(\lambda>0\), where \(m_{\lambda}(x)=\lambda x\) denotes the scaling by \(\lambda\). For any \(r\), an \(r\)-homogeneous generalized form is tempered. We denote by \(\mathbb{P}_{+}(V)\) the oriented projectivization of \(V\). The manifold \(\mathbb{P}_{V}=V\times\mathbb{P}_{+}(V^{*})\) carries a contact structure defined by the hyperplanes \(H_{p,[\xi]}=\ker(\xi\circ d\pi)\) in \(T_{p,[\xi]}\mathbb{P}_{V}\), where \(\pi\colon\mathbb{P}_{V}\to V\) denotes the projection to the first factor. A smooth differential form \(\omega\) on \(\mathbb{P}_{V}\) is said to be vertical if \(\omega|_{H_{p,[\xi]}}=0\) for all \(p,[\xi]\).
A generalized form \(\omega\) is called vertical if \(\langle\omega,\psi\rangle=0\) for all vertical test forms \(\psi\). By \(\delta_{p}\) we denote the delta measure at \(p\). The corresponding generalized top form, which depends on a choice of orientation \(\sigma_{p}\) at \(p\), is \(\delta_{p}\otimes\sigma_{p}\). Often we will simply write \(\delta_{p}\) also for the top form when the choice of \(\sigma_{p}\) is clear or not important.

## 3. \(0\)-homogeneous forms

#### 3.0.1. Homogeneous extension and restriction

Let us recall how to extend (generalized) forms on \(\mathbb{P}_{+}(V)\) to \(0\)-homogeneous tempered generalized forms on \(V\). We moreover discuss a characterization of such extensions. Let \(\pi\colon V\setminus\{0\}\to\mathbb{P}_{+}(V)\) denote the canonical projection. Observe that there is a well-defined push-forward (fiber integration) of twisted differential forms

\[\pi_{*}\colon\Omega_{\mathcal{S}}^{k}(V,\operatorname{or}(V)\otimes F)\to\Omega^{k-1}(\mathbb{P}_{+}(V),\operatorname{or}(\mathbb{P}_{+}(V))\otimes F).\]

Explicitly, assuming \(F=\mathbb{R}\), if \(\omega=f(r,\theta)dr\wedge\pi^{*}\xi\) is a \(k\)-form, where \(\xi\in\Omega^{k-1}(\mathbb{P}_{+}(V))\), then \(\pi_{*}\omega=(-1)^{n-k}(\int_{0}^{\infty}fdr)\xi\), so that \(\langle\pi^{*}\phi,\omega\rangle=\langle\phi,\pi_{*}\omega\rangle\) for all \(\phi\in\Omega^{n-k}(\mathbb{P}_{+}(V))\).

**Definition 3.1**.: Let \(T\in\Omega_{-\infty}^{k}(\mathbb{P}_{+}(V),F)\). We define the \(0\)-homogeneous extension of \(T\) to \(V\), denoted \(r^{0}T\in\Omega_{\mathcal{S}^{\prime}}^{k}(V,F)\), by

\[\langle r^{0}T,\omega\rangle=\langle T,\pi_{*}\omega\rangle,\quad\omega\in\Omega_{\mathcal{S}}^{n-k}(V,\operatorname{or}(V)\otimes F^{*}).\]

Clearly the restriction of \(r^{0}T\) to \(V\setminus\{0\}\) is \(\pi^{*}T\).

_Remark 3.2_.: The notation \(r^{0}T\) is inspired by Gelfand-Shilov [29, Section 3.5].

**Lemma 3.3**.: Let \(S,T\in\Omega^{k}_{-\infty}(V)\) be \(0\)-homogeneous generalized forms of degree \(k<n\). If \(S-T\) is supported at the origin, then \(S=T\).

Proof.: It suffices to prove that \(T=0\) if \(T\) is supported at the origin. Fixing any basis of \(V\), the corresponding coefficients of \(T\) are \((-k)\)-homogeneous generalized functions that are supported at the origin. Any such function is a linear combination of the delta function and its derivatives, and consequently cannot be \((-k)\)-homogeneous for \(k<n\), unless it is zero. Hence \(T=0\) as claimed.

Recall that \(E\) denotes the Euler vector field \(E(x)=x\).

**Proposition 3.4**.: Let \(T\in\Omega^{k}_{-\infty}(\mathbb{P}_{+}(V))\). Then \(T_{0}=r^{0}T\in\Omega^{k}_{-\infty}(V)\) has the following properties:

1. \(i_{E}T_{0}=0\).
2. \(T_{0}\) is \(0\)-homogeneous.

Conversely, if \(T_{0}\in\Omega^{k}_{-\infty}(V)\) satisfies the above properties, then either \(0\leq k\leq n-1\), whence there exists a unique \(T\) such that \(r^{0}T=T_{0}\), or \(k=n\) and \(T_{0}\) is a multiple of a delta \(n\)-form supported at the origin.

Proof.: It is clear that \(T_{0}=r^{0}T\) has the second stated property, and it holds that \(i_{E}(T_{0}|_{V\setminus\{0\}})=0\). Thus \(i_{E}T_{0}\) is a \(0\)-homogeneous \((k-1)\)-form supported at the origin, but this is impossible for \(k-1<n\). Therefore, \(i_{E}T_{0}=0\). For the converse statement, fix a Euclidean structure on \(V=\mathbb{R}^{n}\), and assume first \(k\leq n-1\). The uniqueness of \(T\) is straightforward.
To prove existence, let us first observe that (1) and (2) imply

\[i_{E}dT_{0}=\mathcal{L}_{E}T_{0}=0.\]

We define \(\langle T,\omega\rangle:=\langle T_{0},\phi\rangle\) for \(\omega=\pi_{*}\phi\), where \(\phi\in\Omega^{n-k}_{c}(\mathbb{R}^{n}\setminus\{0\})\) is a compactly supported test form. For this to be well-defined it suffices to show that \(\langle T_{0},\phi\rangle=0\) if \(\pi_{*}\phi=0\). We may introduce polar coordinates \(r>0\), \(u\in S^{n-1}\) on \(V\setminus\{0\}\) and write

\[\phi=f(r,u)dr\wedge\xi+\eta,\]

where \(\xi,\eta\in u^{*}(\Omega(S^{n-1}))\). As a consequence, \(\pi_{*}\phi=(-1)^{k}\left(\int_{0}^{\infty}fdr\right)\xi\). By assumption we have \(\int_{0}^{\infty}fdr=0\), so that \(f=\frac{\partial h}{\partial r}\) for the compactly supported function \(h(r,u)=\int_{0}^{r}f(t,u)dt\). Put \(\zeta=h\xi\). Observe that whenever \(i_{E}\psi=0\), \(\theta\in u^{*}\Omega(S^{n-1})\), and \(\psi\wedge\theta\) is a top degree form, one has the pointwise equality \(\psi\wedge\theta=0\). Using that \(i_{E}T_{0}=0\) and \(i_{E}dT_{0}=0\), we find

\[\langle T_{0},\phi\rangle=\langle T_{0},\frac{\partial h}{\partial r}dr\wedge\xi\rangle=\langle T_{0},d\zeta\rangle=(-1)^{k+1}\langle dT_{0},\zeta\rangle=0\]

as required. By construction, \(r^{0}T-T_{0}\) is supported at the origin. Applying Lemma 3.3 yields \(r^{0}T=T_{0}\) as desired. Now if \(k=n\), we may write \(T_{0}=f(x)dx_{1}\wedge\cdots\wedge dx_{n}\) for some \(f\in C^{-\infty}(\mathbb{R}^{n})\). The condition \(i_{E}T_{0}=0\) readily implies that \(f\) is supported at the origin. As it is also \((-n)\)-homogeneous, it must be a multiple of the delta function, completing the proof.

Note that if \(T\in\Omega^{k}(\mathbb{P}_{+}(V))\) is a smooth form, then \(r^{0}T\) is smooth on \(V\setminus\{0\}\).

**Lemma 3.5**.: Let \(\tau\in\Omega^{k}(\mathbb{P}_{+}(V))\) and \(\omega=r^{0}\tau\in\Omega^{k}_{-\infty}(V)\). Then \(\omega\) is locally integrable.

Proof.: Fix a Euclidean structure on \(\mathbb{R}^{n}\). On \(\mathbb{R}^{n}\setminus\{0\}\) we may write \(\omega=\omega^{\prime}\), where \(\omega^{\prime}=\frac{1}{|x|^{k}}\sum_{|I|=k}g_{I}(\frac{x}{|x|})dx_{I}\) with \(g_{I}\in C^{\infty}(S^{n-1})\). Since \(k<n\), the coefficients of \(\omega^{\prime}\) are locally integrable. Applying Lemma 3.3 yields \(\omega=\omega^{\prime}\) on \(\mathbb{R}^{n}\).

### Operations on \(0\)-homogeneous forms

We need to know how to pull back and push forward \(0\)-homogeneous generalized forms under linear maps.

#### 3.1.1. Pullback under linear monomorphisms

For a vector \(v\in\mathbb{R}^{n}\), we denote by \(S_{v}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) the shift \(x\mapsto x+v\).

**Lemma 3.6**.: Assume \(\omega\in\Omega^{k}_{-\infty}(\mathbb{R}^{n})\) is smooth outside of the origin and \(0\)-homogeneous. Let \(e:\mathbb{R}^{j}\hookrightarrow\mathbb{R}^{n}\) be a fixed monomorphism, and assume \(j>k\). Then the weak limit

\[\omega^{\prime}:=\lim_{V\setminus e(\mathbb{R}^{j})\ni v\to 0}e^{*}S_{v}^{*}\omega\]

exists, is \(0\)-homogeneous, and coincides with \(e^{*}\omega\) on \(\mathbb{R}^{j}\setminus\{0\}\). If moreover \(i_{E}\omega=0\), then also \(i_{E}\omega^{\prime}=0\). If \(\omega=r^{0}\tau\) for \(\tau\in\Omega^{k}(\mathbb{P}_{+}(\mathbb{R}^{n}))\), then one has \(\omega^{\prime}=r^{0}(e^{*}\tau)\). We will subsequently denote \(\omega^{\prime}=e^{*}\omega\).

Proof.: By the smoothness outside of the origin, \(\omega^{\prime}\) coincides with \(e^{*}\omega\) on \(\mathbb{R}^{j}\setminus\{0\}\).
Choose orthonormal coordinates such that \(\mathbb{R}^{j}=\operatorname{Span}(e_{1},\dots,e_{j})\). Observe that since \(\omega\) is \(0\)-homogeneous, Lemma 3.3 implies

\[\omega=\frac{1}{|x|^{k}}\sum_{|I|=k}g_{I}(\frac{x}{|x|})dx_{I}\]

for some \(g_{I}\in C^{\infty}(S^{n-1})\). Then

\[\omega^{\prime}=\lim_{v\to 0}\frac{1}{|y+v|^{k}}\sum_{I\subset\{1,\dots,j\}}g_{I}(\frac{y+v}{|y+v|})\bigwedge_{i\in I}dy_{i}=\frac{1}{|y|^{k}}\sum_{I\subset\{1,\dots,j\}}g_{I}(\frac{y}{|y|})\bigwedge_{i\in I}dy_{i}.\]

Thus \(\omega^{\prime}\) is locally integrable and \(0\)-homogeneous. Assume now \(i_{E}\omega=0\). By Proposition 3.4 we may write \(\omega=r^{0}\tau\) for some \(\tau\in\Omega^{k}(\mathbb{P}_{+}(\mathbb{R}^{n}))\). Define \(\omega_{1}=r^{0}(e^{*}\tau)\). Then \(\omega_{1}\) coincides with \(\pi^{*}(e^{*}\tau)=e^{*}(\pi^{*}\tau)=e^{*}\omega=\omega^{\prime}\) outside of the origin. Thus \(\omega_{1}=\omega^{\prime}\) by Lemma 3.3, and therefore in particular \(i_{E}\omega^{\prime}=0\). This concludes the proof.

**Proposition 3.7**.: Assume \(\omega\in\Omega^{k}_{-\infty}(\mathbb{R}^{n})\) is smooth outside of the origin, \(0\)-homogeneous, \(k<n\), and \(i_{E}\omega=0\). Let \(e:\mathbb{R}^{k}\hookrightarrow\mathbb{R}^{n}\) be a fixed monomorphism. Then for \(v\in\mathbb{R}^{n}\setminus e(\mathbb{R}^{k})\), the weak limit

\[e_{v}^{*}\omega:=\lim_{\epsilon\to 0^{+}}e^{*}S_{\epsilon v}^{*}\omega\]

exists, uniformly in \(v\in S^{n-1}\setminus e(\mathbb{R}^{k})\), and equals \(c([v])\delta_{0}\) for some continuous function \(c\) on \(\mathbb{P}_{+}(\mathbb{R}^{n}/e(\mathbb{R}^{k}))\). Furthermore, if \(\omega\) is closed then \(c([v])\) is constant. In the latter situation, we denote the limit by \(e^{*}\omega\).

Proof.: Write \(V=\mathbb{R}^{n}\), \(W=\mathbb{R}^{k}\). Without loss of generality we may assume that \(e\) is the inclusion of a subspace \(W\) of \(V\). We choose an orthonormal basis of \(V\) such that \(W=\operatorname{Span}(e_{1},\ldots,e_{k})\). By Lemma 3.3 we can find functions \(g_{J}\in C^{\infty}(S^{n-1})\) such that

\[\omega=\frac{1}{|x|^{k}}\sum_{|J|=k}g_{J}(\frac{x}{|x|})dx_{J}.\]

Consider first a unit vector \(u\perp W\). Denoting \(I=\{1,\ldots,k\}\), it holds that

\[e^{*}(S^{*}_{\epsilon u}\omega)|_{y}=\frac{1}{|y+\epsilon u|^{k}}g_{I}\left(\frac{y+\epsilon u}{|y+\epsilon u|}\right)dy_{1}\wedge\cdots\wedge dy_{k}.\]

Since \(i_{E}\omega=0\), it holds that \(g_{I}(y)=0\) for \(y\in W\). Hence

\[g_{I}(y+z)=\sum_{i=k+1}^{n}z_{i}h_{i}(y,z),\quad y\in W,\ z\in W^{\perp}, \tag{3}\]

where

\[h_{i}(y,z)=\int_{0}^{1}\frac{\partial g_{I}}{\partial x_{i}}(y+tz)dt.\]

Denoting \(W_{u}=W\oplus\operatorname{Span}(u)\) and \(g_{u}=g_{I}|_{S(W_{u})}\), we may write \(g_{u}(\theta)=\langle\theta,u\rangle h_{u}(\theta)\) for some \(h_{u}\in C^{\infty}(S(W_{u}))\), which also depends smoothly on \(u\). Observe that the family \(\{|h_{u}|:u\in S(W^{\perp})\}\) can be uniformly bounded by some constant \(B\). Now let \(\psi\otimes\sigma\in C_{c}^{\infty}(W)\otimes\operatorname{or}(W)\) be supported in a ball of radius \(R\). Assume for simplicity that the coordinates \(y_{1},\ldots,y_{k}\) are positively oriented with respect to \(\sigma\), and we omit \(\sigma\) henceforth.
We may write

\[\langle e^{*}(S^{*}_{\epsilon u}\omega),\psi\rangle=\int_{|y|\leq R}\psi(y)\frac{\epsilon}{|y+\epsilon u|^{k+1}}h_{u}\left(\frac{y+\epsilon u}{|y+\epsilon u|}\right)dy=I_{1}+I_{2},\]

where

\[I_{1}=\epsilon\int_{|y|\leq R}(\psi(y)-\psi(0))\frac{1}{|y+\epsilon u|^{k+1}}h_{u}\left(\frac{y+\epsilon u}{|y+\epsilon u|}\right)dy,\]
\[I_{2}=\psi(0)\epsilon\int_{|y|\leq R}\frac{1}{|y+\epsilon u|^{k+1}}h_{u}\left(\frac{y+\epsilon u}{|y+\epsilon u|}\right)dy.\]

Observe that \(|\psi(y)-\psi(0)|\leq C_{0}|y|\) for some constant \(C_{0}\). Put \(M=C_{0}B\). We do a change of variables, \(y=\epsilon w\), to find that

\[|I_{1}|\leq\epsilon M\int_{|w|\leq R/\epsilon}\frac{\epsilon|w|}{\epsilon^{k+1}|w+u|^{k+1}}\epsilon^{k}dw=\epsilon M\int_{|w|\leq R/\epsilon}\frac{|w|}{(1+|w|^{2})^{(k+1)/2}}dw\leq C_{1}\epsilon+\epsilon M\int_{1\leq|w|\leq R/\epsilon}\frac{|w|}{(1+|w|^{2})^{(k+1)/2}}dw\leq C_{1}\epsilon+C_{2}\epsilon\int_{1}^{R/\epsilon}\frac{r^{k}}{(1+r^{2})^{(k+1)/2}}dr\leq C_{1}\epsilon+C_{2}\epsilon\int_{1}^{R/\epsilon}\frac{1}{r}dr=C_{1}\epsilon+C_{2}\epsilon\log\frac{R}{\epsilon}.\]

Thus \(I_{1}\to 0\) as \(\epsilon\to 0^{+}\), uniformly in \(u\in S(W^{\perp})\). Next we do the same change of variable \(y=\epsilon w\) for \(I_{2}\). We find that

\[I_{2}=\psi(0)\epsilon\int_{|y|\leq R}\frac{1}{|y+\epsilon u|^{k+1}}h_{u}\left(\frac{y+\epsilon u}{|y+\epsilon u|}\right)dy=\psi(0)\int_{|w|\leq R/\epsilon}\frac{1}{(1+|w|^{2})^{(k+1)/2}}h_{u}\left(\frac{w+u}{|w+u|}\right)dw.\]

The last integral converges as \(\epsilon\to 0\), uniformly in \(u\in S(W^{\perp})\), to the absolutely convergent integral

\[c(u):=\int_{\mathbb{R}^{k}}\frac{1}{(1+|w|^{2})^{(k+1)/2}}h_{u}\left(\frac{w+u}{|w+u|}\right)dw, \tag{4}\]

and so \(I_{2}\to c(u)\psi(0)\). We conclude that \(e^{*}(S^{*}_{\epsilon u}\omega)\to c(u)\delta_{0}\) as \(\epsilon\to 0^{+}\), uniformly in \(u\in S(W^{\perp})\). As a trivial consequence we obtain that \(e^{*}(S^{*}_{\epsilon\lambda u}\omega)\to c(u)\delta_{0}\) uniformly in \(\{(u,\lambda):u\in S(W^{\perp}),0<\lambda\leq 1\}\). Now for a general \(v\in S(V)\setminus W\), there are unique \(w\in W\), \(0<\lambda\leq 1\), and \(u\in S(W^{\perp})\) such that \(v=\lambda u+w\). We have \(S_{\epsilon v}=S_{\epsilon\lambda u}\circ S_{\epsilon w}\), so that

\[\langle e^{*}(S^{*}_{\epsilon v}\omega),\psi\rangle=\langle e^{*}(S^{*}_{\epsilon w}\circ S^{*}_{\epsilon\lambda u}\omega),\psi\rangle=\langle e^{*}(S^{*}_{\epsilon\lambda u}\omega),S^{*}_{-\epsilon w}\psi\rangle.\]

Tracing the proof above with \(\psi\) replaced by \(S^{*}_{-\epsilon w}\psi\), we see that \(|I_{1}|\to 0\) uniformly in \(v\), while

\[I_{2}=\psi(-\epsilon w)\int_{|w|\leq R/(\lambda\epsilon)}\frac{1}{(1+|w|^{2})^{(k+1)/2}}h_{u}\left(\frac{w+u}{|w+u|}\right)dw\]

converges to \(c(u)\psi(0)\) uniformly in \(v\). In particular, the limit only depends on \([v]\in\mathbb{P}_{+}(V/W)\). The continuity of \(c(u)\) on \(S(W^{\perp})\) is seen from formula (4). Finally, assume that \(\omega\) is closed. By Proposition 3.4 we have \(\omega=r^{0}\tau\) for some \(\tau\in\Omega^{k}(S^{n-1})\). Since \(0=d\omega=d\pi^{*}\tau=\pi^{*}d\tau\) on \(V\setminus\{0\}\), where \(\pi\colon V\setminus\{0\}\to S^{n-1}\) denotes the radial projection, we conclude that \(\tau\) is closed. Fix \(\psi\in C^{\infty}_{c}(W)\otimes\mathrm{or}(W)\). For every \(v\in V\setminus W\) put \(W_{v}=W\oplus\mathrm{span}(v)\).
For every \(\epsilon>0\), the image of \(\epsilon v+W\) under \(\pi\) is the open hemisphere of \(S(W_{v})\) containing \(\pi(v)\), with boundary \(S(W)\). Let \(S^{+}_{v}(W_{v})\) denote the corresponding closed hemisphere. Put \(\pi_{\epsilon,v}=\pi|_{\epsilon v+W}\). Let \(\psi_{\epsilon,v}\in C^{\infty}_{c}(S(W_{v}),\mathrm{or}(S(W_{v})))\) be such that \((\pi_{\epsilon,v})^{*}\psi_{\epsilon,v}=S^{*}_{-\epsilon v}\psi\). It then holds that

\[\langle e^{*}S^{*}_{\epsilon v}\omega,\psi\rangle=\int_{\epsilon v+W}\pi^{*}\tau\otimes\pi^{*}\psi_{\epsilon,v}=\int_{S^{+}_{v}(W_{v})}\tau\otimes\psi_{\epsilon,v}.\]

Observe that

\[\psi_{\epsilon,v}\to\psi(0)\cdot\mathbf{1}_{S^{+}_{v}(W_{v})}\]

pointwise as \(\epsilon\) goes to zero. Hence by the dominated convergence theorem we obtain

\[c(v)\psi(0)=\lim_{\epsilon\to 0}\int_{S^{+}_{v}(W_{v})}\tau\otimes\psi_{\epsilon,v}=\psi(0)\int_{S^{+}_{v}(W_{v})}\tau\otimes\sigma_{v}, \tag{5}\]

where \(\sigma_{v}(v)=1\). Since \(d\tau=0\), the latter integral remains constant if \(v\) follows a continuous path in \(S^{n-1}\setminus W\). By the same token, \(\int_{S^{+}_{v}(W_{v})}\tau+\int_{S^{+}_{-v}(W_{v})}\tau=\int_{S(W_{v})}\tau\). Since \(\sigma_{-v}=-\sigma_{v}\), it follows that \(c(-v)=c(v)\). This concludes the proof.

**Lemma 3.8**.: Assume \(\omega\in\Omega^{k}_{-\infty}(\mathbb{R}^{n})\) is a closed generalized form, smooth outside of the origin, \(0\)-homogeneous, and \(i_{E}\omega=0\). Let \(e:\mathbb{R}^{j}\hookrightarrow\mathbb{R}^{n}\) be a fixed monomorphism. Let \(\mu_{\epsilon}(x)dx\in\mathcal{M}^{\infty}(\mathbb{R}^{n})\) be an approximate identity, with \(\mu_{\epsilon}\) a Schwartz function. Then the weak limit

\[\lim_{\epsilon\to 0^{+}}e^{*}(\omega*\mu_{\epsilon})\]

exists and equals \(e^{*}\omega\).

Proof.: For \(k<j\), this follows from Lemma 3.6, while for \(k=j\) we use Proposition 3.7. Let us provide the details in the case \(k=j\), which is more involved. By Proposition 3.7, the limit \(e^{*}\omega=\lim_{\mathbb{R}^{n}\setminus\mathbb{R}^{j}\ni v\to 0}e^{*}S^{*}_{v}\omega\) exists. Let \(\psi\in\Omega_{c}(\mathbb{R}^{k})\) be a test form. It then holds that

\[f(v):=\langle e^{*}S^{*}_{v}\omega,\psi\rangle-\langle e^{*}\omega,\psi\rangle:\mathbb{R}^{n}\setminus\mathbb{R}^{j}\rightarrow\mathbb{R}\]

satisfies \(f(v)\to 0\) as \(|v|\to 0\). Furthermore, \(f\) is bounded. Indeed, write \(v=\lambda u+w\), where \(w\in W\) and \(u\in S(W^{\perp})\). Write

\[\langle e^{*}S^{*}_{v}\omega,\psi\rangle=\langle e^{*}S^{*}_{\lambda u}\omega,S^{*}_{-w}\psi\rangle=I_{1}+I_{2}\]

as in the proof of Proposition 3.7. As \(|S^{*}_{-w}\psi(y)-S^{*}_{-w}\psi(0)|\leq C_{0}|y|\) for some \(C_{0}\) that is independent of \(w\), while \(|\psi(-w)|\) is uniformly bounded, it follows that \(I_{1}\), \(I_{2}\), and therefore \(\langle e^{*}S^{*}_{v}\omega,\psi\rangle\) are bounded as \(\lambda\to 0\), uniformly in \(w\) and \(u\). The same estimates for \(I_{1},I_{2}\) in the proof of Proposition 3.7 show that \(I_{1},I_{2}\) remain bounded uniformly in \(u,w\) also for \(|\lambda|\geq\lambda_{0}>0\). It follows that \(\langle e^{*}S^{*}_{v}\omega,\psi\rangle\) is bounded as a function of \(v\in\mathbb{R}^{n}\setminus\mathbb{R}^{j}\), and therefore so is \(f(v)\). Choose a function \(\delta(\epsilon)>0\) such that \(\delta(\epsilon)\to 0\) as \(\epsilon\to 0\), and \(\int_{|v|\geq\delta(\epsilon)}\mu_{\epsilon}(v)\leq\delta(\epsilon)\).
It then holds that \[\langle e^{*}(\omega*\mu_{\epsilon}),\psi\rangle-\langle e^{*}\omega,\psi \rangle=\int\langle e^{*}S^{*}_{v}\omega-e^{*}\omega,\psi\rangle d\mu_{ \epsilon}(v)=\int_{\mathbb{R}^{n}}f(v)d\mu_{\epsilon}(v).\] Seeing that \(\left|\int_{|v|\geq\delta(\epsilon)}f(v)d\mu_{\epsilon}(v)\right|\leq\sup_{ \mathbb{R}^{n}}|f(v)|\delta(\epsilon)\), while \[\int_{|v|\leq\delta(\epsilon)}f(v)d\mu_{\epsilon}(v)\leq\sup_{|v|\leq\delta( \epsilon)}|f(v)|\to 0,\] it follows that \[\langle e^{*}(\omega*\mu_{\epsilon}),\psi\rangle\rightarrow\langle e^{*} \omega,\psi\rangle,\] as claimed. #### 3.1.2. Pushforward under linear epimorphisms In general, generalized forms do not pushforward under linear epimorphisms. The following definition is motivated by the description of the pullback of valuations in terms of differential forms given in [4]. **Definition 3.9**.: Let \(\omega\in\Omega^{k}_{-\infty}(V)\) be smooth outside of the origin, \(0\)-homogeneous, and satisfy \(i_{E}\omega=0\), and let \(p:V\to W\) be a linear epimorphism. We define the pushforward of \(\omega\) as \[p_{*}\omega:=r^{0}(\tilde{p}_{*}\tilde{\tau})\in\Omega^{k-s}_{-\infty}(W, \operatorname{or}(\ker p)),\] where \(\tilde{p}:\widetilde{\mathbb{P}_{+}(V)}\to\mathbb{P}_{+}(W)\) is the induced projection on the oriented blow-up of \(\mathbb{P}_{+}(V)\) along \(\mathbb{P}_{+}(\ker p)\), \(\tau\) satisfies \(\omega=r^{0}\tau\), and \(\tilde{\tau}=\pi^{*}\tau\) where \(\pi:\widetilde{\mathbb{P}_{+}(V)}\to\mathbb{P}_{+}(V)\) is the natural projection. Furthermore, for \(\omega\in\Omega_{-\infty}^{n}(V)\) equal to \(c\delta_{0}\otimes\sigma_{V}\), we define \(p_{*}\omega=c\delta_{0}\otimes\sigma_{W}\otimes\sigma_{p}\), where \(\sigma_{V}\in\mathrm{or}(V),\sigma_{W}\in\mathrm{or}(W),\sigma_{p}\in\mathrm{ or}(\ker p)\) are such that \(\sigma_{V}=\sigma_{W}\otimes\sigma_{p}\). We will need the following description of the pushforward. **Lemma 3.10**.: Let \(p:V\to W\) be a linear epimorphism, \(\dim V=n\), \(\dim W=n-s\). Assume that \(\omega\in\Omega_{-\infty}^{k}(V)\) is \(0\)-homogeneous, smooth outside of the origin, and \(i_{E}\omega=0\). Let \(\nu_{\epsilon}\in C^{\infty}(V)\) satisfy \(\nu_{\epsilon}\to 1\) smoothly on compact sets, with \(\nu_{\epsilon}\) a uniformly bounded family of Schwartz functions. Then \(p_{*}\omega\in\Omega_{-\infty}^{k-s}(W,\mathrm{or}(\ker p))\) is given by \[\langle\psi,p_{*}\omega\rangle=\lim_{\epsilon\to 0}\langle\nu_{\epsilon}\cdot p ^{*}\psi,\omega\rangle,\] for all compactly supported \(\psi\in\Omega^{n-k}(W,\mathrm{or}(V))\). Proof.: For \(k=n\) we have \(\omega=c\delta_{0}\), and both sides trivially equal \(c\psi(0)\). Assume now \(k<n\) and that \(V=\mathbb{R}^{n}\), \(W=\mathbb{R}^{n-s}\) and coordinates are chosen so that \(\ker p=\{x_{1}=\cdots=x_{n-s}=0\}\). Write \[\omega=\frac{1}{|x|^{k}}\sum_{|I|=k}g_{I}(\frac{x}{|x|})dx_{I}=:\sum\omega_{I}.\] We first show that \[\int_{\mathbb{R}^{n}}p^{*}\psi\wedge\omega\] is absolutely convergent, and so by the dominated convergence theorem, the limit on the right hand side of the claimed equality exists and equals this integral. When \(k>s\), this follows at once since \(\frac{1}{|x|^{k}}\) is integrable over \(\mathbb{R}^{s}\). Now assume \(k=s\). Denoting \(I=\{n-k+1,\ldots,n\}\), we may assume \(\omega=\omega_{I}\). 
Examining the coefficient of \(dx_{n-k+2}\wedge\cdots\wedge dx_{n}\) in the equality \(i_{E}\omega=0\), and writing \(I_{j}=\{j,n-k+2,\ldots,n\}\) for \(1\leq j\leq n-k\), we find that \[g_{I}(\frac{x}{|x|})x_{n-k+1}=-\sum_{j=1}^{n-k}g_{I_{j}}(\frac{x}{|x|})x_{j}.\] Denoting \(x=y+z\) where \(y\in\mathbb{R}^{n-k}\) and \(z\in\mathbb{R}^{k}\), we write this equality as \[g_{I}(\frac{x}{|x|})=\frac{f_{1,y}(z)}{x_{n-k+1}}=\frac{f_{1,y}(z)}{z_{1}}\] for some \(f_{1,y}(z)\in C^{\infty}(\mathbb{R}^{k}\setminus 0)\), which is bounded for each fixed \(y\), and smooth in \(\mathbb{R}^{k}\) when \(y\neq 0\). We similarly can find \(f_{i,y}(z)\in C^{\infty}(\mathbb{R}^{k}\setminus 0)\) for \(1\leq i\leq k\) such that \[g_{I}(\frac{x}{|x|})=\frac{f_{i,y}(z)}{z_{i}}.\] Note also that \(|f_{i,y}(z)|\leq c_{0}|y|\) for some fixed \(c_{0}>0\). We have \[p^{*}\psi\wedge\omega =\frac{1}{|x|^{k}}g_{I}(\frac{x}{|x|})p^{*}\psi\wedge dx_{n-k+1} \wedge\cdots\wedge dx_{n}\] \[=h(y,z)dy_{1}\wedge\cdots\wedge dy_{n-k}dz_{1}\wedge\cdots\wedge dz _{k}\] for some \(h(y,z)\) smooth outside \((0,0)\) and supported in \(|y|<R\) for some \(R>0\). Thus for all \((y,z)\in\mathbb{R}^{n-k}\oplus\mathbb{R}^{k}\) with \(y,z\neq 0\) it holds that \[|h(y,z)|\leq\frac{c_{0}|y|}{|y+z|^{k}\max(|z_{1}|,\ldots,|z_{k}|)}\leq\frac{c_{1 }|y|}{|y+z|^{k}|z|}.\] Consequently, \[\int_{\mathbb{R}^{k}}|h(y,z)|dz\leq c_{1}|y|\int_{\mathbb{R}^{k}}\frac{dz}{|z ||y+z|^{k}}=c_{2}|y|\int_{0}^{\infty}\frac{r^{k-2}dr}{(r^{2}+|y|^{2})^{k/2}}=c_ {3}\] and so \(\int_{\mathbb{R}^{n}}|h(y,z)|dydz\leq\int_{|y|\leq R}c_{3}dy\) is finite. Thus in all cases it remains to verify that \[\langle\psi,p_{*}\omega\rangle=\langle p^{*}\psi,\omega\rangle.\] Let \(\omega=r^{0}\tau\). To avoid dealing with the blow-up, we may assume that \(\tau\) vanishes near \(\ker p\), and conclude using approximation. Denoting by \(\pi:V\setminus\{0\}\to\mathbb{P}_{+}(V)\) the radial projection, the left hand side is by definition \[\langle\psi,p_{*}\omega\rangle=\langle\psi,r^{0}p_{*}\tau\rangle=\langle\pi_ {*}\psi,p_{*}\tau\rangle=\langle p^{*}\pi_{*}\psi,\tau\rangle.\] On the other hand, since \(\tau\) vanishes near \(\ker p\), \[\langle p^{*}\psi,\omega\rangle=\langle\pi_{*}p^{*}\psi,\tau\rangle.\] It remains to observe that the top arrow in is a bundle map that restricts to a diffeomorphism on each fiber, and so \(\pi_{*}p^{*}\psi=p^{*}\pi_{*}\psi\). ## 4. The \(0\)-homogeneous current of a valuation A key property of smooth valuations is their description through a pair of currents \((C,T)\), obtained in [8], equations (60)-(62). Here we unify them into a single current in the translation-invariant case. This simplifies the description of some operations on valuations such as pullback and pushforward, and is very natural for describing the Alesker-Fourier transform on valuations. In the following we make frequent use of the Hodge star isomorphism \[*=*_{V}:\wedge^{k}V^{*}\xrightarrow{\sim}\wedge^{n-k}V\otimes\wedge^{n}V^{*} \simeq\wedge^{n-k}V\otimes\operatorname{Dens}(V)\otimes\operatorname{or}(V), \tag{6}\] given by \(\langle\eta,\zeta\rangle=\eta\wedge*\zeta,\quad\zeta\in\wedge^{k}V^{*},\eta \in\wedge^{k}V\). 
In view of later computations, we remark that the Hodge star isomorphism satisfies \[\zeta\wedge\theta=\langle*\zeta,\theta\rangle,\quad\zeta\in\wedge^{k}V^{*}, \theta\in\wedge^{n-k}V^{*}\] and \[(*_{V^{*}}\otimes\operatorname{id})\circ*_{V}=(-1)^{k(n-k)}\operatorname{id}.\] ### The \(0\)-homogeneous current of a valuation Recall that by definition a generalized valuation \(\phi\in\operatorname{Val}_{k}^{-\infty}(V)\) is a continuous linear functional on \(\operatorname{Val}_{n-k}^{\infty}(V)\otimes\operatorname{Dens}(V)^{*}\). Hence \(\phi\) defines for \(k>0\) a linear functional on \(\Omega^{k-1}(\mathbb{P}_{+}(V^{*}),\wedge^{n-k}V^{*}\otimes\operatorname{or}( V)\otimes\operatorname{Dens}(V)^{*})\), that is a generalized form in \(S\in\Omega_{-\infty}^{n-k}(\mathbb{P}_{+}(V^{*}),\wedge^{n-k}V\otimes \operatorname{Dens}(V))\). The Hodge star isomorphism (6) allows to consider it as a generalized form \[T=T(\phi)=(-1)^{k(n-k)}*_{V^{*}}\circ S\in\Omega_{-\infty}^{n-k}(\mathbb{P}_{+ }(V^{*}),\wedge^{k}V^{*}\otimes\operatorname{or}(V)).\] This is the current \(T\) of a valuation mentioned above, specialized to the translation-invariant case. _Remark 4.1_.: Note that our convention differs slightly from that of the standard one, see e.g. [8]. Namely, let \(\phi\in\operatorname{Val}_{k}^{\infty}(V)\), \(V=\mathbb{R}^{n}\), be given by \(\phi(K)=\int_{\operatorname{nc}(K)}\omega\) for \(\omega\in\Omega^{k,n-1-k}(V\times\mathbb{P}_{+}(V^{*}),\operatorname{or}(V))\). Then \[T(\phi)\in\Omega^{n-k}(\mathbb{P}_{+}(V^{*}),\wedge^{k}V^{*}\otimes \operatorname{or}(V))=\Omega^{k,n-k}(V\times\mathbb{P}_{+}(V^{*}), \operatorname{or}(V))\] is given by \[T(\phi)=(-1)^{n-k}a^{*}D\omega, \tag{7}\] where \(a\) is the antipodal map, and \(D\) the Rumin differential. The extra sign \((-1)^{n-k}=(-1)^{(n-k)^{2}}\) is due to the sign difference between the natural pairing \[\Omega^{n-k}(\mathbb{P}_{+}(V^{*}),\wedge^{k}V^{*}\otimes \operatorname{or}(V))\otimes\Omega^{k-1}(\mathbb{P}_{+}(V^{*}),\wedge^{n-k}V^ {*}\otimes\operatorname{or}(V)\otimes\operatorname{Dens}(V)^{*})\to\mathbb{C},\] \[(\alpha\otimes dx_{I})\otimes(\beta\otimes dx_{J})\mapsto\left( \int_{\mathbb{P}_{+}(V^{*})}\alpha\wedge\beta\right)\otimes(dx_{I}\wedge dx_{J }),\] and the single wedge product pairing \[\Omega^{k,n-k}(V\times\mathbb{P}_{+}(V^{*}),\operatorname{or}(V))\otimes \Omega^{n-k,k-1}(V\times\mathbb{P}_{+}(V^{*}),\operatorname{or}(V)\otimes \operatorname{Dens}(V)^{*})\to\mathbb{C}.\] For a generalized translation-invariant valuation \(\phi\in\operatorname{Val}_{k}^{-\infty}(V)\), we define its \(0\)_-homogeneous current_\(\tau(\phi)\in\Omega_{-\infty}^{n-k}(V^{*},\wedge^{k}V^{*}\otimes \operatorname{or}(V))\) as follows. If \(k>0\), then \(\tau(\phi)=r^{0}T\). If \(k=0\), then \(\phi=c\chi\) and we set \(\tau(\phi)=c\delta_{0}\), where \(\delta_{0}\) is the delta measure at the origin. **Proposition 4.2**.: A generalized form \(\tau\in\Omega_{-\infty}^{n-k}(V^{*},\wedge^{k}V^{*}\otimes\operatorname{or}( V))\) is \(\tau(\phi)\) for some \(\phi\in\operatorname{Val}_{k}^{-\infty}(V^{*})\) if and only if \(\tau\) satisfies 1. \(0\)-homogeneous. 2. \(i_{E}\tau=0\). 3. \(\tau\wedge E=0\). 4. \(d\tau=0\). Moreover, \(\phi\) is smooth if and only if \(\tau(\phi)\) is smooth on \(V^{*}\setminus\{0\}\). Proof.: Assume \(\tau=\tau(\phi)\). For \(k=0\) the verification is trivial, thus we assume \(k\geq 1\). Write \(\tau=r^{0}T\) as in the definition of \(\tau(\phi)\). By Lemma 3.4, \(\tau\) satisfies the first two properties. 
It holds on \(V^{*}\setminus\{0\}\) that \(d\pi^{*}T=\pi^{*}dT=0\). Therefore, the generalized form \(d\tau\) is supported at the origin. For a fixed functional \(\psi\in(\wedge^{k}V^{*}\otimes\operatorname{or}(V))^{*}\), \(\phi(d\tau)\) a \(0\)-homogeneous generalized \((n-k+1)\)-form, and so it must hold that \(n-k+1=n\), and \(\phi(d\tau)\) is a multiple of the delta top form at the origin, denoted \(\delta_{0}\). But \(\delta_{0}\) does not vanish on constant functions, while \(d\tau\) must vanish on constants. It follows that \(d\tau=0\). Finally, for a test form \(\psi\in\Omega^{k}(V^{*},\wedge^{n-k-1}V^{*})\) it holds that \[\langle\tau,\psi\wedge E\rangle=\langle T,\pi_{*}(\psi\wedge E)\rangle=0\] by the verticality of \(T\). Therefore, \(\tau\wedge E=0\). Conversely, let \(\tau\) satisfy all four properties. Assume first \(k=0\). Since \(\tau\wedge E=0\), \(\tau\) must be supported at the origin. Since it is \(0\)-homogeneous, \(\tau\) is a multiple of the delta measure, and so \(\tau=\tau(c\chi)\) for some constant \(c\). Assume now \(k\geq 1\). By Lemma 3.4, \(\tau=r^{0}\omega\) for a unique generalized form \(\omega\) on \(\mathbb{P}_{+}(V^{*})\). It follows immediately from \(\tau\wedge E=0\) that \(\omega\) is vertical. Also on \(V^{*}\setminus\{0\}\) we have \(\pi^{*}d\omega=d\pi^{*}\omega=d\tau=0\), and therefore \(d\omega=0\). The existence of \(\phi\) such that \(\tau=\tau(\phi)\) now follows from [8]. **Definition 4.3.** A form satisfying the properties listed in Proposition 4.2 will be called valuation-type. If it is smooth outside the origin, we shall call it smooth valuation-type. The following special case will be relevant for us later. **Lemma 4.4.** Let \(U\) be a \(k\)-dimensional linear subspace of \(V=\mathbb{R}^{n}\) and let \(P\colon V\to V/U\) denote the canonical projection. Let \(\operatorname{vol}_{V/U}\) be a density on \(V/U\). Then \(\operatorname{vol}_{V/U}\) defines a continuous valuation \[\phi(K)=\operatorname{vol}_{V/U}(PK),\quad K\subset V.\] It then holds that \[\tau(\phi)=P^{*}\operatorname{vol}_{V/U}\otimes[[U^{\perp}]].\] We remark that \([[U^{\perp}]]\in\Omega^{k}_{-\infty}(V^{*},\operatorname{or}(U^{\perp})\otimes \operatorname{or}(V))=\Omega^{k}_{-\infty}(V^{*},\operatorname{or}(U))\), while \(P^{*}\operatorname{vol}_{V/U}\in\wedge^{n-k}(V/U)^{*}\otimes\operatorname{or} (V/U)\subset\wedge^{n-k}V^{*}\otimes\operatorname{or}(V)\otimes \operatorname{or}(U)^{*}\), so that \(P^{*}\operatorname{vol}_{V/U}\otimes[[U^{\perp}]]\) is well-defined. Proof.: Let \(V=\mathbb{R}^{n}\) with the standard Euclidean structure and orientation, and suppose that \(U=\{x_{k+1}=\ldots=x_{n}=0\}\) and \(\phi(K)=\operatorname{vol}_{U^{\perp}}(PK)\), where \(P\) is the orthogonal projection onto \(U^{\perp}\). Let \(L\subset U\) be a convex body of unit volume. The valuation \(\phi\in\operatorname{Val}_{n-k}(V)\) is continuous, but not smooth. Its action on \(\psi\in\operatorname{Val}_{k}^{\infty}(V)\) is \(\langle\phi,\psi\rangle=\psi(L)\). If \(\psi=\int_{\operatorname{nc}(K)}\omega\), then by eq. (7) we have \[\langle\phi,\psi\rangle=(-1)^{k}\int_{L\times S(U^{\perp})}\omega\] and hence \[T(\phi)=(-1)^{k+k(n-k)}dx_{k+1}\wedge\cdots\wedge dx_{n}\otimes[[S(U^{\perp})]].\] We conclude that \[\tau(\phi)=r^{0}(T(\phi))=(-1)^{k(n-k)}dx_{k+1}\wedge\cdots\wedge dx_{n}\otimes [[U^{\perp}]]. 
\tag{8}\] ### Operations on valuations Recall for the following that for any generalized form \(\omega\in\Omega_{-\infty}(V)\), \(i_{*}\omega\) and \(p^{*}\omega\) are well-defined generalized forms, where \(i:V\to W\) denotes any monomorphism, and \(p:W\to V\) any epimorphism. Here we describe some operations on valuations in terms of the \(0\)-homogeneous current representation. Given a linear map \(f:V_{1}\to V_{2}\), one can define the pullback \(f^{*}:\operatorname{Val}(V_{2})\to\operatorname{Val}(V_{1})\), and the pushforward \(f_{*}:\operatorname{Val}(V_{1})\otimes\operatorname{Dens}(V_{1}^{*})\to \operatorname{Val}(V_{2})\otimes\operatorname{Dens}(V_{2}^{*})\), see [5]. Recall that if \(f\) is a monomorphism, then \(f^{*}\) maps smooth valuations to smooth valuations, while if \(f\) is an epimorphism, then \(f^{*}\) extends by continuity to a linear map \[f^{*}\colon\operatorname{Val}_{k}^{-\infty}(W)\to\operatorname{Val}_{k}^{- \infty}(V).\] Let \(n_{V}=\dim V\) and \(n_{W}=\dim W\) in the following. **Proposition 4.5**.: Let \(f\colon V\to W\) be a linear map. The pullback \(f^{*}\colon\operatorname{Val}_{k}^{\infty}(W)\to\operatorname{Val}_{k}^{- \infty}(V)\) is then given by \[\tau(f^{*}u)=f^{\vee}\circ(f^{\vee})_{*}\tau(u),\] where \((f^{\vee})_{*}:\Omega_{-\infty}^{n_{W}-k}(W^{*},\wedge^{k}W^{*}\otimes \operatorname{or}(W))\to\Omega_{-\infty}^{n_{V}-k}(V^{*},\wedge^{k}W^{*} \otimes\operatorname{or}(V))\) is the pushforward by \(f^{\vee}\colon W^{*}\to V^{*}\), and \(f^{\vee}:\wedge^{k}W^{*}\to\wedge^{k}V^{*}\) is applied to the value of the form. Proof.: Assume first \(k=0\). We then have \(\tau(\chi)=\delta_{0}=[[\{0\}]]\). If \(f\) is either an epimorphism or a monomorphism, then \[(f^{\vee})_{*}(\delta_{0})=\delta_{0}.\] Indeed, in the first case the pushforward is the usual pushforward under proper immersions. In the second case, our Definition 3.9 applies. Since \(f^{*}\chi=\chi\), the conclusion for \(k=0\) follows. All other cases follow immediately from [4], adapted to the translation-invariant case and bearing in mind the additional sign of eq. (7) imposed by our convention for the current of a valuation. Recall that if \(f\) is an epimorphism, then \(f_{*}\) maps smooth valuations to smooth valuations, while if \(f\) is a monomorphism, then \(f_{*}\) extends by continuity to a linear map \[f_{*}\colon\operatorname{Val}_{k}^{-\infty}(V)\otimes\operatorname{Dens}(V^{ *})\to\operatorname{Val}_{k+n_{W}-n_{V}}^{-\infty}(W)\otimes\operatorname{Dens }(W^{*}).\] **Proposition 4.6**.: Let \(f\colon V\to W\) be a linear map. For \(0\leq k\leq n_{V}\) and \(\phi\in\operatorname{Val}_{k}^{-\infty}(V)\otimes\operatorname{Dens}(V^{*})\), we have \[\tau(\phi)\in\Omega_{-\infty}^{n_{V}-k}(V^{*},\wedge^{k}V^{*}\otimes\wedge^{n_ {V}}V)\] and so \[\widetilde{\tau}(\phi):=*_{V}\circ\tau(\phi)\in\Omega_{-\infty}^{n_{V}-k}(V^{* },\wedge^{n_{V}-k}V).\] The pushforward \[f_{*}\colon\operatorname{Val}_{k}^{\infty}(V)\otimes\operatorname{Dens}(V^{* })\to\operatorname{Val}_{k+n_{W}-n_{V}}^{-\infty}(W)\otimes\operatorname{Dens }(W^{*})\] is then given by \[\widetilde{\tau}(f_{*}\phi)=f\circ(f^{\vee})^{*}\widetilde{\tau}(\phi)\in\Omega _{-\infty}^{n_{V}-k}(W^{*},\wedge^{n_{V}-k}W) \tag{9}\] if \(k+n_{W}>n_{V}\). If \(k+n_{W}=n_{V}\), then \[\widetilde{\tau}(f_{*}\phi)=f\circ(f^{\vee})^{*}_{v}\widetilde{\tau}(\phi)\in \Omega_{-\infty}^{n_{W}}(W^{*},\wedge^{n_{W}}W), \tag{10}\] where the choice of \(v\in V^{*}\setminus f^{\vee}(W^{*})\) can be arbitrary. 
Proof.: Suppose \(k>0\) and \(k+n_{W}-n_{V}>0\). If \(f\) is an epimorphism, then the statement follows directly from [43, Lemma 7.4]. If \(f\) is a monomorphism, then using the fact that \(f_{*}\) is defined as the dual of \(f^{*}\), the claim follows from a short computation. Let us verify the statement for \(k=0\), that is for \(f_{*}(\chi)\). Since every linear map factors as \(V\to f(V)\to W\), we have that \(f_{*}\chi=0\) if \(f\) is not injective. In this case also right-hand side of (9) vanishes. Hence we may assume that \(f\) is a monomorphism. Fix orientations and Lebesgue measures on \(V\) and \(W\) for simplicity. This also fixes orientations and Lebesgue measures on \(f(V)\) and \(W/f(V)\) and elements in the top exterior powers of \(V\) and \(f(V)\) that we denote by \(\sigma_{V}\) and \(\sigma_{f(V)}\). One has \(f\otimes(f^{\vee})^{*}(\sigma_{V}\otimes[[0]])=f(\sigma_{V})\otimes[[\ker f^{ \vee}]]\), while \[f_{*}\chi(K)=\int_{W/f(V)}\chi((f(V)+w)\cap K)dw=\operatorname{vol}(P_{W/f(V)}K)\] and so by Lemma 4.4 one has \(\widetilde{\tau}(f_{*}\chi)=\sigma_{f(V)}\otimes[[f(V)^{+}]]=f(\sigma_{V}) \otimes[[\ker f^{\vee}]]\), as required. It remains to verify the statement for \(k=n_{V}-n_{W}\). Since every linear map factors as \(V\to f(V)\to W\), the pushforward \(f_{*}\phi\) vanishes if \(f\) is not surjective. In this case also right-hand side of (10) vanishes. We assume therefore that \(f\) is an epimorphism. For convenience fix a Euclidean structure on \(V\), and orientations on \(V\) and \(W\), such that \(v\) is orthogonal to \(f^{\vee}(W^{*})\). Assume \(\widetilde{\tau}=\widetilde{\tau}(\phi)=r^{0}\widetilde{\beta}\) with \(\widetilde{\beta}\in\Omega^{n_{W}}(S(V^{*}),\wedge^{n_{W}}V)\). Note that \(\widetilde{\beta}=d\widetilde{\omega}\) for some \(\widetilde{\omega}\in\Omega^{n_{W}-1}(S(V^{*}),\wedge^{n_{W}}V)\). Then \(f_{*}\phi=\chi\otimes\sigma\), where \(\sigma=f(\int_{S(W^{*})}\widetilde{\omega})\in\wedge^{n_{W}}W\). Hence \[\widetilde{\tau}(f_{*}\phi)=f(\int_{S(f^{\vee}(W^{*}))}\widetilde{\omega}) \delta_{0},\] and comparing with (5) completes the proof. **Proposition 4.7**.: Let \(\phi\in\operatorname{Val}_{k}^{\infty}(V)\) and \(\psi\in\operatorname{Val}_{l}^{\infty}(W)\). Then \(\tau(\phi\boxtimes\psi)=(-1)^{(n-k)l}\tau(\phi)\boxtimes\tau(\psi)\). Here the factor \((-1)^{(n-k)l}\) accounts for the difference between viewing forms on \(\mathbb{P}_{+}(V^{*})\) with values in \(\wedge^{\bullet}V^{*}\) as translation-invariant forms on the cosphere bundle \(V\times\mathbb{P}_{+}(V^{*})\), which is the setting of [8], when taking wedge products. Proof.: We may assume that \(\phi\in\operatorname{Val}_{k}^{\infty}(V)\), \(\psi\in\operatorname{Val}_{l}^{\infty}(W)\). Fix Euclidean inner products. Assume first \(k,l>0\), so that \(\tau(\phi)\), \(\tau(\psi)\) are locally integrable. Let us recall the construction of the exterior product of \(\phi\in\operatorname{Val}^{\infty}(\mathbb{R}^{m})\) and \(\psi\in\operatorname{Val}^{\infty}(\mathbb{R}^{n})\) in these terms. 
On the blow-up space \[\Sigma:=S^{m-1}\times[0,\pi/2]\times S^{n-1}\] one has the blow-down map \[F:\Sigma\to S^{m+n-1},\quad F(u,\theta,v)=(\cos(\theta)u,\sin(\theta)v)\in S^{ m+n-1}\] and the projection \[\Phi:\Sigma\to S^{m-1}\times S^{n-1},\quad\Phi(u,\theta,v)=(u,v).\] The defining generalized \(n\)-form of \(\phi\boxtimes\psi\) is \[T(\phi\boxtimes\psi)=F_{*}\Phi^{*}(T(\phi)\boxtimes T(\psi))\in\Omega_{-\infty}( \mathbb{P}_{+}(V^{*}\times W^{*}),\wedge^{\bullet}(V^{*}\times W^{*})).\] We use \(\pi\) to denote the radial projection in all spaces. Define \[F_{0}\colon(0,\infty)\times\Sigma\to\mathbb{R}^{m+n},\quad F_{0}(t,u,\theta,v)= (t\cos(\theta)u,t\sin(\theta)v)\] so that \(\pi\circ F_{0}=F\). Define also \(\beta:(0,\infty)\times\Sigma\to\Sigma\), \(\beta(r,\sigma)=\sigma\). \[\Phi_{0}:(0,\infty)\times\Sigma\to S^{m-1}\times S^{n-1},\quad\Phi_{0}=\Phi \circ\beta.\] Finally, define the map \[R\colon\mathbb{R}^{m}\setminus\{0\}\times\mathbb{R}^{n}\setminus\{0\}\to(0, \infty)\times\Sigma\] so that \(F_{0}\circ R=\operatorname{id}\) and \(\Phi_{0}\circ R=\pi\times\pi\). Observe also that \(F\), \(R\), \(F_{0}\) are all diffeomorphisms outside of submanifolds of positive codimension. As both sides of the claimed equality \[r^{0}F_{*}\Phi^{*}(T(\phi)\boxtimes T(\psi))=(-1)^{(n-k)l}r^{0}T(\phi)\boxtimes r ^{0}T(\psi)\] are locally integrable, it suffices to show that \[\pi^{*}F_{*}\Phi^{*}(T(\phi)\boxtimes T(\psi))=(\pi\times\pi)^{*}(T(\phi) \boxtimes T(\psi))\] holds outside of \(\mathbb{R}^{m}\times\{0\}\cup\{0\}\times\mathbb{R}^{n}\). Noting that \((\pi\times\pi)^{*}=R^{*}\Phi_{0}^{*}\), it suffices to verify that \[R^{*}\Phi_{0}^{*}=\pi^{*}F_{*}\Phi^{*}\iff\Phi_{0}^{*}=F_{0}^{*}\pi^{*}F_{*} \Phi^{*}.\] Since \(\pi\circ F_{0}=F\circ\beta\), we find \(F_{0}^{*}\pi^{*}=\beta^{*}F^{*}\) and we should verify \[\Phi_{0}^{*}=\beta^{*}F^{*}F_{*}\Phi^{*},\] and since \(F^{*}F_{*}=\operatorname{id}\), we are left with the equality \[\Phi_{0}^{*}=\beta^{*}\Phi^{*},\] which holds since \(\Phi_{0}=\Phi\circ\beta\). Now assume \(k>0,l=0\), so that \(\psi=\chi\). This case then follows immediately from [4, equation 2.1.13]. The remaining case \(k=0,l=0\) amounts to the verification \(\tau(\chi\boxtimes\chi)=\tau(\chi)\boxtimes\tau(\chi)\), which follows from \(\delta_{0}\boxtimes\delta_{0}=\delta_{0}\). ### Symmetry of closed vertical forms Closed, vertical forms on the sphere bundle play a key role in valuation theory. In the translation invariant case, such forms exhibit a certain symmetry, as we now proceed to show. **Lemma 4.8**.: If \(\tau\in\Omega^{n-k}(S^{n-1},\wedge^{k}(\mathbb{R}^{n})^{*})\) is closed and vertical, then the form \(\overline{\tau}\in\Omega^{n-k}(S^{n-1},\wedge^{n-k}(\mathbb{R}^{n})^{*})\), corresponding to \(\tau\) under the Hodge star \(\wedge^{k}(\mathbb{R}^{n})^{*}\simeq\wedge^{n-k}(\mathbb{R}^{n})^{*}\), belongs to the subspace \(C^{\infty}(S^{n-1},\operatorname{Sym}^{2}(\wedge^{n-k}T^{*}S^{n-1}))\). Proof.: Let \((x_{1},\ldots,x_{n},\xi_{1},\ldots,\xi_{n})\) be the standard coordinates on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\supset\mathbb{R}^{n}\times S^{n-1}\). Let \(R=\sum\xi_{i}\frac{\partial}{\partial x_{i}}\) denote the Reeb vector field. Since \(\tau\) is vertical, one has \(i_{R}\overline{\tau}=0\). 
Hence, using that \(\tau\) is closed, one obtains \[0=d(i_{R}\overline{\tau})=\sum d\xi_{i}\wedge i_{\frac{\partial}{\partial x_{i }}\overline{\tau}}.\] Hence by [31, Proposition 2.2] (see also [17, Remark 5.8]) we deduce that \(\tau_{\xi}\in\operatorname{Sym}^{2}(\wedge^{n-k}T^{*}_{\xi}S^{n-1})\), concluding the proof. Let \(\tau\in\Omega^{n-k}_{-\infty}(V^{*},\wedge^{k}V^{*}\otimes\mathrm{or}(V))\). Using the identification \(\wedge^{k}V^{*}\simeq\wedge^{n-k}V\otimes\wedge^{n}V^{*}\), we define the generalized form \(\overline{\tau}\in\Omega^{n-k}_{-\infty}(V^{*},\wedge^{n-k}V\otimes\mathrm{ Dens}(V))\) corresponding to \(\tau\). **Proposition 4.9**.: Let \(\tau\in\Omega^{n-k}_{-\infty}(V^{*},\wedge^{k}V^{*}\otimes\mathrm{or}(V))\) be the \(0\)-homogeneous current of a valuation. Considered as an element of \(C^{-\infty}(V^{*},\wedge^{n-k}V\otimes\wedge^{n-k}V\otimes\mathrm{ Dens}(V))\), \(\overline{\tau}\) belongs to the subspace \(C^{-\infty}(V^{*},\mathrm{Sym}^{2}(\wedge^{n-k}V)\otimes\mathrm{ Dens}(V))\). Proof.: If \(k=0\), there is nothing to prove. Suppose therefore that \(k>0\). and use Lemma 3.4 to write \(\tau=r^{0}\tau^{\prime}\), where \(\tau^{\prime}\) is vertical and closed. Assume first that \(\tau^{\prime}\) is a smooth form. Fix a Euclidean inner product on \(V\). By Lemma 4.8 we know that \(\overline{\tau}^{\prime}\) is symmetric. It follows that \(\overline{\tau}=\pi^{*}\overline{\tau}^{\prime}\) is symmetric on \(V\setminus\{0\}\). Since \(\overline{\tau}\) is locally integrable, the symmetry of \(\overline{\tau}\) follows. In the general case, choose an approximate identity \(\rho_{j}\in C^{\infty}(\mathrm{SL}(V^{*}))\), and set \(\tau^{\prime}_{j}=\rho_{j}*\tau^{\prime}\). Then \(\tau^{\prime}_{j}\) is smooth, vertical and closed, and so \(\rho_{j}*\overline{\tau}=r^{0}(\rho_{j}*\overline{\tau}^{\prime})=r^{0} \overline{\tau^{\prime}_{j}}\) is symmetric. Taking \(j\to\infty\) proves the proposition. ## 5. Fourier transform of differential forms ### Fourier transform of smooth and generalized differential forms Let \(V\) and \(F\) be finite-dimensional vector spaces of over the reals, and let \(\Omega^{k}_{\mathcal{S}}(V,F)=\mathcal{S}(V,\wedge^{k}V^{*}\otimes F)\subset \Omega^{k}(V,F)\) denote the Schwartz space of differential \(k\)-forms on \(V\) with values in \(F\) and rapidly decreasing coefficients. **Definition 5.1**.: The Fourier transform \[\mathcal{F}:\Omega^{k}_{\mathcal{S}}(V,F)\to\Omega^{n-k}_{\mathcal{S}}(V^{*}, \mathrm{or}(V)\otimes F)\] is defined as follows. For \(\omega\in\Omega^{k}_{\mathcal{S}}(V,F)\), the Hodge star isomorphism (6) allows to consider \(\omega\) as a map \(\widetilde{\omega}:V\to\wedge^{n-k}V\otimes\mathrm{ Dens}(V)\otimes\mathrm{or}(V)\otimes F\). Hence we may define for \(\xi\in V^{*}\) \[\mathcal{F}(\omega)(\xi)=\int_{V}e^{2\pi\mathrm{i}(x,\xi)}\widetilde{\omega}(x )\in\wedge^{n-k}V\otimes\mathrm{or}(V)\otimes F.\] For \(\omega\in\Omega^{k}(V,F)\) and \(\eta\in\Omega^{n-k}_{c}(V,\mathrm{or}(V)\otimes F^{*})\), the isomorphism \[\wedge^{k}V^{*}\otimes\wedge^{n-k}V^{*}\otimes\mathrm{or}(V)\simeq\mathrm{ Dens}(V)\] allows to define the function \(V\ni x\mapsto\omega(x)\wedge\eta(x)\in\mathrm{ Dens}(V)\) and hence the pairing \[\langle\omega,\eta\rangle=\int_{V}\omega(x)\wedge\eta(x). \tag{11}\] Note that the inclusion of \(\Omega^{k}(V,F)\) into \(\Omega^{k}_{-\infty}(V,F)\) defined by this pairing coincides with the canonical inclusion of \(C^{\infty}(M,\mathcal{E})\) into \(C^{-\infty}(M,\mathcal{E})\). 
**Lemma 5.2**.: Let \(\omega\in\Omega^{k}_{\mathcal{S}}(V,F)\) and \(\eta\in\Omega^{k}_{\mathcal{S}}(V^{*},F^{*})\). Then \[\langle\mathcal{F}\omega,\eta\rangle=(-1)^{k(n-k)}\langle\omega,\mathcal{F} \eta\rangle.\] Proof.: This follows immediately from the fact that pointwise we have \[\widetilde{\omega}_{x}\wedge\eta_{\xi}=(-1)^{k(n-k)}\omega_{x}\wedge \widetilde{\eta}_{\xi}\in\mathrm{Dens}(V)\otimes\mathrm{Dens}(V^{*}).\] **Definition 5.3**.: The Fourier transform of tempered generalized forms on \(V\) with values in \(F\) \[\mathcal{F}\colon\Omega^{k}_{\mathcal{S}^{\prime}}(V,F)\to\Omega^{n-k}_{ \mathcal{S}^{\prime}}(V^{*},F\otimes\operatorname{or}(V))\] is defined by \[\langle\mathcal{F}T,\eta\rangle=(-1)^{k(n-k)}\langle T,\mathcal{F}\eta\rangle, \quad\forall\eta\in\Omega^{k}_{\mathcal{S}}(V^{*},F^{*}).\] In view of Lemma 5.2, this definition continuously extends the Fourier transform of smooth forms. Let us discuss functorial properties of the Fourier transform. **Lemma 5.4**.: The inversion formula \[(\mathcal{F}_{V^{*}}\times\operatorname{id})\circ\mathcal{F}_{V}(\omega)=(-1 )^{kn}(-\operatorname{id})^{*}\omega\] holds for all \(\omega\in\Omega^{k}_{\mathcal{S}^{\prime}}(V,F)\). Proof.: This follows immediately from the classical inversion formula. **Lemma 5.5**.: Let \(i\colon V\to W\) be a monomorphism, and \(p:W\to V\) an epimorphism. Write \(n=\dim V\), \(m=\dim W\). Then \[\mathcal{F}_{W}\circ i_{*}=(-1)^{(m-n)(n-k)}(i^{\vee})^{*}\circ\mathcal{F}_{V }\colon\Omega^{k}_{\mathcal{S}^{\prime}}(V,F)\to\Omega^{n-k}_{\mathcal{S}^{ \prime}}(W^{*},F\otimes\operatorname{or}(V)),\] and \[(p^{\vee})_{*}\circ\mathcal{F}_{V}=\mathcal{F}_{W}\circ p^{*}\colon\Omega^{k} _{\mathcal{S}^{\prime}}(V,F)\to\Omega^{m-k}_{\mathcal{S}^{\prime}}(W^{*},F \otimes\operatorname{or}(V)).\] Proof.: We may assume \(F=\mathbb{R}\). It suffices to show that \[i^{*}\mathcal{F}_{W^{*}}\varphi=(-1)^{(m-n)(n-k)}\mathcal{F}_{V^{*}}(i^{\vee} )_{*}\varphi \tag{12}\] for every Schwartz test function \(\varphi\in\Omega^{k+m-n}_{\mathcal{S}}(W^{*},\operatorname{or}(V)\otimes \operatorname{or}(W))\). Indeed, if this holds, then for every \(T\in\Omega^{k}_{\mathcal{S}^{\prime}}(V)\) one has \[\langle\mathcal{F}(i_{*}T),\varphi\rangle =(-1)^{(k+m-n)(n-k)}\langle i_{*}T,\mathcal{F}\varphi\rangle\] \[=(-1)^{(k+m-n)(n-k)+(m-n+k)(n-k)+k(n-k)}\langle T,i^{*}\mathcal{ F}\varphi\rangle\] \[=(-1)^{k(n-k)+(m-n)(n-k)}\langle T,\mathcal{F}((i^{\vee})_{*} \varphi)\rangle\] \[=(-1)^{(m-n)(n-k)}\langle\mathcal{F}T,(i^{\vee})_{*}\varphi\rangle\] \[=(-1)^{(m-n)(n-k)}\langle(i^{\vee})^{*}\mathcal{F}T,\varphi\rangle\] For the proof of (12) choose linear coordinates \((y_{1},\ldots,y_{m})\) on \(W\) such that \(i(V)=\{y_{n+1}=\cdots=y_{m}=0\}\) and let \(x_{j}=i^{*}y_{j}\) for \(j=1,\ldots,n\). Let \((\eta_{1},\ldots,\eta_{m})\) and \((\xi_{1},\ldots,\xi_{n})\) be dual coordinates on \(W^{*}\) and \(V^{*}\). For a multi-index \(I=(i_{1}<\cdots<i_{r})\) we write \(dy_{I}=dy_{i_{1}}\wedge\cdots\wedge dy_{i_{r}}\), and similarly for other coordinates. We may assume that \(\varphi=f(\eta)d\eta_{I}\wedge d\eta_{J}\otimes\sigma_{W}\otimes\sigma_{V}\) where \(f\) is a Schwartz function, \(I=(1,\ldots,k)\), \(J=(n+1,\ldots,m)\). Then, writing \(d\eta=d\eta_{1}\wedge\cdots\wedge d\eta_{m}\), we have \(*(d\eta_{I}\wedge d\eta_{J})=(-1)^{(m-n)(n-k)}dy_{I^{c}}\), where \(I^{c}=(k+1,\ldots,n)\). 
Thus \[i^{*}\circ\mathcal{F}_{W^{*}}\varphi=(-1)^{(m-n)(n-k)}\left(\int_{W^{*}}e^{2 \pi\mathbf{i}(\eta,i(x))}f(\eta)\ d\eta\otimes\sigma_{W}\right)dx_{I^{c}} \otimes\sigma_{V}.\] Before we treat \(\mathcal{F}_{V^{*}}(i^{\vee})_{*}\varphi\), recall that the defining property of fiber integration along an epimorphism \(p\colon W\to V\) is \(\int_{W}p^{*}\omega\wedge\psi=\int_{V}\omega\wedge p_{*}\psi\), where \(\omega\) and \(\psi\) are Schwartz test forms. Thus, if \(\psi\in\Omega^{k}_{\mathcal{S}}(W,\mathrm{or}(W))\), say \(\psi=f(y)dy_{I}\wedge dy_{J}\otimes\sigma_{V}\) where \(p(y)=(y_{1},\ldots,y_{n})\) and \(dy_{J}=dy_{n+1}\wedge\cdots\wedge dy_{m}\), then \[p_{*}\psi=\left(\int_{y\in p^{-1}(x)}f(y)\;dy_{J}\otimes\sigma_{\ker p}\right) dx_{I}\otimes\sigma_{V},\] where \(\sigma_{V}\otimes\sigma_{\ker p}=\sigma_{W}\). Therefore, writing \(i^{\vee}=q\) we obtain \[(i^{\vee})_{*}\varphi=\left(\int_{\eta\in q^{-1}(\xi)}f(\eta)d\eta_{J}\otimes \sigma_{\ker q}\right)d\xi_{I}.\] Hence, putting \(d\xi=d\xi_{1}\wedge\cdots\wedge d\xi_{n}\), we find \[\mathcal{F}\circ(i^{\vee})_{*}\varphi =\left(\int_{V^{*}}\left(\int_{\eta\in q^{-1}(\xi)}f(\eta)d\eta_{ J}\otimes\sigma_{\ker q}\right)e^{2\pi\mathbf{i}(\xi,x)}d\xi\otimes \sigma_{V}\right)dx_{I^{e}}\otimes\sigma_{V}\] \[=\left(\int_{W^{*}}e^{2\pi\mathbf{i}(\eta,i(x))}f(\eta)\;d\eta \otimes\sigma_{W}\right)dx_{I^{e}}\otimes\sigma_{V},\] as claimed. Combined with the inversion formula, we deduce from the first identity for \(S\in\Omega^{k}_{\mathcal{S}^{\prime}}(V^{*})\) that \[(-1)^{(m-k)m}(-\operatorname{id})^{*}i_{*}\mathcal{F}_{V^{*}}S =\mathcal{F}_{W^{*}}\mathcal{F}_{W}i_{*}\mathcal{F}_{V^{*}}S=(-1) ^{(m-n)k}\mathcal{F}_{W^{*}}(i^{\vee})^{*}\mathcal{F}_{V}\mathcal{F}_{V^{*}}S\] \[=(-1)^{kn+(m-n)k}\mathcal{F}_{W^{*}}(i^{\vee})^{*}(-\operatorname {id})^{*}S\] \[=(-1)^{mk}\mathcal{F}_{W^{*}}(i^{\vee})^{*}(-\operatorname{id})^ {*}S.\] Recall that \(i_{*}\mathcal{F}_{V^{*}}S\in\Omega_{\mathcal{S}^{\prime}}(W^{*},\mathrm{or}(W))\). Accounting for the action of \((-\operatorname{id})^{*}\) on \(\mathrm{or}(W)\), which is given by \((-1)^{m}\), while the action of \((-\operatorname{id})^{*}\) appearing in the inversion formula is only on the form and not on its values, we have \((-\operatorname{id})^{*}i_{*}\mathcal{F}_{V^{*}}S=(-1)^{m}i_{*}\mathcal{F}_{V^ {*}}(-\operatorname{id})^{*}S\). This concludes the proof of the second identity. **Corollary 5.6**.: If \(\omega\in\Omega^{k}_{\mathcal{S}^{\prime}}(V,F)\) is \(r\)-homogeneous, then \(\mathcal{F}\omega\in\Omega^{n-k}_{\mathcal{S}^{\prime}}(V^{*},F\otimes \mathrm{or}(V))\) is \((-r)\)-homogeneous. **Lemma 5.7**.: For a tempered generalized form \(\omega\in\Omega^{k}_{\mathcal{S}^{\prime}}(V)\) one has \[\mathcal{F}(d\omega)=(-1)^{k+1}2\pi\mathbf{i}\cdot i_{E}(\mathcal{F}\omega), \qquad d(\mathcal{F}\omega)=(-1)^{k+1}2\pi\mathbf{i}\cdot\mathcal{F}(i_{E} \omega).\] Proof.: This follows immediately from basic properties of the Fourier transform: \(\mathcal{F}(\partial_{j}f)=-2\pi\mathbf{i}\xi_{j}\mathcal{F}f\), \(\partial_{j}(\mathcal{F}f)=2\pi\mathbf{i}\mathcal{F}(x_{j}f)\). ### Fourier transform of valuation-type differential forms The goal of this section is to show that valuation-type differential forms are closed under the Fourier transform. First, we consider closed forms. **Proposition 5.8**.: Let \(i:W\hookrightarrow V\) be a monomorphism, and let \(p\colon V\to W\) be an epimorphism. Put \(n=\dim V\), \(m=\dim W\). 
Let \(\omega\in\Omega^{k}_{-\infty}(V)\) be closed, \(0\)-homogeneous, smooth outside the origin and satisfying \(i_{E}\omega=0\). Then \(\mathcal{F}\omega\) has the same properties, and the identities \[\mathcal{F}_{W}i^{*}\omega=(i^{\vee})_{*}\mathcal{F}_{V}\omega\quad\text{and} \quad(p^{\vee})^{*}\mathcal{F}_{V}\omega=(-1)^{(n-m)(n-k)}\mathcal{F}_{W}p_{*}\omega\] hold. Proof.: Observe first that \(\mathcal{F}_{V}\omega\) is closed, \(0\)-homogeneous, smooth outside the origin, and satisfies \(i_{E}\mathcal{F}_{V}\omega=0\). Indeed, it is \(0\)-homogeneous by Corollary 5.6, and it is smooth outside of the origin by [44, Theorem 3.2.4]. We have \(d\mathcal{F}_{V}\omega=0\) and \(i_{E}\mathcal{F}_{V}\omega=0\) by Lemma 5.7. In particular, both sides of the asserted equalities are well-defined. Fix a Euclidean structure on \(V\). For a compactly supported form \(\eta\in\Omega^{k}(W)\) we must show \[\langle\mathcal{F}_{W}i^{*}\omega,\eta\rangle=\langle(i^{\vee})_{*}\mathcal{ F}_{V}\omega,\eta\rangle.\] Choose an approximate identity \(\mu_{\epsilon}(x)dx\in\mathcal{M}^{\infty}(\mathbb{R}^{n})\) with \(\mu_{\epsilon}\) Schwartz. Using Lemma 5.5 we find \[\langle\mathcal{F}i^{*}\omega,\eta\rangle =(-1)^{k(m-k)}\langle i^{*}\omega,\mathcal{F}\eta\rangle=(-1)^{k( m-k)}\lim_{\epsilon\to 0}\langle i^{*}(\omega*\mu_{\epsilon}),\mathcal{F}\eta\rangle\] \[=(-1)^{k(m-k)}\lim_{\epsilon\to 0}\langle\omega*\mu_{\epsilon},i_{ *}\mathcal{F}\eta\rangle=(-1)^{k(m-k)}\lim_{\epsilon\to 0}\langle\omega, \mathcal{F}(i^{\vee})^{*}\eta*\mu_{\epsilon}\rangle\] \[=(-1)^{k(m-k)}\lim_{\epsilon\to 0}\langle\omega,\mathcal{F}((i^{ \vee})^{*}\eta\cdot\mathcal{F}^{-1}\mu_{\epsilon})\rangle\] \[=(-1)^{k(m-k)+k(n-k)}\lim_{\epsilon\to 0}\langle\mathcal{F} \omega,(i^{\vee})^{*}\eta\cdot\mathcal{F}^{-1}\mu_{\epsilon}\rangle\] \[=(-1)^{k(m-k)+k(n-k)+k(m-k)+k(n-k)}\langle(i^{\vee})_{*}( \mathcal{F}\omega),\eta\rangle,\] concluding the proof of the first identity. The second follows from first via the inversion formula. #### 5.2.1. Differential forms related to valuations We now focus our attention on the forms in \[\mathcal{S}^{\prime}_{n-k,k}(V^{*})=\Omega^{n-k}_{\mathcal{S}^{\prime}}(V^{*},\wedge^{k}V^{*}\otimes\operatorname{or}(V))\] which play an important role in valuation theory. Given \(\omega\in\mathcal{S}^{\prime}_{n-k,k}(V^{*})\), we can consider its Fourier transform \[\mathcal{F}\omega\in\Omega^{k}_{\mathcal{S}^{\prime}}(V,\wedge^{k}V^{*}).\] **Definition 5.9**.: Given \(\omega\in\mathcal{S}^{\prime}_{n-k,k}(V^{*})\), we define its Fourier transform \[\mathcal{F}^{0}\omega\in\mathcal{S}^{\prime}_{k,n-k}(V)\otimes\operatorname{ Dens}(V),\] as the Fourier transform \(\mathcal{F}\omega\) of Definition 5.1, combined with the Hodge star \(*_{V^{*}}\colon\wedge^{k}V\to\wedge^{n-k}V^{*}\otimes\operatorname{ Dens}(V^{*})\otimes\operatorname{or}(V)\) according to our convention (2). The inversion formula for \(\mathcal{F}^{0}\) assumes the following form. **Lemma 5.10**.: It holds for all \(\omega\in\mathcal{S}^{\prime}_{n-k,k}(V^{*})\) that \[(\mathcal{F}^{0}_{V}\times\operatorname{id})\circ\mathcal{F}^{0}_{V^{*}}( \omega)=(-\operatorname{id})^{*}\omega.\] Proof.: It follows from Lemma 5.4 that \((\mathcal{F}_{V}\times\operatorname{id})\circ\mathcal{F}_{V^{*}}(\omega)=(-1) ^{n(n-k)}(-\operatorname{id})^{*}\), while the composition of the Hodge star with itself contributes a factor of \((-1)^{k(n-k)}\). 
It remains to note that \((-\operatorname{id})^{*}\) acts also on the space \(\wedge^{k}V^{*}\otimes\operatorname{or}(V)\) by the factor \((-1)^{k+n}\), and \((-1)^{n(n-k)}(-1)^{k(n-k)}(-1)^{k+n}=1\). Furthermore, \(\mathcal{F}^{0}\) is compatible with exterior products. **Lemma 5.11**.: If \(V,W\) are vector spaces of dimensions \(n,m\), then \[\mathcal{F}^{0}(\omega\boxtimes\zeta)=(-1)^{km+ln}\mathcal{F}^{0}\omega\boxtimes \mathcal{F}^{0}\zeta\] holds for all \(\omega\in\mathcal{S}^{\prime}_{n-k,k}(V^{*})\) and \(\zeta\in\mathcal{S}^{\prime}_{m-l,l}(W^{*})\) Proof.: First observe that if \(F_{1},F_{2}\) are vector spaces and \[\mathcal{F}_{V}\colon\Omega^{n-k}_{\mathcal{S}^{\prime}}(V^{*},F_{1})\to\Omega ^{k}_{\mathcal{S}^{\prime}}(V,or(V)\otimes F_{1})\] \[\mathcal{F}_{W}\colon\Omega^{m-l}_{\mathcal{S}^{\prime}}(W^{*},F_{2})\to\Omega ^{l}_{\mathcal{S}^{\prime}}(W,or(W)\otimes F_{2})\] \[\mathcal{F}_{V\times W}\colon\Omega^{n-k,m-l}_{\mathcal{S}^{\prime}}(V^{*} \times W^{*},F_{1}\otimes F_{2})\to\Omega^{k,l}_{\mathcal{S}^{\prime}}(V \times W,or(V\times W)\otimes F_{1}\otimes F_{2}),\] denote the corresponding Fourier transforms, then \[\mathcal{F}_{V\times W}=(-1)^{k(m-l)}\mathcal{F}_{V}\boxtimes\mathcal{F}_{W}.\] Let \(I_{V}\), \(I_{W}\), \(I_{V\times W}\) be the respective Hodge star isomorphisms as in eq. (6). Then \[I_{V\times W}=(-1)^{l(n-k)}I_{V}\boxtimes I_{W}.\] This completes the proof. #### 5.2.2. Existence of the Fourier transform on valuations We are now ready to prove the main result of this section. **Proposition 5.12**.: Assume that \(\omega\in\Omega^{n-k}_{-\infty}(V^{*},\wedge^{k}V^{*}\otimes\mathrm{or}(V))\) is valuation-type, i.e. it has the following properties 1. \(0\)-homogeneous. 2. \(i_{E}\omega=0\). 3. vertical: \(\omega\wedge E=0\). 4. closed: \(d\omega=0\). Then \(\mathcal{F}^{0}\omega\in\Omega^{k}_{-\infty}(V,\wedge^{n-k}V\otimes\mathrm{or }(V^{*})\otimes\mathrm{Dens}(V))\) is valuation-type. Furthermore, if \(\omega\) is smooth outside of the origin, then so is \(\mathcal{F}^{0}\omega\). Proof.: The case of \(k=0\) is straightforward. Since \(\omega\wedge E=0\), it follows that \(\omega\) is supported at the origin. As it is \(0\)-homogeneous, it must be a multiple of the delta measure at the origin. Then \(\mathcal{F}^{0}\omega\) is constant on \(V\), and so trivially valuation-type. Assume \(k\geq 1\). Using the identification \(\wedge^{k}V^{*}=\wedge^{n-k}V\otimes\wedge^{n}V^{*}\), we define the generalized form \(\overline{\omega}\in\Omega^{n-k}_{-\infty}(V^{*},\wedge^{n-k}V\otimes\mathrm{ Dens}(V))\) corresponding to \(\omega\). It holds that \(d\overline{\omega}=0\). Now \[\eta:=\mathcal{F}^{0}\omega=\mathcal{F}\overline{\omega}\in\Omega^{k}_{-\infty }(V,\wedge^{n-k}V\otimes\mathrm{Dens}(V)\otimes\mathrm{or}(V^{*}))\] is \(0\)-homogeneous by Corollary 5.6. The identification \(\wedge^{k}V^{*}=\wedge^{n-k}V\otimes\mathrm{Dens}(V)\otimes\mathrm{or}(V^{*})\) gives a corresponding \(0\)-homogeneous generalized form \(\overline{\eta}\in\Omega^{k}_{-\infty}(V,\wedge^{k}V^{*})\). We have by Lemma 5.7 \[d\eta=d\mathcal{F}\overline{\omega}=(-1)^{n-k+1}2\pi\mathrm{i}\mathcal{F}(i_{E }\overline{\omega})=0,\] that is \(\eta\) is closed, and hence so is \(\overline{\eta}\). Similarly, \(i_{E}\overline{\eta}=i_{E}\mathcal{F}\omega=(-1)^{n-k}\frac{1}{2\pi\mathrm{i} }\mathcal{F}d\omega=0\). The symmetry of \(\overline{\omega}\) established in Proposition 4.9 immediately implies the symmetry of \(\overline{\eta}\). 
Combined with the identity \(i_{E}\overline{\eta}=0\), it follows that \(\eta\wedge E=0\). Finally, if \(\omega\) is smooth on \(V^{*}\setminus\{0\}\) then so is \(\overline{\omega}\), and by [44, Theorem 3.2.4], \(\eta\) is smooth on \(V\setminus\{0\}\). ## 6. Alesker-Fourier transform of translation-invariant valuations Proposition 5.12 allows to define the \(\mathcal{F}\)-transform of valuations as in Definition 1.3. More precisely: **Definition 6.1**.: Let \(V\) be an \(n\)-dimensional linear space, and \(0\leq k\leq n\). The \(\mathcal{F}\)-transform of a valuation \(\phi\in\operatorname{Val}_{k}^{-\infty}(V)\) is the unique valuation \(\mathcal{F}\phi\in\operatorname{Val}_{n-k}^{-\infty}(V^{*})\) such that \(\tau(\mathcal{F}\phi)=\mathcal{F}^{0}\tau(\phi)\). It follows from Proposition 5.12 that \(\mathcal{F}\phi\) is a smooth valuation whenever \(\phi\) is a smooth valuation. We will prove below that the \(\mathcal{F}\)-transform of valuations thus defined coincides with the Fourier transform constructed by Alesker in [5]. **Lemma 6.2**.: For \(i=1,2\) consider the pairings \[P_{i}\colon\operatorname{Val}^{\infty}(V)\otimes\operatorname{Dens}(V^{*}) \otimes\operatorname{Dens}(V)\times\operatorname{Val}^{\infty}(V)\otimes \operatorname{Dens}(V^{*})\to\mathbb{C}\] defined by \[P_{1}(\phi\otimes\alpha\otimes v,\psi\otimes\beta)=\langle\phi\otimes\alpha* \psi\otimes\beta(\{0\}),v\rangle\] and \[P_{2}(\phi\otimes\alpha\otimes v,\psi\otimes\beta)=\langle\alpha,v\rangle \langle((-\operatorname{id})^{*}\phi\cdot\psi)_{n},\beta\rangle.\] Then \(P_{1}=P_{2}\). Proof.: Choose a Euclidean inner product to identify \(V\) with \(\mathbb{R}^{n}\) and \(\operatorname{Dens}(V)\simeq\mathbb{C}\simeq\operatorname{Dens}(V^{*})\). Choose \(S\subset\mathbb{R}^{n}\) with volume 1. Then \[\phi*\psi(\{0\}) =a_{*}(\phi\boxtimes\psi)(\{0\})\] \[=\frac{1}{n!}\left.\frac{\partial^{n}}{\partial t^{n}}\right|_{t =0}\phi\boxtimes\psi((-\operatorname{id},\operatorname{id})\circ\Delta(tS))\] \[=(\Delta^{*}((-\operatorname{id})^{*}\phi\boxtimes\psi)_{n}\] \[=((-\operatorname{id})^{*}\phi\cdot\psi)_{n}.\] We are now ready to prove Theorem 1.4, which we restate below. **Theorem 6.3**.: The \(\mathcal{F}\)-transform of valuations has the following properties: 1. \(\mathcal{F}\) commutes with the natural action of \(\operatorname{GL}(V)\). 2. Inversion formula: \((\mathcal{F}_{V^{*}}\times\operatorname{id})\circ\mathcal{F}_{V}\phi=(- \operatorname{id})^{*}\phi\). 3. Let \(i:V\hookrightarrow W\) be an injective linear map, and \(p=i^{\vee}:W^{*}\to V^{*}\). Then for \(\phi\in\operatorname{Val}^{\infty}(W)\), \(\mathcal{F}_{V}(i^{*}\phi)=p_{*}(\mathcal{F}_{W}\phi)\). 4. \(\mathcal{F}(\phi\boxtimes\psi)=\mathcal{F}\phi\boxtimes\mathcal{F}\psi\) for \(\phi\in\operatorname{Val}^{\infty}(V)\), \(\psi\in\operatorname{Val}^{\infty}(W)\). 5. \(\mathcal{F}\) intertwines product and convolution: for \(\phi,\psi\in\operatorname{Val}^{\infty}(V)\), \(\mathcal{F}(\phi\cdot\psi)=\mathcal{F}\phi*\mathcal{F}\psi\). 6. Self-adjointness: \(\langle\mathcal{F}u,\theta\rangle=\langle u,\mathcal{F}\theta\rangle\) for \(u\in\operatorname{Val}^{-\infty}(V)\), \(\theta\in\operatorname{Val}^{\infty}(V^{*})\). 7. Let \(p:V\to W\) be a surjective linear map, and \(i=p^{\vee}:W^{*}\to V^{*}\). Then for \(\psi\in\operatorname{Val}^{-\infty}(W)\), \(\mathcal{F}_{V}(p^{*}\psi)=i_{*}(\mathcal{F}_{W}\psi)\). Proof.: 1. This is clear by construction. 2. Follows at once from Lemma 5.10. 3. Follows from Propositions 4.5, 4.6 and 5.8. 
The first two propositions assert that \(i^{*}\phi\) corresponds to the pushforward under \(p\) of the \(0\)-homogeneous current of the valuation, while \(p_{*}(\mathcal{F}_{W}\phi)\) is given by the restriction (using \(i\)) of the respective current. The last proposition asserts the equality of the two currents. 4. Follows from Proposition 4.7 and Lemma 5.11. 5. Observe that \(\Delta_{V}^{\vee}=a_{V^{*}}\), while \(\mathcal{F}(\phi\boxtimes\psi)=\mathcal{F}\phi\boxtimes\mathcal{F}\psi\) by (2). Let \(\mu_{j}\in\mathcal{M}_{c}^{\infty}(\operatorname{GL}(V\times V))\) be an approximate identity such that \(\mu_{j}\to\delta_{\operatorname{id}}\) as \(j\to\infty\). Denoting \(\zeta=\phi\boxtimes\psi\in\operatorname{Val}^{-\infty}(V\times V)\), \(\zeta_{j}=\zeta*\mu_{j}\in\operatorname{Val}^{\infty}(V\times V)\), we have \[\mathcal{F}(\phi\cdot\psi) =\mathcal{F}(\Delta^{*}(\zeta))=\lim_{j\to\infty}\mathcal{F}( \Delta^{*}(\zeta_{j}))=\lim_{j\to\infty}a_{*}\mathcal{F}(\zeta_{j})\] \[=\lim_{j\to\infty}a_{*}(\mathcal{F}(\zeta)*\mu_{j})=a_{*}( \mathcal{F}(\zeta))=\mathcal{F}(\phi)*\mathcal{F}(\psi),\] where the second equality follows from the sequential continuity of \(\Delta^{*}\) in the Hormander topology [24] of a subspace of valuations with restricted wavefront to which \(\zeta\) belongs, and in which \(\zeta_{j}\) converges, the third equality is property (3), the fourth equality follows from the equivariance of the \(\mathcal{F}\)-transform on valuations with respect to the general linear group, and the fifth equality follows from the sequential continuity of \(a_{*}\) in the Hormander topology, similarly to \(\Delta^{*}\). 6. By continuity, it is enough to prove the claim for \(u\in\operatorname{Val}^{\infty}(V)\subset\operatorname{Val}^{-\infty}(V)\). Let \(L_{0}\colon\operatorname{Val}^{\infty}(V^{*})\otimes\operatorname{Dens}(V) \otimes\operatorname{Dens}(V^{*})\to\mathbb{C}\) be defined by composition of evaluation at a point with the identification \(\operatorname{Dens}(V)\otimes\operatorname{Dens}(V^{*})\simeq\mathbb{C}\). Similarly, let \(L_{n}\colon\operatorname{Val}^{\infty}(V)\otimes\operatorname{Dens}(V^{*}) \to\mathbb{C}\) be the projection to the top-degree component, which is just \(\operatorname{Dens}(V)\), followed by the same identification. Using \(\mathcal{F}\chi=\operatorname{vol}\otimes\operatorname{vol}^{*}\), the homomorphism property (5), Lemma 6.2, and the inversion formula (2), we obtain \[\langle\mathcal{F}_{V}u,\theta\rangle =L_{n}(\mathcal{F}_{V}u\cdot\theta)\] \[=L_{0}((\mathcal{F}_{V^{*}}\times\operatorname{id})\circ \mathcal{F}_{V}u)*\mathcal{F}_{V^{*}}\theta)\] \[=P_{1}((\mathcal{F}_{V^{*}}\times\operatorname{id})\circ \mathcal{F}_{V}u,\mathcal{F}_{V^{*}}\theta)\] \[=P_{2}((\mathcal{F}_{V^{*}}\times\operatorname{id})\circ \mathcal{F}_{V}u,\mathcal{F}_{V^{*}}\theta)\] \[=L_{n}(u\cdot\mathcal{F}_{V^{*}}\theta)\] \[=\langle u,\mathcal{F}_{V^{*}}\theta\rangle,\] as claimed. 7. For smooth valuations, this follows from Propositions 4.5 and 4.6 and Lemma 5.5. The general case then follows by continuity. It remains to show that our \(\mathcal{F}\)- and Alesker's Fourier transforms coincide. Since this verification will be reduced to the two-dimensional case, we will use the following statement about valuations in the plane. 
**Lemma 6.4**.: Suppose that \(\phi\in\operatorname{Val}_{1}^{-\infty}(\mathbb{R}^{2})\) is a generalized valuation, and \(g(e^{i\theta})d\theta\) a generalized measure such that \[\phi(K)=\int_{0}^{2\pi}h_{K}(e^{\mathbf{i}\theta})g(e^{i\theta})d\theta\] for every smooth and strictly positively curved convex body \(K\) in \(\mathbb{R}^{2}\). Then \[\tau(\phi)=\frac{g(-y/|y|)}{|y|^{3}}(y_{1}dx_{1}+y_{2}dx_{2})\otimes(y_{1}dy_{ 2}-y_{2}dy_{1}),\] where \((x,y)\) are the coordinates on \(\mathbb{R}^{2}\times\mathbb{R}^{2}\supset\mathbb{R}^{2}\times S^{1}\). Proof.: If \(\psi=\int_{\operatorname{nc}(K)}\omega\), where \(\omega=f_{1}(y)dx_{1}+f_{2}(y)dx_{2}\in\Omega^{1}(S\mathbb{R}^{2})\), then a computation shows that \[\psi(K)=\int_{S^{1}}(-f_{1}(y)y_{2}+f_{2}(y)y_{1}\,)dS_{1}(K,y)=2V(K,-f_{1}y_{ 2}+f_{2}y_{1}).\] The definition of \(\phi\) implies \[\phi\cdot\psi=\int_{S^{1}}h(e^{\mathbf{i}\theta})g(-e^{i\theta})d\theta\] for valuations of the form \(\psi(K)=2V(K,h)\). Hence \[T(\phi)=(y_{1}dx_{1}+y_{2}dx_{2})\otimes g(-e^{i\theta})d\theta,\] from which the expression for \(\tau(\phi)\) follows. Proof of Theorem 1.5.: The Alesker construction of the Fourier transform strongly relies on the irreducibility theorem [1], and so we make use of it for the verification. Denote the Alesker-Fourier transform by \(\mathbb{F}\). Observe that property (4) of Theorem 1.4 holds for both definitions, while Alesker products of \(k\)-tuples of elements of \(\operatorname{Val}_{1}^{\infty}(V)\) span a dense subset of \(\operatorname{Val}_{k}^{\infty}(V)\) by the irreducibility theorem. It therefore suffices to verify that \(\mathcal{F}=\mathbb{F}\) on \(\operatorname{Val}_{1}^{\infty}(V)\). Furthermore, since \(\mathcal{F},\mathbb{F}:\operatorname{Val}_{1}^{\pm,\infty}(V)\to \operatorname{Val}_{n-1}^{\pm,\infty}(V^{*})\otimes\operatorname{Dens}(V)\) are both equivariant isomorphisms between irreducible \(\operatorname{GL}_{n}(\mathbb{R})\)-modules, there exist constants \(c_{\pm}(V)\) such that \(\mathcal{F}=c_{\pm}(V)\mathbb{F}\). Since by Theorem 1.4 and [5] both \(\mathcal{F}\) and \(\mathbb{F}\) intertwine restrictions and pushforwards by projections, the number \(c_{\pm}(V)=c_{\pm}(n)\) depends only on the dimension of \(V\). By the same token, fixing an inclusion \(i:\mathbb{R}^{k}\hookrightarrow\mathbb{R}^{n}\) shows that \(c_{\pm}\) is independent of the dimension. Finally, the inversion formulas of Theorem 1.4 and [5] imply that \(c_{+}^{2}=c_{-}^{2}=1\). To determine the value of \(c_{+}\), it suffices to note that on \(\mathbb{R}^{1}\), \(\mathcal{F}(\operatorname{vol}_{1})=\chi=\mathbb{F}(\operatorname{vol}_{1})\), that is \(c_{+}=1\). Now consider \(\mathbb{R}^{2}=\mathbb{C}\), and define \(\phi\in\operatorname{Val}_{1}(\mathbb{R}^{2})\) by \[\phi(K)=h_{K}(1)+h_{K}(e^{2\pi\mathbf{i}/3})+h_{K}(e^{4\pi\mathbf{i}/3}),\] which is a continuous, and therefore generalized, translation-invariant valuation. Then \(\psi(K):=\phi(K)-\phi(-K)\) lies in \(\operatorname{Val}_{1}^{-,\infty}(\mathbb{R}^{2})\). 
According to Lemma 6.4 and denoting \(\omega=\operatorname{sign}(y_{1})\boxtimes\delta_{0}(y_{2})dy_{2}\), the \(0\)-homogeneous current of \(\phi\) is \[\tau(\psi)=dx_{1}\otimes\omega+(e^{2\pi\mathbf{i}/3})^{*}(dx_{1}\otimes\omega) +(e^{-2\pi\mathbf{i}/3})^{*}(dx_{1}\otimes\omega).\] We have \[\mathcal{F}^{0}(dx_{1}\otimes\omega)=d\xi_{2}\otimes\mathcal{F}(\omega),\] where \[\mathcal{F}(\omega)(\eta_{1},\eta_{2})=\frac{1}{\pi\mathbf{i}\eta_{1}}\boxtimes 1 \,d\eta_{1}.\] As \(\mathcal{F}\) commutes with rotations, a straightforward computation now shows that \[\tau(\mathcal{F}\psi)=\frac{3}{\pi\mathbf{i}\eta_{1}(\eta_{1}^{2}-3\eta_{2}^{2} )}(\eta_{1}d\xi_{1}+\eta_{2}d\xi_{2})\otimes(\eta_{1}d\eta_{2}-\eta_{2}d\eta_{ 1}),\] where \((\xi,\eta)\) are the coordinates on \(\mathbb{R}^{2}\times\mathbb{R}^{2}\supset\mathbb{R}^{2}\times S^{1}\). Let us recall from [6] the description of the Alesker-Fourier transform in \(\mathbb{R}^{2}=\mathbb{C}\). For \[\zeta(K)=\int_{0}^{2\pi}h_{K}(e^{\mathbf{i}\theta})f(\theta)d\theta,\] write \(f=f_{+}+f_{-}\), \(f_{-}=f_{-}^{hol}+f_{-}^{anti}\), where \(f_{+},f_{-}\) are the even and odd parts of \(f\) on the circle, \(f_{-}^{hol}(\theta)=\sum_{n\equiv 1(2),n>0}\widehat{f}(n)e^{\mathbf{i}n\theta}\) and \(f_{-}^{anti}(\theta)=\sum_{n\equiv 1(2),n<0}\widehat{f}(n)e^{\mathbf{i}n\theta}\) the holomorphic and anti-holomorphic parts of \(f_{-}\), respectively. Then \[\mathbb{F}\zeta(K)=\int_{0}^{2\pi}h_{K}(e^{\mathbf{i}\theta})\tilde{f}(\theta )d\theta,\] where \[\tilde{f}(\theta)=f_{+}(\theta+\frac{\pi}{2})+f_{-}^{hol}(\theta+\frac{\pi}{2 })-f_{-}^{anti}(\theta+\frac{\pi}{2})=\sum\mathbf{i}^{|n|}\widehat{f}(n)e^{ \mathbf{i}n\theta}.\] Now \(\psi\) is given by \(\psi(K)=\int_{0}^{2\pi}h_{K}(e^{\mathbf{i}\theta})f_{\psi}(\theta)d\theta\) with \[f_{\psi}(\theta)=\frac{1}{2\pi}(\delta_{0}(\theta)+\delta_{2\pi/3}(\theta)+ \delta_{4\pi/3}(\theta)-\delta_{\pi}(\theta)-\delta_{\pi/3}(\theta)-\delta_{5 \pi/3}(\theta)).\] The Fourier series of \(\delta_{0}(\theta)\) is \[\delta_{0}(\theta)=\sum_{n=-\infty}^{\infty}e^{\mathbf{i}n\theta},\] implying that \[f_{\psi}(\theta)=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}(3e^{3\mathbf{i}n \theta}-3e^{3\mathbf{i}n(\theta+\pi)}).\] It follows that \(\mathbb{F}\psi\) is given by \[\mathbb{F}\psi(K)=\int_{0}^{2\pi}h_{K}(e^{\mathbf{i}\theta})g_{\psi}(\theta)d\theta\] with \[g_{\psi}(\theta)=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}(3\mathbf{i}^{3|n|}e^{ 3\mathbf{i}n\theta}-3\mathbf{i}^{3|n|}e^{3\mathbf{i}n(\theta+\pi)})=\frac{3}{ \pi\mathbf{i}\cos 3\theta}.\] In view of Lemma 6.4 it remains to notice that on the unit circle, \(\eta_{1}(\eta_{1}^{2}-3\eta_{2}^{2})=4\cos^{3}\theta-3\cos\theta=\cos 3\theta\). ### Example: even valuations The Fourier transform on even valuations, acting on the Klain section or the Crofton measure of a valuation, amounts to the action of the orthogonal complement on the Grassmannian. Let us recover this fact using our construction. We use the standard Euclidean structure and orientation on \(\mathbb{R}^{n}\). For \(E\in\operatorname{Gr}_{k}(\mathbb{R}^{n})\), denote \(\phi_{E}(K)=\operatorname{vol}(P_{E^{\perp}}K)\), where \(P_{E^{\perp}}\) is the orthogonal projection onto \(E^{\perp}\). Choose orthonormal coordinates on \(\mathbb{R}^{n}\) such that \(E=\{x_{k+1}=\cdots=x_{n}=0\}\). 
In view of Lemma 4.4, equation (8), we have \[\tau(\phi_{E})=dx_{1}\wedge\cdots\wedge dx_{k}\otimes\delta_{0}(y_{k+1})\cdots \delta_{0}(y_{n})dy_{k+1}\wedge\cdots\wedge dy_{n}.\] As \[\mathcal{F}(\delta_{0}(y_{k+1})\cdots\delta_{0}(y_{n})dy_{k+1}\wedge\cdots \wedge dy_{n})=(-1)^{k(n-k)}\delta_{0}(y_{1})\cdots\delta_{0}(y_{k})dy_{1} \wedge\cdots\wedge dy_{k},\] we find that \[\tau(\mathcal{F}(\phi_{E})) =\mathcal{F}^{0}\tau(\phi_{E})\] \[=(-1)^{k(n-k)}*(dx_{1}\wedge\cdots\wedge dx_{k})\otimes\delta_{0 }(y_{1})\cdots\delta_{0}(y_{k})dy_{1}\wedge\cdots\wedge dy_{k}\] \[=\tau(\phi_{E^{\perp}}).\] Now for any valuation given by the Crofton formula \(\phi(K)=\int_{\operatorname{Gr}_{k}(\mathbb{R}^{n})}\phi_{E}(K)dm(E)\), we deduce by linearity and continuity of the Fourier transform that \(\mathcal{F}\phi(K)=\int_{\operatorname{Gr}_{k}(\mathbb{R}^{n})}\phi_{E^{\perp }}(K)dm(E)\), that is \(\mathcal{F}\phi(K)=\int_{\operatorname{Gr}_{n-k}(\mathbb{R}^{n})}\phi_{F}(K) dm^{\prime}(F)\), where \(m^{\prime}=\perp_{*}(m)\). ### Example: intrinsic volumes For the purpose of illustration, let us compute the Fourier transform of the intrinsic volumes \(V_{k}\) directly from our definition. **Lemma 6.5**.: For \(k=1,\ldots,n-1\), \[\tau(V_{k})=di_{E}\lambda_{k},\] where \[\lambda_{k}=\frac{1}{k!(n-k)!\operatorname{vol}(S^{n-k-1})}\frac{1}{|y|^{n-k} }\sum_{\pi}\operatorname{sign}(\pi)dx_{\pi_{1}}\cdots dx_{\pi_{k}}dy_{\pi_{k+1 }}\cdots dy_{\pi_{n}}.\] Proof.: Integration over the normal cycle of \[\kappa_{k}=c_{k,n}\sum_{\pi}\operatorname{sign}(\pi)y_{\pi_{1}}dx_{\pi_{2}} \cdots dx_{\pi_{k+1}}dy_{\pi_{k+2}}\cdots dy_{\pi_{n}},\] where \(c_{k,n}=\frac{1}{k!(n-k)!\omega_{n-k}}\) and the sum is over the permutation group, yields the \(k\)th intrinsic volume. Note that \[d\kappa_{k}=c_{k,n}\sum_{\pi}\operatorname{sign}(\pi)dx_{\pi_{1}}\cdots dx_{ \pi_{k}}dy_{\pi_{k+1}}\cdots dy_{\pi_{n}}=(-1)^{n-k}a^{*}D\kappa_{k}=T(V_{k}).\] We have to pull this form back to \(\mathbb{R}^{n}\setminus\{0\}\). First observe that since \(j\circ\pi(y)=y/|y|\), where \(j\colon S^{n-1}\to\mathbb{R}^{n}\) denotes the inclusion and \(\pi\) is the radial projection, we have \[\pi^{*}j^{*}dy_{i}=d(y_{i}/|y|)=\frac{1}{|y|}dy_{i}-\frac{y_{i}}{|y|}\rho\] where \(\rho=|y|^{-2}\sum_{j=1}^{n}y_{j}dy_{j}=\frac{1}{|y|}d(|y|)\). Thus \[\pi^{*}j^{*}(dy_{\pi_{k+1}}\cdots dy_{\pi_{n}}) =\frac{1}{|y|^{n-k}}dy_{\pi_{k+1}}\cdots dy_{\pi_{n}}-\frac{1}{|y| ^{n-k}}\rho\wedge\sum_{j=1}^{k}y_{\pi_{j}}i_{\partial_{y_{\pi_{j}}}}dy_{\pi_{k +1}}\cdots dy_{\pi_{n}}\] \[=\frac{1}{|y|^{n-k}}dy_{\pi_{k+1}}\cdots dy_{\pi_{n}}-\frac{1}{|y |^{n-k}}\rho\wedge i_{E}dy_{\pi_{k+1}}\cdots dy_{\pi_{n}}\] \[=\frac{1}{|y|^{n-k}}i_{E}(\rho\wedge dy_{\pi_{k+1}}\cdots dy_{\pi _{n}})\] \[=-\frac{1}{n-k}i_{E}d(|y|^{-(n-k)}dy_{\pi_{k+1}}\cdots dy_{\pi_{n }}).\] Since \(i_{E}d\lambda_{k}=-di_{E}\lambda_{k}\) the claim follows. According to [39, Lemma 3.6], \[\mathcal{F}(|x|^{-n+k})(\xi)=\frac{\operatorname{vol}(S^{n-k-1})}{ \operatorname{vol}(S^{k-1})}\frac{1}{|\xi|^{k}}\] and hence \[\mathcal{F}^{0}\lambda_{k}=\lambda_{n-k}.\] Using Lemmas 6.5 and 5.7 we conclude \[\mathcal{F}^{0}\tau(V_{k})=\mathcal{F}^{0}(di_{E}\lambda_{k})=-i_{E}d \mathcal{F}^{0}\lambda_{k}=-i_{E}d\lambda_{n-k}=di_{E}\lambda_{n-k}=\tau(V_{n -k}),\] that is, \(\mathcal{F}V_{k}=V_{n-k}\).
2310.20594
Tracially Complete C*-Algebras
We introduce a new class of operator algebras -- tracially complete C*-algebras -- as a vehicle for transferring ideas and results between C*-algebras and their tracial von Neumann algebra completions. We obtain structure and classification results for amenable tracially complete C*-algebras satisfying an appropriate version of Murray and von Neumann's property gamma for II_1 factors. In a precise sense, these results fit between Connes' celebrated theorems for injective II_1 factors and the unital classification theorem for separable simple nuclear C*-algebras. The theory also underpins arguments for the known parts of the Toms-Winter conjecture.
José R. Carrión, Jorge Castillejos, Samuel Evington, James Gabe, Christopher Schafhauser, Aaron Tikuisis, Stuart White
2023-10-31T16:30:19Z
http://arxiv.org/abs/2310.20594v4
# Tracially complete \(C^{*}\)-algebras

###### Abstract.

We introduce a new class of operator algebras - tracially complete \(C^{*}\)-algebras - as a vehicle for transferring ideas and results between \(C^{*}\)-algebras and their tracial von Neumann algebra completions. We obtain structure and classification results for amenable tracially complete \(C^{*}\)-algebras satisfying an appropriate version of Murray and von Neumann's property \(\Gamma\) for \(\mathrm{II}_{1}\) factors. In a precise sense, these results fit between Connes' celebrated theorems for injective \(\mathrm{II}_{1}\) factors and the unital classification theorem for separable simple nuclear \(C^{*}\)-algebras. The theory also underpins arguments for the known parts of the Toms-Winter conjecture.

This work was supported by: long term structural funding - a Methusalem grant of the Flemish Government (Castillejos); UNAM-PAPIIT IA103124 (Castillejos); Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 427320536 - SFB 1442 (Evington); Germany's Excellence Strategy EXC 2044 390685587 Mathematics Munster: Dynamics-Geometry-Structure (Evington); ERC Advanced Grant 834267 - AMAREC (Evington); Engineering and Physical Sciences Research Council [Grant Refs: EP/R025061/1, EP/R025061/2, and EP/X026647/1] (Evington, White); Australian Research Council grant DP180100595 (Gabe); NSF grant DMS-2000129 (Schafhauser); NSERC Discovery Grant (Tikuisis).
**Structure:** Structural theorems ([117, 118, 106, 75, 76, 98, 10, 23, 19]), which combine to prove most of the _Toms-Winter conjecture_ ([119, Conjecture 9.3], cf. [112, 41]) for \(C^{*}\)-algebras. These results mirror aspects of Connes' equivalence of injectivity and hyperfiniteness for von Neumann II\({}_{1}\) factors ([29]).

With the benefit of hindsight, the advances above have been heavily driven by a subtle interplay between nuclear \(C^{*}\)-algebras \(A\) and their uniform tracial completions \(\overline{A}^{T(A)}\), as well as by work on regularity properties (Jiang-Su stability, uniform property \(\Gamma\)) for \(C^{*}\)-algebras ([23, 22, 94, 75]).

**Hyperfiniteness:** We consider inductive limits of finite dimensional tracially complete \(C^{*}\)-algebras, in the spirit of Murray and von Neumann's hyperfinite von Neumann algebras and Bratteli's AF \(C^{*}\)-algebras.

**Concrete models:** For each metrisable Choquet simplex \(X\), we construct a hyperfinite \(\mathrm{II}_{1}\) tracially complete \(C^{*}\)-algebra \((\mathcal{R}_{X},X)\) in the same spirit as Blackadar's and Goodearl's constructions of a simple unital AF \(C^{*}\)-algebra with trace simplex affinely homeomorphic to \(X\). Unlike the case of AF \(C^{*}\)-algebras, the resulting tracially complete \(C^{*}\)-algebra is independent of all choices made in the construction. When \(X\) is an \(n\)-dimensional simplex, \((\mathcal{R}_{X},X)\) is isomorphic to the direct sum of \(n+1\) copies of the hyperfinite \(\mathrm{II}_{1}\) factor with its entire trace simplex. When \(X\) is a metrisable Bauer simplex (i.e. the extreme boundary \(\partial_{e}X\) is compact), \((\mathcal{R}_{X},X)\) can be identified with the trivial \(W^{*}\)-bundle over \(\partial_{e}X\) whose fibres are the hyperfinite \(\mathrm{II}_{1}\) factor \(\mathcal{R}\).

With this setup, our structure and classification theorems for amenable tracially complete \(C^{*}\)-algebras are as follows.
**Theorem B** (Structure theorem).: _For an amenable type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra, property \(\Gamma\), the McDuff property, and hyperfiniteness are equivalent._

**Theorem C** (Classification theorem).: _Any amenable factorial tracially complete \(C^{*}\)-algebra which is separable (in the uniform 2-norm) and has property \(\Gamma\) is isomorphic to the hyperfinite model \((\mathcal{R}_{X},X)\) with the corresponding Choquet simplex \(X\) of traces._

The equivalence of property \(\Gamma\) and the McDuff property in Theorem B was established in [22] for the uniform tracial completions of separable nuclear \(C^{*}\)-algebras with no finite dimensional representations. As with the classification programme for \(C^{*}\)-algebras and Murray and von Neumann's uniqueness of the hyperfinite \(\mathrm{II}_{1}\) factor before that, our classification theorem is powered by classification results for \({}^{*}\)-homomorphisms satisfying a suitable notion of amenability together with an Elliott-style intertwining argument. In the setting of tracially complete \(C^{*}\)-algebras amenability is given in terms of the completely positive approximation property in the uniform 2-norm, or equivalently using uniformly amenable traces - we call such maps _tracially nuclear_.3

Footnote 3: We state Theorem D for tracially complete \(C^{*}\)-algebras satisfying the regularity hypothesis of property \(\Gamma\). In the main body, we will work with a more general hypothesis – complemented partitions of unity (discussed in Section 1.4 below) – so that the classification of tracially nuclear \({}^{*}\)-homomorphisms generalises the classification of weakly nuclear \({}^{*}\)-homomorphisms from \(C^{*}\)-algebras into finite von Neumann algebras.

**Theorem D** (Classification of tracially nuclear \({}^{*}\)-homomorphisms).: _Consider tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) with \((\mathcal{M},X)\) being \(\|\cdot\|_{2,X}\)-separable and \((\mathcal{N},Y)\) being factorial with property \(\Gamma\). Then a map \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) is tracially nuclear if and only if the induced map \(\phi^{*}\colon Y\to X\) takes values in the uniformly amenable traces on \(\mathcal{M}\). Any continuous affine \(\gamma\colon Y\to X\) taking values in the uniformly amenable traces on \(\mathcal{M}\) is induced by a tracially nuclear map \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\), which is unique up to approximate unitary equivalence in the uniform 2-norm given by \((\mathcal{N},Y)\)._

An important special case of the previous theorem is given by taking \((\mathcal{M},X)\coloneqq\left(\mathbb{C}^{2},T(\mathbb{C}^{2})\right)\), giving a classification of projections. In this case we can do better: unitary equivalence and Murray-von Neumann subequivalence of projections in factorial tracially complete \(C^{*}\)-algebras with property \(\Gamma\) are determined by the designated set of traces (see Theorem 7.18).4 We highlight the following special case of this in the language of von Neumann algebras, showing that Murray and von Neumann's foundational classification of projections can be performed continuously in II\({}_{1}\) factors with property \(\Gamma\). It is obtained by applying our classification of projections to the trivial \(W^{*}\)-bundle over \(X\) with fibre \(\mathcal{M}\).
Footnote 4: This also holds with complemented partitions of unity in place of property \(\Gamma\), but we emphasise that we do not know Theorem E for general II\({}_{1}\) factors \(\mathcal{M}\) unless \(X\) is totally disconnected. It is open whether Theorem E holds when \(X=[0,1]\) and \(\mathcal{M}\) is a free group factor.

**Theorem E**.: _Let \(K\) be a compact Hausdorff space and let \(\mathcal{M}\) be a II\({}_{1}\) factor with property \(\Gamma\). Suppose that \(p,q\colon K\to\mathcal{M}\) are projection-valued functions which are continuous with respect to \(\|\cdot\|_{2}\)._ 1. _There is a \(\|\cdot\|_{2}\)-continuous \(v\colon K\to\mathcal{M}\) with \(v(x)^{*}v(x)=p(x)\) and \(v(x)v(x)^{*}\leq q(x)\) for all \(x\in K\) if and only if \(\tau(p(x))\leq\tau(q(x))\) for all \(x\in K\)._ 2. _There is a \(\|\cdot\|_{2}\)-continuous \(u\colon K\to\mathcal{M}\) taking values in the unitary group of \(\mathcal{M}\) with \(u(x)p(x)u(x)^{*}=q(x)\) for all \(x\in K\) if and only if \(\tau(p(x))=\tau(q(x))\) for all \(x\in K\)._

The existence theorem corresponding to Theorem E is immediate. Given a continuous \(f\colon K\to[0,1]\), fix a copy of \(L^{\infty}[0,1]\subseteq\mathcal{M}\). Then the projection-valued function \(p(x)=\chi_{[0,f(x)]}\) is \(\|\cdot\|_{2}\)-continuous (indeed, \(\|p(x)-p(y)\|_{2}=|f(x)-f(y)|^{1/2}\)) and has \(\tau(p(x))=f(x)\) for all \(x\in K\).

###### Contents

* 1 Introduction
* 1.1 Tracially complete \(C^{*}\)-algebras
* 1.2 The trace problem
* 1.3 Local-to-global: amenability
* 1.4 Local-to-global: CPoU and property \(\Gamma\)
* 1.5 Structure and classification
* 1.6 Classification and the Toms-Winter conjecture
* 2 Traces and Choquet simplices
* 2.1 Choquet theory
* 2.2 Traces and the GNS construction
* 3 Tracially complete \(C^{*}\)-algebras
* 3.1 Definitions and basic properties
* 3.2 Factoriality
* 3.3 Tracial completions
* 3.4 Dense subalgebras of tracially complete \(C^{*}\)-algebras

With a minor abuse of notation, the extension of \(\tau\in X\) to \(\overline{A}^{X}\) is still denoted \(\tau\), and the set of such extensions is still denoted \(X\). Given a tracial von Neumann algebra \((\mathcal{M},\tau)\),5 it is typically necessary in classification results to consider all (normal) traces on \(\mathcal{M}\) instead of only \(\tau\) itself. For example, the specified trace \(\tau\) will classify projections in \(\mathcal{M}\) only when it is the unique trace on \(\mathcal{M}\) - i.e. when \(\mathcal{M}\) is a factor. By analogy, we would like to work with a subclass of tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\), where the traces in \(X\) provide enough information for classification. In the case when \(X\) is a face in \(T(\mathcal{M})\), the set \(X\) is precisely the set of \(\|\cdot\|_{2,X}\)-continuous traces on \(\mathcal{M}\) (and conversely) - see Proposition 3.15. We call such tracially complete \(C^{*}\)-algebras _factorial_.
The name is further justified by the fact that a \(W^{*}\)-bundle is factorial as a tracially complete \(C^{*}\)-algebra if and only if each fibre is a factor (Proposition 3.6) and, more generally, a tracially complete \(C^{*}\)-algebra is factorial if and only if every extreme point of \(X\) gives rise to a factor representation of \(\mathcal{M}\) (Proposition 3.14).

Footnote 5: A tracial von Neumann algebra \((\mathcal{M},\tau)\) is a von Neumann algebra \(\mathcal{M}\) equipped with a faithful normal trace \(\tau\).

The examples of greatest interest - uniform tracial completions of \(C^{*}\)-algebras - are automatically factorial. Further, as well as being necessary to classify projections by the distinguished traces, we have found factoriality to be an important technical condition when applying local-to-global arguments. Accordingly, we have come to regard factorial tracially complete \(C^{*}\)-algebras as the fundamental objects in the class of tracially complete \(C^{*}\)-algebras. When working with factorial tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) (such as uniform tracial completions of \(C^{*}\)-algebras), difficulties increase with the complexity of \(X\). The space \(X\) is always a Choquet simplex as it is a closed face in the Choquet simplex \(T(\mathcal{M})\). When \(X\) is finite dimensional, or more generally a Bauer simplex, a factorial tracially complete \(C^{*}\)-algebra over \(X\) has extra structure and is easier to analyse. This is illustrated schematically as follows. (1.1) \[\begin{array}{c}\mbox{Metrisable Choquet simplices:}\\ \mbox{Tracially complete $C^{*}$-algebras}\\ \mbox{Metrisable Bauer simplices:}\\ W^{*}\mbox{-bundles}\\ \mbox{Finite dimensional simplices:}\\ \mbox{$\mathcal{M}_{1}\oplus\cdots\oplus\mathcal{M}_{n}$}\end{array}\] Firstly, when \(X\) is a finite dimensional simplex, \(\mathcal{M}\) is just a finite direct sum \(\mathcal{M}_{1}\oplus\cdots\oplus\mathcal{M}_{n}\) of factors. In this case, Theorems B and C are Connes' Theorem6 ([29]) and the regularity hypothesis (property \(\Gamma\)) is automatic from amenability. The next threshold of complexity occurs when \(X\) is a _Bauer simplex_: that is, when the extreme boundary \(\partial_{e}X\) is compact. Ozawa paid particular attention to this situation in [86], introducing the abstract notion of (the section algebra of) a _continuous \(W^{*}\)-bundle_ over a compact Hausdorff space \(K\) (see Section 3.6 for the precise definition). The most basic examples of \(W^{*}\)-bundles are trivial bundles. The trivial \(W^{*}\)-bundle over \(K\) with fibre a tracial von Neumann algebra \((\mathcal{M},\tau)\) is given by \[C_{\sigma}(K,\mathcal{M})\coloneqq\{f\colon K\to\mathcal{M}:f\text{ is }\|\cdot\|\text{-bounded and }\|\cdot\|_{2,\tau}\text{-continuous}\}, \tag{1.2}\] together with the conditional expectation \(E\colon C_{\sigma}(K,\mathcal{M})\to C(K)\) given by composing with \(\tau\). Ozawa showed that if \(A\) is a \(C^{*}\)-algebra whose tracial state space \(T(A)\) is a Bauer simplex, then the uniform tracial completion \(\overline{A}^{T(A)}\) naturally has the structure of a continuous \(W^{*}\)-bundle over \(\partial_{e}T(A)\) whose fibre at \(\tau\in\partial_{e}T(A)\) is the von Neumann factor \(\pi_{\tau}(A)^{\prime\prime}\).
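To orient the reader, it may help to note how the trivial bundle construction (1.2) recovers the bottom layer of (1.1); this remark is ours and follows directly from the definition. If \(K\) is a single point then \(C_{\sigma}(K,\mathcal{M})=\mathcal{M}\), while if \(K=\{x_{1},\ldots,x_{n+1}\}\) is finite (so that \(\operatorname{Prob}(K)\) is an \(n\)-dimensional simplex), the continuity condition is vacuous and \[C_{\sigma}(K,\mathcal{M})\cong\underbrace{\mathcal{M}\oplus\cdots\oplus\mathcal{M}}_{n+1},\qquad E(f)=\big{(}\tau(f(x_{1})),\ldots,\tau(f(x_{n+1}))\big{)},\] matching the description of the hyperfinite models \((\mathcal{R}_{X},X)\) over finite dimensional simplices given earlier when \(\mathcal{M}=\mathcal{R}\).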
Moreover, he proved a 'trivialisation theorem' ([86, Theorem 15]), characterising when \(W^{*}\)-bundles whose fibres are the hyperfinite II\({}_{1}\) factor are trivial - this amounts to asking when Connes' theorem can be established in a continuous fashion over \(K\). Theorem A follows from Ozawa's trivialisation theorem whenever the \(C^{*}\)-algebras involved have Bauer trace simplices. Using Ozawa's methods, we show in Section 3.6 that factorial tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) whose traces \(X\) form a Bauer simplex have a natural structure of a \(W^{*}\)-bundle over \(\partial_{e}X\), whose fibre at \(\tau\in\partial_{e}X\) is the factor \(\pi_{\tau}(\mathcal{M})\).7 In this way, Theorems B and C also follow from Ozawa's trivialisation theorem in the case of factorial tracially complete \(C^{*}\)-algebras with a Bauer simplex of traces.

Footnote 7: In this case, by Ozawa’s work \(\pi_{\tau}(\mathcal{M})=\pi_{\tau}(\mathcal{M})^{\prime\prime}\).

The trace simplex of a \(C^{*}\)-algebra can be very far from Bauer and can have highly complex affine structure. In fact, any metrisable Choquet simplex can arise as the trace space of a separable \(C^{*}\)-algebra, including, for example, the Poulsen simplex, characterised as the unique (non-trivial) metrisable Choquet simplex where the extreme points form a dense subset. We think of factorial tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) as providing a formalism for (the section algebra of) a bundle over a Choquet simplex \(X\), with 'fibres' coming from the GNS representations of \(\mathcal{M}\) at points \(\tau\in X\), which takes into account the affine relations between the fibres (see the discussion in Section 3). Outside the Bauer simplex setting, the subtle interaction between the affine structure of \(X\) and the operator algebraic structure of \(\mathcal{M}\) is quite challenging and leads to very different behaviour compared to \(W^{*}\)-bundles. To give one example, suppose that \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra; if \(X\) is a Bauer simplex then the centre of \(\mathcal{M}\) can be identified with the continuous affine functions on \(X\), whereas in the general case, the centre of \(\mathcal{M}\) can be trivial.

### The trace problem

We take a brief interlude to discuss why we keep track of the designated collection of traces in our definition of a tracially complete \(C^{*}\)-algebra. Every II\({}_{1}\) factor \(\mathcal{M}\) has a unique trace \(\tau\), so in particular all traces on \(\mathcal{M}\) are normal. We do not know if the analogous statement holds for factorial tracially complete \(C^{*}\)-algebras. When \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra, all the traces in \(X\) are evidently \(\|\cdot\|_{2,X}\)-continuous - the appropriate notion of continuity in this setting. Moreover all \(\|\cdot\|_{2,X}\)-continuous traces on \(\mathcal{M}\) belong to the closed face in \(T(\mathcal{M})\) generated by \(X\) (see Proposition 3.15). Accordingly, when \((\mathcal{M},X)\) is factorial, \(X\) consists precisely of the \(\|\cdot\|_{2,X}\)-continuous traces on \(\mathcal{M}\). Are there any other traces? We regard this as a foundational problem in the theory of tracially complete \(C^{*}\)-algebras.

**Question 1.1** (Trace Problem).: Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra. Are all traces on \(\mathcal{M}\) automatically \(\|\cdot\|_{2,X}\)-continuous?
Equivalently, is the inclusion \(X\subseteq T(\mathcal{M})\) an equality? When \(X\) is a finite dimensional simplex, \(\mathcal{M}\) is a finite direct sum of factors, and hence the trace problem has a positive solution since all traces on \(\mathcal{M}\) are normal. In this paper we resolve the trace problem for ultrapowers (and reduced products) of tracially complete \(C^{*}\)-algebras with property \(\Gamma\) (Theorem 7.5); property \(\Gamma\) is discussed in Section 1.4 and Section 5.3. This is in the spirit of various 'no silly trace' results asserting that, under appropriate regularity conditions, traces on ultrapowers are generated by the limit traces (such as [86, Theorem 8], [84, Theorem 1.2], [10, Theorem 3.22]; see [4, Theorem A] for a very general \(C^{*}\)-algebra result). The trace problem is particularly pertinent when we take tracial completions. Given a \(C^{*}\)-algebra \(A\), Ozawa's uniform tracial completion \(\overline{A}^{T(A)}\) gives rise to a factorial tracially complete \(C^{*}\)-algebra \(\big{(}\overline{A}^{T(A)},T(A)\big{)}\), but is \(\overline{A}^{T(A)}\) uniformly tracially complete with respect to _all_ its traces? If there are additional traces on \(\overline{A}^{T(A)}\) which are not \(\|\cdot\|_{2,T(A)}\)-continuous, then there seems to be no reason why this should be the case. A positive answer to the trace problem would ensure that the uniform tracial completion process stabilises.8

Footnote 8: During the long gestation period of this paper, the trace problem was resolved positively by the third-named author for tracially complete \(C^{*}\)-algebras with property \(\Gamma\). In particular, this means that for a \(\mathcal{Z}\)-stable \(C^{*}\)-algebra \(A\), Ozawa’s \(\overline{A}^{T(A)}\) has a complete unit ball in its uniform 2-norm.

### Local-to-global: amenability

Our main approach to understanding the structure of tracially complete \(C^{*}\)-algebras is to pass from local properties at each trace, i.e. properties of the fibres \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\), to global properties that hold uniformly over all traces. Identifying the appropriate class of _amenable_ tracially complete \(C^{*}\)-algebras through completely positive approximations gives a first example of this idea. Recall that nuclearity of a \(C^{*}\)-algebra \(A\) is characterised through the completely positive approximation property ([64]): there is a net \((F_{i},\phi_{i},\psi_{i})_{i}\) consisting of finite dimensional \(C^{*}\)-algebras \(F_{i}\) and completely positive and contractive (c.p.c.) maps \[A\xrightarrow{\ \psi_{i}\ }F_{i}\xrightarrow{\ \phi_{i}\ }A \tag{1.3}\] such that \(\|\phi_{i}(\psi_{i}(a))-a\|\to 0\) for all \(a\in A\). The directly analogous concept for a von Neumann algebra \(\mathcal{M}\) is _semidiscreteness_, which asks for c.p.c. maps \[\mathcal{M}\xrightarrow{\ \psi_{i}\ }F_{i}\xrightarrow{\ \phi_{i}\ }\mathcal{M} \tag{1.4}\] which approximate the identity on \(\mathcal{M}\) in the point-weak\({}^{*}\)-topology (rather than the point-norm topology). When \((\mathcal{M},\tau)\) is a tracial von Neumann algebra (i.e. \(\tau\) is a specified faithful normal trace on \(\mathcal{M}\)), a standard Hahn-Banach argument shows that this is equivalent to the completely positive approximation property in the \(\|\cdot\|_{2,\tau}\)-norm. A large body of deep work, including Connes' theorem, gives a plethora of other conditions equivalent to the completely positive approximation property for \(C^{*}\)-algebras and to semidiscreteness for von Neumann algebras.
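For readers less familiar with these approximations, the commutative case provides a standard illustration (an example we add here; it is folklore rather than specific to this paper). Take \(A=C(K)\) for a compact Hausdorff space \(K\), fix \(f_{1},\ldots,f_{m}\in C(K)\) and \(\epsilon>0\), and choose a finite open cover \(U_{1},\ldots,U_{r}\) of \(K\) on each member of which every \(f_{j}\) varies by at most \(\epsilon\), together with points \(x_{i}\in U_{i}\) and a partition of unity \(h_{1},\ldots,h_{r}\) subordinate to the cover. Then the maps \[\psi\colon C(K)\to\mathbb{C}^{r},\quad\psi(f)=(f(x_{1}),\ldots,f(x_{r})),\qquad\phi\colon\mathbb{C}^{r}\to C(K),\quad\phi(\lambda)=\sum_{i=1}^{r}\lambda_{i}h_{i},\] are c.p.c. (\(\psi\) is a \({}^{*}\)-homomorphism, while \(\phi\) is unital and positive with commutative codomain), and \(\|\phi(\psi(f_{j}))-f_{j}\|=\big{\|}\sum_{i}h_{i}\,(f_{j}(x_{i})-f_{j})\big{\|}\leq\epsilon\), since each \(h_{i}\) is supported in \(U_{i}\).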
If we look for analogous conditions on a tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\), one option is to work locally and ask for all the von Neumann fibres \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) for \(\tau\in X\) to be semidiscrete. Our local-to-global result for amenability allows us to go from such a local condition to a single system of completely positive approximations which works uniformly over all traces.

**Theorem 1.2**.: _Let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra. The following are equivalent:_ 1. \((\mathcal{M},X)\) _is amenable, in the sense that the completely positive approximation property holds in the point-_\(\|\cdot\|_{2,X}\)_-norm;_ 2. _for all_ \(\tau\in X\)_,_ \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) _is semidiscrete, in the sense that the completely positive approximation property holds for_ \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) _in the point-weak_\({}^{*}\)_-topology;_ 3. _every_ \(\tau\in X\) _is uniformly amenable in the sense of_ _[_12_, Definition 3.2.1]__._

In particular, via Connes' theorem the uniform tracial completion of a nuclear \(C^{*}\)-algebra is amenable as a tracially complete \(C^{*}\)-algebra.9

Footnote 9: A naive argument for this fails in the same way that a direct proof that the bidual of a nuclear \(C^{*}\)-algebra is semidiscrete as a von Neumann algebra fails – an extension of the approximations witnessing nuclearity of \(A\) will not a priori approximate the identity map on the tracial completion in the point-\(\|\cdot\|_{2,X}\)-norm.

Theorem 1.2 generalises to \({}^{*}\)-homomorphisms between \(C^{*}\)-algebras and tracially complete \(C^{*}\)-algebras (Theorem 4.9, which proves the first statement in Theorem D), characterising the completely positive approximation property (with respect to the uniform \(2\)-norm) in terms of pointwise amenability conditions. We call such maps _tracially nuclear_, and this is the appropriate amenability condition on \({}^{*}\)-homomorphisms for classification; see the discussion in Section 1.5. Theorem 1.2, and its generalisation to morphisms in Theorem 4.9, are both proved by means of a Hahn-Banach trick which we learnt from [69, 48]. The key point - used by Ozawa in [86] - is that the weak topology on the space \(\mathrm{Aff}(X)\) of continuous affine functions on a Choquet simplex is given by pointwise convergence. Since the set of c.p.c. maps which factorise through finite dimensional \(C^{*}\)-algebras is closed under convex combinations, it is then possible to take convex combinations of a finite set of local approximations (obtained through compactness) to build a global approximation.

### Local-to-global: CPoU and property \(\Gamma\)

The use of the Hahn-Banach Theorem with \(\operatorname{Aff}(X)\) described in the previous section provides a general local-to-global tool in the setting of tracially complete \(C^{*}\)-algebras for conditions witnessed by elements of a convex set (see Lemma 4.7). However, the Hahn-Banach strategy is unlikely to apply to conditions which are not affine in nature. To discuss this, let us consider the problem (which is open in general) of whether a factorial \(\operatorname{II}_{1}\) tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) admits approximate projections which are approximately of trace \(1/2\); i.e. given \(\epsilon>0\), does there exist a positive contraction \(p\in\mathcal{M}\) with \(\|p-p^{2}\|_{2,X}<\epsilon\) and \(|\tau(p)-1/2|<\epsilon\) for all \(\tau\in X\)?
As we think of tracially complete \(C^{*}\)-algebras as a kind of affine bundle, it is natural to approach local-to-global problems via partition of unity techniques. For a \(W^{*}\)-bundle \(\mathcal{M}\) with \(\operatorname{II}_{1}\) factor fibres over a compact Hausdorff space \(K\), the algebra \(C(K)\) embeds centrally into \(\mathcal{M}\), and so it is straightforward to combine elements in \(\mathcal{M}\) with a partition of unity in \(C(K)\) - however such a direct approach does not preserve algebraic conditions such as (in our sample problem) being a projection. To see where it fails, for each fibre \(\tau\in K\) fix a positive contraction \(p_{\tau}\in\mathcal{M}\) such that \(\pi_{\tau}(p_{\tau})\) is a projection of trace \(1/2\) in the fibre \(\pi_{\tau}(\mathcal{M})\) (which is automatically a \(\operatorname{II}_{1}\) factor). Fixing \(\epsilon>0\), compactness gives an open cover \(U_{1},\ldots,U_{n}\) of \(K\) and positive contractions \(p_{1},\ldots,p_{n}\in\mathcal{M}\) such that \[\|p_{i}^{2}-p_{i}\|_{2,\tau}<\epsilon\text{ and }\tau(p_{i})\approx_{\epsilon}1/2,\quad\tau\in U_{i}. \tag{1.5}\] Taking a partition of unity \(f_{1},\ldots,f_{n}\in C(K)\) subordinate to \(U_{1},\ldots,U_{n}\), one can form \(p\coloneqq\sum_{i=1}^{n}f_{i}p_{i}\). While this will have \(\tau(p)\approx_{\epsilon}1/2\) for all \(\tau\in K\), there is no reason for it to be an approximate projection in uniform 2-norm. Partitions of unity from \(C(K)\) do not interact well with multiplication due to the lack of orthogonality of the \(f_{i}\). The solution is to ask that the \(f_{i}\) are approximate projections (giving rise to approximate orthogonality; see the idealised computation below). However, unless \(K\) is zero dimensional, this requires that they come from \(\mathcal{M}\) instead of \(C(K)\), and we must weaken centrality to approximate centrality. If such \(f_{i}\) can be found, then \(p\coloneqq\sum_{i=1}^{n}f_{i}p_{i}\) will give the required approximate projection in \(\mathcal{M}\).10 In general, such partitions of unity need not exist (see Example 6.6), but when they do, they can be used for local-to-global results. Ozawa's trivialisation theorem is an example par excellence: the passage from the existence of approximately central approximate projections of trace \(1/2\) in a \(W^{*}\)-bundle with hyperfinite \(\operatorname{II}_{1}\) factor fibres to global triviality of the bundle ([86, Theorem 15(ii)\(\Rightarrow\)(iii)\(\Rightarrow\)(i)]) is underpinned by such partition of unity arguments. Subsequently, this strategy was made explicit and systematically used in [10] for local-to-global transfer in trivial \(W^{*}\)-bundles whose fibre is a McDuff \(\operatorname{II}_{1}\) factor. Beyond the \(W^{*}\)-bundle setting, one needs even more control on the approximately central projections \(f_{i}\) forming a partition of unity. It is necessary to be able to uniformly control \(\tau(f_{i}a)\) for certain \(a\), whereas in the \(W^{*}\)-bundle setting, this control comes for free from knowing that \(\tau(f_{i})\) vanishes outside of \(U_{i}\) (see Footnote 10). A solution was identified in [23] in the form of _complemented partitions of unity_ (CPoU) for a \(C^{*}\)-algebra. The technical definition is designed to give the required control with respect to a given family of positive elements, which should be thought of as complementary to the tracial support of the \(f_{i}\).
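To make the role of orthogonality concrete, here is the idealised computation, with exact rather than approximate hypotheses (an illustration we add; the genuine argument runs with uniform 2-norm error terms). Suppose \(f_{1},\ldots,f_{n}\) are pairwise orthogonal projections with \(\sum_{i}f_{i}=1\), each commuting with the corresponding \(p_{i}\). Then \[p^{2}=\Big{(}\sum_{i=1}^{n}f_{i}p_{i}\Big{)}^{2}=\sum_{i,j}f_{i}f_{j}p_{i}p_{j}=\sum_{i=1}^{n}f_{i}p_{i}^{2},\qquad\text{so}\qquad p^{2}-p=\sum_{i=1}^{n}f_{i}(p_{i}^{2}-p_{i}),\] with \(\|f_{i}(p_{i}^{2}-p_{i})\|_{2,\tau}^{2}=\tau\big{(}f_{i}(p_{i}^{2}-p_{i})^{2}\big{)}\). Estimating \(\|p^{2}-p\|_{2,X}\) thus comes down to controlling the weighted tracial quantities \(\tau(f_{i}a)\) for the positive elements \(a=(p_{i}^{2}-p_{i})^{2}\), which are small precisely on the tracial support of \(f_{i}\) by (1.5) - exactly the kind of control that the definition of CPoU is designed to supply.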
See the second part of the introduction to [23] for a further discussion of this and the challenges that must be overcome outside the Bauer simplex setting. In [23], the concept of complemented partitions of unity was set out using the uniform tracial ultraproduct of a \(C^{*}\)-algebra but, as we describe in Section 6, this definition naturally lives at the level of tracially complete \(C^{*}\)-algebras. When complemented partitions of unity can be found, they give rise to a very general local-to-global transfer process. We give a number of examples in Section 7 (some of which are based on applications of CPoU for uniform tracial closures of nuclear \(C^{*}\)-algebras in [22, 21]). As with the example of approximate projections described above, the local-to-global transfer process produces approximate properties in factorial tracially complete \(C^{*}\)-algebras with CPoU, i.e. properties holding up to a small error in uniform 2-norm. In order to cleanly encode such approximate conditions, we develop the theory of reduced products (including ultraproducts) of tracially complete \(C^{*}\)-algebras in Section 5. The effect of the local-to-global argument is that the ultrapower of factorial tracially complete \(C^{*}\)-algebras with CPoU enjoys several of the fundamental properties of a finite von Neumann algebra: real rank zero, stable rank one, all unitaries are exponentials, and Murray-von Neumann comparison of projections is determined by traces. Our solution to the trace problem for such reduced products (Theorem 7.5) is obtained as a consequence of these results, avoiding the use of sums of commutators found in precursor results such as [10, Section 3.4] (see Remark 7.6). Since CPoU arguments inevitably produce approximate conclusions, there is more work to be done to achieve the exact classification of projections needed for Theorem E. This is achieved by means of an intertwining argument, based on explicit estimates showing that projections of the same trace which are close in 2-norm are approximately conjugate by a unitary close to the unit. This is in the spirit of similar perturbation results for finite von Neumann algebras and leads to the following result (proved as Theorems 7.18 and 7.19).

**Theorem 1.3**.: _Let \((\mathcal{M},X)\) be a factorial type \(\mathrm{II}_{1}\) tracially complete \(C^{*}\)-algebra with CPoU._ 1. _If_ \(p,q\in\mathcal{M}\) _are projections and_ \(\tau(p)=\tau(q)\) _for all_ \(\tau\in X\)_, then_ \(p\) _and_ \(q\) _are unitarily equivalent._ 2. _For any continuous affine_ \(f\colon X\to[0,1]\) _there is a projection_ \(p\in\mathcal{M}\) _with_ \(\tau(p)=f(\tau)\) _for all_ \(\tau\in X\)_._

The fundamental challenge is to determine when complemented partitions of unity can be found, i.e. when a factorial tracially complete \(C^{*}\)-algebra has CPoU. In [23], a subset of the authors and Winter introduced the concept of _uniform property_\(\Gamma\) for a \(C^{*}\)-algebra as a uniform \(2\)-norm version of Murray and von Neumann's property \(\Gamma\) for \(\mathrm{II}_{1}\) factors. This too is most naturally a property of tracially complete \(C^{*}\)-algebras, and we say that \((\mathcal{M},X)\) has property \(\Gamma\) when there exist uniform \(2\)-norm approximately central approximate projections, which approximately divide the trace of elements of \(\mathcal{M}\) in half (Definition 5.19 and Proposition 5.23).
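In the \(\|\cdot\|_{2,X}\)-separable case, this can be formalised as follows (a paraphrase we include for the reader's convenience; see Definition 5.19 for the official ultraproduct formulation): \((\mathcal{M},X)\) has property \(\Gamma\) if there is a sequence \((p_{m})_{m}\) of positive contractions in \(\mathcal{M}\) such that, for every \(a\in\mathcal{M}\), \[\lim_{m\to\infty}\|p_{m}-p_{m}^{2}\|_{2,X}=0,\qquad\lim_{m\to\infty}\|[p_{m},a]\|_{2,X}=0,\qquad\lim_{m\to\infty}\sup_{\tau\in X}\big{|}\tau(p_{m}a)-\tfrac{1}{2}\tau(a)\big{|}=0.\]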
The main technical result of [23] is that unital nuclear \(C^{*}\)-algebras with uniform property \(\Gamma\) have complemented partitions of unity. We strengthen the main result of [23] by removing the condition of nuclearity, allowing CPoU to be obtained from property \(\Gamma\) in general. As set out in Section 1.6, this forms a major ingredient in the forthcoming general framework for the classification of \({}^{*}\)-homomorphisms ([17]). **Theorem 1.4**.: _Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra with property \(\Gamma\). Then \((\mathcal{M},X)\) has CPoU. In particular, unital \(C^{*}\)-algebras with uniform property \(\Gamma\) (e.g. unital \(\mathcal{Z}\)-stable \(C^{*}\)-algebras) have CPoU._ For a factorial tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\), all of whose fibres \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) have property \(\Gamma\) as tracial von Neumann algebras, obtaining property \(\Gamma\) for \((\mathcal{M},X)\) is itself a local-to-global transfer problem. Theorem 1.4, together with the local-to-global technology, shows that transferring property \(\Gamma\) from fibres to a global result is to some extent a universal local-to-global problem. The strategy to prove Theorem 1.4 follows the overall framework used in [23], which, in the language of this paper, proves Theorem 1.4 for the uniform tracial completion of a separable nuclear \(C^{*}\)-algebra. The argument splits into two steps: * Obtain a weak form of CPoU in which all the approximate projections making up the partition of unity are replaced by contractions (Theorem 6.15) and are not orthogonal. * Use property \(\Gamma\) to convert the weak form of CPoU to CPoU by means of orthogonalisation, projectionisation, and a maximality argument (see the discussion in the last section of the introduction to [23] for an outline). The second step works generally as was foreshadowed in [23, Lemma 3.7]. However, in [23], nuclearity was instrumental in performing the first step ([23, Lemma 3.6]) through a refined form of the completely positive approximation property from [14]. In particular, Connes' theorem on the equivalence of injectivity and hyperfiniteness underpins these approximations. Our proof of Theorem 1.4 establishes this weak form of CPoU in general (Theorem 6.15), a result which is already of independent interest (see [102, 120]). We do this by means of a Hahn-Banach-driven local-to-global transfer of the form described in Section 1.3. The point is that all finite von Neumann algebras (viewed as tracially complete \(C^{*}\)-algebras with respect to all their traces) satisfy CPoU. Taking suitable convex combinations of the elements witnessing CPoU in finitely many fibres gives rise to the required weak form of CPoU, and we do not need to rely on anything as deep as Connes' work. Specialising to the uniform tracial completion of a nuclear \(\mathcal{Z}\)-stable \(C^{*}\)-algebra, this approach gives a much simpler overall argument for CPoU as compared with [23]. ### Structure and classification Our structure and classification results (Theorems A-D) are proved by a combination of local-to-global transfer in the same spirit as [23, 22, 21] and Elliott-style intertwining arguments. For a separable nuclear \(C^{*}\)-algebra \(A\), a local-to-global argument was given in [22, Theorem 4.6] to pass from uniform property \(\Gamma\) to the uniform McDuff property via CPoU. 
The key point is that as \(A\) is nuclear, all tracial von Neumann algebras \(\pi_{\tau}(A)^{\prime\prime}\) associated to traces on \(A\) are McDuff, and this can then be transformed into a global statement via CPoU. This proves the equivalence of property \(\Gamma\) and the McDuff property in Theorem B for the uniform tracial closures \(\left(\overline{A}^{T(A)},T(A)\right)\) of such \(C^{*}\)-algebras. An identical argument can be used to obtain this equivalence for amenable factorial II\({}_{1}\) tracially complete \(C^{*}\)-algebras, the only difference being the use of Theorem 1.4 in place of the main result of [23]. For the additional equivalence of hyperfiniteness in Theorem B, we go through classification. Our local characterisations of amenability ensure that hyperfinite tracially complete \(C^{*}\)-algebras are amenable (Theorem 8.2), so once we show hyperfinite factorial tracially complete \(C^{*}\)-algebras have CPoU (Theorem 8.3), the rest of the structure theorem (Theorem B) will follow from the classification theorem. As factorial finite dimensional tracially complete \(C^{*}\)-algebras have CPoU, and CPoU is preserved under inductive limits, it is easy to obtain CPoU for limits of factorial finite dimensional \(C^{*}\)-algebras. It is similarly straightforward to show CPoU holds for a tracially complete \(C^{*}\)-algebra that is locally approximated by embedded factorial finite dimensional tracially complete subalgebras. The problem comes when a factorial tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) has approximations by finite dimensional \(C^{*}\)-subalgebras in which not all traces extend to elements of \(X\) - and so they don't embed as tracially complete subalgebras. This is reminiscent of Murray and von Neumann's work ([83]) on the uniqueness of hyperfinite II\({}_{1}\) factors, where they have to be concerned with approximating finite dimensional subalgebras which are not factors. Our solution, found in Section 8, is the same as Murray and von Neumann's: reduce to the case that the building blocks can always be taken to be factorial. This proceeds by showing that for separable tracially complete \(C^{*}\)-algebras, local and inductive limit definitions of hyperfiniteness agree, which allows us to realise the given algebra as a tracial completion of an AF \(C^{*}\)-algebra. Turning to classification, the primary objective is the classification of tracially nuclear maps in Theorem D (proved as Theorem 9.12(ii)). As is standard for classification results for maps, this consists of two components: existence (of tracially nuclear \({}^{*}\)-homomorphisms with specified behaviour at the level of traces) and uniqueness (of such maps up to approximate unitary equivalence). The uniqueness aspect of Theorem D (Theorem 9.3) is a direct application of the corresponding uniqueness result for weakly nuclear \({}^{*}\)-homomorphisms into finite von Neumann algebras by traces (a folklore consequence of Connes' theorem), and a CPoU-powered local-to-global argument. This works in essentially the same fashion as the uniqueness result found in [21, Theorem 2.2], and the only difference is the increased generality of the statement. As with the corresponding theorems in the \(C^{*}\)-classification programme, the existence aspect of Theorem D is obtained in two stages. 
At the first pass, one only aims for an 'approximate' result - for a \(C^{*}\)-algebra \(A\) and a factorial tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\), we want uniform 2-norm approximately multiplicative maps \(A\to\mathcal{M}\) which approximately implement a given continuous affine map \(\gamma\colon X\to T(A)\). For this to hold generally, it will be necessary for the traces \(\gamma(X)\subseteq T(A)\) to satisfy suitable approximation properties - in particular, we require the range of \(\gamma\) to consist of _hyperlinear_ traces on \(A\) (i.e. those factoring through the tracial ultrapower \(\mathcal{R}^{\omega}\) of \(\mathcal{R}\)).11 The approximate existence result uses another CPoU-powered local-to-global argument to patch together approximate morphisms into the fibres \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\), \(\tau\in X\), which arise from composing approximate embeddings into \(\mathcal{R}\) with a unital embedding \(\mathcal{R}\to\pi_{\tau}(\mathcal{M})^{\prime\prime}\). Under the stronger hypothesis that the range of \(\gamma\colon X\to T(A)\) consists of uniformly amenable traces, the approximate morphisms \(A\to\mathcal{M}\) approximately realising \(\gamma\) can be further arranged to be tracially nuclear.

Footnote 11: When \((\mathcal{M},X)\) is the hyperfinite \(\mathrm{II}_{1}\) factor with its unique trace \(\tau_{\mathcal{R}}\), this approximate existence result is equivalent to hyperlinearity of the relevant trace on \(A\).

While it would be possible (though technically somewhat awkward) to prove Theorem 9.8 in a similar fashion to that of [21, Theorem 2.6], we instead take advantage of our classification of projections (Theorems 7.18 and 7.19), and hence maps from finite dimensional algebras,12 to give a different and arguably more conceptual approach to these results. The point is that as \((\mathcal{M},X)\) is factorial, \(X\) is a Choquet simplex, and so we can approximate \(\gamma\) by affine maps \(X\to Z\to T(A)\) factoring through finite dimensional simplices \(Z\) using a result of Lazar and Lindenstrauss from the early 1970s ([70]). Then one uses hyperlinearity to produce approximately multiplicative maps from \(A\) into finite dimensional algebras which approximately realise the maps \(Z\to T(A)\), and uses the classification of projections to embed these finite dimensional algebras into \(\mathcal{M}\) compatibly with the map \(X\to Z\).

Footnote 12: The existence result for maps out of finite-dimensional \(C^{*}\)-algebras is Lemma 9.5. Uniqueness does not appear explicitly but is implicitly contained in the proof of Proposition 9.2 – in particular, see Footnote 57.

At the second pass one aims for exact existence results by means of a one-sided Elliott intertwining argument. This is by now a standard technique, and there are no additional difficulties caused by working with tracially complete \(C^{*}\)-algebras and uniform 2-norms.13 As ever, it is important that the uniqueness theorem is strong enough to cover approximately multiplicative maps. This gives rise to the exact existence result in Theorem 9.12, and completes the proof of Theorem D.

Footnote 13: The one subtle point is that the intertwining by reparameterisation technique for constructing genuine morphisms from approximate morphisms (Theorem 5.11) requires stability of unitaries in the uniform 2-norm. We prove that this follows from CPoU in Corollary 7.11.
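Schematically, the one-sided intertwining runs as follows (a standard sketch we include for orientation, suppressing the reparameterisation issue of Footnote 13). Fix a dense sequence \((a_{j})_{j}\) in the unit ball of \(A\) and approximately multiplicative maps \(\phi_{m}\colon A\to\mathcal{M}\) realising \(\gamma\) ever more accurately; the uniqueness theorem then provides unitaries \(u_{m}\in\mathcal{M}\) with \[\|u_{m}\phi_{m+1}(a_{j})u_{m}^{*}-\phi_{m}(a_{j})\|_{2,X}\leq 2^{-m},\qquad j\leq m,\] and since the uniform 2-norm is invariant under conjugation by unitaries, the maps \(\mathrm{Ad}(u_{1}\cdots u_{m-1})\circ\phi_{m}\) form a point-\(\|\cdot\|_{2,X}\) Cauchy sequence whose limit is a \({}^{*}\)-homomorphism realising \(\gamma\) exactly.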
Classification results for uniformly tracially complete \(C^{*}\)-algebras are then obtained from Theorem D using a two-sided Elliott intertwining argument. This classifies the family of amenable tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) satisfying both the domain and codomain hypotheses of Theorem D. The amenability hypothesis is needed so that the relevant identity maps are tracially nuclear (and so fall within the scope of Theorem D), as the intertwining argument will use uniqueness to compare the identity maps with the compositions of the maps obtained from the existence portion of Theorem D. This process (which is an instance of Elliott's abstract classification framework from [38]) yields Theorem C, which contains Theorem A as a special case. See [115, Section 6] for a general description of the passage from approximate existence and uniqueness, to the classification of operator algebras via one-sided intertwining and then a symmetrisation of hypotheses. Just as we often prefer to describe the outcomes of the local-to-global transfer procedure at the level of ultraproducts or sequence algebras in order to suppress explicit error tolerances, we do the same for approximately multiplicative maps. So, in the main body our approximate existence result and the corresponding uniqueness theorem are given in terms of exact classification results into reduced products, as was done with the precursor results in [21].

### Classification and the Toms-Winter conjecture

We end the introduction by discussing the role tracially complete \(C^{*}\)-algebras play in the structure and classification of simple stably finite amenable \(C^{*}\)-algebras. These are some instances of step (iii), 'pulling results back from the tracial completion to the \(C^{*}\)-level', of the scheme on page 3. Given a simple unital \(C^{*}\)-algebra \(B\), with \(T(B)\neq\emptyset\), write \(\mathcal{M}\coloneqq\overline{B}^{T(B)}\), \(B_{\infty}\coloneqq\ell^{\infty}(B)/c_{0}(B)\) for the norm approximate sequence algebra of \(B\), and \(\mathcal{M}^{\infty}\) for the uniform 2-norm approximate sequence algebra of \(\mathcal{M}\), i.e. \(\ell^{\infty}(\mathcal{M})\) modulo the \(\|\cdot\|_{2,T(B)}\)-null sequences. By construction, there is a Kaplansky density type theorem - the unit ball of \(B\) is \(\|\cdot\|_{2,T(B)}\)-dense in the unit ball of \(\mathcal{M}\) - and this ensures that the canonical inclusion \(B\to\mathcal{M}\) gives a surjection \(B_{\infty}\to\mathcal{M}^{\infty}\). The _trace-kernel extension_ is the short exact sequence induced by this surjection:14 \[0\to J_{B}\to B_{\infty}\to\mathcal{M}^{\infty}\to 0. \tag{1.6}\]

Footnote 14: In some cases in the literature, an ultrapower version of the trace-kernel extension is used; accordingly, in this paper, we develop the theory in terms of reduced powers with respect to a free filter, simultaneously covering both cases.

The ideal \(J_{B}\) is known as the _trace-kernel ideal_, and it inherits regularity properties (such as separable \(\mathcal{Z}\)-stability and strict comparison) from corresponding properties of \(B\). The main objective in the abstract approach to the unital classification theorem for simple nuclear \(C^{*}\)-algebras in [16] is a classification of full unital \({}^{*}\)-homomorphisms \(A\to B_{\infty}\) where \(A\) is a unital separable nuclear \(C^{*}\)-algebra satisfying the UCT, and \(B\) is a unital simple separable nuclear \(\mathcal{Z}\)-stable finite \(C^{*}\)-algebra ([16, Theorem 1.1]).
Once this is in place, the classification of \(C^{*}\)-algebras follows from intertwining arguments as discussed in Section 1.5. As set out in [16, Section 1.3],15 at a very high level, the strategy for the classification of full approximate \({}^{*}\)-homomorphisms falls into the three step plan from page 3. 1. Classify maps from \(A\) into a finite von Neumann algebra by traces; this is a consequence of Connes' theorem. 2. Classify maps from \(A\) into \(\mathcal{M}^{\infty}\) by traces; this is the classification of approximate \({}^{*}\)-homomorphisms from \(A\) into \(\mathcal{M}\) described in Section 1.5, and the instance used here is the case covered in the combination of [23, 21] (as \(\mathcal{M}\) is the uniform tracial completion of a nuclear \(\mathcal{Z}\)-stable \(C^{*}\)-algebra). 3. Classify lifts of a full map \(\theta\colon A\to\mathcal{M}^{\infty}\) back to \(B_{\infty}\) in terms of \(K\)-theoretic data. This is the main task of [16], and a detailed outline of the strategy for this step is given there. In particular, none of the results in this paper are necessary for the abstract proof of the unital classification theorem, though we do contend that, in hindsight, the classification results for maps \(A\to\mathcal{M}\) given here make the three-step plan above more transparent. To take stably finite classification beyond the setting of nuclear codomains, our Theorem 1.4 will be crucial to step (ii). One objective is a stably finite version of Kirchberg's very general classification results for full morphisms \(A\to B_{\infty}\), where \(A\) is a separable exact \(C^{*}\)-algebra satisfying the UCT and \(B\) is an \(\mathcal{O}_{\infty}\)-stable \(C^{*}\)-algebra (see [93, Theorem 8.3.3], [45, Theorems A and B], [63]). In the forthcoming work [17], a subset of us will extend the stably finite classification of morphisms to a level of generality corresponding to Kirchberg's framework. Sticking to the unital case, this will classify unital full nuclear \({}^{*}\)-homomorphisms \(A\to B_{\infty}\) for domains \(A\) in analogy with Kirchberg's theorem, and where \(B\) is unital, finite, \(\mathcal{Z}\)-stable, and has comparison of positive elements by bounded traces.16 Via intertwining, this entails a classification of unital \({}^{*}\)-homomorphisms from \(A\) to \(B\) which map traces on \(B\) to faithful traces on \(A\). Outside the nuclear setting, one cannot obtain CPoU for \(B\) (or equivalently, for its uniform tracial completion) from [23].

Footnote 16: This will be discussed further in [17], but it forces all quasitraces on \(B\) to be traces.

Looking to the future, we hope that the breakthrough results in [47] on the classification of strongly outer actions of countable discrete amenable groups17 on Kirchberg algebras will, over time, have powerful stably finite counterparts. Here, we expect that developing a suitable classification of group actions on classifiable tracially complete \(C^{*}\)-algebras (step (ii)) will help break up the overall task into more manageable parts, particularly in the case where the underlying action on the trace space is complex. Even more generally one can imagine more general notions of quantum symmetries acting on uniform tracially complete \(C^{*}\)-algebras, as a bridge towards studying such actions on classifiable \(C^{*}\)-algebras whose tracial state space is large.
Footnote 17: The results of [47] are much more general than this, encompassing amenable isometrically shift absorbing actions of locally compact groups.

On the structure side, the remaining open part of the Toms-Winter conjecture is intimately linked with tracially complete \(C^{*}\)-algebras. By now, for a simple separable non-elementary nuclear \(C^{*}\)-algebra \(B\), the conditions of \(\mathcal{Z}\)-stability and finite nuclear dimension are known to coincide ([118, 117, 106], and [23, 19], building on the line of work [10, 98, 76]). Back in 2004, Rørdam showed that if \(B\) is \(\mathcal{Z}\)-stable, then \(B\) has strict comparison ([94]) and the remaining piece of the Toms-Winter conjecture is the converse.18

Footnote 18: A weaker conjecture, of the form that pure simple separable non-elementary nuclear \(C^{*}\)-algebras are \(\mathcal{Z}\)-stable, is discussed by Winter in [116, Section 5.4]. Here, pureness is the combination of strict comparison with a (tracial) divisibility condition on the Cuntz semigroup and can be thought of as a combination of a weak uniqueness theorem and an existence theorem for positive elements in terms of their rank functions.

The trace-kernel extension, as we view it today, has its origins in Matui and Sato's breakthrough work on the implication from strict comparison to \(\mathcal{Z}\)-stability ([75]). A key fact is that it induces a short exact sequence at the level of central sequence algebras (see [68, Theorem 3.3]) \[0\to J_{B}\cap B^{\prime}\to B_{\infty}\cap B^{\prime}\to\mathcal{M}^{\infty}\cap\mathcal{M}^{\prime}\to 0, \tag{1.7}\] and tensorial absorption results for \(B\) and \(\mathcal{M}\) can be written in terms of the central sequence algebras \(B_{\infty}\cap B^{\prime}\) and \(\mathcal{M}^{\infty}\cap\mathcal{M}^{\prime}\). Let us split the problem of whether strict comparison implies \(\mathcal{Z}\)-stability of \(B\) into three steps. 1. Injective II\({}_{1}\) von Neumann algebras are McDuff as a consequence of Connes' theorem.19

Footnote 19: For II\({}_{1}\) factors this was a step along the road to Connes’ theorem (this follows from 7\(\Rightarrow\)2 of [29, Theorem 5.1], defining \(\phi\) in 7 as the composition of the conditional expectation and the trace), and it is folklore that it holds generally; a proof from hyperfiniteness to the McDuff property can be found as [22, Proposition 1.6]. See also [101], which obtains the equivariant McDuff property, extending results from Ocneanu beyond the factor setting.

2. Attempt to lift the McDuff property back from von Neumann algebras to \(\mathcal{M}\). This is an immediate consequence of the local-to-global argument, _provided_ one has CPoU. For the uniform tracial completion of a nuclear \(C^{*}\)-algebra with no finite dimensional representations, CPoU is equivalent to property \(\Gamma\). 3. Lift an embedding \(M_{n}\to\mathcal{M}^{\infty}\cap\mathcal{M}^{\prime}\) to an order zero map \(\phi\colon M_{n}\to B_{\infty}\cap B^{\prime}\). Such lifts always exist by projectivity of order zero maps from matrix algebras ([71, Theorem 4.9]), and in fact, any \({}^{*}\)-homomorphism from a separable \(C^{*}\)-algebra into \(\mathcal{M}^{\infty}\cap\mathcal{M}^{\prime}\) has an order zero lift by [68, Propositions 4.5 and 4.6].
Matui and Sato's notion of property (SI) - a kind of large-to-small comparison condition in \(B_{\infty}\cap B^{\prime}\), which they obtain from strict comparison and nuclearity of \(B\) - is designed to ensure that \(\phi\) gives rise to a copy of \(\mathcal{Z}\) in \(B_{\infty}\cap B^{\prime}\) and hence to \(\mathcal{Z}\)-stability of \(B\). Steps (i) and (iii) work generally; the challenge is at step (ii). This abstraction of Matui and Sato's strategy led to [22, Theorem 5.6], showing that for unital \(C^{*}\)-algebras as in the Toms-Winter conjecture, \(\mathcal{Z}\)-stability is equivalent to the combination of strict comparison and uniform property \(\Gamma\) (i.e. property \(\Gamma\) for the tracial completion).20

Footnote 20: See [20, Theorem A] for a non-unital statement.

Thus the following question is fundamental; by [22, Theorem 5.6] a positive answer would resolve the unital case of the Toms-Winter conjecture. The analogous result in the von Neumann setting is the first component of Connes' theorem: injective II\({}_{1}\) factors have property \(\Gamma\) (which goes through the passage from failure of property \(\Gamma\) to a spectral gap condition; see also the two new proofs [74, 73]).

**Question 1.5**.: Does every amenable type II\({}_{1}\) factorial tracially complete \(C^{*}\)-algebra satisfy property \(\Gamma\)?

When the designated set of traces \(X\) is reasonably small, Question 1.5 has a positive answer; indeed in the unique trace case this is due to Connes as mentioned above. We give a positive answer for \(\partial_{e}X\) compact and zero dimensional as Proposition 5.28. More generally, it will be shown in forthcoming work by the third- and fifth-named authors that the same holds for \(\partial_{e}X\) compact and finite-dimensional. It remains mysterious whether one should expect a positive answer in general, or whether strict comparison for a \(C^{*}\)-algebra \(A\) would imply property \(\Gamma\) for its uniform tracial completion - which if true would establish the Toms-Winter conjecture. If neither of these situations hold, then it is reasonable to ask whether amenable tracially complete \(C^{*}\)-algebras with a suitable tracial divisibility property satisfy property \(\Gamma\). In other words, if a version of Winter's tracial divisibility holds for an amenable tracially complete \(C^{*}\)-algebra, must it have property \(\Gamma\)? A positive answer to this question would resolve the modified conjecture mentioned in Footnote 18. We end with a comparison of the hypotheses in the classification of amenable algebras that we have discussed.

\begin{tabular}{l l} \hline \hline Type of algebra & Regularity \\ \hline Finite amenable von Neumann algebra & Automatic \\ Amenable factorial II\({}_{1}\) tracially complete \(C^{*}\)-algebra & Open \\ Simple separable non-elementary nuclear \(C^{*}\)-algebra & Not automatic \\ \hline \hline \end{tabular}

With the hypotheses listed in the table, 'regularity' for von Neumann algebras simply means the McDuff property; for tracially complete algebras it means either McDuff or property \(\Gamma\) (equivalent in this setting). For \(C^{*}\)-algebras as in the table, regularity can be interpreted as \(\mathcal{Z}\)-stability, and non-\(\mathcal{Z}\)-stable examples are known to exist. The other key hypothesis to \(C^{*}\)-classification is the universal coefficient theorem, which is inherently topological, and therefore doesn't have von Neumann algebra or tracially complete counterparts; whether it holds automatically remains open.
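For orientation, the known implications discussed in this section, for a unital simple separable non-elementary nuclear \(C^{*}\)-algebra \(B\), can be summarised as follows (a diagrammatic restatement of the cited results, not a new assertion): \[\text{finite nuclear dimension}\iff\mathcal{Z}\text{-stability}\iff\text{strict comparison}+\text{uniform property }\Gamma\implies\text{strict comparison},\] and the open part of the Toms-Winter conjecture is precisely the converse of the final implication: whether strict comparison alone implies \(\mathcal{Z}\)-stability.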
### Acknowledgements

We'd like to thank Ilijas Farah, Bradd Hart, Ilan Hirshberg, Gabor Szabo, and Andrea Vaccaro for discussions, and Valerie Morris, Mikkel Munkholm, and Pawel Sarkowicz for comments on drafts of the paper. Parts of the research in this paper were undertaken at: the American Institute of Mathematics as part of the SQuaRE _von Neumann techniques in the classification of C\({}^{*}\)-algebras_ (2019-2023), the _Workshop on C*-algebras: structure and dynamics_ (Sde Boker, 2022), and the Fields Institute as part of the _Thematic program on operator algebras and their applications_ (2023).

## 2. Traces and Choquet simplices

Choquet theory will play an important role throughout this paper, and we begin this preliminary section by recalling some definitions and theorems about Choquet simplices in Section 2.1. The main examples of Choquet simplices in this paper are the spaces \(T(A)\) of tracial states on a unital \(C^{*}\)-algebra \(A\) with its weak\({}^{*}\) topology, along with the closed faces of \(T(A)\). We will discuss traces in more depth in Section 2.2 and recall some results related to their GNS representations and induced von Neumann algebras for later use.

### Choquet theory

We refer the reader to [3] and [55] for general references on Choquet theory. We collect some basic definitions here along with the results needed in the main body of the paper.

Let \(X\) be a compact convex set in a locally convex space and write \(\operatorname{Aff}(X)\) for the Banach space of all continuous affine functions \(X\to\mathbb{R}\) equipped with the supremum norm. Then \(\operatorname{Aff}(X)\) is an Archimedean order unit space in the sense of [3, Section II.1] with the pointwise order and the order unit \(1_{\operatorname{Aff}(X)}\), the constant function \(1\).21 Conversely, if \(V\) is an Archimedean order unit space, then the state space \(S(V)\) of \(V\) is a compact convex set, and the natural evaluation maps

\[X\to S(\operatorname{Aff}(X))\qquad\text{and}\qquad V\to\operatorname{Aff}(S( V)) \tag{2.1}\]

are isomorphisms. This gives an anti-equivalence between the categories of compact convex sets and Archimedean order unit spaces. We refer to this result as _Kadison duality_. One important consequence is that the weak topology on \(\operatorname{Aff}(X)\) is the topology of pointwise convergence. This is standard and used, for example, in [86] and [48].

Footnote 21: Briefly, an _Archimedean order unit space_ is a triple \(V=(V,V_{+},1_{V})\) where \(V\) is a real vector space, \(V_{+}\subseteq V\) is a spanning cone, and \(1_{V}\in V_{+}\) is a distinguished element, called the _order unit_, such that \[\|v\|\coloneqq\inf\{r>0:-r1_{V}\leq v\leq r1_{V}\},\qquad v\in V,\] is a complete norm on \(V\). (Note: in the literature, completeness is not always assumed as part of the definition, but for us it is.)

**Proposition 2.1**.: _Let \(X\) be a compact convex set._

(i) _All states on \(\operatorname{Aff}(X)\) are given by point evaluations._

(ii) _If \((f_{\lambda})\subseteq\operatorname{Aff}(X)\) is a net and \(f\in\operatorname{Aff}(X)\), then \(f_{\lambda}\to f\) weakly if and only if \(f_{\lambda}(\tau)\to f(\tau)\) for all \(\tau\in X\)._

Proof.: (i). This is a consequence of Kadison duality. (ii). Note that bounded linear functionals on \(\operatorname{Aff}(X)\) extend to bounded linear functionals on \(C(X)\) by the Hahn-Banach theorem.
Since the dual of \(C(X)\) is spanned by states on \(C(X)\) and all states on \(C(X)\) restrict to states on \(\operatorname{Aff}(X)\), it follows that \(S(\operatorname{Aff}(X))\) spans \(\operatorname{Aff}(X)^{*}\). The result now follows from (i).

For a compact convex set \(X\), write \(\operatorname{Prob}(X)\) for the set of Radon probability measures on \(X\). Given \(\mu\in\operatorname{Prob}(X)\), the _barycentre_ of \(\mu\) is the unique point \(x\in X\) such that

\[\int_{X}f\,d\mu=f(x),\qquad f\in\operatorname{Aff}(X). \tag{2.3}\]

Let \(\partial_{e}X\subseteq X\) denote the set of extreme points of \(X\). We say a complex Radon measure \(\mu\) is _supported on_ \(\partial_{e}X\) if for every Baire measurable set \(E\subseteq X\) with \(\partial_{e}X\subseteq E\), we have \(|\mu|(X\setminus E)=0\).22 By the Choquet-Bishop-de Leeuw theorem (see [3, Theorem I.4.8 and Corollary I.4.12]), for every \(x\in X\), there exists \(\mu\in\operatorname{Prob}(X)\) supported on \(\partial_{e}X\) with barycentre \(x\). A compact convex set \(X\) is called a _Choquet simplex_ if this measure \(\mu\) is unique for every \(x\in X\). (For instance, every point of a closed triangle in \(\mathbb{R}^{2}\) is the barycentre of a unique probability measure on its three vertices, so the triangle is a Choquet simplex, whereas the centre of a square is the barycentre of two different measures on pairs of opposite vertices, so the square is not.)

Footnote 22: When \(X\) is metrisable, the set \(\partial_{e}X\) is a \(G_{\delta}\)-set, so \(\mu\in\operatorname{Prob}(X)\) is supported on \(\partial_{e}X\) if and only if \(\mu(\partial_{e}X)=1\). In general, \(\partial_{e}X\) is not Baire measurable (and hence not Borel measurable) – see [5, Section VII] (and [5, Lemma 4.1] to identify \(\partial_{e}X\) with \(M(B)\)).

In the finite dimensional setting, there is a unique Choquet simplex of every dimension. Specifically, if \(n\geq 0\) is an integer, then the \(n\)-dimensional Choquet simplex is given as the convex hull of an orthonormal set of \(n+1\) vectors in a Hilbert space. A Choquet simplex \(X\) is called a _Bauer simplex_ if \(\partial_{e}X\) is compact. In this case, we view \(\operatorname{Prob}(\partial_{e}X)\) as a compact convex set in the locally convex space \(C(\partial_{e}X)^{*}\). Note that there is an affine homeomorphism

\[\operatorname{Prob}(\partial_{e}X)\to X \tag{2.4}\]

given by sending a measure to the barycentre of its canonical extension to \(X\) (given by declaring that \(X\setminus\partial_{e}X\) has measure zero). Indeed, this map is bijective by the definition of a Choquet simplex, and it is easily seen to be a homeomorphism.

When \(X\) is a metrisable Choquet simplex, a result of Lazar and Lindenstrauss in [70, Corollary of Theorem 5.2] (see also [55, Theorem 11.6]) shows \(X\) can be written as a projective limit of finite dimensional Choquet simplices. As noted in [40, Lemma 2.8], this implies \(X\) satisfies the finite dimensional approximation property in Theorem 2.2 below. As we set out in Appendix A.2, this result holds generally, i.e. without a metrisability assumption on \(X\). Naturally, for results which only concern the separable situation (\(C^{*}\)-algebras which are separable in norm and tracially complete \(C^{*}\)-algebras which are separable in their uniform 2-norm), the metrisable version of Theorem 2.2 will suffice.

**Theorem 2.2**.: _If \(X\) is a Choquet simplex, then there are nets of finite dimensional Choquet simplices \(Z_{\lambda}\) and continuous affine maps_

\[X\xrightarrow{\beta_{\lambda}}Z_{\lambda}\xrightarrow{\alpha_{\lambda}}X \tag{2.5}\]

_such that \(\lim_{\lambda}\|f\circ\alpha_{\lambda}\circ\beta_{\lambda}-f\|=0\) for all \(f\in\operatorname{Aff}(X)\)._

We end this subsection with some results about closed faces in a Choquet simplex.
A _face_ in a compact convex set \(X\) is a convex set \(F\subseteq X\) with the following property: for all \(x_{1},x_{2}\in X\), if a non-trivial convex combination of \(x_{1}\) and \(x_{2}\) is in \(F\), then both \(x_{1}\) and \(x_{2}\) are in \(F\). The next two results can be viewed in analogy with the Hahn-Banach theorem. The first is an extension result, and the second is a separation result.

**Theorem 2.3** ([3, Theorem II.5.19]).: _Let \(F\) be a closed face of a Choquet simplex \(X\). For every \(f\in\operatorname{Aff}(F)\), there exists \(\hat{f}\in\operatorname{Aff}(X)\) with \(\hat{f}|_{F}=f\) and \(\|\hat{f}\|=\|f\|\)._

The following is an easy consequence of [3, Corollary II.5.20]. The first part follows the proof of [3, Proposition II.5.16].

**Theorem 2.4**.: _Let \(F\) be a closed face in a Choquet simplex \(X\)._

(i) _If \(S\subseteq X\) is an \(F_{\sigma}\)-set with \(S\cap F=\emptyset\), then there is a continuous affine function \(f\colon X\to[0,1]\) such that \(f(x)=0\) for all \(x\in F\) and \(f(x)>0\) for all \(x\in S\)._

(ii) _If \(x\in X\) and \(f(x)=0\) for all \(f\in\operatorname{Aff}(X)_{+}\) with \(f|_{F}=0\), then \(x\in F\)._

Proof.: By [3, Corollary II.5.20], closed faces in Choquet simplices are relatively exposed, which means that (i) holds when \(S\) is a point. From here, (ii) is immediate.

In (i), suppose first that \(S\) is closed (and hence compact as \(X\) is compact). For \(s\in S\), let \(f_{s}\colon X\to[0,1]\) be a continuous affine function such that \(f_{s}(x)=0\) for \(x\in F\) and \(f_{s}(s)>0\). As \(S\) is compact and each \(f_{s}\) is continuous, there are \(s_{1},\ldots,s_{n}\in S\) such that

\[\inf_{s\in S}\max_{1\leq i\leq n}f_{s_{i}}(s)>0. \tag{2.6}\]

Set \(f\coloneqq\frac{1}{n}\sum_{i=1}^{n}f_{s_{i}}\). To see the general case, consider an \(F_{\sigma}\)-set \(S\) and write \(S\) as the union of closed sets \((S_{n})_{n=1}^{\infty}\). For each \(n\geq 1\), let \(f_{n}\colon X\to[0,1]\) be a continuous affine function such that \(f_{n}(x)=0\) for all \(x\in F\) and \(f_{n}(x)>0\) for all \(x\in S_{n}\). Then set \(f\coloneqq\sum_{n=1}^{\infty}2^{-n}f_{n}\).

The final result of this subsection concerns detecting closed faces. In general, if \(X\) is a Choquet simplex and \(S\subseteq\partial_{e}X\), then the convex hull of \(S\), written \(\operatorname{co}(S)\), is a face in \(X\). However, the closed convex hull of \(S\), written \(\overline{\operatorname{co}}(S)\), need not be a face (see [2, Theorem 1]). The following result of Roy gives a replacement.

**Theorem 2.5** (cf. [95, Proposition 4.4]).: _If \(X\) is a Choquet simplex and \(F\subseteq X\) is a closed convex set, then \(F\) is a face in \(X\) if and only if \(\partial_{e}F\subseteq\partial_{e}X\)._

Proof.: The forward direction is clear. Conversely, if \(\partial_{e}F\subseteq\partial_{e}X\), then \(F_{0}\coloneqq\operatorname{co}(\partial_{e}F)\) is a face in \(X\), and by the Krein-Milman Theorem, \(F\) is the closure of \(F_{0}\). Using again that \(\partial_{e}F\subseteq\partial_{e}X\), we have that \(F\) is a face in \(X\) by [95, Proposition 4.4].

### Traces and the GNS construction

By a _trace_ on a \(C^{*}\)-algebra, we will always mean a tracial state. For a \(C^{*}\)-algebra \(A\), let \(T(A)\) denote the set of traces on \(A\) equipped with the weak\({}^{*}\) topology. Then \(T(A)\) is convex. We will typically be interested in \(C^{*}\)-algebras where \(T(A)\) is compact - this is the case for unital \(C^{*}\)-algebras, for example.
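For orientation, we record two standard examples of trace simplices; both facts are classical and are included here purely as illustrations. The matrix algebra \(M_{d}\) has a unique trace, the normalised trace \(\mathrm{tr}_{d}\), so

\[T(M_{d})=\{\mathrm{tr}_{d}\},\]

a single point. At the other extreme, for a commutative \(C^{*}\)-algebra \(C(K)\) with \(K\) compact Hausdorff, the Riesz representation theorem identifies

\[T(C(K))\cong\operatorname{Prob}(K),\]

a Bauer simplex whose extreme points are the point evaluations \(f\mapsto f(x)\) for \(x\in K\). In particular, by (2.4), every Bauer simplex \(X\) arises as the trace simplex of a commutative \(C^{*}\)-algebra, namely \(C(\partial_{e}X)\).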
The following result is folklore.

**Theorem 2.6**.: _If \(A\) is a \(C^{*}\)-algebra, then every compact face in \(T(A)\) is a Choquet simplex._

Proof.: When \(A\) is unital, \(T(A)\) is a Choquet simplex by [96, Theorem 3.1.18]. By [55, Proposition 10.9], every closed face in a Choquet simplex is a Choquet simplex. When \(A\) is non-unital with unitisation \(A^{\dagger}\), traces on \(A\) can be extended to traces on \(A^{\dagger}\), and \(\tau\in T(A^{\dagger})\) induces a trace on \(A\) if and only if \(\|\tau|_{A}\|=1\). In this way \(T(A)\) can be viewed as a face in \(T(A^{\dagger})\), and the result follows from the unital case.

We turn to the structure of continuous affine functions \(T(A)\to\mathbb{R}\), which will be used repeatedly in this paper. The result is well-known and is often attributed to Cuntz and Pedersen in [30], but the result does not explicitly appear in their paper. A proof of the first part of the result can be found in [68, Lemma 6.2] or [16, Proposition 2.1], for example.

**Proposition 2.7** (cf. [30, Proposition 2.7]).: _Let \(A\) be a unital \(C^{*}\)-algebra and let \(f\colon T(A)\to\mathbb{R}\) be a continuous affine function. Then for any \(\epsilon>0\), there is a self-adjoint element \(a\in A\) such that_

\[\|a\|\leq\|f\|_{\infty}+\epsilon\quad\text{and}\quad\tau(a)=f(\tau),\quad\tau \in T(A). \tag{2.7}\]

_Moreover, if \(f\) is strictly positive, we may assume \(a\geq 0\)._

Proof.: We only prove the last sentence. Assume that \(f\) is strictly positive. Since \(T(A)\) is compact, there is a \(\delta\in(0,2\epsilon)\) such that \(f(\tau)>\delta\) for all \(\tau\in T(A)\). Apply the first part of the proposition to \(f-\frac{1}{2}(\|f\|_{\infty}+\delta)\), which has norm at most \(\frac{1}{2}(\|f\|_{\infty}-\delta)\), to obtain a self-adjoint \(b\in A\) such that

\[\|b\|\leq\frac{1}{2}\|f\|_{\infty}\qquad\text{and}\qquad\tau(b)=f(\tau)-\frac{ 1}{2}(\|f\|_{\infty}+\delta) \tag{2.8}\]

for all \(\tau\in T(A)\). Then set \(a\coloneqq b+\frac{1}{2}(\|f\|_{\infty}+\delta)1_{A}\).

The essential starting point for all of our classification results is the classification of projections in finite von Neumann algebras by traces (parts (iii) and (iv) in the proposition below), which goes back to Murray and von Neumann. We review these results below; while these are well-known to experts, they are most often stated for factors - particularly the existence of projections realising arbitrary continuous affine functions - and in this paper we need to work with arbitrary finite von Neumann algebras.

**Proposition 2.8**.: _Let \(\mathcal{M}\) be a finite von Neumann algebra._

(i) _Every trace on \(\mathcal{M}\) factors uniquely through the centre-valued trace._23

(ii) _The normal traces on \(\mathcal{M}\) are dense in the traces on \(\mathcal{M}\)._

(iii) _Projections \(p,q\in\mathcal{M}\) are unitarily equivalent if and only if \(\tau(p)=\tau(q)\) for all \(\tau\in T(\mathcal{M})\); \(p\) is Murray-von Neumann subequivalent to \(q\) if and only if \(\tau(p)\leq\tau(q)\) for all \(\tau\in T(\mathcal{M})\)._

(iv) _If \(\mathcal{M}\) is type \(\mathrm{II}_{1}\) and \(f\colon T(\mathcal{M})\to[0,1]\) is a continuous affine function, then there is a projection \(p\in\mathcal{M}\) such that \(\tau(p)=f(\tau)\) for all \(\tau\in T(\mathcal{M})\)._

Footnote 23: See [104, Theorem V.2.6] or [62, Theorem 8.2.8] for properties of the centre-valued trace.

Proof.: (i).
This is a consequence of Dixmier's approximation theorem (see [62, Proposition 8.3.10] or [7, Theorem III.2.5.7(iv)]).

(ii). As the centre-valued trace on \(\mathcal{M}\) is normal (see [103, Theorem V.2.34]), the result follows from (i) and the density of the normal states in the states on the centre of \(\mathcal{M}\).

(iii). The second part of the statement follows from Murray and von Neumann's comparison theorem for projections in von Neumann algebras, the properties of the centre-valued trace, and (i); see [104, Corollary V.2.8], for example. The first part follows from applying the second part both to \(p\) and \(q\) and to \(1_{\mathcal{M}}-p\) and \(1_{\mathcal{M}}-q\).

(iv). This proceeds in a very similar way to the proof that \(\mathrm{II}_{1}\) factors contain projections of arbitrary traces in \([0,1]\) (see [7, Theorem III.1.7.9 and Paragraph III.1.7.10], for example). As we have been unable to find a reference, we give the details for completeness. Let \(P\) be the set of projections \(p\in\mathcal{M}\) such that \(\tau(p)\leq f(\tau)\) for all \(\tau\in T(\mathcal{M})\). Then \(P\neq\emptyset\) as \(0\in P\). Also, if \((p_{\lambda})\) is an increasing chain of projections in \(\mathcal{M}\) with supremum \(p\in\mathcal{M}\), then for any normal trace \(\tau\) on \(\mathcal{M}\) we have

\[\tau(p)=\lim_{\lambda}\tau(p_{\lambda})\leq f(\tau). \tag{2.9}\]

As \(f\) is weak\({}^{*}\)-continuous, part (ii) gives \(p\in P\). By Zorn's Lemma, there is a maximal \(p\in P\). We will show \(f(\tau)=\tau(p)\) for all \(\tau\in T(\mathcal{M})\). Suppose this is not the case. By the Krein-Milman theorem, there exists \(\tau_{0}\in\partial_{e}T(\mathcal{M})\) with \(\tau_{0}(p)<f(\tau_{0})\). Set \(\epsilon\coloneqq\frac{1}{2}(f(\tau_{0})-\tau_{0}(p))\). By (i), there is an isomorphism

\[Z(\mathcal{M})\xrightarrow{\cong}C(\partial_{e}T(\mathcal{M}))\colon a\mapsto \bigl{(}\tau\mapsto\tau(a)\bigr{)}. \tag{2.10}\]

In particular, \(\partial_{e}T(\mathcal{M})\) is totally disconnected. Let \(U\subseteq\partial_{e}T(\mathcal{M})\) be a clopen neighbourhood of \(\tau_{0}\) so that

\[\tau(p)<f(\tau)-\epsilon \tag{2.11}\]

for all \(\tau\in U\), and let \(z\in Z(\mathcal{M})\) be the projection corresponding to the characteristic function of \(U\) under the isomorphism in (2.10). Since \(\tau(1_{\mathcal{M}}-p)\geq\epsilon\) for all \(\tau\in U\), it follows that

\[\tau(z(1-p))\geq\epsilon \tag{2.12}\]

for all \(\tau\in\partial_{e}T(z\mathcal{M})\), and hence for all \(\tau\in T(z\mathcal{M})\). Fix \(d\geq 1\) with \(1/d<\epsilon\). Since \(\mathcal{M}\) is type \(\mathrm{II}_{1}\), so is the corner \(z\mathcal{M}\). By the proof of [104, Theorem V.1.35] (which handles the case \(d=2\)), there is a unital embedding \(M_{d}\to z\mathcal{M}\), and in particular, there is a projection \(e\in z\mathcal{M}\) so that \(\tau(e)=1/d\) for all \(\tau\in T(z\mathcal{M})\). By comparison of projections in the von Neumann algebra \(z\mathcal{M}\), it follows that \(e\) is Murray-von Neumann subequivalent to \(z(1-p)\). Hence after conjugating \(e\) by a unitary in \(z\mathcal{M}\), we may assume \(e\) is orthogonal to \(zp\). Since \(e\) is also orthogonal to the central projection \(z^{\perp}\), it follows that \(e\) is orthogonal to \(p\). Set \(p^{\prime}\coloneqq p+e\). Since \(e\) is orthogonal to \(p\), it follows that \(p^{\prime}\) is a projection.
For \(\tau\in\partial_{e}T(\mathcal{M})\setminus U\), we have \(\tau(p^{\prime})=\tau(p)\leq f(\tau)\), and for \(\tau\in U\), we have \(\tau(p^{\prime})=\tau(p)+\frac{1}{d}<f(\tau)\). By the Krein-Milman theorem, it follows that \(\tau(p^{\prime})\leq f(\tau)\) for all \(\tau\in T(\mathcal{M})\), and so \(p^{\prime}\in P\). This contradicts the maximality of \(p\).

Given a \(C^{*}\)-algebra \(A\) and \(\tau\in T(A)\), let \(\pi_{\tau}\colon A\to\mathcal{B}(\mathcal{H}_{\tau})\) be the GNS representation associated to \(\tau\). The induced von Neumann algebra \(\pi_{\tau}(A)^{\prime\prime}\) has a faithful normal trace induced by \(\tau\) and hence is finite. Therefore, \(\pi_{\tau}(A)^{\prime\prime}\) is the direct sum of a finite von Neumann algebra of type I and a von Neumann algebra of type II\({}_{1}\). Under the assumption that \(A\) has no finite dimensional quotients, \(\pi_{\tau}(A)^{\prime\prime}\) is of type II\({}_{1}\) - this will be a common hypothesis throughout the paper.

The following well-known characterisation of the extreme points of \(T(A)\) will be used frequently in the paper. A version of the result for not necessarily bounded tracial weights is given in [33, Theorem 6.7.3] - the result below is a special case.24

Footnote 24: Note that an extreme point of \(T(A)\) is precisely a character of norm \(1\) in the sense of [33, Definition 6.7.1].

**Proposition 2.9** (cf. [33, Theorem 6.7.3]).: _If \(A\) is a unital \(C^{*}\)-algebra and \(\tau\in T(A)\), then \(\tau\) is an extreme point of \(T(A)\) if and only if \(\pi_{\tau}(A)^{\prime\prime}\) is a factor._

The following lemma relates traces on the von Neumann algebra \(\pi_{\tau}(A)^{\prime\prime}\) to traces on \(A\). For a \(C^{*}\)-algebra \(A\) and a set of traces \(X\subseteq T(A)\), we define

\[\pi_{X}\coloneqq\bigoplus_{\tau\in X}\pi_{\tau}\colon A\to\mathcal{B}\Big{(} \bigoplus_{\tau\in X}\mathcal{H}_{\tau}\Big{)} \tag{2.13}\]

to be the direct sum of the GNS representations associated with the traces in \(X\).

**Lemma 2.10**.: _Suppose \(A\) is a unital \(C^{*}\)-algebra and \(X\subseteq T(A)\). If \(\tau\in T(\pi_{X}(A)^{\prime\prime})\), then \(\tau\circ\pi_{X}\) is in the closed face generated by \(X\)._

Proof.: Let \(F\subseteq T(A)\) be the weak\({}^{*}\)-closed face generated by \(X\). By the density of the normal traces on \(\pi_{X}(A)^{\prime\prime}\) in the traces on \(\pi_{X}(A)^{\prime\prime}\) (Proposition 2.8(ii)), we may assume \(\tau\) is normal. By Theorem 2.4(ii), it suffices to show that if \(f\colon T(A)\to[0,1]\) is a continuous affine function vanishing on \(X\), then \(f(\tau\circ\pi_{X})=0\). Fix such an affine function \(f\). By applying Proposition 2.7 to the strictly positive affine functions \(f+\frac{1}{n}\) for each \(n\in\mathbb{N}\), we get a bounded sequence \((a_{n})_{n=1}^{\infty}\subseteq A_{+}\) such that

\[\lim_{n\to\infty}\sup_{\sigma\in T(A)}|\sigma(a_{n})-f(\sigma)|=0. \tag{2.14}\]

It follows that \(\sigma(a_{n})\to 0\) for all \(\sigma\in X\). Since \(a_{n}\geq 0\) and the sequence \((a_{n})_{n=1}^{\infty}\) is bounded, this implies \(\pi_{X}(a_{n})\to 0\) strongly in \(\pi_{X}(A)^{\prime\prime}\).25 Since \(\tau\) is normal, we have \(\tau(\pi_{X}(a_{n}))\to 0\), and hence using (2.14) with \(\sigma=\tau\circ\pi_{X}\) implies that \(f(\tau\circ\pi_{X})=0\).

## 3. Tracially complete \(C^{*}\)-algebras

In this section, we introduce tracially complete \(C^{*}\)-algebras, which are the central objects of study in this paper.
We discuss the motivating examples, including the tracial completion of a \(C^{*}\)-algebra, and prove some basic approximation and extension lemmas. We then construct inductive limits in the category of tracially complete \(C^{*}\)-algebras. In the final subsection, we examine the relationship between Ozawa's \(W^{*}\)-bundles ([86]) and tracially complete \(C^{*}\)-algebras.

### Definitions and basic properties

The definition of tracially complete \(C^{*}\)-algebras is based on uniform \(2\)-norms.

**Definition 3.1** (cf. [68, 97, 110, 86], for example).: For a \(C^{*}\)-algebra \(A\) and a set \(X\subseteq T(A)\), define the _uniform \(2\)-seminorm_ on \(A\) by

\[\|a\|_{2,X}\coloneqq\sup_{\tau\in X}\sqrt{\tau(a^{*}a)},\qquad a\in A. \tag{3.1}\]

The uniform \(2\)-seminorm is indeed a seminorm, and the following inequality is easily verified:

\[\|ab\|_{2,X}\leq\min\{\|a\|\|b\|_{2,X},\|a\|_{2,X}\|b\|\},\qquad a,b\in A. \tag{3.2}\]

It is easily seen that \(\|\cdot\|_{2,X}\) is a norm if and only if for every non-zero \(a\in A_{+}\) there exists \(\tau\in X\) with \(\tau(a)>0\). In this case, \(X\) is said to be a _faithful set of traces_, and we refer to \(\|\cdot\|_{2,X}\) as the _uniform \(2\)-norm_ with respect to \(X\). The following properties of the uniform \(2\)-norm are standard in the unique trace setting (i.e. when \(X\) is a singleton).

**Proposition 3.2**.: _Suppose \(A\) is a \(C^{*}\)-algebra and \(X\subseteq T(A)\) is a faithful set of traces._

(i) \(\|\cdot\|\) _is lower semi-continuous with respect to_ \(\|\cdot\|_{2,X}\)_;_

(ii) _The unit ball of_ \(A\) _is_ \(\|\cdot\|_{2,X}\)_-closed;_

(iii) \(A_{+}\) _is_ \(\|\cdot\|_{2,X}\)_-closed._

Proof.: Fix \((a_{n})_{n=1}^{\infty}\subseteq A\) and \(a\in A\) with \(\|a_{n}-a\|_{2,X}\to 0\). For \(\tau\in X\), let \(\pi_{\tau}\colon A\to\mathcal{B}(\mathcal{H}_{\tau})\) denote the GNS representation of \(\tau\) with associated cyclic vector \(\xi_{\tau}\in\mathcal{H}_{\tau}\). If \(b\in A\), then

\[\begin{split}\|(\pi_{\tau}(a_{n})-\pi_{\tau}(a))\pi_{\tau}(b)\xi _{\tau}\|&=\|(a_{n}-a)b\|_{2,\tau}\\ &\leq\|a_{n}-a\|_{2,X}\|b\|.\end{split} \tag{3.3}\]

So \(\pi_{\tau}(a_{n})\eta\) converges to \(\pi_{\tau}(a)\eta\) for all \(\eta\in\pi_{\tau}(A)\xi_{\tau}\). As \(\pi_{\tau}(A)\xi_{\tau}\) is dense in \(\mathcal{H}_{\tau}\),

\[\|\pi_{\tau}(a)\|\leq\liminf_{n\to\infty}\|\pi_{\tau}(a_{n})\|\leq\liminf_{n \to\infty}\|a_{n}\|. \tag{3.4}\]

The product of the \(\pi_{\tau}\) over \(\tau\in X\) is faithful, and hence isometric, since \(\|\cdot\|_{2,X}\) is a norm on \(A\). Therefore,

\[\|a\|=\sup_{\tau\in X}\|\pi_{\tau}(a)\|\leq\liminf_{n\to\infty}\|a_{n}\|. \tag{3.5}\]

This proves (i), and (ii) is an easy consequence of (i). For (iii), suppose now that \(a_{n}\geq 0\) for all \(n\in\mathbb{N}\). Fix \(\tau\in X\). By (3.3), we have

\[\langle\pi_{\tau}(a)\pi_{\tau}(b)\xi_{\tau},\pi_{\tau}(b)\xi_{\tau}\rangle=\lim_ {n\to\infty}\langle\pi_{\tau}(a_{n})\pi_{\tau}(b)\xi_{\tau},\pi_{\tau}(b)\xi_{ \tau}\rangle\geq 0, \tag{3.6}\]

since \(\pi_{\tau}(a_{n})\geq 0\) for all \(n\in\mathbb{N}\). The density of \(\pi_{\tau}(A)\xi_{\tau}\) in \(\mathcal{H}_{\tau}\) implies \(\langle\pi_{\tau}(a)\eta,\eta\rangle\geq 0\) for all \(\eta\in\mathcal{H}_{\tau}\), so \(\pi_{\tau}(a)\geq 0\). As \(X\) is a faithful set of traces, the product of the \(\pi_{\tau}\) over \(\tau\in X\) is faithful. Since \(\pi_{\tau}(a)\geq 0\) for all \(\tau\in X\), it follows that \(a\geq 0\).
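As a concrete illustration of the uniform \(2\)-norm (this is a standard computation, recorded here for the reader's convenience), take \(A=C(K)\) for a compact Hausdorff space \(K\) and \(X=T(A)\cong\operatorname{Prob}(K)\). For \(f\in C(K)\) and \(\mu\in\operatorname{Prob}(K)\),

\[\mu(f^{*}f)=\int_{K}|f|^{2}\,\mathrm{d}\mu\leq\|f\|_{\infty}^{2},\]

and testing against point masses \(\delta_{x}\) shows the supremum is attained, so \(\|f\|_{2,X}=\|f\|_{\infty}\). Thus, over a sufficiently large set of traces, the uniform \(2\)-norm can recover the operator norm - in sharp contrast to the \(2\)-norm arising from a single trace on, say, a II\({}_{1}\) factor.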
We will restrict to the case when \(X\subseteq T(A)\) is weak\({}^{*}\)-compact since the weak\({}^{*}\)-closure of \(X\) defines the same uniform 2-norm as \(X\). The following observation will allow us to further restrict to the case when \(X\) is convex. For a set \(X\subseteq T(A)\), let \(\overline{\mathrm{co}}(X)\) denote the closed convex hull of \(X\) in \(A^{*}\).

**Proposition 3.3**.: _If \(A\) is a \(C^{*}\)-algebra and \(X\subseteq T(A)\) is compact, then \(\overline{\mathrm{co}}(X)\subseteq T(A)\) and \(\|a\|_{2,X}=\|a\|_{2,\overline{\mathrm{co}}(X)}\) for all \(a\in A\)._

Proof.: Every \(\tau\in\overline{\mathrm{co}}(X)\) is a positive tracial functional with \(\|\tau\|\leq 1\). To see the first claim, it remains to show that \(\|\tau\|\geq 1\). Let \((e_{\lambda})\subseteq A\) be an approximate unit for \(A\). Then \((\tau(e_{\lambda}))\) increases to \(1\) for all \(\tau\in X\). By Dini's Theorem,26 this convergence is uniform over \(\tau\). Fix \(\epsilon>0\) and let \(\lambda\) be such that \(\tau(e_{\lambda})>1-\epsilon\) for all \(\tau\in X\). Then \(\tau(e_{\lambda})\geq 1-\epsilon\) for all \(\tau\in\overline{\mathrm{co}}(X)\). Therefore, each \(\tau\in\overline{\mathrm{co}}(X)\) has norm \(1\).

Footnote 26: Dini's Theorem is often stated for increasing sequences of continuous functions, but the standard proof is equally valid for increasing nets of continuous functions.

Let \(a\in A\). The set of all traces \(\tau\in T(A)\) with \(\tau(a^{*}a)\leq\|a\|_{2,X}^{2}\) is weak\({}^{*}\)-closed, convex, and contains \(X\). Therefore, \(\|a\|_{2,\overline{\mathrm{co}}(X)}\leq\|a\|_{2,X}\). The reverse inequality is trivial as \(X\subseteq\mathrm{co}(X)\).

A tracial von Neumann algebra (i.e. a von Neumann algebra with a distinguished faithful normal trace) can be characterised abstractly as a pair \((\mathcal{M},\tau)\) where \(\mathcal{M}\) is a \(C^{*}\)-algebra and \(\tau\) is a faithful trace on \(\mathcal{M}\) such that the unit ball of \(\mathcal{M}\) is \(\|\cdot\|_{2,\tau}\)-complete (this follows from [100, Lemma A.3.3], for example). A morphism from a tracial von Neumann algebra \((\mathcal{M},\tau_{\mathcal{M}})\) to another tracial von Neumann algebra \((\mathcal{N},\tau_{\mathcal{N}})\) is a (necessarily normal) \({}^{*}\)-homomorphism \(\phi\colon\mathcal{M}\to\mathcal{N}\) such that \(\tau_{\mathcal{N}}\circ\phi=\tau_{\mathcal{M}}\). Our definitions of tracially complete \(C^{*}\)-algebras and their morphisms are modelled on these.

**Definition 3.4**.: A _tracially complete \(C^{*}\)-algebra_ is a pair \((\mathcal{M},X)\) where \(\mathcal{M}\) is a \(C^{*}\)-algebra and \(X\subseteq T(\mathcal{M})\) is a compact convex set such that

1. \(X\) is a faithful set of traces on \(\mathcal{M}\), and
2. the unit ball of \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-complete.

Given two tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) and \((\mathcal{N},Y)\), a _morphism_ \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) between tracially complete \(C^{*}\)-algebras is a \({}^{*}\)-homomorphism \(\phi\colon\mathcal{M}\to\mathcal{N}\) such that \(\phi^{*}(Y)\subseteq X\); i.e. \(\tau\circ\phi\in X\) whenever \(\tau\in Y\).

Strictly speaking, \(X\) can be the empty set, which forces \(\mathcal{M}\) to be the zero \(C^{*}\)-algebra (which is unital). Needless to say, this degenerate case is not of interest. Implicitly we always imagine \(X\) to be non-empty.
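As an elementary illustration of Definition 3.4, consider the matrix algebra \(M_{d}\) together with the singleton \(X=\{\mathrm{tr}_{d}\}\), where \(\mathrm{tr}_{d}\) is the normalised trace. Faithfulness is immediate, and since \(\mathrm{tr}_{d}(a^{*}a)\) is the average of the \(d\) eigenvalues of \(a^{*}a\) while \(\|a\|^{2}\) is the largest of them,

\[\tfrac{1}{d}\|a\|^{2}\leq\|a\|_{2,\mathrm{tr}_{d}}^{2}=\mathrm{tr}_{d}(a^{*}a) \leq\|a\|^{2},\qquad a\in M_{d}.\]

Hence \(\|\cdot\|_{2,\mathrm{tr}_{d}}\) and the operator norm are equivalent, completeness of the unit ball is automatic, and \((M_{d},\{\mathrm{tr}_{d}\})\) is a tracially complete \(C^{*}\)-algebra - the simplest instance of a tracial von Neumann algebra viewed as a tracially complete \(C^{*}\)-algebra over a single trace.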
A major source of examples of tracially complete \(C^{*}\)-algebras is obtained by completing a \(C^{*}\)-algebra with respect to a compact convex set of traces. As a special case, if \(A\) is a \(C^{*}\)-algebra and \(\tau\) is a trace on \(A\), then the GNS-completion \((\pi_{\tau}(A)^{\prime\prime},\tau)\) is a tracial von Neumann algebra which coincides with the tracial completion of \(A\) with respect to \(\{\tau\}\). Tracial completions will be discussed further in Section 3.3 below. Another significant source of examples is the class of \(W^{*}\)-bundles, due to Ozawa ([86]).

**Definition 3.5** ([86, Section 5]).: Let \(K\) be a compact Hausdorff space.27 A _\(W^{*}\)-bundle_ over \(K\) is a unital \(C^{*}\)-algebra \(\mathcal{M}\) together with a unital embedding \(C(K)\subseteq Z(\mathcal{M})\) and a conditional expectation \(E\colon\mathcal{M}\to C(K)\) such that

1. \(E\) is tracial in the sense that \(E(ab)=E(ba)\) for all \(a,b\in\mathcal{M}\), and
2. the unit ball of \(\mathcal{M}\) is complete with respect to the norm \(\|\cdot\|_{2,E}\), defined by \(\|a\|_{2,E}\coloneqq\|E(a^{*}a)\|^{1/2}\) for all \(a\in\mathcal{M}\).

Footnote 27: In [86], it is assumed \(K\) is metrisable, but this is not needed here. The non-metrisable case was also considered in [10] where ultraproducts of \(W^{*}\)-bundles were developed.

The _fibre_ of \(\mathcal{M}\) at a point \(x\in K\) is \(\pi_{\tau_{x}}(\mathcal{M})\), where \(\tau_{x}=\operatorname{ev}_{x}\circ E\); this is equal to \(\pi_{\tau_{x}}(\mathcal{M})^{\prime\prime}\) by [86, Theorem 11]. Every \(W^{*}\)-bundle gives rise to a tracially complete \(C^{*}\)-algebra.

**Proposition 3.6**.: _Let \(\mathcal{M}\) be a \(W^{*}\)-bundle over \(K\) with conditional expectation \(E\colon\mathcal{M}\to C(K)\). Let \(X\) be the set of traces on \(\mathcal{M}\) of the form_

\[\tau(a)=\int_{K}E(a)\,\mathrm{d}\mu,\qquad a\in\mathcal{M}, \tag{3.7}\]

_for \(\mu\in\operatorname{Prob}(K)\), the space of Radon probability measures on \(K\). Then \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra and \(X\) is a Bauer simplex with extreme boundary \(\partial_{e}X\cong K\)._

Proof.: There is a continuous affine embedding

\[\operatorname{Prob}(K)\to T(\mathcal{M})\colon\mu\mapsto\int_{K}E(\,\cdot\,) \,\mathrm{d}\mu. \tag{3.8}\]

If \(X\subseteq T(\mathcal{M})\) denotes the image of this map, then \(X\) is compact and convex, and the map above restricts to a homeomorphism \(K\to\partial_{e}X\). Since \(X\) is the closed convex hull of its extreme points, we have

\[\|a\|_{2,X}=\|a\|_{2,\partial_{e}X}=\|E(a^{*}a)\|^{1/2}=\|a\|_{2,E} \tag{3.9}\]

for all \(a\in\mathcal{M}\). Since \(\mathcal{M}\) is a \(W^{*}\)-bundle, it follows that \(\|\cdot\|_{2,X}\) is a norm and the unit ball of \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-complete.

We will give a partial converse of the previous result (also essentially due to Ozawa) in Section 3.6. The most straightforward examples of \(W^{*}\)-bundles are the trivial bundles.

**Example 3.7**.: (cf. [86, Theorem 13 and the preceding paragraph]) Let \(K\) be a compact Hausdorff space and let \((\mathcal{N},\tau_{\mathcal{N}})\) be a tracial von Neumann algebra. Define \(C_{\sigma}(K,\mathcal{N})\) to be the set of \(\|\cdot\|\)-bounded, \(\|\cdot\|_{2,\tau_{\mathcal{N}}}\)-continuous functions \(K\to\mathcal{N}\). This is a unital \(C^{*}\)-algebra.
Let \(\operatorname{Prob}(K)\) be the set of Radon probability measures on \(K\), and for \(\mu\in\operatorname{Prob}(K)\), define \(\tau_{\mu}\in T\big{(}C_{\sigma}(K,\mathcal{N})\big{)}\) by

\[\tau_{\mu}(f)\coloneqq\int_{K}\tau_{\mathcal{N}}(f(x))\,\mathrm{d}\mu(x), \qquad f\in C_{\sigma}(K,\mathcal{N}). \tag{3.10}\]

Defining \(X\coloneqq\{\tau_{\mu}:\mu\in\operatorname{Prob}(K)\}\), one finds that

\[\|f\|_{2,X}=\max_{x\in K}\|f(x)\|_{2,\tau_{\mathcal{N}}},\qquad f\in C_{\sigma }(K,\mathcal{N}). \tag{3.11}\]

Then \(\big{(}C_{\sigma}(K,\mathcal{N}),X\big{)}\) is a tracially complete \(C^{*}\)-algebra called the _trivial \(W^{*}\)-bundle_ over \(K\) with fibre \((\mathcal{N},\tau_{\mathcal{N}})\).

For any compact space \(K\), we note that \(\mathcal{M}\coloneqq C(K)\) together with \(X\coloneqq T(\mathcal{M})\cong\operatorname{Prob}(K)\) is a tracially complete \(C^{*}\)-algebra where \(\|\cdot\|\) and \(\|\cdot\|_{2,X}\) agree. More generally, if \(M_{n}\) denotes the \(C^{*}\)-algebra of \(n\times n\) matrices over \(\mathbb{C}\), then \(\mathcal{M}\coloneqq C(K,M_{n})\) together with \(X\coloneqq T(\mathcal{M})\cong\operatorname{Prob}(K)\) is a tracially complete \(C^{*}\)-algebra where \(\|\cdot\|\) and \(\|\cdot\|_{2,X}\) are equivalent norms. These examples show that we cannot expect tracially complete \(C^{*}\)-algebras to behave like von Neumann algebras in general.

An important tool for investigating the structure of a tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) is the GNS representation \(\pi_{\tau}\colon\mathcal{M}\to\mathcal{B}(\mathcal{H}_{\tau})\) for \(\tau\in X\). Each \(\tau\in X\) induces a faithful trace on the von Neumann algebra \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\). We view these GNS representations as giving rise to 'fibres' of tracially complete \(C^{*}\)-algebras. We are intentionally somewhat vague as to the formal definition of the fibre at \(\tau\): should it be \(\pi_{\tau}(\mathcal{M})\) or \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\), and should we use all traces in \(X\) or restrict to \(\partial_{e}X\) (as in the \(W^{*}\)-bundle case)? For non-extreme traces, \(\pi_{\tau}(\mathcal{M})\) and \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) can differ, even when \(\mathcal{M}\) is a \(W^{*}\)-bundle (i.e. \(X\) is Bauer). We return to this discussion with Question 3.16.

For many of our later results in this paper, we will want to restrict to the case that the von Neumann algebras \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) are all type II\({}_{1}\).

**Definition 3.8**.: A tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) is said to be _type_ II\({}_{1}\) if \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is a type II\({}_{1}\) von Neumann algebra for every \(\tau\in X\).

We now turn to proving some basic properties of tracially complete \(C^{*}\)-algebras. The following two results are both standard in the setting of tracial von Neumann algebras.

**Proposition 3.9**.: _Every tracially complete \(C^{*}\)-algebra is unital._

Proof.: Suppose \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra and let \((e_{\lambda})\) be an approximate unit for \(\mathcal{M}\). Then \((\tau(e_{\lambda}))\subseteq\mathbb{R}\) is an increasing net converging to \(1\) for all \(\tau\in X\), and since \(X\) is compact, Dini's Theorem implies this convergence is uniform in \(\tau\).
As \((e_{\lambda})\) is an increasing net of positive contractions,

\[\|e_{\lambda}-e_{\mu}\|_{2,X}^{2}=\sup_{\tau\in X}\tau\big{(}(e_{\lambda}-e_{ \mu})^{2}\big{)}\leq\sup_{\tau\in X}\tau(e_{\lambda}-e_{\mu}), \tag{3.12}\]

whenever \(\lambda\geq\mu\). Hence, \((e_{\lambda})\) is a \(\|\cdot\|_{2,X}\)-Cauchy net in the unit ball of \(\mathcal{M}\). Since \(\mathcal{M}\) is tracially complete, \((e_{\lambda})\) \(\|\cdot\|_{2,X}\)-converges to a positive contraction \(e\in\mathcal{M}\) by Proposition 3.2(iii). For all \(a\in\mathcal{M}\), working in the unitisation \(\mathcal{M}^{\dagger}\) of \(\mathcal{M}\) and extending each \(\tau\in X\) to \(\mathcal{M}^{\dagger}\),

\[\|(1_{\mathcal{M}^{\dagger}}-e)a\|_{2,X}^{2}\leq\sup_{\tau\in X}\tau(1_{ \mathcal{M}^{\dagger}}-e)\|a\|^{2}=0, \tag{3.13}\]

and hence \(e\) is a unit for \(\mathcal{M}\).

**Proposition 3.10**.: _Every morphism of tracially complete \(C^{*}\)-algebras is unital and contractive with respect to the uniform 2-norms._

Proof.: Suppose \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) is a morphism between tracially complete \(C^{*}\)-algebras. To see \(\phi\) is contractive, fix \(\tau\in Y\) and \(a\in\mathcal{M}\). Then \(\tau\circ\phi\in X\) by the definition of a morphism, so

\[\tau\big{(}\phi(a)^{*}\phi(a)\big{)}=(\tau\circ\phi)(a^{*}a)\leq\|a\|_{2,X}^{2}. \tag{3.14}\]

Taking the supremum over \(\tau\) yields \(\|\phi(a)\|_{2,Y}\leq\|a\|_{2,X}\) for all \(a\in\mathcal{M}\). To see \(\phi\) is unital, first note that \(\phi(1_{\mathcal{M}})\leq 1_{\mathcal{N}}\). If \(\tau\in Y\), then \(\tau\circ\phi\in X\), so \(\tau(\phi(1_{\mathcal{M}}))=1\). Therefore, \(\big{\|}1_{\mathcal{N}}-\phi(1_{\mathcal{M}})\big{\|}_{2,Y}=0\), and hence \(\phi(1_{\mathcal{M}})=1_{\mathcal{N}}\).

Note that if \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra and \(\mathcal{N}\subseteq\mathcal{M}\) is a \(\|\cdot\|_{2,X}\)-closed unital \(C^{*}\)-subalgebra,28 we may form a tracially complete \(C^{*}\)-algebra \((\mathcal{N},Y)\) where

\[Y\coloneqq\{\tau|_{\mathcal{N}}:\tau\in X\}\subseteq T(\mathcal{N}). \tag{3.15}\]

Footnote 28: Note that a \(C^{*}\)-subalgebra \(\mathcal{N}\subseteq\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-closed if and only if the unit ball of \(\mathcal{N}\) is \(\|\cdot\|_{2,X}\)-closed in the unit ball of \(\mathcal{M}\). The forward direction is immediate, and the backward direction follows from Lemma 3.27 below.

We call \((\mathcal{N},Y)\) a _tracially complete \(C^{*}\)-subalgebra_ of \((\mathcal{M},X)\). This suggests the following notion of an embedding of tracially complete \(C^{*}\)-algebras.

**Definition 3.11**.: Let \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) be tracially complete \(C^{*}\)-algebras. A morphism \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) is called an _embedding_ if \(\phi^{*}(Y)=X\).

The next proposition justifies the terminology.

**Proposition 3.12**.: _A morphism \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) is an embedding if and only if \(\|\phi(a)\|_{2,Y}=\|a\|_{2,X}\) for all \(a\in\mathcal{M}\). Further, if \(\phi\) is an embedding, then \(\phi\) is isometric in the operator norm._

Proof.: If \(\phi\) is an embedding, then for \(a\in\mathcal{M}\), we have

\[\|\phi(a)\|_{2,Y}=\sup_{\tau\in Y}\tau(\phi(a^{*})\phi(a))^{1/2}=\sup_{\sigma \in X}\sigma(a^{*}a)^{1/2}=\|a\|_{2,X}. \tag{3.16}\]

To show the converse, suppose \(\phi^{*}(Y)\neq X\) and fix \(\tau_{0}\in X\setminus\phi^{*}(Y)\).
As \(\phi^{*}(Y)\) is weak\({}^{*}\)-closed and convex, there exists a self-adjoint \(a\in\mathcal{M}\) such that \(\tau_{0}(a)>\sup_{\sigma\in\phi^{*}(Y)}\sigma(a)\) by the Hahn-Banach theorem. Replacing \(a\) by \(a+\|a\|1_{\mathcal{M}}\), we may assume \(a\in\mathcal{M}_{+}\). We then have

\[\|a^{1/2}\|_{2,X}^{2}\geq\tau_{0}(a)>\|\phi(a^{1/2})\|_{2,Y}^{2}, \tag{3.17}\]

so \(\phi\) is not isometric in the uniform 2-norm. When \(\phi\) is an embedding, it is injective since \(\|\cdot\|_{2,X}\) is a norm, and therefore, it is isometric in the operator norm.

### Factoriality

The following class of tracially complete \(C^{*}\)-algebras will be of greatest interest to us. This class includes the tracial completion of a unital \(C^{*}\)-algebra \(A\) with respect to the trace simplex \(T(A)\), which is the main example of interest (see Proposition 3.23(iv)).

**Definition 3.13**.: A tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) is said to be _factorial_ if \(X\) is a closed face of \(T(\mathcal{M})\).

When \(X\) is a singleton, a tracially complete \(C^{*}\)-algebra is a tracial von Neumann algebra \((\mathcal{M},\tau)\), and in this case, \(\{\tau\}\) is a face of \(T(\mathcal{M})\) if and only if \(\mathcal{M}\) is a factor.29 Further, the trivial \(W^{*}\)-bundle \(C_{\sigma}(K,\mathcal{N})\) from Example 3.7 is factorial as a tracially complete \(C^{*}\)-algebra if and only if the fibre \(\mathcal{N}\) is a factor. An analogous result holds for non-trivial \(W^{*}\)-bundles (see Proposition 3.6). More generally, we have the following result.

Footnote 29: If \(\mathcal{M}\) is a factor, then \(T(\mathcal{M})=\{\tau\}\), and in particular, \(\{\tau\}\) is a face of \(T(\mathcal{M})\). Conversely, if \(\{\tau\}\) is a face of \(T(\mathcal{M})\), then \(\mathcal{M}\cong\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is a factor since \(\tau\) is extremal (see Proposition 2.9).

**Proposition 3.14**.: _A tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) is factorial if and only if \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is a factor for every extreme point \(\tau\) of \(X\)._

Proof.: By Theorem 2.5, \((\mathcal{M},X)\) is factorial if and only if \(\partial_{e}X\subseteq\partial_{e}T(\mathcal{M})\). Also, by Proposition 2.9, \(\partial_{e}X\subseteq\partial_{e}T(\mathcal{M})\) if and only if \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is a factor for all \(\tau\in\partial_{e}X\).

The following result shows the equivalence of the two versions of the question stated as the trace problem (Question 1.1). The proof of this proposition illustrates the utility of factoriality: it allows for powerful separation arguments coming from the classical result that closed faces in Choquet simplices are relatively exposed. The exposedness appears in the proof via an application of Theorem 2.4(ii).

**Proposition 3.15**.: _If \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra, then every \(\|\cdot\|_{2,X}\)-continuous trace on \(\mathcal{M}\) belongs to the closed face of \(T(\mathcal{M})\) generated by \(X\). In particular, if \((\mathcal{M},X)\) is factorial, then \(X\) is precisely the set of all \(\|\cdot\|_{2,X}\)-continuous traces on \(\mathcal{M}\)._

Proof.: Since \(\mathcal{M}\) is a unital \(C^{*}\)-algebra, \(T(\mathcal{M})\) is a Choquet simplex (Theorem 2.6). By Theorem 2.4, it suffices to show that if \(\tau_{0}\in T(\mathcal{M})\) is \(\|\cdot\|_{2,X}\)-continuous and \(f\colon T(\mathcal{M})\to[0,1]\) is a continuous affine function with \(f|_{X}=0\), then \(f(\tau_{0})=0\).
Fix such a trace \(\tau_{0}\) and function \(f\). By Proposition 2.7, applied to \(f+\frac{1}{n}\), there is a sequence of positive contractions \((a_{n})_{n=1}^{\infty}\subseteq\mathcal{M}\) such that

\[\sup_{\tau\in T(\mathcal{M})}|\tau(a_{n})-f(\tau)|\to 0. \tag{3.18}\]

Since \(f(\tau)=0\) for all \(\tau\in X\), we have \(\|a_{n}^{1/2}\|_{2,X}\to 0\). As each \(a_{n}\) is contractive, it follows from (3.2) that \(\|a_{n}\|_{2,X}\to 0\). Now, since \(\tau_{0}\) is \(\|\cdot\|_{2,X}\)-continuous, we have \(\tau_{0}(a_{n})\to 0\), and hence (3.18) implies \(f(\tau_{0})=0\).

Returning to our viewpoint that the GNS representations of tracially complete \(C^{*}\)-algebras should be viewed as giving rise to their fibres in some kind of affine bundle structure, the following question is natural. Ozawa's [86, Theorem 11] gives a positive answer for all \(W^{*}\)-bundles (factorial or not).

**Question 3.16**.: Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra. Is it the case that \(\pi_{\tau}(\mathcal{M})^{\prime\prime}=\pi_{\tau}(\mathcal{M})\) for all \(\tau\in\partial_{e}X\)?

We end the subsection with some examples. Given a finite von Neumann algebra \(\mathcal{M}\), instead of specifying a faithful trace \(\tau\) (which will exist when \(\mathcal{M}\) has a separable predual) and viewing \(\mathcal{M}\) as a (generally non-factorial) tracially complete \(C^{*}\)-algebra over a singleton set, one can work with all the traces. By [42, Proposition 3.1.6], \(\|\cdot\|_{2,T(\mathcal{M})}\) is a complete norm on the unit ball of \(\mathcal{M}\), so that \(\big{(}\mathcal{M},T(\mathcal{M})\big{)}\) is a tracially complete \(C^{*}\)-algebra which is evidently factorial. This example will play a technical role in obtaining CPoU from property \(\Gamma\) in Section 6.4.

**Proposition 3.17**.: _Let \(\mathcal{M}\) be a finite von Neumann algebra. Then the pair \(\big{(}\mathcal{M},T(\mathcal{M})\big{)}\) is a factorial tracially complete \(C^{*}\)-algebra._

It is worth noting that when \((\mathcal{M},\tau)\) is a tracial von Neumann algebra, the tracially complete \(C^{*}\)-algebras \(\big{(}\mathcal{M},\{\tau\}\big{)}\) and \((\mathcal{M},T(\mathcal{M}))\) can behave very differently, despite having the same underlying \(C^{*}\)-algebra. See Proposition 4.11, for example.

For later use, we note that matrix algebras over tracially complete \(C^{*}\)-algebras are also tracially complete \(C^{*}\)-algebras in a natural way. Let \(\mathrm{tr}_{d}\) denote the unique trace on the \(C^{*}\)-algebra \(M_{d}\) of \(d\times d\) matrices over \(\mathbb{C}\). For a \(C^{*}\)-algebra \(A\) and a set \(X\subseteq T(A)\), define

\[X\otimes\{\mathrm{tr}_{d}\}\coloneqq\{\tau\otimes\mathrm{tr}_{d}:\tau\in X\} \subseteq T(A\otimes M_{d}). \tag{3.19}\]

**Proposition 3.18**.: _If \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra and \(d\in\mathbb{N}\), then \((\mathcal{M}\otimes M_{d},X\otimes\{\mathrm{tr}_{d}\})\) is a tracially complete \(C^{*}\)-algebra. Further, \((\mathcal{M},X)\) is factorial if and only if \((\mathcal{M}\otimes M_{d},X\otimes\{\mathrm{tr}_{d}\})\) is factorial._

Proof.: Since the map \(T(\mathcal{M})\to T(\mathcal{M}\otimes M_{d})\) given by \(\tau\mapsto\tau\otimes\mathrm{tr}_{d}\) is an affine homeomorphism, we have that \(X\otimes\{\mathrm{tr}_{d}\}\) is a compact convex subset of \(T(\mathcal{M}\otimes M_{d})\) and is a face if and only if \(X\) is a face in \(T(\mathcal{M})\).
Thus it suffices to show that \(X\otimes\{\mathrm{tr}_{d}\}\) is a faithful set of traces on \(\mathcal{M}\otimes M_{d}\) and the unit ball of \(\mathcal{M}\otimes M_{d}\) is \(\|\cdot\|_{2,X\otimes\{\mathrm{tr}_{d}\}}\)-complete. Given \(a\in\mathcal{M}\otimes M_{d}\), write \(a=\sum_{i,j=1}^{d}a_{i,j}\otimes e_{i,j}\) for some \(a_{i,j}\in\mathcal{M}\). For \(\tau\in X\),

\[(\tau\otimes\mathrm{tr}_{d})(a^{*}a)=\frac{1}{d}\tau\Big{(}\sum_{i,j=1}^{d}a_ {i,j}^{*}a_{i,j}\Big{)}. \tag{3.20}\]

If \(a\neq 0\), then there exist \(i\) and \(j\) with \(a_{i,j}^{*}a_{i,j}\neq 0\). As \(X\) is a faithful set of traces, there exists \(\tau\in X\) such that \(\tau(a_{i,j}^{*}a_{i,j})\neq 0\). Therefore,

\[(\tau\otimes\mathrm{tr}_{d})(a^{*}a)\geq\frac{1}{d}\tau(a_{i,j}^{*}a_{i,j})>0, \tag{3.21}\]

as required. Since \(X\otimes\{\mathrm{tr}_{d}\}\) is a faithful set of traces on \(\mathcal{M}\otimes M_{d}\), the unit ball of \(\mathcal{M}\otimes M_{d}\) is \(\|\cdot\|_{2,X\otimes\{\mathrm{tr}_{d}\}}\)-closed by Proposition 3.2(ii). Furthermore,

\[\frac{1}{d}\max_{1\leq i,j\leq d}\|a_{i,j}\|_{2,X}^{2}\leq\|a\|_{2,X\otimes\{ \mathrm{tr}_{d}\}}^{2}\leq\frac{1}{d}\sum_{i,j=1}^{d}\|a_{i,j}\|_{2,X}^{2} \tag{3.22}\]

as an immediate consequence of (3.20). The \(\|\cdot\|_{2,X\otimes\{\mathrm{tr}_{d}\}}\)-completeness of the unit ball of \(\mathcal{M}\otimes M_{d}\) now follows from the \(\|\cdot\|_{2,X}\)-completeness of the unit ball of \(\mathcal{M}\).

### Tracial completions

In this section, we recall Ozawa's tracial completion construction from [86] and its important features. This is both the motivation for our study of tracially complete \(C^{*}\)-algebras and an important tool in the theory that we develop.

**Definition 3.19** (cf. [86]).: Let \(A\) be a \(C^{*}\)-algebra. For a compact convex set \(X\subseteq T(A)\), the _tracial completion_ of \(A\) with respect to \(X\) is the \(C^{*}\)-algebra30

\[\overline{A}^{X}\coloneqq\frac{\{(a_{n})_{n=1}^{\infty}\in\ell^{\infty}(A):(a _{n})_{n=1}^{\infty}\text{ is }\|\cdot\|_{2,X}\text{-Cauchy}\}}{\{(a_{n})_{n=1}^{ \infty}\in\ell^{\infty}(A):(a_{n})_{n=1}^{\infty}\text{ is }\|\cdot\|_{2,X}\text{-null}\}}. \tag{3.23}\]

Footnote 30: It is not hard to see – using (3.2) – that the \(\|\cdot\|\)-bounded, \(\|\cdot\|_{2,X}\)-Cauchy sequences in \(A\) form a \(C^{*}\)-subalgebra of \(\ell^{\infty}(A)\) and that the \(\|\cdot\|\)-bounded, \(\|\cdot\|_{2,X}\)-null sequences form an ideal of this subalgebra.

**Remark 3.20**.: In Ozawa's work, the notation \(\overline{A}^{\text{u}}\) is used for the tracial completion with \(X\) being understood from the context. Ozawa also notes that \(\overline{A}^{\text{u}}\) can be viewed as the strict closure of the range of a certain representation of \(A\) on a Hilbert module. This leads to the alternative terminology _strict closure_ for the uniform tracial completion and the alternative notation \(\overline{A}^{\text{st}}\) used in [86, 10]. Since it is important for us to keep track of the set \(X\), we include it in the notation for tracial completions in this paper.

We define \(\alpha_{X}\colon A\to\overline{A}^{X}\) to be the map given by sending \(a\in A\) to the image of the constant sequence \((a,a,\dots)\). We also define a second norm on the tracial completion by \((a_{n})_{n=1}^{\infty}\mapsto\lim_{n\to\infty}\|a_{n}\|_{2,X}\). It is immediate that this limit exists for any \(\|\cdot\|_{2,X}\)-Cauchy sequence \((a_{n})_{n=1}^{\infty}\in\ell^{\infty}(A)\) and induces a well-defined norm on the quotient.
Abusing notation slightly, we also write \(\|\cdot\|_{2,X}\) for this second norm, the justification being that \(\|\alpha_{X}(a)\|_{2,X}=\|a\|_{2,X}\) for all \(a\in A\).31

Footnote 31: When \(\|\cdot\|_{2,X}\) is a norm on \(A\), \(\alpha_{X}\) is an embedding and it is natural to identify \(A\) with \(\alpha_{X}(A)\). If one desires, one can quotient \(A\) by the ideal of \(\|\cdot\|_{2,X}\)-null elements first before performing the tracial completion (replacing \(X\) with the set of induced traces on the quotient). In this way, one can reduce to the case that \(\alpha_{X}\) is an embedding.

Tracial completions with respect to a single trace are nothing but the von Neumann algebra generated by the associated GNS representation.

**Proposition 3.21**.: _Let \(A\) be a \(C^{*}\)-algebra and \(\tau\in T(A)\). Then, for \(X\coloneqq\{\tau\}\), the tracial completion \(\overline{A}^{X}\) can be canonically identified with the von Neumann algebra \(\pi_{\tau}(A)^{\prime\prime}\), where \(\pi_{\tau}\) is the GNS representation associated to \(\tau\); i.e. there is an isomorphism \(\theta\colon\overline{A}^{X}\to\pi_{\tau}(A)^{\prime\prime}\) such that \(\theta\circ\alpha_{X}=\pi_{\tau}\)._

Proof.: As the unit ball of \(\pi_{\tau}(A)^{\prime\prime}\) is \(\|\cdot\|_{2,\tau}\)-complete (see [100, Lemma A.3.3], for example), there is a well-defined \({}^{*}\)-homomorphism

\[\tilde{\theta}\colon\{(a_{n})_{n=1}^{\infty}\in\ell^{\infty}(A):(a_{n})_{n=1}^ {\infty}\text{ is }\|\cdot\|_{2,X}\text{-Cauchy}\}\to\pi_{\tau}(A)^{\prime\prime} \tag{3.24}\]

given by \(\tilde{\theta}((a_{n}))=\lim\pi_{\tau}(a_{n})\), where the limit is taken in the norm \(\|\cdot\|_{2,\tau}\). The kernel of this map consists of the \(\|\cdot\|_{2,X}\)-null sequences, so the first isomorphism theorem gives an injective \({}^{*}\)-homomorphism \(\theta\) with \(\theta\circ\alpha_{X}=\pi_{\tau}\). As the unit ball of \(\overline{A}^{X}\) is complete in the \(2\)-norm and \(\theta\) is isometric with respect to the \(2\)-norms, the image of the unit ball under \(\theta\) is \(\|\cdot\|_{2,\tau}\)-closed and contains the unit ball of \(\pi_{\tau}(A)\). By Kaplansky's density theorem, the unit ball of \(\pi_{\tau}(A)\) is \(\|\cdot\|_{2,\tau}\)-dense in the unit ball of \(\pi_{\tau}(A)^{\prime\prime}\), and hence \(\theta\) is surjective, proving that \(\theta\) is an isomorphism.

Very similarly, tracial completions with respect to finite dimensional simplices also come from GNS representations.

**Example 3.22**.: When \(X\) is a finite dimensional simplex, the tracial completion is \(\pi_{\tau}(A)^{\prime\prime}\), where \(\tau\) is the average of the traces in \(\partial_{e}X\), and so this tracial completion is again a von Neumann algebra.

For each trace \(\tau\in X\), we define an induced trace \(\widetilde{\tau}\) on \(\overline{A}^{X}\) by setting \(\widetilde{\tau}(a)\coloneqq\lim_{n\to\infty}\tau(a_{n})\) for any representative \(\|\cdot\|_{2,X}\)-Cauchy sequence \((a_{n})_{n=1}^{\infty}\in\ell^{\infty}(A)\). The inequality \(|\tau(a)|\leq\|a\|_{2,X}\), which holds for all \(a\in A\), ensures that this limit exists and that \(\widetilde{\tau}\) is well-defined on the tracial completion.
The induced trace satisfies \(\widetilde{\tau}(\alpha_{X}(a))=\tau(a)\) for all \(a\in A\) and is the unique \(\|\cdot\|_{2,X}\)-continuous trace with this property since \(\alpha_{X}(A)\) is \(\|\cdot\|_{2,X}\)-dense in the tracial completion.32

Footnote 32: By construction, \(\widetilde{\tau}\) is \(\|\cdot\|_{2,X}\)-continuous on \(\|\cdot\|\)-bounded subsets of \(\overline{A}^{X}\); continuity on all of \(\overline{A}^{X}\) follows from Proposition 3.23(i) (or from (3.27) in the proof).

The following proposition establishes the basic properties of tracial completions.

**Proposition 3.23**.: _Let \(A\) be a \(C^{*}\)-algebra and let \(X\) be a compact convex subset of \(T(A)\). Let \(\widetilde{X}\subseteq T(\overline{A}^{X})\) denote the set of all traces on \(\overline{A}^{X}\) that are induced by traces in \(X\)._

(i) _With the notation in the paragraph preceding the proposition, the seminorm \(\|\cdot\|_{2,\widetilde{X}}\) coincides with the norm \(\|\cdot\|_{2,X}\) on \(\overline{A}^{X}\)._

(ii) _The map \(X\to\widetilde{X}\) sending a trace to its induced trace is an affine homeomorphism, where \(X\) and \(\widetilde{X}\) are equipped with their respective weak\({}^{*}\) topologies._

(iii) _The pair \((\overline{A}^{X},\widetilde{X})\) is a tracially complete \(C^{*}\)-algebra._

(iv) _The completion \((\overline{A}^{X},\widetilde{X})\) is factorial if and only if \(X\) is a face in \(T(A)\)._

(v) _Let \(Y\subseteq X\) be a compact convex set and let \(\widetilde{Y}\) denote the image of \(Y\) in \(\widetilde{X}\). Then \((\overline{A}^{Y},Y)\) and \((\overline{\overline{A}^{X}}^{\widetilde{Y}},\widetilde{Y})\) are isomorphic via an isomorphism \(\theta\) such that_

\[\begin{CD}A@>{\alpha_{Y}}>{}>\overline{A}^{Y}\\ @V{\alpha_{X}}VV@VV{\theta}V\\ \overline{A}^{X}@>{\alpha_{\widetilde{Y}}}>{}>\overline{\overline{A}^{X}}^{ \widetilde{Y}}\end{CD} \tag{3.25}\]

_commutes._

(vi) _Given \(\tau\in X\) with extension \(\widetilde{\tau}\) to \(\overline{A}^{X}\), write \(\pi_{\tau}\colon A\to\pi_{\tau}(A)^{\prime\prime}\) and \(\pi_{\widetilde{\tau}}\colon\overline{A}^{X}\to\pi_{\widetilde{\tau}}( \overline{A}^{X})^{\prime\prime}\) for the respective GNS representations. Then \(\alpha_{X}\colon A\to\overline{A}^{X}\) induces an isomorphism \(\theta\colon\pi_{\tau}(A)^{\prime\prime}\to\pi_{\widetilde{\tau}}(\overline{A} ^{X})^{\prime\prime}\) such that_

\[\begin{CD}A@>{\pi_{\tau}}>{}>\pi_{\tau}(A)^{\prime\prime}\\ @V{\alpha_{X}}VV@VV{\theta}V\\ \overline{A}^{X}@>{\pi_{\widetilde{\tau}}}>{}>\pi_{\widetilde{\tau}}(\overline{A }^{X})^{\prime\prime}\end{CD} \tag{3.26}\]

_commutes._

Proof.: (i): Let \(a\in\overline{A}^{X}\) be represented by a \(\|\cdot\|_{2,X}\)-Cauchy sequence \((a_{n})_{n=1}^{\infty}\in\ell^{\infty}(A)\). Let \(\tau\in X\). Then

\[\|a\|_{2,\widetilde{\tau}}^{2}=\lim_{n\to\infty}\tau(a_{n}^{*}a_{n})\leq\lim_{ n\to\infty}\|a_{n}\|_{2,X}^{2}=\|a\|_{2,X}^{2}. \tag{3.27}\]

Hence \(\|a\|_{2,\widetilde{X}}\leq\|a\|_{2,X}\) for all \(a\in\overline{A}^{X}\), and this implies \(\|\cdot\|_{2,\widetilde{X}}\) is \(\|\cdot\|_{2,X}\)-continuous. Since \(\|\alpha_{X}(a)\|_{2,X}=\|a\|_{2,X}=\|\alpha_{X}(a)\|_{2,\widetilde{X}}\) for all \(a\in A\) and \(\alpha_{X}(A)\) is \(\|\cdot\|_{2,X}\)-dense in \(\overline{A}^{X}\), we deduce \(\|a\|_{2,X}=\|a\|_{2,\widetilde{X}}\) for all \(a\in\overline{A}^{X}\).

(ii): It is clear that the map \(\tau\mapsto\widetilde{\tau}\) is affine and bijective. As \(X\) is compact and \(\widetilde{X}\) is Hausdorff, it suffices to show continuity of this map.
Let \((\tau_{\lambda})_{\lambda}\) be a net in \(X\) that converges to \(\tau\in X\) and let \(\widetilde{\tau}_{\lambda},\widetilde{\tau}\in\widetilde{X}\) denote the \(\|\cdot\|_{2,X}\)-continuous traces induced by \(\tau_{\lambda}\) and \(\tau\), respectively. We must show that \(\widetilde{\tau}_{\lambda}\to\widetilde{\tau}\). Let \(a\in\overline{A}^{X}\) and let \(\epsilon>0\). Pick \(b\in A\) such that \(\|a-\alpha_{X}(b)\|_{2,X}<\epsilon/3\). Pick \(\lambda_{0}\) such that for \(\lambda\geq\lambda_{0}\), we have \(\tau_{\lambda}(b)\approx_{\epsilon/3}\tau(b)\). Then for \(\lambda\geq\lambda_{0}\),

\[\widetilde{\tau}(a)\approx_{\epsilon/3}\widetilde{\tau}(\alpha_{X}(b))=\tau(b )\approx_{\epsilon/3}\tau_{\lambda}(b)=\widetilde{\tau}_{\lambda}(\alpha_{X }(b))\approx_{\epsilon/3}\widetilde{\tau}_{\lambda}(a), \tag{3.28}\]

showing continuity of the extension map.

(iii): It follows from (ii) that \(\widetilde{X}\) is a compact convex subset of \(T(\overline{A}^{X})\). By the construction of \(\overline{A}^{X}\), the seminorm \(\|\cdot\|_{2,X}\) is a norm and the unit ball of \(\overline{A}^{X}\) is \(\|\cdot\|_{2,X}\)-complete, so by (i), the unit ball of \(\overline{A}^{X}\) is also \(\|\cdot\|_{2,\widetilde{X}}\)-complete.

(iv): Suppose that \(X\) is a face in \(T(A)\). To show that \(\widetilde{X}\) is a face of \(T(\overline{A}^{X})\), suppose we can write \(\tau\in\widetilde{X}\) as \(\tau=\frac{1}{2}(\tau_{1}+\tau_{2})\) for some \(\tau_{1},\tau_{2}\in T(\overline{A}^{X})\). Since \((\alpha_{X}(e_{\lambda}))\) converges to the unit of \(\overline{A}^{X}\) for any approximate unit \((e_{\lambda})\) of \(A\), it follows that \(\tau\circ\alpha_{X}\) is a trace on \(A\). As \(X\) is a face of \(T(A)\), \(\tau_{1}\circ\alpha_{X},\tau_{2}\circ\alpha_{X}\in X\). Also, since \(\tau_{1},\tau_{2}\leq 2\tau\) and \(\tau\) is \(\|\cdot\|_{2,\widetilde{X}}\)-continuous, it follows that \(\tau_{1}\) and \(\tau_{2}\) are also \(\|\cdot\|_{2,X}\)-continuous. Thus \(\tau_{1}\) and \(\tau_{2}\) are the \(\|\cdot\|_{2,\widetilde{X}}\)-continuous extensions of \(\tau_{1}\circ\alpha_{X},\tau_{2}\circ\alpha_{X}\) respectively, so \(\tau_{1},\tau_{2}\in\widetilde{X}\).

Conversely, suppose \((\overline{A}^{X},\widetilde{X})\) is factorial and \(\tau_{1},\tau_{2}\in T(A)\) satisfy \(\tau\coloneqq\frac{1}{2}(\tau_{1}+\tau_{2})\in X\). Then \(\tau\) is \(\|\cdot\|_{2,X}\)-continuous, and hence so are \(\tau_{1}\) and \(\tau_{2}\) as they are dominated by \(2\tau\). Now, \(\tau\), \(\tau_{1}\), and \(\tau_{2}\) extend to \(\|\cdot\|_{2,X}\)-continuous traces \(\widetilde{\tau}\), \(\widetilde{\tau}_{1}\), and \(\widetilde{\tau}_{2}\) on \(\overline{A}^{X}\), respectively, using that the unit ball of \(\alpha_{X}(A)\) is \(\|\cdot\|_{2,\widetilde{X}}\)-dense in the unit ball of \(\overline{A}^{X}\). Then \(\widetilde{\tau}\in\widetilde{X}\) and \(\widetilde{\tau}=\frac{1}{2}(\widetilde{\tau}_{1}+\widetilde{\tau}_{2})\). Since \((\overline{A}^{X},\widetilde{X})\) is factorial, we have \(\widetilde{\tau}_{1}\) and \(\widetilde{\tau}_{2}\) are in \(\widetilde{X}\), so \(\tau_{1}\) and \(\tau_{2}\) are in \(X\).

(v): The map \(\alpha_{X}\colon A\to\overline{A}^{X}\) is \(\|\cdot\|_{2,Y}\)-\(\|\cdot\|_{2,\widetilde{Y}}\)-isometric. Therefore, it sends \(\|\cdot\|_{2,Y}\)-Cauchy and \(\|\cdot\|_{2,Y}\)-null sequences in \(A\) to \(\|\cdot\|_{2,\widetilde{Y}}\)-Cauchy and \(\|\cdot\|_{2,\widetilde{Y}}\)-null sequences in \(\overline{A}^{X}\) (respectively).
Accordingly, it induces \(\theta\) making (3.25) commute. By commutativity of this diagram and the inequality \(\|\cdot\|_{2,\widetilde{Y}}\leq\|\cdot\|_{2,X}\) on \(\overline{A}^{X}\), the image of the unit ball of \(\overline{A}^{Y}\) under \(\theta\) is dense in the unit ball of \(\overline{\overline{A}^{X}}^{\widetilde{Y}}\). Since \(\theta\) is isometric with respect to the uniform \(2\)-norms and the unit ball of \(\overline{A}^{Y}\) is \(\|\cdot\|_{2,Y}\)-complete, \(\theta\) is surjective. (vi): This follows from (v) using Proposition 3.21 to identify uniform tracial completions at \(\tau\) and \(\tilde{\tau}\) with the von Neumann algebras generated by the associated GNS representations. **Notation 3.24**.: With Proposition 3.23 now established, we will typically identify \(X\) with \(\widetilde{X}\) henceforth. Furthermore, for a trace \(\tau\in X\), we will write \(\tau\) in place of \(\widetilde{\tau}\) for the induced trace on \(\overline{A}^{X}\). Not surprisingly, tracial completions satisfy a universal property allowing for trace-preserving u.c.p. maps to be extended by continuity to tracial completions. **Proposition 3.25**.: _Let \(A\) be a \(C^{*}\)-algebra and let \(X\) be a compact convex subset of \(T(A)\). If \((\mathcal{N},Y)\) is a tracially complete \(C^{*}\)-algebra and \(\theta\colon A\to\mathcal{N}\) is a u.c.p. map such that \(\tau\circ\theta\in X\) for all \(\tau\in Y\), then there is a unique u.c.p. map \(\overline{\theta}\colon\overline{A}^{X}\to\mathcal{N}\) of tracially complete \(C^{*}\)-algebras such that \(\overline{\theta}\circ\alpha_{X}=\theta\) and \(\tau\circ\overline{\theta}\in X\) for all \(\tau\in Y\). Further, if \(\theta\) is a \({}^{*}\)-homomorphism, then so is \(\overline{\theta}\)._ Proof.: For uniqueness, consider a u.c.p. map \(\phi\colon\overline{A}^{X}\to\mathcal{N}\) with \(\tau\circ\phi\in X\) for all \(\tau\in Y\). Then for \(\tau\in Y\) and \(a\in\overline{A}^{X}\), \[\tau(\phi(a)^{*}\phi(a))\leq\tau(\phi(a^{*}a))\leq\|a\|_{2,X}^{2}, \tag{3.29}\] where the first inequality follows from the Schwarz inequality for u.c.p. maps (cf. [88, Proposition 3.3]) and the second inequality is a consequence of the hypothesis on traces. Accordingly, \(\phi\) is \(\|\cdot\|_{2,X}\)-\(\|\cdot\|_{2,Y}\)-continuous. By the \(\|\cdot\|_{2,X}\)-density of \(\alpha_{X}(A)\) in \(\overline{A}^{X}\), it follows that any extension of \(\theta\) as in the proposition must be unique. We turn to the proof of existence. Since \(\theta^{*}(Y)\subseteq X\), the same computation used above shows \(\|\theta(a)\|_{2,Y}\leq\|a\|_{2,X}\) for all \(a\in A\). Hence if \((a_{n})_{n=1}^{\infty}\subseteq A\) is a \(\|\cdot\|\)-bounded, \(\|\cdot\|_{2,X}\)-Cauchy sequence in \(A\), then \((\theta(a_{n}))_{n=1}^{\infty}\) is a \(\|\cdot\|\)-bounded, \(\|\cdot\|_{2,Y}\)-Cauchy sequence in \(\mathcal{N}\). Therefore, we obtain a well-defined map \(\overline{\theta}\colon\overline{A}^{X}\to\mathcal{N}\) by \(\overline{\theta}(a)\coloneqq\lim_{n\to\infty}\theta(a_{n})\), where \((a_{n})_{n=1}^{\infty}\subseteq A\) is a \(\|\cdot\|\)-bounded, \(\|\cdot\|_{2,X}\)-Cauchy sequence representing \(a\in\overline{A}^{X}\) and where the limit is taken in the norm \(\|\cdot\|_{2,Y}\). By construction, \(\overline{\theta}\circ\alpha_{X}=\theta\). Using Propositions 3.2(iii) and 3.18, a density argument gives that \(\overline{\theta}\) is a u.c.p. map. 
Likewise, when \(\theta\) is a \({}^{*}\)-homomorphism, a density argument, this time using the continuity of multiplication in the uniform \(2\)-norm on norm-bounded sets, shows that \(\overline{\theta}\) is also a \({}^{*}\)-homomorphism. It remains to check that \(\tau\circ\overline{\theta}\in X\) for all \(\tau\in Y\). Note that \(\|\overline{\theta}(a)\|_{2,Y}\leq\|a\|_{2,X}\) for all \(a\in\overline{A}^{X}\). Therefore, if \(\tau\in Y\), then \(\tau\circ\overline{\theta}\) is \(\|\cdot\|_{2,X}\)-continuous. Now since \(\tau\circ\overline{\theta}\circ\alpha_{X}=\tau\circ\theta\in X\), we have \(\tau\circ\overline{\theta}\in X\). ### Dense subalgebras of tracially complete \(C^{*}\)-algebras We will frequently need a version of Kaplansky's density theorem for tracially complete \(C^{*}\)-algebras (Lemma 3.27 and Proposition 3.28 below). This is obtained using the well-known matrix amplification trick to reduce to the self-adjoint case. At the core is the following estimate. **Lemma 3.26**.: _Let \(A\) be a commutative \(C^{*}\)-algebra and \(\tau\in T(A)\). Suppose \(f\colon\mathbb{R}\to\mathbb{R}\) is Lipschitz continuous with constant \(M>0\). If \(a,b\in A\) are self-adjoint, then \(\|f(a)-f(b)\|_{2,\tau}\leq M\|a-b\|_{2,\tau}\)._ Proof.: View \(A\cong C_{0}(K)\) and \(a,b\in C_{0}(K)\) for a locally compact Hausdorff space \(K\). Since \(f\) is \(M\)-Lipschitz continuous, we have \[|f(a(t))-f(b(t))|\leq M|a(t)-b(t)| \tag{3.30}\] for all \(t\in K\). It follows that \[|f(a)-f(b)|^{2}\leq M^{2}|a-b|^{2} \tag{3.31}\] in \(A\), and the claim follows by applying \(\tau\). We isolate the following quantitative version of Kaplansky's density theorem for later use. **Lemma 3.27**.: _Let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra and let \(A\subseteq\mathcal{M}\) be a \(C^{*}\)-subalgebra. If \(b\in\mathcal{M}\) is a contraction, \(\epsilon>0\), and there is an \(a\in A\) with \(\|a-b\|_{2,X}<\epsilon\), then there is a contraction \(a^{\prime}\in A\) with \(\|a^{\prime}-b\|_{2,X}<3\epsilon\)._ Proof.: We first prove the result in the case where \(b\) is self-adjoint. By replacing \(a\) with its real part, we may assume \(a\) is self-adjoint as well. Consider the function \(f\colon\mathbb{R}\to\mathbb{R}\) given by \[f(t)\coloneqq\begin{cases}-1,&t<-1;\\ t,&t\in[-1,1];\\ 1,&t>1.\end{cases} \tag{3.32}\] We will show \(\|f(a)-b\|_{2,X}<3\epsilon\), so that the result (in this case) holds by setting \(a^{\prime}\coloneqq f(a)\). Fix \(\tau\in X\) and let \(E\colon\pi_{\tau}(\mathcal{M})^{\prime\prime}\to\{\pi_{\tau}(a)\}^{\prime\prime}\) be the \(\tau\)-preserving conditional expectation, which is contractive with respect to each of the norms \(\|\cdot\|\) and \(\|\cdot\|_{2,\tau}\).33 Set \(a_{1}\coloneqq\pi_{\tau}(a)\), and note that \(E(a_{1})=a_{1}\). Therefore, for \(b_{1}\coloneqq E(\pi_{\tau}(b))\), \[\|a_{1}-b_{1}\|_{2,\tau}\leq\|\pi_{\tau}(a)-\pi_{\tau}(b)\|_{2,\tau}<\epsilon, \tag{3.33}\] and Lemma 3.26 applied to \(\{\pi_{\tau}(a)\}^{\prime\prime}\) implies \[\|f(a_{1})-f(b_{1})\|_{2,\tau}\leq\|a_{1}-b_{1}\|_{2,\tau}<\epsilon. \tag{3.34}\] Since \(b_{1}\) is a self-adjoint contraction, \(f(b_{1})=b_{1}\). Hence, in the norm \(\|\cdot\|_{2,\tau}\), we have the approximations \[\pi_{\tau}(f(a))=f(a_{1})\approx_{\epsilon}f(b_{1})=b_{1}\approx_{\epsilon}a_{1}=\pi_{\tau}(a)\approx_{\epsilon}\pi_{\tau}(b). \tag{3.35}\] Therefore, \(\|\pi_{\tau}(f(a))-\pi_{\tau}(b)\|_{2,\tau}<3\epsilon\). This completes the proof for the case when \(b\) is self-adjoint. 
In the general case, view \(\mathcal{M}\otimes M_{2}\) as a tracially complete \(C^{*}\)-algebra as in Proposition 3.18. By the first part of the proof, there is a self-adjoint contraction \(a^{\prime\prime}=(a^{\prime\prime}_{ij})\in A\otimes M_{2}\) such that \[\left\|a^{\prime\prime}-\begin{pmatrix}0&b\\ b^{*}&0\end{pmatrix}\right\|_{2,X\otimes\{\mathrm{tr}_{2}\}}<3\epsilon. \tag{3.36}\] Since \(a^{\prime\prime}\) is self-adjoint, \(a^{\prime\prime}_{21}=a^{\prime\prime*}_{12}\). For any \(\tau\in X\), we compute that \[\|a^{\prime\prime}_{12}-b\|_{2,\tau}^{2} \leq\frac{1}{2}\tau\big{(}|a^{\prime\prime}_{11}|^{2}+2|a^{\prime\prime}_{12}-b|^{2}+|a^{\prime\prime}_{22}|^{2}\big{)}\] \[=\left\|a^{\prime\prime}-\begin{pmatrix}0&b\\ b^{*}&0\end{pmatrix}\right\|_{2,\tau\otimes\mathrm{tr}_{2}}^{2}\] \[<(3\epsilon)^{2}. \tag{3.37}\] Hence, taking \(a^{\prime}\coloneqq a^{\prime\prime}_{12}\), we have \(\|a^{\prime}-b\|_{2,X}<3\epsilon\). The following version of Kaplansky's density theorem follows immediately from the previous lemma. **Proposition 3.28** (cf. [80, Theorem 4.3.3]).: _Let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra and let \(A\subseteq\mathcal{M}\) be a \(\|\cdot\|_{2,X}\)-dense \(C^{*}\)-subalgebra. Then the unit ball of \(A\) is \(\|\cdot\|_{2,X}\)-dense in the unit ball of \(\mathcal{M}\)._ As an application, a tracially complete \(C^{*}\)-algebra is the tracial completion of any of its \(\|\cdot\|_{2,X}\)-dense subalgebras. **Corollary 3.29**.: _Suppose \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra and let \(A\subseteq\mathcal{M}\) be a \(\|\cdot\|_{2,X}\)-dense \(C^{*}\)-subalgebra._ 1. \(X_{A}\coloneqq\{\tau|_{A}:\tau\in X\}\subseteq T(A)\) _is a compact convex set and the inclusion_ \(A\hookrightarrow\mathcal{M}\) _induces an isomorphism_ \((\overline{A}^{X_{A}},X_{A})\to(\mathcal{M},X)\)_._ 2. _If_ \((\mathcal{M},X)\) _is factorial, then_ \(X_{A}\) _is a closed face in_ \(T(A)\)_._ 3. _If_ \((\mathcal{N},Y)\) _is another tracially complete_ \(C^{*}\)_-algebra and_ \(\phi\colon A\to\mathcal{N}\) _is a_ \({}^{*}\)_-homomorphism with_ \(\phi^{*}(Y)\subseteq X_{A}\)_, then_ \(\phi\) _has a unique extension to_ \(\tilde{\phi}\colon(\mathcal{M},X)\to(\mathcal{N},Y)\)_._ Proof.: (i): Proposition 3.28 implies the unit ball of \(A\) is \(\|\cdot\|_{2,X}\)-dense in the unit ball of \(\mathcal{M}\). Therefore, if \(\tau\in X\), then \(\tau|_{A}\) has norm \(1\), and hence \(X_{A}\subseteq T(A)\). As \(X\) is compact and convex, so too is \(X_{A}\). By Proposition 3.25, the inclusion \(A\hookrightarrow\mathcal{M}\) extends to a morphism \(\theta\colon(\overline{A}^{X_{A}},X_{A})\to(\mathcal{M},X)\) of tracially complete \(C^{*}\)-algebras. Note that \(\theta\) is an embedding in the sense of Definition 3.11, and hence is isometric by Proposition 3.12. Further, the range of \(\theta\) contains the \(\|\cdot\|_{2,X}\)-dense subalgebra \(A\subseteq\mathcal{M}\). Since the unit ball of \(\overline{A}^{X_{A}}\) is \(\|\cdot\|_{2,X_{A}}\)-complete, Kaplansky's density theorem (Proposition 3.28) implies \(\theta\) maps the unit ball of \(\overline{A}^{X_{A}}\) onto the unit ball of \(\mathcal{M}\), and in particular \(\theta\) is surjective. Finally, for \(\tau\in X\), \(\tau\circ\theta^{-1}\) is a \(\|\cdot\|_{2,X_{A}}\)-continuous trace on \(\overline{A}^{X_{A}}\) extending \(\tau|_{A}\), so \(\tau\circ\theta^{-1}\in X_{A}\). This shows \(\theta^{-1}\) is also a morphism of tracially complete \(C^{*}\)-algebras. 
(ii): In light of (i), this follows from Proposition 3.23(iv). (iii): This follows from (i) and Proposition 3.25. ### Constructions There is a general recipe for performing constructions on tracially complete \(C^{*}\)-algebras: * (i) Perform the corresponding (spatial) \(C^{*}\)-construction on the underlying \(C^{*}\)-algebras. * (ii) Identify the suitable collection of traces on the newly constructed \(C^{*}\)-algebra. * (iii) Take the tracial completion with respect to these traces. We illustrate this process here with direct sums, tensor products, and sequential inductive limits. Products and reduced products (including ultraproducts) are constructed in Section 5.1. In the case of direct sums, step (iii) above is redundant as the \(C^{*}\)-direct sum is already tracially complete. **Definition 3.30**.: Let \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) be tracially complete \(C^{*}\)-algebras. Let \(X\oplus Y\) denote the set of traces of the form \(\lambda\tau_{X}+(1-\lambda)\tau_{Y}\) for \(\tau_{X}\in X\), \(\tau_{Y}\in Y\) and \(0\leq\lambda\leq 1\). The direct sum of \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) is the tracially complete \(C^{*}\)-algebra \((\mathcal{M}\oplus\mathcal{N},X\oplus Y)\). Note that the tracially complete direct sum of two finite von Neumann algebras \((\mathcal{M},T(\mathcal{M}))\) and \((\mathcal{N},T(\mathcal{N}))\) is the finite von Neumann algebra \((\mathcal{M}\oplus\mathcal{N},T(\mathcal{M}\oplus\mathcal{N}))\). As the extreme traces on \(X\oplus Y\) are precisely the union of the extreme traces on \(X\) and the extreme traces on \(Y\), the direct sum \((\mathcal{M},X)\oplus(\mathcal{N},Y)\) is factorial if and only if both \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) are factorial. Next up are tensor products. **Definition 3.31**.: Let \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) be tracially complete \(C^{*}\)-algebras. Let \(\mathcal{M}\otimes\mathcal{N}\) denote the minimal \(C^{*}\)-tensor product of \(\mathcal{M}\) and \(\mathcal{N}\) and let \(X\otimes Y\subseteq T(\mathcal{M}\otimes\mathcal{N})\) be the closed convex hull of the traces \(\sigma\otimes\rho\) for \(\sigma\in X\) and \(\rho\in Y\). Define \(\mathcal{M}\bar{\otimes}\mathcal{N}\) to be the tracial completion of \(\mathcal{M}\otimes\mathcal{N}\) with respect to \(X\otimes Y\). The tensor product \((\mathcal{M},X)\bar{\otimes}(\mathcal{N},Y)\) is the pair \((\mathcal{M}\bar{\otimes}\mathcal{N},X\otimes Y)\). **Proposition 3.32**.: _If \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) are factorial tracially complete \(C^{*}\)-algebras, then \((\mathcal{M},X)\bar{\otimes}(\mathcal{N},Y)\) is also factorial._ Proof.: Let \(F\subseteq T(\mathcal{M}\otimes\mathcal{N})\) be the closed face generated by \(X\otimes Y\). By Proposition 3.23(iv), it is enough to show that \(F=X\otimes Y\). In fact, by the Krein-Milman theorem, it is enough to show \(\partial_{e}F\subseteq X\otimes Y\). To this end, let \(\tau_{0}\in\partial_{e}F\) be given. Since \(F\) is a closed face in \(T(\mathcal{M}\otimes\mathcal{N})\), we have \(\tau_{0}\in\partial_{e}T(\mathcal{M}\otimes\mathcal{N})\). By [10, Proposition 3.5], there are traces \(\sigma_{0}\in T(\mathcal{M})\) and \(\rho_{0}\in T(\mathcal{N})\) such that \(\tau_{0}=\sigma_{0}\otimes\rho_{0}\). Now, it is enough to show \(\sigma_{0}\in X\) and \(\rho_{0}\in Y\) - we will only show the former as the latter follows by symmetry. 
By Theorem 2.4(ii), to show \(\sigma_{0}\in X\), it is enough to show \(f(\sigma_{0})=0\) for every continuous affine function \(f\colon T(\mathcal{M})\to[0,1]\) with \(f|_{X}=0\). Let \(f\) be such a function and apply Proposition 2.7 to produce a self-adjoint \(a\in\mathcal{M}\) with \(\sigma(a)=f(\sigma)\) for all \(\sigma\in T(\mathcal{M})\). Define \[G\coloneqq\{\tau\in T(\mathcal{M}\otimes\mathcal{N}):\tau(a\otimes 1_{\mathcal{N}})=0\}. \tag{3.38}\] For any \(\sigma\in T(\mathcal{M})\) and \(\rho\in T(\mathcal{N})\), we have \((\sigma\otimes\rho)(a\otimes 1_{\mathcal{N}})=f(\sigma)\geq 0\). By [10, Proposition 3.5] and the Krein-Milman theorem, \[T(\mathcal{M}\otimes\mathcal{N})=\overline{\mathrm{co}}\,\big{\{}\sigma\otimes\rho:\sigma\in T(\mathcal{M}),\ \rho\in T(\mathcal{N})\big{\}}. \tag{3.39}\] Hence \(\tau(a\otimes 1_{\mathcal{N}})\geq 0\) for all \(\tau\in T(\mathcal{M}\otimes\mathcal{N})\). It follows that \(G\) is a closed face in \(T(\mathcal{M}\otimes\mathcal{N})\). For all \(\sigma\in X\) and \(\rho\in Y\), \[(\sigma\otimes\rho)(a\otimes 1_{\mathcal{N}})=\sigma(a)=f(\sigma)=0, \tag{3.40}\] and hence \(X\otimes Y\subseteq G\). Then, as \(G\) is a closed face containing \(X\otimes Y\), we have \(F\subseteq G\), and so \(\tau_{0}\in G\). Since \(\tau_{0}=\sigma_{0}\otimes\rho_{0}\), this shows that \(f(\sigma_{0})=\sigma_{0}(a)=0\). Finally, we construct sequential inductive limits in the category of tracially complete \(C^{*}\)-algebras. **Definition 3.33**.: Let \[\cdots\to(\mathcal{M}_{n},X_{n})\xrightarrow{\phi_{n}^{n+1}}(\mathcal{M}_{n+1},X_{n+1})\xrightarrow{\phi_{n+1}^{n+2}}(\mathcal{M}_{n+2},X_{n+2})\to\cdots \tag{3.41}\] be a sequential inductive system of tracially complete \(C^{*}\)-algebras. Form the \(C^{*}\)-inductive limit \(A\coloneqq\varinjlim(\mathcal{M}_{n},\phi_{n}^{n+1})\) and let \(\hat{\phi}_{n}^{\infty}\colon\mathcal{M}_{n}\to A\) be the canonical unital \({}^{*}\)-homomorphism. The inductive system (3.41) induces a projective system \[\cdots\longleftarrow X_{n}\xleftarrow{(\phi_{n}^{n+1})^{*}}X_{n+1}\xleftarrow{(\phi_{n+1}^{n+2})^{*}}X_{n+2}\longleftarrow\cdots \tag{3.42}\] of compact convex sets. Set \[X\coloneqq\{\tau\in T(A):\tau\circ\hat{\phi}_{n}^{\infty}\in X_{n}\text{ for all }n\geq 1\}\subseteq T(A) \tag{3.43}\] and note that \(X\) is a compact convex subset of \(T(A)\). Further, the maps \((\hat{\phi}_{n}^{\infty})^{*}\colon X\to X_{n}\) induce an affine homeomorphism \[X\xrightarrow{\cong}\varprojlim\big{(}X_{n},(\phi_{n}^{n+1})^{*}\big{)} \tag{3.44}\] (and in particular, \(X\) is non-empty whenever all of the \(X_{n}\) are non-empty; see [35, Theorem VIII.3.5], for example). Define \(\mathcal{M}\coloneqq\overline{A}^{X}\) and \[\varinjlim\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)}\coloneqq(\mathcal{M},X). \tag{3.45}\] Finally, write \(\phi_{n}^{\infty}\colon\mathcal{M}_{n}\to\mathcal{M}\) for the unital \({}^{*}\)-homomorphism obtained by composing \(\hat{\phi}_{n}^{\infty}\) with the canonical map \(\alpha_{X}\colon A\to\mathcal{M}\). We now show the definition of inductive limit in Definition 3.33 satisfies the required universal property and hence is an inductive limit in the category of tracially complete \(C^{*}\)-algebras. **Proposition 3.34**.: _Let \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) be a sequence of tracially complete \(C^{*}\)-algebras and \(\big{(}\phi_{n}^{n+1}\colon(\mathcal{M}_{n},X_{n})\to(\mathcal{M}_{n+1},X_{n+1})\big{)}_{n=1}^{\infty}\) be a sequence of morphisms. 
Set_ \[(\mathcal{M},X)\coloneqq\varinjlim\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)} \tag{3.46}\] _as in Definition 3.33 and write \(\phi_{n}^{\infty}\colon\mathcal{M}_{n}\to\mathcal{M}\) for the canonical \({}^{*}\)-homomorphisms as above. Then \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra, \(\phi_{n}^{\infty}\) is a morphism for \(n\geq 1\), and together, they define the inductive limit of \(\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)}_{n=1}^{\infty}\) in the category of tracially complete \(C^{*}\)-algebras. If each \(X_{n}\) is metrisable, then so is \(X\). Further, if each \((\mathcal{M}_{n},X_{n})\) is factorial, then so is \((\mathcal{M},X)\)._ Proof.: Define \(A\), \(X\), and \(\hat{\phi}_{n}^{\infty}\) as in Definition 3.33, so that \((\mathcal{M},X)=(\overline{A}^{X},X)\). By Proposition 3.23(iii), \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra. By construction, \((\hat{\phi}_{n}^{\infty})^{*}(X)\subseteq X_{n}\), so \((\phi_{n}^{\infty})^{*}(X)\subseteq X_{n}\) and each \(\phi_{n}^{\infty}\) is a morphism. As sequential projective limits of metrisable spaces are metrisable, \(X\) is metrisable if each \(X_{n}\) is metrisable. If each \((\mathcal{M}_{n},X_{n})\) is factorial, then \(X_{n}\) is a face in \(T(\mathcal{M}_{n})\) for all \(n\geq 1\). It follows from (3.43) that \(X\) is a face in \(T(A)\). Proposition 3.23(iv) then implies that \((\mathcal{M},X)\) is factorial. To show that \((\mathcal{M},X)\) is the inductive limit of \(\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)}\), suppose we have a tracially complete \(C^{*}\)-algebra \((\mathcal{N},Y)\) and morphisms \[\psi_{n}\colon(\mathcal{M}_{n},X_{n})\to(\mathcal{N},Y),\qquad n\in\mathbb{N}, \tag{3.47}\] such that \(\psi_{n+1}\circ\phi_{n}^{n+1}=\psi_{n}\). Then, working in the category of unital \(C^{*}\)-algebras, we obtain a unique unital \({}^{*}\)-homomorphism \(\hat{\psi}\colon A\to\mathcal{N}\) such that \(\psi_{n}=\hat{\psi}\circ\hat{\phi}_{n}^{\infty}\) for each \(n\in\mathbb{N}\). For \(\tau\in Y\), we have \[\tau\circ\hat{\psi}\circ\hat{\phi}_{n}^{\infty}=\tau\circ\psi_{n}\in X_{n} \tag{3.48}\] for all \(n\geq 1\), so by (3.43), \(\tau\circ\hat{\psi}\in X\). This shows \(\hat{\psi}^{*}(Y)\subseteq X\). Accordingly, there is a unique morphism \(\psi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) with \(\hat{\psi}=\psi\circ\alpha_{X}\) by Proposition 3.25. Moreover \[\psi_{n}=\hat{\psi}\circ\hat{\phi}_{n}^{\infty}=\psi\circ\alpha_{X}\circ\hat{\phi}_{n}^{\infty}=\psi\circ\phi_{n}^{\infty}, \tag{3.49}\] as required. Uniqueness of \(\psi\) follows from uniqueness in Proposition 3.25 as any morphism \(\psi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) with \(\psi\circ\phi_{n}^{\infty}=\psi_{n}\) for all \(n\) satisfies \(\psi\circ\alpha_{X}=\hat{\psi}\). For each metrisable Choquet simplex \(X\), we now use an inductive limit construction to produce the concrete model \((\mathcal{R}_{X},X)\) of a tracially complete \(C^{*}\)-algebra covered by Theorem C as discussed in the overview of results. This is achieved by mimicking the construction of a simple AF algebra \(A\) for which \(T(A)=X\), found in [6, 54], and can be deduced from these results by considering the tracial completion of such an AF algebra. The details are slightly easier in the tracially complete setting since \(K_{0}(\mathcal{R})\cong\mathbb{R}\) and one need not worry about simplicity. **Example 3.35**.: Let \(X\) be a metrisable Choquet simplex. 
By [55, Theorem 11.6], we can write \(X\) as the inverse limit of a system of finite dimensional simplices \[X_{1}\xleftarrow{\alpha_{2}^{1}}X_{2}\xleftarrow{\alpha_{3}^{2}}\cdots, \tag{3.50}\] where the connecting maps are continuous and affine. We will construct an inductive system of tracially complete \(C^{*}\)-algebras realising this data. Set \(\mathcal{M}_{n}\coloneqq\mathcal{R}^{\oplus\partial_{e}X_{n}}\), where \(\mathcal{R}\) is the hyperfinite II\({}_{1}\) factor. Note that \(T(\mathcal{M}_{n})\) can be canonically identified with \(X_{n}\) and \(\big{(}\mathcal{M}_{n},T(\mathcal{M}_{n})\big{)}\) is tracially complete and factorial. Next, we can choose a \({}^{*}\)-homomorphism \(\phi_{n}^{n+1}\colon\mathcal{M}_{n}\to\mathcal{M}_{n+1}\) that induces the map \(\alpha_{n+1}^{n}\). To do this explicitly, write \(\partial_{e}X_{n+1}=\{x_{1},\ldots,x_{k}\}\) and \(\partial_{e}X_{n}=\{y_{1},\ldots,y_{l}\}\). For each \(i=1,\ldots,k\), write \(\alpha_{n+1}^{n}(x_{i})\) as the convex combination \(\sum_{j=1}^{l}\lambda_{i,j}y_{j}\). Fixing \(i\), find a partition of unity \(p_{i,1},\ldots,p_{i,l}\in\mathcal{R}\) consisting of projections such that \(\tau(p_{i,j})=\lambda_{i,j}\) for each \(j\). For each \(i\) and \(j\), choose any unital \({}^{*}\)-homomorphism \(\psi_{i,j}\colon\mathcal{R}\to p_{i,j}\mathcal{R}p_{i,j}\) (which exists since \(\mathcal{R}\) has full fundamental group, a result which goes back to Murray and von Neumann in [83]). Then define \(\phi_{n}^{n+1}\colon\mathcal{R}^{\oplus l}\to\mathcal{R}^{\oplus k}\) by \[\phi_{n}^{n+1}(a_{1}\oplus\cdots\oplus a_{l})\coloneqq\Big{(}\sum_{j=1}^{l}\psi_{1,j}(a_{j}),\ldots,\sum_{j=1}^{l}\psi_{k,j}(a_{j})\Big{)}, \tag{3.51}\] and note that \((\phi_{n}^{n+1})^{*}=\alpha_{n+1}^{n}\). Define \((\mathcal{R}_{X},X^{\prime})\coloneqq\varinjlim\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)}\). As \((\phi_{n}^{n+1})^{*}=\alpha_{n+1}^{n}\), (3.44) provides an affine homeomorphism \(X\to X^{\prime}\), so after identifying \(X\) with \(X^{\prime}\) via this homeomorphism, \((\mathcal{R}_{X},X)\) is the desired factorial tracially complete \(C^{*}\)-algebra. A priori, the construction outlined above depends not only on the Choquet simplex \(X\) but also on the choice of inverse limit and the choices made when defining connecting maps. However, it will follow from Theorem 9.13 that the tracially complete \(C^{*}\)-algebra \((\mathcal{R}_{X},X)\) depends only on \(X\). Moreover, as \(X\) varies over all metrisable Choquet simplices, these will provide models for the classifiable tracially complete \(C^{*}\)-algebras (see Theorem 9.15, which contains Theorem C). **Remark 3.36**.: The infinite tensor product \(\bigotimes_{n=1}^{\infty}(\mathcal{M}_{n},X_{n})\) of a countable family \((\mathcal{M}_{n},X_{n})\) of tracially complete \(C^{*}\)-algebras can now be constructed as the inductive limit of the finite tensor products \(\bigotimes_{i=1}^{n}(\mathcal{M}_{i},X_{i})\) with the obvious connecting maps. ### \(W^{*}\)-bundles In Proposition 3.6, we showed Ozawa's \(W^{*}\)-bundles can be viewed as tracially complete \(C^{*}\)-algebras. We show here that by a reformulation of a theorem due to Ozawa, any factorial tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) with \(X\) a Bauer simplex can be given the structure of a \(W^{*}\)-bundle whose fibres are factors. This is essentially due to Ozawa in [86]. 
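As a toy illustration of the bundle structure we are aiming for (a simple worked case, not needed in the sequel), take \((\mathcal{M},X)=\big{(}\mathcal{R}^{\oplus k},T(\mathcal{R}^{\oplus k})\big{)}\) with extreme traces \(\tau_{1},\ldots,\tau_{k}\), so that \(K=\partial_{e}X=\{\tau_{1},\ldots,\tau_{k}\}\). Then \(C(K)\cong\mathbb{C}^{k}\) embeds into \(Z(\mathcal{M})\cong\mathbb{C}^{k}\) via \(f\mapsto(f(\tau_{1})1_{\mathcal{R}},\ldots,f(\tau_{k})1_{\mathcal{R}})\), and for \(\tau=\sum_{i=1}^{k}\lambda_{i}\tau_{i}\in X\) and \(a\in\mathcal{M}\), one computes \[\tau(fa)=\sum_{i=1}^{k}\lambda_{i}f(\tau_{i})\tau_{i}(a)=\int_{K}f(\sigma)\sigma(a)\,\mathrm{d}\mu_{\tau}(\sigma),\] since \(\mu_{\tau}=\sum_{i=1}^{k}\lambda_{i}\delta_{\tau_{i}}\). This is exactly condition (3.53) of Theorem 3.37 below, and the fibres of the resulting bundle are the factors \(\pi_{\tau_{i}}(\mathcal{M})^{\prime\prime}\cong\mathcal{R}\); Example 3.38 shows how this picture fails without factoriality. 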
Recall from Section 2.1 that if \(X\) is a Bauer simplex, then there is an affine homeomorphism \[X\stackrel{{\cong}}{{\longrightarrow}}\operatorname{Prob}(\partial_{e}X)\colon\tau\mapsto\mu_{\tau}, \tag{3.52}\] where \(\mu_{\tau}\) is the measure with barycentre \(\tau\). **Theorem 3.37** (cf. [86, Theorem 3]).: _If \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra such that \(X\) is a Bauer simplex with \(K\coloneqq\partial_{e}X\), then there is a unique embedding \(C(K)\subseteq Z(\mathcal{M})\) such that_ \[\tau(fa)=\int_{K}f(\sigma)\sigma(a)\,\mathrm{d}\mu_{\tau}(\sigma),\qquad f\in C(K),\,a\in\mathcal{M},\,\tau\in X. \tag{3.53}\] _Further, the map \(E\colon\mathcal{M}\to C(K)\) given by \(E(a)(\tau)\coloneqq\tau(a)\) is a conditional expectation endowing \(\mathcal{M}\) with the structure of a \(W^{*}\)-bundle._ Proof.: We first show the embedding of \(C(K)\) is unique, assuming it exists. Let \(\phi,\psi\colon C(K)\to Z(\mathcal{M})\) be two embeddings satisfying (3.53). Fix \(f\in C(K)\). If \(\tau\in K\), then \(\mu_{\tau}\) is the point mass at \(\tau\), so (3.53) gives \[\tau(\phi(f)\psi(f))=f(\tau)\tau(\psi(f))=f(\tau)^{2}. \tag{3.54}\] Similarly, \(\tau(\phi(f)^{2})=\tau(\psi(f)^{2})=f(\tau)^{2}\). Now, if \(f\in C(K)\) is self-adjoint, we have \[\|\phi(f)-\psi(f)\|_{2,\tau}^{2}=\tau\big{(}(\phi(f)-\psi(f))^{2}\big{)}=0 \tag{3.55}\] for all \(\tau\in\partial_{e}X\), and hence for all \(\tau\in X\). Therefore, \(\phi(f)=\psi(f)\) for all self-adjoint \(f\in C(K)\), and hence for all \(f\in C(K)\). Note that once we know the embedding of \(C(K)\) exists, then \(E\) is necessarily a tracial conditional expectation onto \(C(K)\) such that \(\|\cdot\|_{2,X}=\|\cdot\|_{2,E}\). In particular, the unit ball of \(\mathcal{M}\) is \(\|\cdot\|_{2,E}\)-complete. The hardest portion of the proof is the existence of the embedding of \(C(K)\). This is constructed in [86, Theorem 3]. Note that there is a standing assumption of metrisability in [86]. The metrisability in this result is only needed so that \(\partial_{e}X\) is Borel, but in our case, this is clear since \(\partial_{e}X\) is closed. The use of factoriality in Theorem 3.37 is subtle - it enters in the proof implicitly through the computation of the centre of \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) for \(\tau\in X\) in [86, Lemma 10]. In more detail, if \(\tau\in X\), and \(z\in Z(\pi_{\tau}(\mathcal{M})^{\prime\prime})\), then we may define \(z\tau\in\mathcal{M}^{*}\) by \((z\tau)(a)\coloneqq\langle\pi_{\tau}(a)z\xi_{\tau},\xi_{\tau}\rangle\). Then \(z\tau\) is a tracial linear functional with \(|z\tau|\leq\|z\|\tau\). The assumption that \(X\) is a face implies \(z\tau\) belongs to the span of \(X\) - this is necessary to define the representing measure \(\mu_{z\tau}\) on \(\partial_{e}X\) as needed in the proof of [86, Lemma 10]. As the following example demonstrates, Theorem 3.37 can fail without factoriality even for the one-dimensional simplex. The salient point is that in a \(W^{*}\)-bundle the fibres are fairly independent of each other, whereas in a non-factorial tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\), the GNS representations \(\pi_{\tau}\) with respect to the extreme traces \(\tau\in\partial_{e}X\) may not be. **Example 3.38**.: Let \(\mathcal{M}\coloneqq\mathcal{R}\oplus\mathcal{R}\oplus\mathcal{R}\), and let \(\tau_{i}\in T(\mathcal{M})\) be given by \(\tau_{i}(a_{1},a_{2},a_{3})\coloneqq\tau_{\mathcal{R}}(a_{i})\) for \(i=1,2,3\). 
Define \(\sigma_{1},\sigma_{2}\in T(\mathcal{M})\) by \[\sigma_{1}\coloneqq\frac{1}{2}(\tau_{1}+\tau_{2})\qquad\text{and}\qquad\sigma_{2}\coloneqq\frac{1}{2}(\tau_{1}+\tau_{3}). \tag{3.56}\] Let \(Y\coloneqq\operatorname{co}\{\sigma_{1},\sigma_{2}\}\subseteq T(\mathcal{M})\). Then \((\mathcal{M},Y)\) is a tracially complete \(C^{*}\)-algebra and \(Y\) is a Bauer simplex. Note that \((\mathcal{M},Y)\) is not factorial - in fact, neither extreme point of \(Y\) is an extreme point of \(T(\mathcal{M})\). Let \(K\coloneqq\partial_{e}Y=\{\sigma_{1},\sigma_{2}\}\). We will show there is no embedding of \(C(K)\cong\mathbb{C}^{2}\) into \(Z(\mathcal{M})\cong\mathbb{C}^{3}\) satisfying (3.53). Note that any embedding satisfying (3.53) is unital. Let \(f_{1},f_{2}\in C(K)\) be given by \(f_{1}(\sigma_{1})\coloneqq 1\) and \(f_{1}(\sigma_{2})\coloneqq 0\), and \(f_{2}\coloneqq 1_{C(K)}-f_{1}\). Exploiting the symmetry of \(\tau_{2}\) and \(\tau_{3}\), it suffices to show that the three unital embeddings \(j_{1},j_{2},j_{3}\colon C(K)\to Z(\mathcal{M})\) determined by \[j_{1}(f_{1})=(1_{\mathcal{R}},0,0),\ j_{2}(f_{1})=(1_{\mathcal{R}},1_{\mathcal{R}},0),\text{ and }j_{3}(f_{1})=(0,1_{\mathcal{R}},0) \tag{3.57}\] fail (3.53). In the first case, for \(a\coloneqq(0,1_{\mathcal{R}},0)\), \(\sigma_{1}(j_{1}(f_{1})a)=0\), but \[\int_{K}f_{1}(\sigma)\sigma(a)\,\mathrm{d}\mu_{\sigma_{1}}=f_{1}(\sigma_{1})\sigma_{1}(a)=\frac{1}{2}. \tag{3.58}\] In the second case, for \(a\coloneqq(1_{\mathcal{R}},0,0)\), \(\sigma_{2}(j_{2}(f_{2})a)=0\), but \[\int_{K}f_{2}(\sigma)\sigma(a)\,\mathrm{d}\mu_{\sigma_{2}}=f_{2}(\sigma_{2})\sigma_{2}(a)=\frac{1}{2}. \tag{3.59}\] In the third case, for \(a\coloneqq(1_{\mathcal{R}},0,0)\), \(\sigma_{1}(j_{3}(f_{1})a)=0\), but \[\int_{K}f_{1}(\sigma)\sigma(a)\,\mathrm{d}\mu_{\sigma_{1}}=f_{1}(\sigma_{1})\sigma_{1}(a)=\frac{1}{2}. \tag{3.60}\] So (3.53) fails in all three cases. ## 4. Amenability for tracially complete \(C^{*}\)-algebras In this section, we show how fibrewise amenability can be combined to obtain a global uniform \(2\)-norm completely positive approximation property. The main result is Theorem 4.9, characterising morphisms into tracially complete \(C^{*}\)-algebras which are _tracially nuclear_ - the appropriate notion of amenability which feeds into classification. Theorem 1.2 will follow as a special case of Theorem 4.9. ### Definition and basic properties Our notion of amenability for tracially complete \(C^{*}\)-algebras is given by the following version of the completely positive approximation property. **Definition 4.1**.: Let \(A\) be a \(C^{*}\)-algebra and let \((\mathcal{N},Y)\) be a tracially complete \(C^{*}\)-algebra. We say that a c.p. map \(\theta\colon A\to\mathcal{N}\) is _tracially nuclear_ if there are nets of finite dimensional \(C^{*}\)-algebras \(F_{\lambda}\) and c.p. maps \[A\xrightarrow{\psi_{\lambda}}F_{\lambda}\xrightarrow{\phi_{\lambda}}\mathcal{N} \tag{4.1}\] such that for all \(a\in A\), \[\lim_{\lambda}\|\phi_{\lambda}(\psi_{\lambda}(a))-\theta(a)\|_{2,Y}=0. \tag{4.2}\] Further, we say \((\mathcal{N},Y)\) is _amenable_ if \(\mathrm{id}_{\mathcal{N}}\) is tracially nuclear. As usual, we may restrict to sequences in Definition 4.1 when \(A\) is separable.34 Footnote 34: This requires having a uniform bound on the norms of \(\phi_{\lambda}\) and \(\psi_{\lambda}\), which is always possible by Proposition 4.4. Using the canonical inclusion and projection maps, it is immediate that amenability passes to direct sums. 
**Proposition 4.2**.: _For tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) the direct sum \((\mathcal{M}\oplus\mathcal{N},X\oplus Y)\) is amenable if and only if both \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) are amenable._ Recall that if \(\mathcal{N}\) is a von Neumann algebra, then a c.p. map \(\theta\colon A\to\mathcal{N}\) is _weakly nuclear_ if there are c.p. maps as in (4.1) such that for all \(a\in A\), \(\phi_{\lambda}(\psi_{\lambda}(a))\to\theta(a)\) in the weak\({}^{*}\)-topology on \(\mathcal{N}\). When \(\tau\) is a faithful normal trace on \(\mathcal{N}\), so that \((\mathcal{N},\{\tau\})\) is a tracially complete \(C^{*}\)-algebra, tracial nuclearity and weak nuclearity agree (and likewise, semidiscreteness of \(\mathcal{N}\) agrees with amenability of \((\mathcal{N},\{\tau\})\)). We defer the proof to Proposition 4.5 as we first need some prerequisite results. The following standard lemma allows us to reduce problems about tracial nuclearity to the unital case. We use \(A^{\dagger}\) for the forced unitisation of \(A\); i.e. when \(A\) is unital, we add a new unit to form \(A^{\dagger}\cong A\oplus\mathbb{C}\). **Lemma 4.3**.: _Suppose \(A\) is a \(C^{*}\)-algebra, \((\mathcal{N},Y)\) is a tracially complete \(C^{*}\)-algebra, and \(\theta\colon A\to\mathcal{N}\) is a c.p.c. map. Then \(\theta\) is tracially nuclear if and only if the unitisation \(\theta^{\dagger}\colon A^{\dagger}\to\mathcal{N}\) is also tracially nuclear._ Proof.: First note that \(\theta\) factorises as the inclusion \(A\to A^{\dagger}\) followed by \(\theta^{\dagger}\), so that tracial nuclearity of \(\theta^{\dagger}\) implies that of \(\theta\). Conversely, suppose that \(\theta\) is tracially nuclear. Let \((e_{\lambda})\) be an approximate unit for \(A\) consisting of positive contractions and define \(\theta_{\lambda}\colon A^{\dagger}\to\mathcal{N}\) by \[\theta_{\lambda}(a+\alpha 1_{A^{\dagger}})\coloneqq\theta(e_{\lambda}ae_{\lambda})+\alpha 1_{\mathcal{N}}. \tag{4.3}\] Then \(\theta_{\lambda}\) is unital. Further, \(\theta_{\lambda}\) is tracially nuclear as it is the sum of the tracially nuclear maps \[a+\alpha 1_{A^{\dagger}}\mapsto\theta(e_{\lambda}(a+\alpha 1_{A^{\dagger}})e_{\lambda}) \tag{4.4}\] and \[a+\alpha 1_{A^{\dagger}}\mapsto\alpha(1_{\mathcal{N}}-\theta(e_{\lambda}^{2})). \tag{4.5}\] Moreover, since each \(\theta_{\lambda}\) is unital and \(\|\theta_{\lambda}(a)-\theta(a)\|\to 0\) for all \(a\in A\), we have \(\|\theta_{\lambda}(a)-\theta^{\dagger}(a)\|\to 0\) for all \(a\in A^{\dagger}\). As each \(\theta_{\lambda}\) is tracially nuclear, so is \(\theta^{\dagger}\). As with the analogous versions of the completely positive approximation property for both \(C^{*}\)-algebras and von Neumann algebras, we can arrange for a uniform bound on the norms of the \(\phi_{\lambda}\) and \(\psi_{\lambda}\) in Definition 4.1. The proof here follows the proof of the von Neumann algebraic version in [15, Proposition 3.8.2] taking care to avoid the use of Borel functional calculus, which, in general, does not exist in tracially complete \(C^{*}\)-algebras. **Proposition 4.4**.: _Suppose that \(A\) is a \(C^{*}\)-algebra, \((\mathcal{N},Y)\) is a tracially complete \(C^{*}\)-algebra, and \(\theta\colon A\to\mathcal{N}\) is a tracially nuclear map. Then there are nets_ \[A\xrightarrow{\psi_{\lambda}}F_{\lambda}\xrightarrow{\phi_{\lambda}}\mathcal{N} \tag{4.6}\] _as in Definition 4.1 with \(\|\psi_{\lambda}\|\leq\|\theta\|\) and \(\phi_{\lambda}(1_{F_{\lambda}})=1_{\mathcal{N}}\). 
Further, if \(A\) and \(\theta\) are unital, we may arrange each \(\psi_{\lambda}\) to be unital._ Proof.: Rescaling \(\theta\), we may assume that \(\|\theta\|\leq 1\). Then by adding a unit to \(A\) and using Lemma 4.3, we may assume \(A\) and \(\theta\) are unital. We will construct u.c.p. maps \(\phi_{\lambda}\) and \(\psi_{\lambda}\) as in (4.6) approximately factorising \(\theta\). Since \(\theta\) is assumed to be tracially nuclear, there are nets of finite dimensional \(C^{*}\)-algebras \(F_{\lambda}\) and c.p. maps \[A\xrightarrow{\psi_{\lambda}^{\prime\prime}}F_{\lambda}\xrightarrow{\phi_{ \lambda}^{\prime\prime\prime}}\mathcal{N} \tag{4.7}\] such that for all \(a\in A\), \[\lim_{\lambda}\|\phi_{\lambda}^{\prime\prime\prime}(\psi_{\lambda}^{\prime \prime}(a))-\theta(a)\|_{2,Y}=0. \tag{4.8}\] By [15, Lemma 2.2.5], there is a u.c.p. map \(\psi_{\lambda}^{\prime}\colon A\to F_{\lambda}\) such that \[\psi_{\lambda}^{\prime\prime}(a)=\psi_{\lambda}^{\prime\prime}(1_{A})^{1/2} \psi_{\lambda}^{\prime}(a)\psi_{\lambda}^{\prime\prime}(1_{A})^{1/2},\qquad a \in A. \tag{4.9}\] Define \(\phi_{\lambda}^{\prime\prime}\colon F_{\lambda}\to\mathcal{N}\) by \[\phi_{\lambda}^{\prime\prime}(x)\coloneqq\phi_{\lambda}^{\prime\prime\prime} \big{(}\psi_{\lambda}^{\prime\prime}(1_{A})^{1/2}x\psi_{\lambda}^{\prime\prime }(1_{A})^{1/2}\big{)},\qquad x\in F_{\lambda}, \tag{4.10}\] and note that \(\phi_{\lambda}^{\prime\prime}\circ\psi_{\lambda}^{\prime}=\phi_{\lambda}^{ \prime\prime\prime}\circ\psi_{\lambda}^{\prime\prime}\). Therefore, we have \[\lim_{\lambda}\|\phi_{\lambda}^{\prime\prime}(\psi_{\lambda}^{\prime}(a))- \theta(a)\|_{2,Y}=0,\qquad a\in A. \tag{4.11}\] Define a continuous function \(f\colon\mathbb{R}\to[0,1]\) by \[f(t)\coloneqq\begin{cases}1,&t\leq 1;\\ 2-t,&1<t<2;\\ 0,&t\geq 2.\end{cases} \tag{4.12}\] Let \(b_{\lambda}\coloneqq f(\phi_{\lambda}^{\prime\prime}(1_{F_{\lambda}}))\) and define \(\phi_{\lambda}^{\prime}\colon F_{\lambda}\to\mathcal{N}\) by \[\phi_{\lambda}^{\prime}(x)\coloneqq b_{\lambda}\phi_{\lambda}^{\prime\prime}(x )b_{\lambda},\qquad x\in F_{\lambda}. \tag{4.13}\] By elementary calculus, \(0\leq tf(t)^{2}\leq 1\) for all \(t\in[0,\infty)\). Therefore, \(\|\phi_{\lambda}^{\prime}(1_{F_{\lambda}})\|\leq 1\), and \(\phi_{\lambda}^{\prime}\) is a c.p.c. map. Also, for all \(t\in\mathbb{R}\), we have \(0\leq 1-f(t)\leq|t-1|\), and so \[0\leq 1_{\mathcal{N}}-b_{\lambda}\leq|\phi_{\lambda}^{\prime\prime}(1_{F_{ \lambda}})-1_{\mathcal{N}}|. \tag{4.14}\] Since \(\theta\) and \(\psi_{\lambda}^{\prime}\) are unital, (4.11) and (4.14) imply \[\lim_{\lambda}\|1_{\mathcal{N}}-b_{\lambda}\|_{2,Y}=0. \tag{4.15}\] Fix \(a\in A\). Since the elements \(b_{\lambda}\) are contractions, we have \[\|\phi_{\lambda}^{\prime}(\psi_{\lambda}^{\prime}(a))-b_{\lambda}\theta(a)b_{ \lambda}\|_{2,Y}\leq\|\phi_{\lambda}^{\prime\prime}(\psi_{\lambda}^{\prime}(a ))-\theta(a)\|_{2,Y}, \tag{4.16}\] so \(\lim_{\lambda}\|\phi_{\lambda}^{\prime}(\psi_{\lambda}^{\prime}(a))-b_{\lambda }\theta(a)b_{\lambda}\|_{2,Y}=0\) by (4.11). Moreover, as multiplication in \(\mathcal{N}\) is \(\|\cdot\|_{2,Y}\)-continuous on \(\|\cdot\|\)-bounded sets, we have \[\lim_{\lambda}\|b_{\lambda}\theta(a)b_{\lambda}-\theta(a)\|_{2,Y}=0,\qquad a \in A, \tag{4.17}\] by (4.15). Therefore, \[\lim_{\lambda}\|\phi_{\lambda}^{\prime}(\psi_{\lambda}^{\prime}(a))-\theta(a) \|_{2,Y}=0,\qquad a\in A. \tag{4.18}\] While \(\psi_{\lambda}^{\prime}\) was arranged to be unital, the map \(\phi_{\lambda}^{\prime}\) need not be. 
We modify the maps once more to correct this. Let \(\rho\) be a state on \(A\) and define \(\psi_{\lambda}\colon A\to F_{\lambda}\oplus\mathbb{C}\) by \(\psi_{\lambda}(a)\coloneqq(\psi_{\lambda}^{\prime}(a),\rho(a))\). Then \(\psi_{\lambda}\) is u.c.p. as both \(\psi_{\lambda}^{\prime}\) and \(\rho\) are. Also, define \(\phi_{\lambda}\colon F_{\lambda}\oplus\mathbb{C}\to\mathcal{N}\) by \[\phi_{\lambda}(x,\alpha)\coloneqq\phi_{\lambda}^{\prime}(x)+\alpha(1_{\mathcal{N}}-\phi_{\lambda}^{\prime}(1_{F_{\lambda}})). \tag{4.19}\] Then \(\phi_{\lambda}\) is u.c.p. as \(\phi^{\prime}_{\lambda}\) is c.p.c. By (4.18), to see that \[\lim_{\lambda}\|\phi_{\lambda}(\psi_{\lambda}(a))-\theta(a)\|_{2,Y}=0,\qquad a\in A, \tag{4.20}\] it suffices to show that \(\lim_{\lambda}\|1_{\mathcal{N}}-\phi_{\lambda}(1_{F_{\lambda}})\|_{2,Y}=0\). However, this follows immediately from taking \(a\coloneqq 1_{A}\) in (4.18) using that the maps \(\psi^{\prime}_{\lambda}\) and \(\theta\) are both unital. We now return to the promised connection between tracial and weak nuclearity. **Proposition 4.5**.: _Let \(A\) be a \(C^{*}\)-algebra and \((\mathcal{N},\tau)\) be a tracial von Neumann algebra. A c.p. map \(\theta\colon A\to(\mathcal{N},\{\tau\})\) is tracially nuclear if and only if it is weakly nuclear. In particular, \((\mathcal{N},\tau)\) is semidiscrete as a tracial von Neumann algebra if and only if \((\mathcal{N},\{\tau\})\) is amenable as a tracially complete \(C^{*}\)-algebra._ Proof.: The key observation is that the strong operator topology on the operator norm unit ball of \(\mathcal{N}\) coincides with the topology induced by \(\|\cdot\|_{2,\tau}\) (see [7, Proposition III.2.2.17], for example). From here, assume that \(\theta\) is weakly nuclear. By scaling \(\theta\), we may assume \(\|\theta\|\leq 1\). Then, as in the proof of Lemma 4.3, the unitisation \(\theta^{\dagger}\colon A^{\dagger}\to\mathcal{N}\) is weakly nuclear. Hence we may assume that \(A\) and \(\theta\) are unital. By [15, Proposition 3.8.2], there are nets of finite dimensional \(C^{*}\)-algebras and c.p.c. maps \[A\xrightarrow{\psi_{\lambda}}F_{\lambda}\xrightarrow{\phi_{\lambda}}\mathcal{N} \tag{4.21}\] such that \(\phi_{\lambda}(\psi_{\lambda}(a))\to\theta(a)\) weak\({}^{*}\), and in particular in the weak operator topology, for all \(a\in A\). Since the set of c.p.c. maps \(A\to\mathcal{N}\) which admit c.p.c. factorisations through a finite dimensional \(C^{*}\)-algebra form a convex set ([15, Lemma 2.3.6 and Remark 2.3.7]), the Hahn-Banach theorem, in the form of [15, Lemma 3.8.1], shows that there is a net of c.p.c. maps \(\theta_{\lambda}\colon A\to\mathcal{N}\) such that \(\theta_{\lambda}(a)\to\theta(a)\) in the strong-operator topology for all \(a\in A\), and each \(\theta_{\lambda}\) admits a c.p.c. factorisation through a finite dimensional \(C^{*}\)-algebra. As each \(\theta_{\lambda}\) is contractive, this implies \(\|\theta(a)-\theta_{\lambda}(a)\|_{2,\tau}\to 0\) for all \(a\in A\), and hence \(\theta\) is tracially nuclear. Conversely, suppose that \(\theta\) is tracially nuclear. Again, after scaling \(\theta\), we may assume that \(\theta\) is contractive. By Proposition 4.4, there are nets of finite dimensional \(C^{*}\)-algebras and c.p.c. maps \[A\xrightarrow{\psi_{\lambda}}F_{\lambda}\xrightarrow{\phi_{\lambda}}\mathcal{N} \tag{4.22}\] such that \(\|\theta(a)-\phi_{\lambda}(\psi_{\lambda}(a))\|_{2,\tau}\to 0\) for all \(a\in A\). 
Then \(\phi_{\lambda}(\psi_{\lambda}(a))\to\theta(a)\) in the strong operator topology, and hence in the weak operator topology. As the weak operator topology and weak\({}^{*}\) topology on \(\mathcal{N}\) agree on bounded sets, the convergence holds weak\({}^{*}\), and this implies that \(\theta\) is weakly nuclear. It is known that if \(A\) is an exact \(C^{*}\)-algebra and \(\mathcal{N}\) is a von Neumann algebra, then any weakly nuclear map \(A\to\mathcal{N}\) is nuclear - see [59, Remark 3.4] (as observed there, this characterises exactness of \(A\)). This suggests the following question, which we have been unable to answer even with the additional assumptions that \((\mathcal{N},Y)\) is factorial and has CPoU. The difficulty is that the passage from weak nuclearity to nuclearity involves taking point-weak\({}^{*}\)-limit points.35 We do not have the required compactness for this in the setting of tracially complete \(C^{*}\)-algebras. Footnote 35: The proof in [59, Remark 3.4] uses Kirchberg’s \(\mathcal{O}_{2}\)-embedding theorem, but this can be avoided using the following slight variation of their proof. Let \(\theta\colon A\to\mathcal{N}\) be weakly nuclear. Fix nets of finite dimensional \(C^{*}\)-algebras \(F_{\lambda}\) and c.p.c. maps \(\psi_{\lambda}\colon A\to F_{\lambda}\) and \(\phi_{\lambda}\colon F_{\lambda}\to\mathcal{N}\) such that \(\phi_{\lambda}\circ\psi_{\lambda}\) converges to \(\theta\) point-weak\({}^{*}\). Let \(\pi\colon A\to\mathcal{B}(\mathcal{H})\) be a faithful representation of \(A\) and use Arveson’s extension theorem to find c.p.c. maps \(\tilde{\psi}_{\lambda}\colon\mathcal{B}(\mathcal{H})\to F_{\lambda}\) with \(\tilde{\psi}_{\lambda}\circ\pi=\psi_{\lambda}\). Let \(\rho\colon\mathcal{B}(\mathcal{H})\to\mathcal{N}\) be a point-weak\({}^{*}\) limit point of \(\phi_{\lambda}\circ\tilde{\psi}_{\lambda}\) and note that \(\theta=\rho\circ\pi\). As \(A\) is exact, \(\pi\) is nuclear, and hence so is \(\theta\). **Question 4.6**.: Suppose \(A\) is an exact \(C^{*}\)-algebra and \((\mathcal{N},Y)\) is a tracially complete \(C^{*}\)-algebra. Is every tracially nuclear map \(A\to\mathcal{N}\) nuclear? ### Fibrewise amenability We work towards showing that amenability of a tracially complete \(C^{*}\)-algebra can be detected in its tracial von Neumann algebra completions. The basic strategy is to use a convexity argument similar to the one in the fundamental result from [34, 24, 25] stating that if \(A\) is a \(C^{*}\)-algebra such that \(A^{**}\) is semidiscrete, then \(A\) is nuclear (see [15, Proposition 2.3.8], for example). We set up the convexity argument in a somewhat general setting for later use in Section 6.4. Both the statement and proof are abstracted from an argument in the proof of Ozawa's [86, Theorem 3] computing the centre of tracial completions of \(C^{*}\)-algebras. A similar Hahn-Banach argument appears in the proof of (8)\(\Rightarrow\)(1) in [48, Theorem 2.5]. **Lemma 4.7**.: _Let \(\mathcal{C}\) be a convex subset of a real vector space and let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra. Suppose we are given \(m\in\mathbb{N}\) and a finite collection \(f_{1},\ldots,f_{m}\colon\mathcal{C}\to\mathcal{M}\) of affine functions such that for all \(\tau\in X\) and \(\epsilon>0\), there is \(c_{\tau}\in\mathcal{C}\) such that_ \[\max_{1\leq i\leq m}\|f_{i}(c_{\tau})\|_{2,\tau}<\epsilon. \tag{4.23}\] _Then for all \(\epsilon>0\), there is \(c\in\mathcal{C}\) such that_ \[\max_{1\leq i\leq m}\|f_{i}(c)\|_{2,X}<\epsilon. 
\tag{4.24}\] Proof.: Let \(\Lambda\) be the directed set of pairs \((\mathcal{T},\epsilon)\) where \(\mathcal{T}\subseteq X\) is a non-empty finite set and \(\epsilon>0\), equipped with the ordering \((\mathcal{T}_{1},\epsilon_{1})\leq(\mathcal{T}_{2},\epsilon_{2})\) if and only if \(\mathcal{T}_{1}\subseteq\mathcal{T}_{2}\) and \(\epsilon_{1}\geq\epsilon_{2}\). Fix \(\lambda\coloneqq(\mathcal{T}_{\lambda},\epsilon_{\lambda})\in\Lambda\) and let \(\sigma_{\lambda}\coloneqq|\mathcal{T}_{\lambda}|^{-1}\sum_{\tau\in\mathcal{T}_{\lambda}}\tau\in X\) denote the average of the traces in \(\mathcal{T}_{\lambda}\). By assumption, there is \(c_{\lambda}\in\mathcal{C}\) such that \[|\mathcal{T}_{\lambda}|^{1/2}\max_{1\leq i\leq m}\|f_{i}(c_{\lambda})\|_{2,\sigma_{\lambda}}<\epsilon_{\lambda}. \tag{4.25}\] It follows from the definition of \(\sigma_{\lambda}\) that \[\max_{\tau\in\mathcal{T}_{\lambda}}\max_{1\leq i\leq m}\|f_{i}(c_{\lambda})\|_{2,\tau}<\epsilon_{\lambda}. \tag{4.26}\] For \(\lambda\in\Lambda\) and \(i=1,\ldots,m\), define \(h_{i,\lambda}\colon X\to\mathbb{R}\) by \[h_{i,\lambda}(\tau)\coloneqq\|f_{i}(c_{\lambda})\|_{2,\tau}^{2} \tag{4.27}\] and note that \(h_{i,\lambda}\in\operatorname{Aff}(X)\). For all \(i=1,\ldots,m\), \(h_{i,\lambda}\to 0\) pointwise on \(X\) by (4.26), and hence Proposition 2.1 implies \(h_{i,\lambda}\to 0\) weakly. View \(h_{\lambda}\coloneqq(h_{1,\lambda},\dots,h_{m,\lambda})\) as a net in the Banach space \(\operatorname{Aff}(X)^{\oplus m}\), and note that \(h_{\lambda}\to 0\) weakly. Fix \(\epsilon>0\). By the Hahn-Banach theorem, there are \(l\in\mathbb{N}\), \(\lambda_{1},\dots,\lambda_{l}\in\Lambda\), and real numbers \(\alpha_{1},\dots,\alpha_{l}\geq 0\) such that \(\sum_{k=1}^{l}\alpha_{k}=1\) and \[\max_{1\leq i\leq m}\Big{\|}\sum_{k=1}^{l}\alpha_{k}h_{i,\lambda_{k}}\Big{\|}_{\infty}<\epsilon^{2}. \tag{4.28}\] Define \(c\coloneqq\sum_{k=1}^{l}\alpha_{k}c_{\lambda_{k}}\in\mathcal{C}\). For \(i=1,\dots,m\) and \(\tau\in X\), using the triangle inequality and the Cauchy-Schwarz inequality, we have \[\begin{split}\|f_{i}(c)\|_{2,\tau}&=\Big{\|}\sum_{k=1}^{l}\alpha_{k}f_{i}(c_{\lambda_{k}})\Big{\|}_{2,\tau}\\ &\leq\sum_{k=1}^{l}\alpha_{k}\|f_{i}(c_{\lambda_{k}})\|_{2,\tau}\\ &=\sum_{k=1}^{l}\big{(}\alpha_{k}^{1/2}\big{)}\big{(}\alpha_{k}^{1/2}\|f_{i}(c_{\lambda_{k}})\|_{2,\tau}\big{)}\\ &\leq\Big{(}\sum_{k=1}^{l}\alpha_{k}\Big{)}^{1/2}\Big{(}\sum_{k=1}^{l}\alpha_{k}\|f_{i}(c_{\lambda_{k}})\|_{2,\tau}^{2}\Big{)}^{1/2}\\ &=\Big{(}\sum_{k=1}^{l}\alpha_{k}\|f_{i}(c_{\lambda_{k}})\|_{2,\tau}^{2}\Big{)}^{1/2}.\end{split} \tag{4.29}\] Then for \(i=1,\dots,m\) and \(\tau\in X\), we have \[\|f_{i}(c)\|_{2,\tau}^{2}\leq\sum_{k=1}^{l}\alpha_{k}h_{i,\lambda_{k}}(\tau)<\epsilon^{2} \tag{4.30}\] by (4.29), (4.27), and (4.28). Hence \(\|f_{i}(c)\|_{2,X}<\epsilon\) for all \(i=1,\dots,m\). The following lemma is standard. For example, it follows from the proof of [59, Lemma 1.1] by quoting the Choi-Effros lifting theorem in place of the projectivity of order zero maps in the last paragraph. **Lemma 4.8**.: _If \(F\) and \(B\) are \(C^{*}\)-algebras with \(F\) finite dimensional, \(\pi\colon B\to\mathcal{B}(H)\) is a \({}^{*}\)-homomorphism, and \(\phi\colon F\to\pi(B)^{\prime\prime}\) is a c.p. map, then there is a net of c.p. maps \(\phi_{\lambda}\colon F\to B\) such that \(\|\phi_{\lambda}\|\leq\|\phi\|\) for all \(\lambda\) and_ \[\phi(x)=\operatorname{strong}^{*}\!\!-\lim_{\lambda}\pi(\phi_{\lambda}(x)),\qquad x\in F. 
\tag{4.31}\] _Further, if \(\phi\) is unital, we may arrange for each \(\phi_{\lambda}\) to be unital._ We are now in a position to give the 'fibrewise' characterisation of tracially nuclear \({}^{*}\)-homomorphisms (Theorem 4.9) from which Theorem 1.2 follows immediately (by taking \(A\coloneqq\mathcal{M}\) and \(\theta\coloneqq\operatorname{id}_{\mathcal{M}}\)). Note that the following theorem also implies the first part of Theorem D from the overview. The equivalence of (ii) and (iii) in Theorem 4.9 is essentially obtained by Brown in [12, Theorem 3.2.2] (which is a variation on Connes' theorem), so the main implication we need to show is (ii) implies (i). This will follow from the Hahn-Banach argument of Lemma 4.7. **Theorem 4.9**.: _Let \(A\) be a \(C^{*}\)-algebra, let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra, and let \(\theta\colon A\to\mathcal{M}\) be a c.p. map. The following are equivalent:_ 1. \(\theta\) _is tracially nuclear;_ 2. _for all_ \(\tau\in X\)_,_ \(\pi_{\tau}\circ\theta\colon A\to\pi_{\tau}(\mathcal{M})^{\prime\prime}\) _is weakly nuclear._ _If \(\theta\) is a \({}^{*}\)-homomorphism then these are also equivalent to:_ 3. _for every_ \(\tau\in X\) _with_ \(\tau\circ\theta\neq 0\)_, the trace_ \(\|\tau\circ\theta\|^{-1}\cdot\tau\circ\theta\in T(A)\) _is uniformly amenable in the sense of_ [12, Definition 3.2.1]_._36 Footnote 36: Brown implicitly only defines uniformly amenable for traces on unital \(C^{*}\)-algebras in [12] (the definition there requires u.c.p. maps out of \(A\)). We extend this definition to the non-unital case by saying that \(\tau\in T(A)\) is uniformly amenable if its unitisation, \(\tau^{\dagger}\in T(A^{\dagger})\), is uniformly amenable. Proof.: To see (i) implies (ii), note that if \(\theta\) is tracially nuclear, then for all \(\tau\in X\), \(\pi_{\tau}\circ\theta\) is tracially nuclear as a map into the tracial von Neumann algebra \((\pi_{\tau}(\mathcal{M})^{\prime\prime},\tau)\). By Proposition 4.5, for maps into tracial von Neumann algebras, weak and tracial nuclearity are equivalent, so (ii) follows. When \(\theta\) is a \({}^{*}\)-homomorphism, the equivalence of (ii) and (iii) can be reduced to the case that \(A\) and \(\theta\) are unital by adding a unit to \(A\) and using Lemma 4.3. For \(\tau\in X\), since there is a normal trace-preserving conditional expectation \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\to\pi_{\tau}(\theta(A))^{\prime\prime}\) (see [15, Lemma 1.5.10], for example), \(\pi_{\tau\circ\theta}=\pi_{\tau}\circ\theta\) is weakly nuclear when viewed as a map into \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) if and only if it is weakly nuclear when viewed as a map into \(\pi_{\tau\circ\theta}(A)^{\prime\prime}=\pi_{\tau}(\theta(A))^{\prime\prime}\). The equivalence of (ii) and (iii) then follows from the equivalence of (1) and (6) in [12, Theorem 3.2.2]. It remains to show that (ii) implies (i). By rescaling, we may assume that \(\theta\) is contractive. Fix a finite set \(\mathcal{F}\subseteq A\) and \(\epsilon>0\). Let \(\mathcal{C}\) denote the set of all c.p. maps \(A\to\mathcal{M}\) which factor through a finite dimensional \(C^{*}\)-algebra via c.p. maps and note that \(\mathcal{C}\) is convex (cf. [15, Lemma 2.3.6]). We will apply Lemma 4.7 to the affine functions \[f_{a}\colon\mathcal{C}\to\mathcal{M}\colon\eta\mapsto\eta(a)-\theta(a),\qquad a\in\mathcal{F}. 
\tag{4.32}\] For \(\tau\in X\), \(\pi_{\tau}\circ\theta\) is weakly nuclear by (ii), so there are a finite dimensional \(C^{*}\)-algebra \(F_{\tau}\) and c.p.c. maps \[A\stackrel{{\psi_{\tau}}}{{\longrightarrow}}F_{\tau}\stackrel{{\bar{\phi}_{\tau}}}{{\longrightarrow}}\pi_{\tau}(\mathcal{M})^{\prime\prime} \tag{4.33}\] such that \[\|\bar{\phi}_{\tau}(\psi_{\tau}(a))-\pi_{\tau}(\theta(a))\|_{2,\tau}<\epsilon,\qquad a\in\mathcal{F}. \tag{4.34}\] By Lemma 4.8, there is then a c.p.c. map \(\phi_{\tau}\colon F_{\tau}\to\mathcal{M}\) such that \[\|\phi_{\tau}(\psi_{\tau}(a))-\theta(a)\|_{2,\tau}<\epsilon,\qquad a\in\mathcal{F}. \tag{4.35}\] Then \(\eta_{\tau}\coloneqq\phi_{\tau}\circ\psi_{\tau}\in\mathcal{C}\) and \(\|f_{a}(\eta_{\tau})\|_{2,\tau}<\epsilon\) for all \(a\in\mathcal{F}\). By Lemma 4.7, there is \(\eta\in\mathcal{C}\) such that \(\|f_{a}(\eta)\|_{2,X}<\epsilon\) for all \(a\in\mathcal{F}\). Unpacking notation, \(\eta\) is a c.p. map factoring through a finite dimensional \(C^{*}\)-algebra, and \[\|\eta(a)-\theta(a)\|_{2,X}<\epsilon,\qquad a\in\mathcal{F}, \tag{4.36}\] so \(\theta\) is tracially nuclear. Combining the previous result with Connes' theorem gives a tracially complete analogue of the fact that a von Neumann algebra completion of a nuclear \(C^{*}\)-algebra is semidiscrete. Let \(T_{\mathrm{am}}(A)\subseteq T(A)\) denote the set of amenable traces on \(A\). **Corollary 4.10**.: _If \(A\) is an exact \(C^{*}\)-algebra with \(T_{\mathrm{am}}(A)\) compact,37 then the tracial completion of \(A\) with respect to \(T_{\mathrm{am}}(A)\) is an amenable factorial tracially complete \(C^{*}\)-algebra. In particular, if \(A\) is a nuclear \(C^{*}\)-algebra with \(T(A)\) compact, then the tracial completion of \(A\) with respect to \(T(A)\) is an amenable factorial tracially complete \(C^{*}\)-algebra._ Footnote 37: Note that \(T_{\mathrm{am}}(A)\) is closed in \(T(A)\) by [12, Proposition 3.5.1], so the compactness is automatic if \(A\) is unital. Proof.: By [66, Lemma 3.4], \(T_{\mathrm{am}}(A)\) is a closed face in \(T(A)\). Therefore, the tracial completion of \(A\) with respect to \(T_{\mathrm{am}}(A)\) is a factorial tracially complete \(C^{*}\)-algebra by Proposition 3.23(iv). As \(A\) is exact, \(A\) is locally reflexive by [65, Remark (11)], and hence by [12, Theorem 4.3.3], all amenable traces on \(A\) are uniformly amenable. By the equivalence of (1) and (5) in [12, Theorem 3.2.2], it follows that \(\pi_{\tau}(A)^{\prime\prime}\) is semidiscrete for \(\tau\in T_{\mathrm{am}}(A)\). Using Proposition 3.23(vi), we also have that \(\pi_{\tau}(\overline{A}^{T_{\mathrm{am}}(A)})^{\prime\prime}\) is semidiscrete for all \(\tau\in T_{\mathrm{am}}(A)\). By Theorem 4.9, this implies that \(\big{(}\overline{A}^{T_{\mathrm{am}}(A)},T_{\mathrm{am}}(A)\big{)}\) is amenable. This proves the first sentence of the corollary. The second sentence follows since \(T(A)=T_{\mathrm{am}}(A)\) when \(A\) is nuclear (see [12, Theorem 4.2.1], for example). Without exactness, it need not be the case that all amenable traces are uniformly amenable. Indeed, given a sequence \((k_{n})\) of natural numbers converging to \(\infty\), let \(A\coloneqq\prod M_{k_{n}}\). 
For a free ultrafilter \(\omega\) on \(\mathbb{N}\), the trace \(\tau_{\omega}((x_{n}))\coloneqq\lim_{n\to\omega}\mathrm{tr}_{k_{n}}(x_{n})\) is an amenable trace which is not uniformly amenable.38 This observation leads to the following characterisation of those finite von Neumann algebras which are amenable as tracially complete \(C^{*}\)-algebras over their trace space. Footnote 38: This is because the tracial von Neumann ultraproduct \(\prod^{\omega}M_{k_{n}}\) is not amenable, as it contains a copy of \(L(F_{2})\). **Proposition 4.11**.: _Let \(\mathcal{M}\) be a semidiscrete finite von Neumann algebra. Then \(\big{(}\mathcal{M},T(\mathcal{M})\big{)}\) is amenable as a tracially complete \(C^{*}\)-algebra if and only if its type \(\mathrm{II}_{1}\) summand has only finitely many extremal traces and it has no type \(\mathrm{I}_{n}\) summand for sufficiently large \(n\)._ Proof.: Suppose \(\mathcal{M}\) satisfies the stated condition. Then the type \(\mathrm{I}\) part \(\mathcal{M}_{\mathrm{I}}\) of \(\mathcal{M}\) is a finite direct sum of matrices over abelian \(C^{*}\)-algebras (see [104, Theorem V.1.27], for example) and hence nuclear, so it has the completely positive approximation property in norm. Hence \(\big{(}\mathcal{M}_{\mathrm{I}},T(\mathcal{M}_{\mathrm{I}})\big{)}\) is amenable as a tracially complete \(C^{*}\)-algebra. Let \(e_{1},\ldots,e_{m}\) denote the minimal central projections of the type \(\mathrm{II}_{1}\) part \(\mathcal{M}_{\mathrm{II}}\) of \(\mathcal{M}\) so that each \(\mathcal{M}e_{i}=\mathcal{M}_{\mathrm{II}}e_{i}\) is a semidiscrete factor, whence \(\big{(}\mathcal{M}e_{i},T(\mathcal{M}e_{i})\big{)}\) is an amenable tracially complete \(C^{*}\)-algebra (by Proposition 4.5). Since the finite direct sum of amenable tracially complete \(C^{*}\)-algebras is amenable (Proposition 4.2), it follows that \(\big{(}\mathcal{M},T(\mathcal{M})\big{)}\) is amenable. Conversely, supposing the condition does not hold, we can find a sequence \((n_{k})\) converging to \(\infty\) and orthogonal central projections \((e_{k})\) in \(\mathcal{M}\) such that there is a unital embedding \(M_{n_{k}}\to\mathcal{M}e_{k}\).39 In this way, we have an embedding of the infinite product \(C^{*}\)-algebra \(\prod M_{n_{k}}\) in \(\mathcal{M}\). For each \(k\), let \(\tau_{k}\) be a trace on \(\mathcal{M}\) with \(\tau_{k}(e_{k})=1\) and for a free ultrafilter \(\omega\) on \(\mathbb{N}\), let \(\tau_{\omega}\coloneqq\lim_{k\to\omega}\tau_{k}\), which exists by weak\({}^{*}\) compactness of \(T(\mathcal{M})\). Then \(\tau_{\omega}\) restricts to the trace \((x_{k})\mapsto\lim_{k\to\omega}\operatorname{tr}_{n_{k}}(x_{k})\) on \(\prod M_{n_{k}}\). This restricted trace is not uniformly amenable (see Footnote 38). As uniform amenability of traces passes to subalgebras (this is immediate from the approximation form of the definition in [12, Definition 3.2.1]), \(\tau_{\omega}\) is not uniformly amenable, and hence \(\big{(}\mathcal{M},T(\mathcal{M})\big{)}\) is not amenable. Footnote 39: This uses the fact that for any \(\mathrm{II}_{1}\) von Neumann algebra \(\mathcal{N}\), and any \(n\in\mathbb{N}\), one can find a unital embedding \(M_{n}\to\mathcal{N}\). This goes back to Murray and von Neumann. The proof for \(n=2\) from [104, Proposition V.1.35] can be readily modified to cover general \(n\). 
In particular, the tracially complete \(C^{*}\)-algebra \(\big{(}\ell^{\infty}(\mathcal{R}),T(\ell^{\infty}(\mathcal{R}))\big{)}\) is not amenable even though \(\ell^{\infty}(\mathcal{R})\) is semidiscrete as a von Neumann algebra. We end this section by applying the characterisation of tracial nuclearity to show that it can be tested on \(C^{*}\)-subalgebras which are dense in the uniform \(2\)-norm. A naive argument (showing that the witnesses of tracial nuclearity for \(\phi|_{A}\) extend to witnesses of tracial nuclearity for \(\phi\)) does not work, as one cannot a priori assume uniform \(\|\cdot\|_{2,X}\)-boundedness of the c.p.c. approximations in the definition of tracial nuclearity. **Lemma 4.12**.: _Suppose \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) are tracially complete \(C^{*}\)-algebras and \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) is a morphism. If \(A\subseteq\mathcal{M}\) is a \(\|\cdot\|_{2,X}\)-dense \(C^{*}\)-subalgebra, then \(\phi\) is tracially nuclear if and only if \(\phi|_{A}\) is tracially nuclear._ Proof.: For all \(\tau\in X\), the inclusion \(A\hookrightarrow\mathcal{M}\) induces an isomorphism \(\pi_{\tau}(A)^{\prime\prime}\to\pi_{\tau}(\mathcal{M})^{\prime\prime}\) (by combining Corollary 3.29(i) and Proposition 3.23(vi), for example). Since a trace on a \(C^{*}\)-algebra is uniformly amenable if and only if it generates a semidiscrete von Neumann algebra (see [12, Theorem 3.2.2(2)\(\Leftrightarrow\)(3)]), we have that for each \(\tau\in Y\), \(\tau\circ\phi\) is uniformly amenable if and only if \(\tau\circ\phi|_{A}\) is uniformly amenable. The result follows from Theorem 4.9.

## 5. Reduced products and central sequences

In this section, we define the reduced product \(\prod^{\omega}(\mathcal{M}_{n},X_{n})\) associated with a sequence of tracially complete \(C^{*}\)-algebras \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) and a free filter \(\omega\) on \(\mathbb{N}\). Aided by the language of reduced products and motivated by the analogous conditions in von Neumann algebra theory, we introduce two properties of (factorial) tracially complete \(C^{*}\)-algebras: the McDuff property and property \(\Gamma\). Both of these properties have been studied in the setting of \(C^{*}\)-algebras with their uniform \(2\)-norm in [22, 23], where they are referred to as the _uniform McDuff property_ and _uniform property \(\Gamma\)_, respectively. We view tracially complete \(C^{*}\)-algebras as the natural framework to develop these properties, and the definitions given here extend the definitions from [22, 23] when restricted to tracial completions of \(C^{*}\)-algebras with respect to the full trace simplex. Moreover, in the unique trace case, these properties reduce to the homonymous properties for \(\mathrm{II}_{1}\) factors.

### Reduced products

Reduced products provide an algebraic setting for manipulating properties involving approximations. The most common constructions are ultrapowers with respect to a free ultrafilter on the natural numbers \(\mathbb{N}\) and sequence algebras consisting of the algebra of bounded sequences modulo the ideal of \(c_{0}\)-sequences. For many basic applications, ultrapowers and sequence algebras can be used interchangeably, but each has its technical advantages.
In settings where traces are considered, ultrapowers are often more natural, as the resulting set of limit traces is already convex; on the other hand, working with sequence algebras allows the reparameterisation argument of [46, Theorem 4.3] (see Theorem 5.11), which will be used in our classification result in Section 9. In order to allow for both constructions simultaneously, we will work with reduced products defined with respect to a free filter. For the remainder of the paper, \(\omega\) will denote a free filter on the natural numbers \(\mathbb{N}\). We recall that a filter \(\omega\) on the natural numbers is free if and only if it contains all cofinite sets (see [15, Appendix A], for example, for a general discussion of filters). The following selection theorem of Kirchberg will be used frequently. It is most often stated for ultrafilters, but the result (and proof) is equally valid for general filters. For the reader's convenience, we include the details. **Lemma 5.1** (Kirchberg's \(\epsilon\)-test, cf. [67, Lemma A.1]).: _Let \(\omega\) be a free filter on \(\mathbb{N}\). Let \((X_{n})_{n=1}^{\infty}\) be a sequence of non-empty sets and for \(k,n\in\mathbb{N}\), let \(f_{n}^{(k)}\colon X_{n}\to[0,\infty]\) be a function. Define functions \(f^{(k)}\colon X_{1}\times X_{2}\times\cdots\to[0,\infty]\) by_ \[f^{(k)}(x_{1},x_{2},\dots)\coloneqq\limsup_{n\to\omega}f_{n}^{(k)}(x_{n}). \tag{5.1}\] _If for every \(\epsilon>0\) and \(k_{0}\in\mathbb{N}\), there exists \(x\in X_{1}\times X_{2}\times\cdots\) such that \(f^{(k)}(x)<\epsilon\) for \(k=1,\dots,k_{0}\), then there exists \(y\in X_{1}\times X_{2}\times\cdots\) such that \(f^{(k)}(y)=0\) for all \(k\in\mathbb{N}\)._ Proof.: For each \(r\in\mathbb{N}\), there exists \(x^{(r)}=(x_{1}^{(r)},x_{2}^{(r)},\dots)\in X_{1}\times X_{2}\times\cdots\) such that \(f^{(k)}(x^{(r)})<\frac{1}{r}\) for \(k=1,\dots,r\). By (5.1), there exists \(I_{r}\in\omega\) such that \(f_{n}^{(k)}(x_{n}^{(r)})<\frac{1}{r}\) for all \(n\in I_{r}\) and \(k=1,\dots,r\). As \(\omega\) is a free filter we may assume that \(I_{r}\subseteq\{r,r+1,\dots\}\). For each \(n\in\mathbb{N}\), if \(n\) lies in \(\bigcup_{r=1}^{\infty}I_{r}\), then let \(r_{n}\in\mathbb{N}\) be maximal such that \(n\in I_{r_{n}}\) (noting that \(n\notin I_{r}\) for \(r>n\)) and set \(y_{n}\coloneqq x_{n}^{(r_{n})}\). Otherwise, define \(y_{n}\in X_{n}\) arbitrarily. Fix \(k,r\in\mathbb{N}\) with \(k\leq r\). Then for all \(n\in I_{r}\), it follows that \(r_{n}\geq r\), and hence \[f_{n}^{(k)}(y_{n})<\frac{1}{r_{n}}\leq\frac{1}{r}. \tag{5.2}\] Thus \[f^{(k)}(y)=\limsup_{n\to\omega}f_{n}^{(k)}(y_{n})\leq\sup_{n\in I_{r}}f_{n}^{(k)}(y_{n})\leq\frac{1}{r}. \tag{5.3}\] Since this holds for all \(r\geq k\), we obtain \(f^{(k)}(y)=0\), as required. We now formally define reduced products. **Definition 5.2**.: For a sequence \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) of tracially complete \(C^{*}\)-algebras, define a \(C^{*}\)-algebra40 Footnote 40: Here, \(\prod_{n=1}^{\infty}\mathcal{M}_{n}\) denotes the \(\ell^{\infty}\)-product. Using (3.2), it is easy to see that \[\big{\{}(a_{n})_{n=1}^{\infty}:\lim_{n\to\omega}\|a_{n}\|_{2,X_{n}}=0\big{\}}\] is an ideal of \(\prod_{n=1}^{\infty}\mathcal{M}_{n}\), so this quotient is a \(C^{*}\)-algebra. \[\prod^{\omega}\mathcal{M}_{n}\coloneqq\prod_{n=1}^{\infty}\mathcal{M}_{n}\big{/}\big{\{}(a_{n})_{n=1}^{\infty}:\lim_{n\to\omega}\|a_{n}\|_{2,X_{n}}=0\big{\}}.
\tag{5.4}\] For every sequence of traces \((\tau_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}X_{n}\) and every ultrafilter \(\omega^{\prime}\) containing \(\omega\), there is a trace defined on \(\prod^{\omega}\mathcal{M}_{n}\) by \(a\mapsto\lim_{n\to\omega^{\prime}}\tau_{n}(a_{n})\), where \((a_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}\mathcal{M}_{n}\) is any sequence representing \(a\) - such traces are called _limit traces_. Let \(\sum^{\omega}X_{n}\) be the closed convex hull of the set of limit traces on \(\prod^{\omega}\mathcal{M}_{n}\). Then the pair \[\prod^{\omega}(\mathcal{M}_{n},X_{n})\coloneqq\Big{(}\prod^{\omega}\mathcal{M}_{n},\sum^{\omega}X_{n}\Big{)} \tag{5.5}\] is called the _reduced product_ of the sequence \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) with respect to \(\omega\) (and the _ultraproduct_ when \(\omega\) is an ultrafilter). In the case when \(\omega\) is the Fréchet filter, we write the reduced product as \[\prod^{\infty}(\mathcal{M}_{n},X_{n})\coloneqq\Big{(}\prod^{\infty}\mathcal{M}_{n},\sum^{\infty}X_{n}\Big{)}. \tag{5.6}\] Our first goal is to prove that the reduced product of a sequence of tracially complete \(C^{*}\)-algebras (with respect to a given free filter \(\omega\)) is itself a tracially complete \(C^{*}\)-algebra. Before doing that, we isolate a useful lemma that will be used frequently in our analysis of reduced products. **Lemma 5.3**.: _Let \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) be a sequence of tracially complete \(C^{*}\)-algebras. If \(a\in\prod^{\omega}\mathcal{M}_{n}\) is represented by the sequence \((a_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}\mathcal{M}_{n}\), then_ \[\|a\|_{2,\sum^{\omega}X_{n}}=\limsup_{n\to\omega}\|a_{n}\|_{2,X_{n}}. \tag{5.7}\] Proof.: Given a sequence of traces \((\tau_{n})_{n=1}^{\infty}\) and an ultrafilter \(\omega^{\prime}\supseteq\omega\), let \(\tau\) be the associated limit trace. Since \(\|a_{n}\|_{2,\tau_{n}}\leq\|a_{n}\|_{2,X_{n}}\) for all \(n\in\mathbb{N}\), we have \[\|a\|_{2,\tau}=\lim_{n\to\omega^{\prime}}\|a_{n}\|_{2,\tau_{n}}\leq\lim_{n\to\omega^{\prime}}\|a_{n}\|_{2,X_{n}}\leq\limsup_{n\to\omega}\|a_{n}\|_{2,X_{n}}. \tag{5.8}\] Hence \(\|a\|_{2,\sum^{\omega}X_{n}}\leq\limsup_{n\to\omega}\|a_{n}\|_{2,X_{n}}\). Conversely, let \(\omega^{\prime}\supseteq\omega\) be an ultrafilter with \[\lim_{n\to\omega^{\prime}}\|a_{n}\|_{2,X_{n}}=\limsup_{n\to\omega}\|a_{n}\|_{2,X_{n}}. \tag{5.9}\] For every \(n\in\mathbb{N}\), there exists \(\tau_{n}\in X_{n}\) with \(\|a_{n}\|_{2,\tau_{n}}>\|a_{n}\|_{2,X_{n}}-2^{-n}\). Let \(\tau\) be the limit trace corresponding to the sequence \((\tau_{n})_{n=1}^{\infty}\) and the ultrafilter \(\omega^{\prime}\). Then \[\|a\|_{2,\sum^{\omega}X_{n}}\geq\|a\|_{2,\tau}=\lim_{n\to\omega^{\prime}}\|a_{n}\|_{2,\tau_{n}}\geq\limsup_{n\to\omega}\|a_{n}\|_{2,X_{n}}.\qed \tag{5.10}\] We now prove that a reduced product of a sequence of tracially complete \(C^{*}\)-algebras is a tracially complete \(C^{*}\)-algebra. **Proposition 5.4** (cf. [23, Lemma 1.6]).: _Let \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) be a sequence of tracially complete \(C^{*}\)-algebras. Then \(\prod^{\omega}(\mathcal{M}_{n},X_{n})\) is a tracially complete \(C^{*}\)-algebra._ Proof.: By Lemma 5.3 and the definition of \(\prod^{\omega}\mathcal{M}_{n}\), it is clear that \(\|\cdot\|_{2,\sum^{\omega}X_{n}}\) is a norm. It remains to show that the \(\|\cdot\|\)-closed unit ball is \(\|\cdot\|_{2,\sum^{\omega}X_{n}}\)-complete.
Let \((a^{(k)})_{k=1}^{\infty}\) be a \(\|\cdot\|_{2,\sum^{\omega}X_{n}}\)-Cauchy sequence in the unit ball of \(\prod^{\omega}\mathcal{M}_{n}\), and for each \(k\in\mathbb{N}\), fix a sequence \((a_{n}^{(k)})_{n=1}^{\infty}\) of contractions which represents \(a^{(k)}\). Set \[\epsilon^{(k)}\coloneqq\sup\big{\{}\|a^{(l)}-a^{(l^{\prime})}\|_{2,\sum^{\omega}X_{n}}:l,l^{\prime}\geq k\big{\}}, \tag{5.11}\] noting that \(\epsilon^{(k)}\to 0\) as \(k\to\infty\) since \((a^{(k)})_{k=1}^{\infty}\) is \(\|\cdot\|_{2,\sum^{\omega}X_{n}}\)-Cauchy. Define functions \(f_{n}^{(k)}\colon\{b\in\mathcal{M}_{n}:\|b\|\leq 1\}\to[0,2]\) by \[f_{n}^{(k)}(b)\coloneqq\max\big{\{}\|b-a_{n}^{(k)}\|_{2,X_{n}}-\epsilon^{(k)},0\big{\}}. \tag{5.12}\] For \(k_{0}\in\mathbb{N}\) and \(k=1,\ldots,k_{0}\), since \(\|a^{(k_{0})}-a^{(k)}\|_{2,\sum^{\omega}X_{n}}\leq\epsilon^{(k)}\), we have \(\limsup_{n\to\omega}f_{n}^{(k)}(a_{n}^{(k_{0})})=0\) by Lemma 5.3. Therefore, Kirchberg's \(\epsilon\)-test (Lemma 5.1) gives a sequence \((a_{n})_{n=1}^{\infty}\) of contractions representing an element \(a\) in the unit ball of \(\prod^{\omega}\mathcal{M}_{n}\) such that \(\limsup_{n\to\omega}f_{n}^{(k)}(a_{n})=0\) for all \(k\in\mathbb{N}\). By Lemma 5.3 and (5.12), this means that \[\|a-a^{(k)}\|_{2,\sum^{\omega}X_{n}}\leq\epsilon^{(k)}\to 0 \tag{5.13}\] as \(k\to\infty\). Hence the unit ball of \(\prod^{\omega}\mathcal{M}_{n}\) is \(\|\cdot\|_{2,\sum^{\omega}X_{n}}\)-complete. A particularly relevant case of a reduced product is when \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a constant sequence; i.e. for some tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\), we have \((\mathcal{M}_{n},X_{n})=(\mathcal{M},X)\) for all \(n\in\mathbb{N}\). In this case, we write \[(\mathcal{M}^{\omega},X^{\omega})\coloneqq\prod^{\omega}(\mathcal{M}_{n},X_{n}) \tag{5.14}\] and call \((\mathcal{M}^{\omega},X^{\omega})\) the _reduced power_ of \((\mathcal{M},X)\) with respect to the free filter \(\omega\), or the _ultrapower_ if \(\omega\) is an ultrafilter. Again, when \(\omega\) is the Fréchet filter, we write \((\mathcal{M}^{\infty},X^{\infty})\) in place of \((\mathcal{M}^{\omega},X^{\omega})\). Note that there is a natural embedding \(\iota_{(\mathcal{M},X)}\colon(\mathcal{M},X)\to(\mathcal{M}^{\omega},X^{\omega})\) of tracially complete \(C^{*}\)-algebras given by identifying \(\mathcal{M}\) with constant sequences in \(\mathcal{M}^{\omega}\). Typically, the map \(\iota_{(\mathcal{M},X)}\) will be suppressed and we will view \(\mathcal{M}\) as a subalgebra of \(\mathcal{M}^{\omega}\) - this is the case for example when considering the central sequence algebra \(\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\). For a \(C^{*}\)-algebra \(A\) with \(T(A)\) compact, we write \(A^{\omega}\) for the reduced power of \(A\) in the norm \(\|\cdot\|_{2,T(A)}\), which is defined in a way analogous to Definition 5.2, and we write \(T_{\omega}(A)\) for the limit traces on \(A^{\omega}\) induced by \(T(A)\).
These uniform tracial reduced powers of \(C^{*}\)-algebras appear explicitly in connection with the Toms-Winter conjecture in [23] (working with ultrafilters \(\omega\)) and in the abstract approach to classification [16] (working with the Fréchet filter).41 In our formalism, the pair \(\big{(}A^{\omega},\overline{\operatorname{co}}(T_{\omega}(A))\big{)}\) is a tracially complete \(C^{*}\)-algebra and, in fact, if \((\mathcal{M},X)\) is the tracial completion of \(A\) with respect to \(T(A)\), then the canonical map \(\alpha_{X}\colon A\to\mathcal{M}\) induces an isomorphism Footnote 41: Various related constructions appeared earlier. A suitable quotient of the norm ultrapower, which is isomorphic to the tracial ultrapower, appeared in [68, 110], and all these ideas have their spiritual origins in Matui and Sato’s work [75, 76]. Uniform tracial ultrapowers of \(W^{*}\)-bundles were also used in [10]. \[\big{(}A^{\omega},\overline{\operatorname{co}}(T_{\omega}(A))\big{)}\to(\mathcal{M}^{\omega},X^{\omega}), \tag{5.15}\] defined on representative sequences by \((a_{n})_{n=1}^{\infty}\mapsto(\alpha_{X}(a_{n}))_{n=1}^{\infty}\). This follows from a more general result on the compatibility of tracial completions (Proposition 5.6 below). We first introduce some more notation. **Definition 5.5**.: Let \((A_{n})_{n=1}^{\infty}\) be a sequence of \(C^{*}\)-algebras and let \(X_{n}\subseteq T(A_{n})\) be a compact convex set for each \(n\geq 1\). We write \[\prod^{\omega}A_{n}\coloneqq\prod_{n=1}^{\infty}A_{n}\big{/}\big{\{}(a_{n})_{n=1}^{\infty}:\lim_{n\to\omega}\|a_{n}\|_{2,X_{n}}=0\big{\}}. \tag{5.16}\] Let \(\sum^{\omega}X_{n}\) be the closed convex hull of the limit traces on \(\prod^{\omega}A_{n}\) defined by sequences \((\tau_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}X_{n}\). In the case of a constant sequence, say \(A_{n}=A\) and \(X_{n}=X\), we write \(A^{\omega}=\prod^{\omega}A_{n}\) and \(X^{\omega}=\sum^{\omega}X_{n}\). **Proposition 5.6**.: _Let \((A_{n})_{n=1}^{\infty}\) be a sequence of \(C^{*}\)-algebras, let \(X_{n}\subseteq T(A_{n})\) be a compact convex set for each \(n\geq 1\), let \((\mathcal{M}_{n},X_{n})\) be the tracial completion of \(A_{n}\) with respect to \(X_{n}\) as in Definition 3.19, and let \(\alpha_{n}\colon A_{n}\to\mathcal{M}_{n}\) be the canonical map for \(n\geq 1\). Then \(\big{(}\prod^{\omega}A_{n},\sum^{\omega}X_{n}\big{)}\) is a tracially complete \(C^{*}\)-algebra and there is an isomorphism of tracially complete \(C^{*}\)-algebras_ \[\prod^{\omega}\alpha_{n}\colon\Big{(}\prod^{\omega}A_{n},\sum^{\omega}X_{n}\Big{)}\xrightarrow{\cong}\Big{(}\prod^{\omega}\mathcal{M}_{n},\sum^{\omega}X_{n}\Big{)} \tag{5.17}\] _defined at the level of representative sequences by \((a_{n})_{n=1}^{\infty}\mapsto(\alpha_{n}(a_{n}))_{n=1}^{\infty}\)._ Proof.: It is easy to see that \(\alpha\coloneqq\prod^{\omega}\alpha_{n}\) is a \({}^{*}\)-homomorphism and is isometric with respect to the uniform \(2\)-norms, so it suffices to show that \(\alpha\) is surjective. Fix \(b\in\prod^{\omega}\mathcal{M}_{n}\) and represent \(b\) by a bounded sequence \((b_{n})_{n=1}^{\infty}\). By the construction of the tracial completion \((\mathcal{M}_{n},X_{n})\), there is, for each \(n\geq 1\), \(a_{n}\in A_{n}\) such that \(\|\alpha_{n}(a_{n})-b_{n}\|_{2,X_{n}}<\frac{1}{n}\) and \(\|a_{n}\|\leq\|b_{n}\|\). Then the sequence \((a_{n})_{n=1}^{\infty}\) defines an element \(a\in\prod^{\omega}A_{n}\) such that \(\alpha(a)=b\).
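As a simple illustration of these constructions (a sketch; here \(\mathrm{tr}_{n}\) denotes the unique tracial state on \(M_{n}\)): take \(A_{n}\coloneqq M_{n}\) and \(X_{n}\coloneqq\{\mathrm{tr}_{n}\}\) in Definition 5.5, and let \(\omega\) be a free ultrafilter. Each \(M_{n}\) is already tracially complete, every limit trace equals \(\tau_{\omega}\big{(}(a_{n})_{n=1}^{\infty}\big{)}=\lim_{n\to\omega}\mathrm{tr}_{n}(a_{n})\), and \[\prod^{\omega}(M_{n},\{\mathrm{tr}_{n}\})=\Big{(}\prod_{n=1}^{\infty}M_{n}\big{/}\big{\{}(a_{n})_{n=1}^{\infty}:\lim_{n\to\omega}\mathrm{tr}_{n}(a_{n}^{*}a_{n})=0\big{\}},\ \{\tau_{\omega}\}\Big{)},\] which recovers the tracial von Neumann ultraproduct of matrix algebras appearing in Footnote 38.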
In the case of reduced powers, an important additional observation is that the isomorphism defined in Proposition 5.6 is well behaved on central sequences. **Proposition 5.7**.: _Let \(A\) be a \(C^{*}\)-algebra, let \(X\subseteq T(A)\) be compact and convex, let \((A^{\omega},X^{\omega})\) be the uniform tracial reduced power, let \(\iota:A\to A^{\omega}\) be given by constant sequences, let \((\mathcal{M},X)\) be the tracial completion of \(A\) with respect to \(X\), and let \(\alpha_{X}:A\to\mathcal{M}\) be the canonical map. The isomorphism \(\alpha^{\omega}:(A^{\omega},X^{\omega})\to(\mathcal{M}^{\omega},X^{\omega})\) defined at the level of representative sequences by \((a_{n})_{n=1}^{\infty}\mapsto(\alpha_{X}(a_{n}))_{n=1}^{\infty}\) satisfies_ \[\alpha^{\omega}(A^{\omega}\cap\iota(A)^{\prime})=\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}. \tag{5.18}\] _More generally, for any \(\|\cdot\|_{2,X}\)-separable subset \(S\subseteq\mathcal{M}\), there is a \(\|\cdot\|\)-separable subset \(S_{0}\subseteq A\) such that \(\alpha^{\omega}(A^{\omega}\cap\iota(S_{0})^{\prime})\subseteq\mathcal{M}^{\omega}\cap S^{\prime}\)._ Proof.: By Proposition 5.6, \(\alpha^{\omega}\) is an isomorphism. Since \[\alpha^{\omega}(\iota(A))=\alpha_{X}(A)\subseteq\mathcal{M}^{\omega}, \tag{5.19}\] we have \[\alpha^{\omega}(A^{\omega}\cap\iota(A)^{\prime})=\mathcal{M}^{\omega}\cap\alpha_{X}(A)^{\prime}. \tag{5.20}\] By Lemma 5.3, \(\iota_{(\mathcal{M},X)}\) is an isometry for the respective uniform \(2\)-norms. Hence \(\alpha_{X}(A)\) is \(\|\cdot\|_{2,X^{\omega}}\)-dense in \(\mathcal{M}\). In a tracially complete \(C^{*}\)-algebra, left and right multiplication by a fixed element are continuous with respect to the uniform \(2\)-norm by (3.2). Therefore, any element of \(\mathcal{M}^{\omega}\) that commutes with \(\alpha_{X}(A)\) must also commute with \(\mathcal{M}\). Hence \(\mathcal{M}^{\omega}\cap\alpha_{X}(A)^{\prime}=\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\). Let \(S\subseteq\mathcal{M}\) be \(\|\cdot\|_{2,X}\)-separable. Since \(\alpha_{X}(A)\) is \(\|\cdot\|_{2,X^{\omega}}\)-dense in \(\mathcal{M}\), there is a countable subset \(S_{0}\subseteq A\) such that \(\overline{\alpha_{X}(S_{0})}^{\|\cdot\|_{2,X}}\supseteq S\). Then \[\alpha^{\omega}(A^{\omega}\cap\iota(S_{0})^{\prime})=\mathcal{M}^{\omega}\cap\alpha_{X}(S_{0})^{\prime}=\mathcal{M}^{\omega}\cap(\overline{\alpha_{X}(S_{0})}^{\|\cdot\|_{2,X}})^{\prime}\subseteq\mathcal{M}^{\omega}\cap S^{\prime}, \tag{5.21}\] as claimed. As with \(C^{*}\)-norm reduced products (see [15, Lemma 3.9.5]), matrix amplifications commute with reduced products in the sense of the following theorem. The proof is essentially the same as in the \(C^{*}\)-algebra setting. **Proposition 5.8**.: _For any sequence \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) of tracially complete \(C^{*}\)-algebras with reduced product \((\mathcal{M},X)\) and \(d\in\mathbb{N}\), there is a natural isomorphism_ \[\big{(}\mathcal{M}\otimes M_{d},X\otimes\{\mathrm{tr}_{d}\}\big{)}\xrightarrow{\cong}\prod^{\omega}\big{(}\mathcal{M}_{n}\otimes M_{d},X_{n}\otimes\{\mathrm{tr}_{d}\}\big{)} \tag{5.22}\] _defined on representing sequences by \((a_{n})_{n=1}^{\infty}\otimes b\mapsto(a_{n}\otimes b)_{n=1}^{\infty}\)._ Proof.: The natural map \[\phi\colon\Big{(}\prod_{n=1}^{\infty}\mathcal{M}_{n}\Big{)}\otimes M_{d}\to\prod_{n=1}^{\infty}(\mathcal{M}_{n}\otimes M_{d}) \tag{5.23}\] is an isomorphism of \(C^{*}\)-algebras (see the proof of [15, Lemma 3.9.4], for example).
Now, suppose \((\tau_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}T(\mathcal{M}_{n})\) and \(\omega^{\prime}\) is an ultrafilter containing \(\omega\), and let \(\tau\) denote the trace on \(\prod_{n=1}^{\infty}\mathcal{M}_{n}\) given by \[\tau(a)\coloneqq\lim_{n\to\omega^{\prime}}\tau_{n}(a_{n}),\qquad a=(a_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}\mathcal{M}_{n}. \tag{5.24}\] Then for \(a=(a_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}(\mathcal{M}_{n}\otimes M_{d})\), we have \[\lim_{n\to\omega^{\prime}}(\tau_{n}\otimes\mathrm{tr}_{d})(a_{n})=(\tau\otimes\mathrm{tr}_{d})(\phi^{-1}(a)). \tag{5.25}\] Therefore, \(\phi\) descends to an isomorphism as in (5.22). We now turn to the question of traces on reduced products. It is well known that the tracial ultrapower of a \(\mathrm{II}_{1}\) factor is also a \(\mathrm{II}_{1}\) factor. We have been unable to answer the analogous question for tracially complete \(C^{*}\)-algebras. Under the additional hypothesis of CPoU (see Section 6), a positive answer to the following question is given in Theorem 7.5. In particular, the answer is affirmative in the presence of property \(\Gamma\) (see Section 5.3) by Theorem 1.4. **Question 5.9**.: If \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebras, is the reduced product \(\big{(}\prod^{\omega}\mathcal{M}_{n},\sum^{\omega}X_{n}\big{)}\) also factorial? **Remark 5.10**.: In an earlier draft of this paper, we asked this question without the type \(\mathrm{II}_{1}\) assumption. A counterexample in the type I setting was given by Vaccaro in [113]. It is based on a family of \(C^{*}\)-algebras introduced in [89]: \(\mathcal{M}_{n}\) is the continuous sections of a bundle over the complex projective space \(\mathbb{C}P^{n}\) with fibre \(M_{2}\), and \(X_{n}=T(\mathcal{M}_{n})\). We end this subsection with a reparameterisation theorem which will allow us to prove existence results for morphisms given both existence and uniqueness results for approximate morphisms. This should be regarded as an abstract version of the standard intertwining arguments commonly used in \(C^{*}\)-algebra theory. An operator norm version of this result appears in [46, Theorem 4.3], which, in turn, is a sequential version of [90, Proposition 1.3.7], attributed to Kirchberg. If \((\mathcal{N},Y)\) is a tracially complete \(C^{*}\)-algebra and \(r\colon\mathbb{N}\to\mathbb{N}\) is a function such that \(\lim_{n\to\infty}r(n)=\infty\), then there is an induced endomorphism \(r^{*}\) of \((\mathcal{N}^{\infty},Y^{\infty})\) given on representing sequences by \[r^{*}\big{(}(b_{n})_{n=1}^{\infty}\big{)}\coloneqq(b_{r(n)})_{n=1}^{\infty},\qquad(b_{n})_{n=1}^{\infty}\in\ell^{\infty}(\mathcal{N}). \tag{5.26}\] Equivalently, viewing \(\ell^{\infty}(\mathcal{N})\) as bounded functions \(\mathbb{N}\to\mathcal{N}\), \(r^{*}\) is the map induced by \[\ell^{\infty}(\mathcal{N})\to\ell^{\infty}(\mathcal{N})\colon f\mapsto f\circ r. \tag{5.27}\] Note that the map \(r^{*}\) would typically not be well-defined if \((\mathcal{N}^{\infty},Y^{\infty})\) were replaced with an ultrapower - this is the main reason for working with general reduced products of tracially complete \(C^{*}\)-algebras. In applications of the following theorem, the metric space \(S\) will typically be either a separable \(C^{*}\)-algebra with the operator norm or a tracially complete \(C^{*}\)-algebra which is separable in its uniform \(2\)-norm.
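Before stating the theorem, we justify the earlier claim that \(r^{*}\) is typically not well defined on an ultrapower (a minimal sketch; the choices of \(r\) and \(\omega\) here are made only for illustration). Suppose \(\omega\) is a free ultrafilter containing the set of odd numbers and \(r(n)\coloneqq 2n\). Taking \(b_{n}\coloneqq 0\) for \(n\) odd and \(b_{n}\coloneqq 1_{\mathcal{N}}\) for \(n\) even gives \(\lim_{n\to\omega}\|b_{n}\|_{2,Y}=0\), so \((b_{n})_{n=1}^{\infty}\) represents \(0\) in the ultrapower, whereas \((b_{r(n)})_{n=1}^{\infty}=(1_{\mathcal{N}})_{n=1}^{\infty}\) represents the unit; thus \((b_{n})_{n=1}^{\infty}\mapsto(b_{r(n)})_{n=1}^{\infty}\) does not descend to the ultrapower. For the Fréchet filter there is no such obstruction: \(\lim_{n\to\infty}\|b_{n}\|_{2,Y}=0\) forces \(\lim_{n\to\infty}\|b_{r(n)}\|_{2,Y}=0\) whenever \(\lim_{n\to\infty}r(n)=\infty\).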
**Theorem 5.11** (Intertwining via reparameterisation).: _Let \((\mathcal{N},Y)\) be a tracially complete \(C^{*}\)-algebra, let \(S\) be a separable metric space, and let \(\phi\colon S\to\mathcal{N}^{\infty}\) be a \(\|\cdot\|_{2,Y^{\infty}}\)-continuous function. Suppose also that every unitary in \(\mathcal{N}^{\infty}\) lifts to a unitary in \(\ell^{\infty}(\mathcal{N})\). If for every function \(r\colon\mathbb{N}\to\mathbb{N}\) with \(\lim_{n\to\infty}r(n)=\infty\), the map \(r^{*}\circ\phi\) is approximately unitarily equivalent to \(\phi\), then there is a \(\|\cdot\|_{2,Y}\)-continuous function \(\psi\colon S\to(\mathcal{N},Y)\) such that \(\iota_{(\mathcal{N},Y)}\circ\psi\) is unitarily equivalent to \(\phi\)._ The proof is a minor modification of the \(C^{*}\)-algebra version obtained in [46, Theorem 4.3], where the reduced power is taken in the \(C^{*}\)-norm. In the \(C^{*}\)-norm setting, the condition on unitaries is automatic since every approximate unitary is close to a genuine unitary. We do not know if the condition on unitaries in \((\mathcal{N}^{\infty},Y^{\infty})\) is necessary. In all of our applications of Theorem 5.11, \((\mathcal{N},Y)\) will satisfy CPoU (see Section 6), and hence the condition on unitaries will follow from Corollary 7.11. Proof of Theorem 5.11.: Define a sequence of functions \(\phi_{n}\colon S\to\mathcal{N}\) by choosing a \(\|\cdot\|\)-preserving lift \((\phi_{n}(b))_{n=1}^{\infty}\in\ell^{\infty}(\mathcal{N})\) of \(\phi(b)\in\mathcal{N}^{\infty}\) for every \(b\in S\). We claim that for every finite subset \(\mathcal{F}\subseteq S\), every \(\epsilon>0\), and every \(m\in\mathbb{N}\), there exists \(k\geq m\) such that for every \(n\geq k\), there exists a unitary \(u\in\mathcal{N}\) such that \[\|u^{*}\phi_{n}(a)u-\phi_{k}(a)\|_{2,Y}<\epsilon,\qquad a\in\mathcal{F}. \tag{5.28}\] Suppose for a contradiction that the claim is false. Then there exist a finite set \(\mathcal{F}_{0}\subseteq S\), \(\epsilon_{0}>0\), \(m_{0}\in\mathbb{N}\), and a sequence of natural numbers \((n_{k})_{k=m_{0}}^{\infty}\) with \(n_{k}\geq k\) such that \[\max_{a\in\mathcal{F}_{0}}\|u^{*}\phi_{n_{k}}(a)u-\phi_{k}(a)\|_{2,Y}\geq\epsilon_{0} \tag{5.29}\] for all unitaries \(u\in\mathcal{N}\) and all \(k\geq m_{0}\). Let \(r:\mathbb{N}\to\mathbb{N}\) be given by \(r(k)\coloneqq n_{k}\) for \(k\geq m_{0}\) and define \(r(k)\) arbitrarily for \(k<m_{0}\). Then \(\lim_{k\to\infty}r(k)=\infty\). By our hypothesis, \(\phi\) and \(r^{*}\circ\phi\) are approximately unitarily equivalent. Therefore, there exists a unitary \(u\in\mathcal{N}^{\infty}\) such that \[\|u^{*}(r^{*}\circ\phi)(a)u-\phi(a)\|_{2,Y^{\infty}}<\epsilon_{0} \tag{5.30}\] for all \(a\in\mathcal{F}_{0}\). By our hypothesis, we may lift \(u\) to a sequence of unitaries \((u_{k})_{k=1}^{\infty}\) in \(\mathcal{N}\). By Lemma 5.3, we have \[\limsup_{k\to\infty}\|u_{k}^{*}\phi_{n_{k}}(a)u_{k}-\phi_{k}(a)\|_{2,Y}<\epsilon_{0} \tag{5.31}\] for all \(a\in\mathcal{F}_{0}\). Then (5.31) contradicts (5.29) for some sufficiently large \(k\in\mathbb{N}\). This proves the claim. We now use this to construct \(\psi\). Let \((\mathcal{F}_{i})_{i=1}^{\infty}\) be an increasing sequence of finite subsets whose union is dense in \(S\).
Applying the claim recursively, we obtain an increasing sequence of natural numbers \((k_{n})_{n=1}^{\infty}\) and a sequence of unitaries \((u_{n})_{n=1}^{\infty}\) in \(\mathcal{N}\) such that \[\|u_{n}^{*}\phi_{k_{n}}(a)u_{n}-\phi_{k_{n-1}}(a)\|_{2,Y}<2^{-n} \tag{5.32}\] for all \(a\in\mathcal{F}_{n}\). Set \(v_{n}\coloneqq u_{n}u_{n-1}\cdots u_{1}\) (and put \(v_{0}\coloneqq 1_{\mathcal{N}}\)). Then, since \(v_{n}=u_{n}v_{n-1}\) and \(\|v_{n-1}\|\leq 1\), we have \[\|v_{n}^{*}\phi_{k_{n}}(a)v_{n}-v_{n-1}^{*}\phi_{k_{n-1}}(a)v_{n-1}\|_{2,Y}<2^{-n} \tag{5.33}\] for all \(n\in\mathbb{N}\) and \(a\in\mathcal{F}_{n}\). By construction, for every \(a\in\bigcup_{i=1}^{\infty}\mathcal{F}_{i}\), the sequence \((v_{n}^{*}\phi_{k_{n}}(a)v_{n})_{n=1}^{\infty}\) is \(\|\cdot\|_{2,Y}\)-Cauchy. Indeed, if \(a\in\mathcal{F}_{i_{0}}\), then for \(n>m>i_{0}\), we have \[\|v_{m}^{*}\phi_{k_{m}}(a)v_{m}-v_{n}^{*}\phi_{k_{n}}(a)v_{n}\|_{2,Y}<\sum_{i=m+1}^{n}2^{-i}<2^{-m}. \tag{5.34}\] Let \(b\in S\) and \(\epsilon>0\). Since \(\bigcup_{i=1}^{\infty}\mathcal{F}_{i}\) is dense in \(S\) and \(\phi:S\to(\mathcal{N}^{\infty},Y^{\infty})\) is \(\|\cdot\|_{2,Y^{\infty}}\)-continuous, there exist \(i_{0}\in\mathbb{N}\) and some \(a\in\mathcal{F}_{i_{0}}\) with \(\|\phi(b)-\phi(a)\|_{2,Y^{\infty}}<\epsilon\). Hence, by Lemma 5.3, we have \[\limsup_{k\to\infty}\|\phi_{k}(b)-\phi_{k}(a)\|_{2,Y}<\epsilon. \tag{5.35}\] Choose \(N_{1}\in\mathbb{N}\) with \(\|\phi_{k}(b)-\phi_{k}(a)\|_{2,Y}<\epsilon\) for all \(k>N_{1}\), and choose \(N_{2}\in\mathbb{N}\) with \(\|v_{m}^{*}\phi_{k_{m}}(a)v_{m}-v_{n}^{*}\phi_{k_{n}}(a)v_{n}\|_{2,Y}<\epsilon\) for all \(n,m>N_{2}\). Then a simple \(3\epsilon\)-argument gives that \[\|v_{m}^{*}\phi_{k_{m}}(b)v_{m}-v_{n}^{*}\phi_{k_{n}}(b)v_{n}\|_{2,Y}<3\epsilon \tag{5.36}\] whenever \(n,m>\max(N_{1},N_{2})\). Hence \((v_{n}^{*}\phi_{k_{n}}(b)v_{n})_{n=1}^{\infty}\) is \(\|\cdot\|_{2,Y}\)-Cauchy for all \(b\in S\). Moreover, we have \(\|v_{n}^{*}\phi_{k_{n}}(b)v_{n}\|\leq\|\phi_{k_{n}}(b)\|\leq\|\phi(b)\|\) for all \(n\in\mathbb{N}\). Since \((\mathcal{N},Y)\) is a tracially complete \(C^{*}\)-algebra, \((v_{n}^{*}\phi_{k_{n}}(b)v_{n})_{n=1}^{\infty}\) converges in the uniform \(2\)-norm for all \(b\in S\). Hence we may define \(\psi:S\to(\mathcal{N},Y)\) by \(\psi(b)\coloneqq\lim_{n\to\infty}v_{n}^{*}\phi_{k_{n}}(b)v_{n}\). By construction, \(\iota_{(\mathcal{N},Y)}\circ\psi\) is unitarily equivalent to \(r^{*}\circ\phi\), where \(r:\mathbb{N}\to\mathbb{N}\) is given by \(r(n)\coloneqq k_{n}\), via the unitary represented by the sequence \((v_{n})_{n=1}^{\infty}\). Since \(r^{*}\circ\phi\) is unitarily equivalent to \(\phi\) by hypothesis (using Kirchberg's \(\epsilon\)-test to replace approximate unitary equivalence with unitary equivalence), we conclude that \(\iota_{(\mathcal{N},Y)}\circ\psi\) is unitarily equivalent to \(\phi\).

### The McDuff property

Central sequences in II\({}_{1}\) factors have been studied beginning with the foundational work of Murray and von Neumann ([83]), who used them to distinguish the hyperfinite II\({}_{1}\) factor from any free group factor. The systematic study of central sequences was later instigated by two breakthrough results of McDuff: the existence of infinitely many (and in fact, uncountably many) non-isomorphic II\({}_{1}\) factors ([77, 78]) and, of particular relevance to us, her characterisation of those II\({}_{1}\) factors \(\mathcal{M}\) with separable predual which tensorially absorb the hyperfinite II\({}_{1}\) factor (i.e.
\(\mathcal{M}\cong\mathcal{M}\bar{\otimes}\mathcal{R}\)) as those \(\mathcal{M}\) with non-abelian central sequence algebras. Motivated by this last result, a II\({}_{1}\) factor \(\mathcal{M}\) is said to have the _McDuff property_ if there are approximately central unital embeddings of matrix algebras into \(\mathcal{M}\).42 Footnote 42: From an ultrapower viewpoint, McDuff’s work shows that the central sequence algebra \(\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\) is non-abelian if and only if it is type II\({}_{1}\) (and so admits unital embeddings of all matrix algebras). For II\({}_{1}\) factors with non-separable predual, the equivalence of tensorial absorption of \(\mathcal{R}\) and the McDuff property no longer holds ([44, Theorem 1.5]). Experience has shown that the formulation in terms of approximately central matrix embeddings is the correct way to extend the McDuff property to the non-separable predual situation (where one should of course work with ultrapowers over uncountable sets). Indeed, the ultrapower \(\mathcal{M}^{\omega}\) of a McDuff II\({}_{1}\) factor \(\mathcal{M}\) with separable predual has the McDuff property but is not stable under tensoring by \(\mathcal{R}\) – see [16, Footnote 65], which uses [50]. The McDuff property of II\({}_{1}\) factors has been of considerable interest to \(C^{*}\)-algebraists working in the classification programme because of its relation to \(\mathcal{Z}\)-stability at both conceptual and technical levels ([111, 75, 68, 110, 97, 22, 16]). In this section, we generalise the McDuff property to the setting of tracially complete \(C^{*}\)-algebras and show that the McDuff property is equivalent to \(\mathcal{R}\)-stability in the separable case (Theorem 5.18). We begin with a formal definition of the McDuff property before establishing some useful technical reformulations and permanence properties. **Definition 5.12** (cf. [22, Definition 4.2]).: Let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra. We say that \((\mathcal{M},X)\) is _McDuff_ if for any \(\|\cdot\|_{2,X}\)-separable set \(S\subseteq\mathcal{M}\) and \(k\geq 1\), there is a unital embedding \(M_{k}\to\mathcal{M}^{\omega}\cap S^{\prime}\).43 Footnote 43: Of course, when \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable, we can just take \(S=\mathcal{M}\). The _uniform McDuff property_ was defined for a separable \(C^{*}\)-algebra \(A\) with \(T(A)\) compact in [22, Definition 4.2] as the existence of unital embeddings \(M_{k}\to A^{\omega}\cap A^{\prime}\) for all \(k\geq 1\). By Proposition 5.7, this is consistent with our definition. **Proposition 5.13**.: _Let \(A\) be a separable \(C^{*}\)-algebra with \(T(A)\) compact. Then \(A\) is uniformly McDuff in the sense of [22] if and only if its tracial completion with respect to \(T(A)\) is McDuff in the sense of Definition 5.12._ We now establish some equivalent reformulations of the McDuff property. Note that the equivalence of (i) and (ii) in the following theorem shows that the McDuff property is independent of the choice of free filter \(\omega\). The result is standard in the setting of II\({}_{1}\) factors, and the same techniques work in this context. **Proposition 5.14** (cf. [10, Proposition 3.11]).: _Let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra.
The following are equivalent:_

_(i) \((\mathcal{M},X)\) is McDuff._

_(ii) Given a finite set \(\mathcal{F}\subseteq\mathcal{M}\) and \(\epsilon>0\), there is a contraction \(v\in\mathcal{M}\) such that_ \[\max_{a\in\mathcal{F}}\|[v,a]\|_{2,X}<\epsilon,\ \|v^{*}v+vv^{*}-1_{\mathcal{M}}\|_{2,X}<\epsilon,\ \text{and}\ \|v^{2}\|_{2,X}<\epsilon. \tag{5.37}\]

_(iii) For each \(\|\cdot\|_{2,X^{\omega}}\)-separable subset \(S\subseteq\mathcal{M}^{\omega}\), there is a contraction \(v\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \(v^{2}=0\) and \(v^{*}v+vv^{*}=1_{\mathcal{M}^{\omega}}\)._

_(iv) For each \(\|\cdot\|_{2,X^{\omega}}\)-separable subset \(S\subseteq\mathcal{M}^{\omega}\), there is \(k\geq 2\) and a unital embedding \(M_{k}\to\mathcal{M}^{\omega}\cap S^{\prime}\)._

_(v) For each \(\|\cdot\|_{2,X^{\omega}}\)-separable subset \(S\subseteq\mathcal{M}^{\omega}\), there is a unital embedding \(\mathcal{R}\to\mathcal{M}^{\omega}\cap S^{\prime}\)._

Proof.: (i)\(\Rightarrow\)(ii): Given a finite set \(\mathcal{F}\subseteq\mathcal{M}\), fix a unital embedding \(\phi\colon M_{2}\to\mathcal{M}^{\omega}\cap\mathcal{F}^{\prime}\). Let \((v_{k})_{k=1}^{\infty}\subseteq\mathcal{M}\) be a sequence of contractions representing \(\phi(e_{1,2})\) and set \(v\coloneqq v_{k}\) for some suitable index \(k\). (ii)\(\Rightarrow\)(iii): Let \(\{s^{(i)}:i\in\mathbb{N}\}\) be a countable dense set in \(S\). For each \(i\in\mathbb{N}\), let \((s^{(i)}_{n})_{n=1}^{\infty}\) be a sequence representing \(s^{(i)}\). For each \(n\in\mathbb{N}\), set \(\epsilon_{n}\coloneqq 2^{-n}\) and \(\mathcal{F}_{n}\coloneqq\{s^{(i)}_{n}:i\leq n\}\). Let \(v_{n}\in\mathcal{M}\) be given as in (ii) with \((\mathcal{F}_{n},\epsilon_{n})\) in place of \((\mathcal{F},\epsilon)\). Then the sequence \((v_{n})_{n=1}^{\infty}\) induces a contraction \(v\in\mathcal{M}^{\omega}\cap S^{\prime}\) with \(v^{2}=0\) and \(v^{*}v+vv^{*}=1_{\mathcal{M}^{\omega}}\). (iii)\(\Rightarrow\)(iv): Let \(v\in\mathcal{M}^{\omega}\cap S^{\prime}\) be as in (iii). Since \(v\) is a contraction and \(v^{2}=0\), we have that \(v^{*}v\) and \(vv^{*}\) are orthogonal positive contractions. As \(v^{*}v+vv^{*}=1_{\mathcal{M}^{\omega}}\), \(v^{*}v\) and \(vv^{*}\) are projections, and in particular, \(v\) is a partial isometry. It follows that the \(C^{*}\)-subalgebra generated by \(v\) is spanned by \(1_{\mathcal{M}^{\omega}}\), \(v\), \(v^{*}\), and \(v^{*}v\). As this subalgebra is non-commutative and has dimension at most four, it is isomorphic to \(M_{2}\). This verifies (iv) with \(k=2\). (iv)\(\Rightarrow\)(v): Fix the subset \(S\) from (v). Using (iv), let \(k_{1}\geq 2\) and let \(\phi_{1}\colon M_{k_{1}}\to\mathcal{M}^{\omega}\cap S^{\prime}\) be a unital embedding. Let \(S_{1}\coloneqq C^{*}(\phi_{1}(M_{k_{1}})\cup S)\) and use (iv) again to produce an integer \(k_{2}\geq 2\) and a unital embedding \(\phi_{2}\colon M_{k_{2}}\to\mathcal{M}^{\omega}\cap S^{\prime}_{1}\). Continuing inductively, there are integers \(k_{n}\geq 2\) and unital embeddings \(\phi_{n}\colon M_{k_{n}}\to\mathcal{M}^{\omega}\cap S^{\prime}\) with commuting ranges. The \(\phi_{n}\) induce a unital embedding \[\mathcal{R}\cong\overline{\bigotimes}_{n=1}^{\infty}(M_{k_{n}},\operatorname{tr}_{k_{n}})\to\mathcal{M}^{\omega}\cap S^{\prime}. \tag{5.38}\] (v)\(\Rightarrow\)(i): This follows as there is a unital embedding \(M_{k}\to\mathcal{R}\) for all \(k\geq 1\).
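For the reader's convenience, the embedding in (iii)\(\Rightarrow\)(iv) can be made explicit (a routine verification, recorded here only for illustration): given a contraction \(v\in\mathcal{M}^{\omega}\cap S^{\prime}\) with \(v^{2}=0\) and \(v^{*}v+vv^{*}=1_{\mathcal{M}^{\omega}}\), set \[e_{11}\coloneqq v^{*}v,\quad e_{22}\coloneqq vv^{*},\quad e_{21}\coloneqq v,\quad e_{12}\coloneqq v^{*}.\] Then \(e_{12}e_{21}=e_{11}\), \(e_{21}e_{12}=e_{22}\), \(e_{11}+e_{22}=1_{\mathcal{M}^{\omega}}\), and \(e_{11}e_{22}=v^{*}v^{2}v^{*}=0\), so \((e_{ij})_{i,j=1}^{2}\) is a system of \(2\times 2\) matrix units spanning the four-dimensional algebra \(C^{*}(v)\cong M_{2}\).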
**Remark 5.15**.: Other equivalent properties to McDuffness can be given by strengthening Definition 5.12 to allow \(S\) to be any \(\|\cdot\|_{2,X^{\omega}}\)-separable subset of \(\mathcal{M}^{\omega}\) or weakening any of (iii), (iv) or (v) to only have \(S\subseteq\mathcal{M}\). In all cases the argument (for the non-trivial direction) is by reindexing. For example, to go from the weakening of (iv) to (iv), given a \(\|\cdot\|_{2,X^{\omega}}\)-separable subset \(S\subseteq\mathcal{M}^{\omega}\), let \(T\subseteq\mathcal{M}\) be a countable set consisting of the sequence entries of lifts of a countable dense subset of \(S\). Then, given a unital embedding \(\phi:M_{k}\to\mathcal{M}^{\omega}\cap T^{\prime}\), we may lift \(\phi\) to a sequence of c.p.c. maps \(\phi_{n}:M_{k}\to\mathcal{M}\) which are unital, asymptotically multiplicative, and asymptotically commute with \(T\). An appropriate reindexing \((\phi_{m_{n}})\) will then provide a unital embedding \(M_{k}\to\mathcal{M}^{\omega}\cap S^{\prime}\). As a corollary of the equivalence between (i) and (ii) above, we obtain the following permanence properties. **Corollary 5.16**.: _Inductive limits and reduced products of McDuff tracially complete \(C^{\ast}\)-algebras are McDuff._ The following result gives a large supply of examples of McDuff tracially complete \(C^{\ast}\)-algebras. The result extends [23, Proposition 2.3], and the proof is very similar. **Proposition 5.17** (cf. [23, Proposition 2.3]).: _If \(A\) is a separable \(\mathcal{Z}\)-stable \(C^{\ast}\)-algebra and \(X\subseteq T(A)\) is a compact convex set, then \((\overline{A}^{X},X)\) is a McDuff tracially complete \(C^{\ast}\)-algebra._ Proof.: Let \(\omega\) be a free ultrafilter on \(\mathbb{N}\), let \(A_{\omega}\coloneqq\ell^{\infty}(A)/c_{\omega}(A)\) be the operator norm ultrapower of \(A\), and let \(\mathcal{M}\coloneqq\overline{A}^{X}\). By [67, Proposition 4.4(4)], there is a unital embedding \(\phi\colon\mathcal{Z}\to(A_{\omega}\cap A^{\prime})/A^{\perp}\), where \[A^{\perp}\coloneqq\{a\in A_{\omega}:ab=ba=0\text{ for all }b\in A\}. \tag{5.39}\] The natural map \(\alpha_{X}\colon A\to\mathcal{M}\) induces a \({}^{\ast}\)-homomorphism \(q\colon A_{\omega}\to\mathcal{M}^{\omega}\). Since \(\alpha_{X}\) maps the unit ball of \(A\) onto a \(\|\cdot\|_{2,X}\)-dense subset of the unit ball of \(\mathcal{M}\), we have \(q\) is surjective and \(q(A_{\omega}\cap A^{\prime})\subseteq\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\). Following the proof of [23, Lemma 1.10], we show \(q(A^{\perp})=0\). Assume \(b\in A^{\perp}\) with \(0\leq b\leq 1\). Let \(\tau\in X^{\omega}\subseteq T(\mathcal{M}^{\omega})\) be given and note that \[\tau\circ q|_{A}=\tau|_{\mathcal{M}}\circ\alpha_{X}. \tag{5.40}\] Combining Corollary 3.29(i) with (5.40) shows that \(\tau\circ q|_{A}\) has norm \(1\). Fix \(\epsilon>0\) and a positive contraction \(e\in A\) with \(\tau(q(e))\geq 1-\epsilon\). Since \(b\in A^{\perp}\) and \(e\in A\) are orthogonal positive contractions, their sum is also a positive contraction. Therefore, \[0\leq\tau(q(b))=\tau(q(b+e))-\tau(q(e))\leq 1-(1-\epsilon)=\epsilon. \tag{5.41}\] Since \(\epsilon>0\) was arbitrary, \(\tau(q(b))=0\) for all \(\tau\in X^{\omega}\), and since \(q(b)\geq 0\), this implies \(q(b)=0\). Hence \(q(A^{\perp})=0\). Let \(\bar{q}\colon(A_{\omega}\cap A^{\prime})/A^{\perp}\to\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\) be the \({}^{*}\)-homomorphism determined by \(q\).
Then \(\bar{q}\circ\phi\colon\mathcal{Z}\to\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\) is a unital \({}^{*}\)-homomorphism. As \(\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\) is a \(\|\cdot\|_{2,X^{\omega}}\)-closed, unital \(C^{*}\)-subalgebra of \(\mathcal{M}^{\omega}\), we may view \(\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\) as a tracially complete \(C^{*}\)-algebra as in the comments preceding Definition 3.11. Since \(\mathcal{Z}\) has a unique trace \(\tau\) and \(\pi_{\tau}(\mathcal{Z})^{\prime\prime}\cong\mathcal{R}\), Proposition 3.25 allows us to extend \(\bar{q}\circ\phi\) to a unital embedding \(\mathcal{R}\to\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\). Since \(M_{k}\) embeds unitally in \(\mathcal{R}\) for all \(k\in\mathbb{N}\), we see that \(\mathcal{M}\) has the McDuff property. The following result gives a tensor product characterisation of McDuff tracially complete \(C^{*}\)-algebras in the separable setting, analogous to McDuff's original result for \(\mathrm{II}_{1}\) factors from [79]. Our proof is an adaptation of the argument found in [10, Proposition 3.11] in the setting of \(W^{*}\)-bundles. These follow the framework of the analogous results for absorption of a strongly self-absorbing \(C^{*}\)-algebra (see [111, Theorem 2.2] or [93, Theorem 7.2.2]), which are powered by an Elliott intertwining argument. **Theorem 5.18**.: _Suppose \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra such that \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable. Then \((\mathcal{M},X)\) is McDuff if and only if \((\mathcal{M},X)\bar{\otimes}(\mathcal{R},\mathrm{tr}_{\mathcal{R}})\cong(\mathcal{M},X)\)._ Proof.: Since \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable and McDuff, there exists a unital embedding \(\phi:\mathcal{R}\to\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\). Take a set-theoretic lift of \(\phi\) to a sequence \((\phi_{n}\colon\mathcal{R}\to\mathcal{M})_{n=1}^{\infty}\) of maps. We define commuting unital embeddings \[\alpha,\beta:\mathcal{R}\to(\mathcal{M}\bar{\otimes}\mathcal{R})^{\omega}\cap(\mathcal{M}\otimes 1_{\mathcal{R}})^{\prime} \tag{5.42}\] at the level of representative sequences by \[\alpha(x)\coloneqq(1_{\mathcal{M}}\otimes x)_{n=1}^{\infty}\quad\text{and}\quad\beta(x)\coloneqq(\phi_{n}(x)\otimes 1_{\mathcal{R}})_{n=1}^{\infty}, \tag{5.43}\] respectively. Let \(M_{2^{\infty}}\) be the UHF algebra with supernatural number \(2^{\infty}\), which we view as a \(\|\cdot\|_{2,\mathrm{tr}_{\mathcal{R}}}\)-dense subalgebra of \(\mathcal{R}\). As \(M_{2^{\infty}}\) is nuclear, there exists an embedding \[\gamma:\mathcal{M}\otimes M_{2^{\infty}}\otimes M_{2^{\infty}}\to(\mathcal{M}\bar{\otimes}\mathcal{R})^{\omega} \tag{5.44}\] such that \[\gamma(a\otimes b\otimes c)=(a\otimes 1_{\mathcal{R}})\alpha(b)\beta(c)=(a\otimes b)\beta(c) \tag{5.45}\] for all \(a\in\mathcal{M}\) and \(b,c\in M_{2^{\infty}}\). Since \(M_{2^{\infty}}\) has a unique trace, it follows that \(\gamma\) is trace-preserving44 and, accordingly, extends to an embedding of the tracial completion Footnote 44: To ease notation, set \(A\coloneqq\mathcal{M}\otimes M_{2^{\infty}}\) and \((\mathcal{N},Y)=(\mathcal{M},X)\bar{\otimes}(\mathcal{R},\mathrm{tr}_{\mathcal{R}})\), and view \(A\) as a \(\|\cdot\|_{2,Y}\)-dense subalgebra of \(\mathcal{N}\). Then \(\gamma\) is a map \(A\otimes M_{2^{\infty}}\to\mathcal{N}^{\omega}\) given by \(\gamma(a\otimes c)=a\beta(c)\).
It suffices to show \(\gamma^{*}(\tau)=\tau|_{A}\otimes\mathrm{tr}_{2^{\infty}}\) for all \(\tau\in Y^{\omega}\). Fix \(\tau\in Y^{\omega}\) and positive \(a\in A\). If \(\tau(a)=0\), then \(\tau(a\beta(c))=0=\tau(a)\mathrm{tr}_{2^{\infty}}(c)\) by the Cauchy–Schwarz inequality. If \(\tau(a)\neq 0\), then \(c\mapsto\tau(\gamma(a\otimes c))/\tau(a)\) is a trace on \(M_{2^{\infty}}\) and hence is the unique trace \(\mathrm{tr}_{2^{\infty}}\). Therefore, for all \(a\in A\) and \(c\in M_{2^{\infty}}\), we have \(\tau(\gamma(a\otimes c))=\tau(a)\mathrm{tr}_{2^{\infty}}(c)\). \[\bar{\gamma}:\mathcal{M}\bar{\otimes}\mathcal{R}\bar{\otimes}\mathcal{R}\to(\mathcal{M}\bar{\otimes}\mathcal{R})^{\omega} \tag{5.46}\] with \[\bar{\gamma}(a\otimes b\otimes c)=(a\otimes 1_{\mathcal{R}})\alpha(b)\beta(c)=(a\otimes b)\beta(c) \tag{5.47}\] for all \(a\in\mathcal{M}\) and \(b,c\in\mathcal{R}\). Recall that \(\mathcal{R}\) has an approximately inner half-flip, in the sense that there exist unitaries \((\bar{u}_{m})_{m=1}^{\infty}\) in \(\mathcal{R}\bar{\otimes}\mathcal{R}\) such that for all \(b\in\mathcal{R}\), \[\lim_{m\to\infty}\|\bar{u}_{m}^{*}(b\otimes 1_{\mathcal{R}})\bar{u}_{m}-1_{\mathcal{R}}\otimes b\|_{2,\operatorname{tr}_{\mathcal{R}}\otimes\operatorname{tr}_{\mathcal{R}}}=0. \tag{5.48}\] Moreover, since \(\mathcal{R}\bar{\otimes}\mathcal{R}\) is a von Neumann algebra, each unitary \(\bar{u}_{m}\) is of the form \(\exp(2\pi i\bar{h}_{m})\) for some self-adjoint \(\bar{h}_{m}\in\mathcal{R}\bar{\otimes}\mathcal{R}\). Set \(u_{m}:=\bar{\gamma}(1_{\mathcal{M}}\otimes\bar{u}_{m})\in(\mathcal{M}\bar{\otimes}\mathcal{R})^{\omega}\cap(\mathcal{M}\otimes 1_{\mathcal{R}})^{\prime}\) (it is in this commutant because \(\bar{\gamma}(\mathcal{M}\otimes 1_{\mathcal{R}}\otimes 1_{\mathcal{R}})=\mathcal{M}\otimes 1_{\mathcal{R}}\)). Then the two embeddings \(\mathcal{M}\bar{\otimes}\mathcal{R}\to(\mathcal{M}\bar{\otimes}\mathcal{R})^{\omega}\) given by \[a\!\otimes\!b\mapsto\bar{\gamma}(a\!\otimes\!b\!\otimes\!1_{\mathcal{R}})=a\!\otimes\!b\;\;\text{and}\;\;a\!\otimes\!b\mapsto\bar{\gamma}(a\!\otimes\!1_{\mathcal{R}}\otimes b)=(a\!\otimes\!1)\beta(b) \tag{5.49}\] are approximately unitarily equivalent in \(\|\cdot\|_{2,(X\otimes\{\operatorname{tr}_{\mathcal{R}}\})^{\omega}}\), via the sequence \((u_{m})_{m=1}^{\infty}\). Since \(\beta(b)\in(\mathcal{M}\otimes 1_{\mathcal{R}})^{\omega}\), it follows that \[\lim_{m\to\infty}\inf\Big{\{}\|u_{m}^{*}xu_{m}-z\|_{2,(X\otimes\{\operatorname{tr}_{\mathcal{R}}\})^{\omega}}:\begin{matrix}z\in(\mathcal{M}\otimes 1_{\mathcal{R}})^{\omega}\\ \|z\|\leq 1\end{matrix}\Big{\}}=0 \tag{5.50}\] for all contractions \(x\in\mathcal{M}\bar{\otimes}\mathcal{R}\). We also have \(u_{m}=\exp(2\pi ih_{m})\), where \(h_{m}=\bar{\gamma}(1_{\mathcal{M}}\otimes\bar{h}_{m})\). Since we can lift the self-adjoint \(h_{m}\in(\mathcal{M}\bar{\otimes}\mathcal{R})^{\omega}\) to a sequence of self-adjoint elements \((h_{m,n})_{n=1}^{\infty}\) in \(\ell^{\infty}(\mathcal{M}\bar{\otimes}\mathcal{R})\), we can find a sequence of unitaries \((u_{m,n})_{n=1}^{\infty}\) in \(\mathcal{M}\bar{\otimes}\mathcal{R}\) representing \(u_{m}\). We are now ready to construct an isomorphism \(\psi:\mathcal{M}\to\mathcal{M}\bar{\otimes}\mathcal{R}\). Let \((x_{k})_{k=1}^{\infty}\) be a \(\|\cdot\|_{2,X}\)-dense sequence in the unit ball of \(\mathcal{M}\) and \((y_{k})_{k=1}^{\infty}\) be a \(\|\cdot\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}\)-dense sequence in the unit ball of \(\mathcal{M}\bar{\otimes}\mathcal{R}\).
We shall iteratively produce unitaries \(w_{k}\) in \(\mathcal{M}\bar{\otimes}\mathcal{R}\) and contractions \(z_{k}^{(j)}\in\mathcal{M}\) for \(k\in\mathbb{N}\) and \(1\leq j\leq k\). The construction begins with \(w_{0}\coloneqq 1\). Fix \(k\geq 1\) and suppose that the unitaries \(w_{s}\) and contractions \(z_{s}^{(r)}\) for \(1\leq r\leq s<k\) have already been constructed. By (5.50), there exist contractions \(\bar{z}_{k}^{(1)},\ldots,\bar{z}_{k}^{(k)}\in\mathcal{M}^{\omega}\) and \(m_{k}\in\mathbb{N}\) such that \[\|u_{m_{k}}^{*}w_{k-1}^{*}y_{j}w_{k-1}u_{m_{k}}-\bar{z}_{k}^{(j)}\otimes 1_{\mathcal{R}}\|_{2,(X\otimes\{\operatorname{tr}_{\mathcal{R}}\})^{\omega}}<\frac{1}{k} \tag{5.51}\] for \(j=1,\ldots,k\). Let \((u_{m_{k},n})_{n=1}^{\infty}\) be a sequence of unitaries in \(\mathcal{M}\bar{\otimes}\mathcal{R}\) representing \(u_{m_{k}}\) and let \((z_{k,n}^{(j)})_{n=1}^{\infty}\) be sequences of contractions representing each \(\bar{z}_{k}^{(j)}\). Since \(u_{m_{k}}\in(\mathcal{M}\bar{\otimes}\mathcal{R})^{\omega}\cap(\mathcal{M}\otimes 1_{\mathcal{R}})^{\prime}\), we can choose \(n_{k}\in\mathbb{N}\) such that \[\|u_{m_{k},n_{k}}^{*}w_{k-1}^{*}y_{j}w_{k-1}u_{m_{k},n_{k}}-z_{k,n_{k}}^{(j)}\otimes 1_{\mathcal{R}}\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}<\frac{1}{k}, \tag{5.52}\] \[\|[u_{m_{k},n_{k}},x_{j}\otimes 1_{\mathcal{R}}]\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}<2^{-k}, \tag{5.53}\] for all \(1\leq j\leq k\), and \[\|[u_{m_{k},n_{k}},z_{s}^{(r)}\otimes 1_{\mathcal{R}}]\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}<2^{-k}, \tag{5.54}\] for all \(1\leq r\leq s<k\). Set \(w_{k}\coloneqq w_{k-1}u_{m_{k},n_{k}}\) and set \(z_{k}^{(j)}\coloneqq z_{k,n_{k}}^{(j)}\) for \(j=1,\ldots,k\). For \(j\in\mathbb{N}\), the sequence \((w_{k}(x_{j}\otimes 1_{\mathcal{R}})w_{k}^{*})_{k=1}^{\infty}\) is \(\|\cdot\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}\)-Cauchy by (5.53) as \(\sum_{k=1}^{\infty}2^{-k}\) converges. Since \((x_{j})_{j=1}^{\infty}\) is a \(\|\cdot\|_{2,X}\)-dense sequence in the unit ball of \(\mathcal{M}\), a \(3\epsilon\)-argument gives that \((w_{k}(a\otimes 1_{\mathcal{R}})w_{k}^{*})_{k=1}^{\infty}\) is \(\|\cdot\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}\)-Cauchy for all \(a\in\mathcal{M}\). As \((w_{k}(a\otimes 1_{\mathcal{R}})w_{k}^{*})_{k=1}^{\infty}\) is \(\|\cdot\|\)-bounded, we may define a map \(\psi:\mathcal{M}\to\mathcal{M}\bar{\otimes}\mathcal{R}\) by \(\psi(a)\coloneqq\lim_{k\to\infty}w_{k}(a\otimes 1_{\mathcal{R}})w_{k}^{*}\). Since the \(w_{k}\) are unitaries, we see that \(\psi\) is an injective \({}^{*}\)-homomorphism and we have \((\tau\otimes\operatorname{tr}_{\mathcal{R}})\circ\psi=\tau\) for all \(\tau\in X\). Surjectivity follows from the \(\|\cdot\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}\)-density of \((y_{j})_{j=1}^{\infty}\) in the unit ball of \(\mathcal{M}\bar{\otimes}\mathcal{R}\), because for \(k>j\) we have \[\begin{split}\|y_{j}-\psi(z_{k}^{(j)})\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}&\stackrel{{(5.54)}}{{\leq}}\sum_{r=k+1}^{\infty}2^{-r}+\|y_{j}-w_{k}(z_{k,n_{k}}^{(j)}\otimes 1_{\mathcal{R}})w_{k}^{*}\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}\\ &\stackrel{{(5.52)}}{{<}}\sum_{r=k+1}^{\infty}2^{-r}+\frac{1}{k},\end{split} \tag{5.55}\] which converges to zero as \(k\to\infty\). Indeed, as the unit ball of \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-complete and \(\psi\) preserves the uniform \(2\)-norm, its image must be \(\|\cdot\|_{2,X\otimes\{\operatorname{tr}_{\mathcal{R}}\}}\)-closed.
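The argument above establishes the forward implication of Theorem 5.18. For the converse (a sketch of the standard argument, which is not spelled out above): if \((\mathcal{M},X)\cong(\mathcal{M},X)\bar{\otimes}(\mathcal{R},\mathrm{tr}_{\mathcal{R}})\), write \(\mathcal{R}\cong\overline{\bigotimes}_{n=1}^{\infty}(M_{2},\mathrm{tr}_{2})\) and identify \(\mathcal{M}\) with \(\mathcal{M}\bar{\otimes}\mathcal{R}\). Given a finite set \(\mathcal{F}\) and \(\epsilon>0\), approximate \(\mathcal{F}\) in the uniform \(2\)-norm by elements of the algebraic tensor product \(\mathcal{M}\odot\bigotimes_{n<N}M_{2}\) for a suitably large \(N\), and let \(v\) be the copy of the matrix unit \(e_{21}\in M_{2}\) in the \(N\)-th tensor factor. Then \(v^{2}=0\) and \(v^{*}v+vv^{*}=1\) hold exactly, while \(v\) commutes exactly with the approximants; since multiplication by a fixed element is \(\|\cdot\|_{2}\)-continuous on bounded sets by (3.2), it follows that \(\max_{a\in\mathcal{F}}\|[v,a]\|_{2,X\otimes\{\mathrm{tr}_{\mathcal{R}}\}}<\epsilon\). This verifies condition (ii) of Proposition 5.14, so \((\mathcal{M},X)\) is McDuff.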
### Property \(\Gamma\)

In its original formulation, a \(\mathrm{II}_{1}\) factor \(\mathcal{M}\) has property \(\Gamma\) if there is an approximately central net of trace-zero unitaries in \(\mathcal{M}\). Since \(\mathcal{R}\cong\overline{\bigotimes_{n=1}^{\infty}(\mathcal{R},\operatorname{tr}_{\mathcal{R}})}\), the hyperfinite \(\mathrm{II}_{1}\) factor has property \(\Gamma\). Consequently, all McDuff factors have property \(\Gamma\). On the other hand, Murray and von Neumann's \(14\epsilon\)-argument shows that the factors associated to free groups do not have property \(\Gamma\) ([83, Lemma 6.2.1]). Dixmier extended the work of Murray and von Neumann, proving that property \(\Gamma\) is equivalent to the existence of systems of approximately central projections that sum to the unit ([32]). It is through this reformulation that most structural consequences of property \(\Gamma\) are obtained for \(\mathrm{II}_{1}\) factors (see [26, 27, 49, 91]), and so it was the basis for the definition of _uniform property \(\Gamma\)_ for \(C^{*}\)-algebras introduced in [23] and studied further in [22]. Here, we define property \(\Gamma\) for tracially complete \(C^{*}\)-algebras. It is an immediate consequence of Proposition 5.7 and a reindexing argument (see Remark 5.24 below) that a \(C^{*}\)-algebra \(A\) with \(T(A)\) compact has uniform property \(\Gamma\) in the sense of [23, Definition 2.1] if and only if its tracial completion with respect to \(T(A)\) has property \(\Gamma\) as defined here (see Proposition 5.20). **Definition 5.19** (cf. [23, Definition 2.1]).: Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra. We say that \((\mathcal{M},X)\) has _property \(\Gamma\)_ if for any \(\|\cdot\|_{2,X}\)-separable subset \(S\subseteq\mathcal{M}\) and any \(k\in\mathbb{N}\) there exist projections \(p_{1},\dots,p_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) summing to \(1_{\mathcal{M}^{\omega}}\) such that \[\tau(ap_{i})=\frac{1}{k}\tau(a),\qquad a\in S,\ \tau\in X^{\omega},\ i=1,\dots,k. \tag{5.56}\] **Proposition 5.20**.: _Let \(A\) be a separable \(C^{*}\)-algebra with \(T(A)\) compact. Then \(A\) has uniform property \(\Gamma\) as in [23, Definition 2.1] if and only if its tracial completion with respect to \(T(A)\) has property \(\Gamma\) in the sense of Definition 5.19._ **Remark 5.21**.: We have chosen to restrict our definition of property \(\Gamma\) to the case of factorial tracially complete \(C^{*}\)-algebras. Although Definition 5.19 makes sense for non-factorial tracially complete \(C^{*}\)-algebras, in the absence of results, we are not confident that this would be the appropriate definition outside of the factorial setting. The following simple observation provides many examples of factorial tracially complete \(C^{*}\)-algebras with property \(\Gamma\). Combining this with Proposition 5.17 shows that the tracial completion of a separable \(\mathcal{Z}\)-stable \(C^{*}\)-algebra with respect to a compact face of traces has property \(\Gamma\) (cf. [23, Proposition 2.3]). **Proposition 5.22**.: _If \((\mathcal{M},X)\) is a factorial McDuff tracially complete \(C^{*}\)-algebra, then \((\mathcal{M},X)\) satisfies property \(\Gamma\)._ Proof.: Let \(S\subseteq\mathcal{M}\) be a \(\|\cdot\|_{2,X}\)-separable set and fix \(k\geq 1\). Fix a unital embedding \(\phi\colon M_{k}\to\mathcal{M}^{\omega}\cap S^{\prime}\). If \(a\in S\) and \(\tau\in X^{\omega}\), then the function \(\tau(a\phi(\,\cdot\,))\) is a tracial functional on \(M_{k}\).
Hence \(\tau(a\phi(e_{ii}))=\tau(a\phi(e_{11}))\) for all \(i=1,\ldots,k\), and since \(\phi\) is unital and \(\sum_{i=1}^{k}e_{ii}=1_{M_{k}}\), we have \(\tau(a\phi(e_{ii}))=\frac{1}{k}\tau(a)\) for all \(a\in S\) and \(\tau\in X^{\omega}\). Thus the projections \(p_{i}\coloneqq\phi(e_{ii})\), \(i=1,\ldots,k\), satisfy the conditions in Definition 5.19. The following proposition records several equivalent formulations of property \(\Gamma\). **Proposition 5.23** (cf. [22, Proposition 2.3]).: _For a factorial tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\), the following are equivalent:_

_(i) \((\mathcal{M},X)\) satisfies property \(\Gamma\)._

_(ii) For every finite set \(\mathcal{F}\subseteq\mathcal{M}\) and \(\epsilon>0\), there is a self-adjoint contraction \(p\in\mathcal{M}\) such that for all \(a\in\mathcal{F}\) and all \(\tau\in X\),_ \[\|[p,a]\|_{2,X}<\epsilon,\quad\|p-p^{2}\|_{2,X}<\epsilon,\quad\text{and}\quad\big{|}\tau(ap)-\tfrac{1}{2}\tau(a)\big{|}<\epsilon. \tag{5.57}\]

_(iii) There is \(c\in(0,1)\) such that for every \(\|\cdot\|_{2,X^{\omega}}\)-separable set \(S\subseteq\mathcal{M}^{\omega}\), there is a projection \(p\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \(\tau(ap)=c\tau(a)\) for all \(a\in S\) and \(\tau\in X^{\omega}\)._

_(iv) There exists a faithful trace \(\sigma\in T(\mathbb{C}^{2})\) such that for every \(\|\cdot\|_{2,X^{\omega}}\)-separable set \(S\subseteq\mathcal{M}^{\omega}\), there is a unital embedding \(\phi:\mathbb{C}^{2}\to\mathcal{M}^{\omega}\cap S^{\prime}\) such that \(\tau(a\phi(x))=\tau(a)\sigma(x)\) for all \(a\in S\), \(x\in\mathbb{C}^{2}\), and \(\tau\in X^{\omega}\)._

_(v) For every \(\|\cdot\|_{2,X^{\omega}}\)-separable set \(S\subseteq\mathcal{M}^{\omega}\), there is a \({}^{*}\)-homomorphism \(\phi\colon L^{\infty}[0,1]\to\mathcal{M}^{\omega}\cap S^{\prime}\) such that_ \[\tau(a\phi(f))=\tau(a)\int_{0}^{1}f(t)\,dt \tag{5.58}\] _for all \(a\in S\), \(f\in L^{\infty}[0,1]\), and \(\tau\in X^{\omega}\)._

Proof.: (i)\(\Rightarrow\)(ii): Apply the definition of property \(\Gamma\) with \(k\coloneqq 2\) and \(S\coloneqq\mathcal{F}\) and take a suitable element of a representative sequence for one of the projections witnessing property \(\Gamma\) to give (ii). (ii)\(\Rightarrow\)(iii): Let \(\{s^{(i)}:i\in\mathbb{N}\}\) be a countable dense set in \(S\). For each \(i\in\mathbb{N}\), let \((s^{(i)}_{n})_{n=1}^{\infty}\) be a sequence representing \(s^{(i)}\). For each \(n\in\mathbb{N}\), set \(\epsilon_{n}\coloneqq 2^{-n}\) and \(\mathcal{F}_{n}\coloneqq\{s^{(i)}_{n}:i\leq n\}\). Let \(p_{n}\in\mathcal{M}\) be given as in (ii) with \((\mathcal{F}_{n},\epsilon_{n})\) in place of \((\mathcal{F},\epsilon)\). Then the sequence \((p_{n})_{n=1}^{\infty}\) induces a projection \(p\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \(\tau(ap)=\frac{1}{2}\tau(a)\) for all \(a\in S\) and \(\tau\in X^{\omega}\), so (iii) holds with \(c=\frac{1}{2}\). (iii)\(\Rightarrow\)(iv): Let \(e_{1},e_{2}\in\mathbb{C}^{2}\) be the standard basis vectors and let \(p\in\mathcal{M}^{\omega}\cap S^{\prime}\) be the projection constructed in (iii). Define \(\phi:\mathbb{C}^{2}\to\mathcal{M}^{\omega}\cap S^{\prime}\) by \(\phi(e_{1})\coloneqq p\) and \(\phi(e_{2})\coloneqq 1_{\mathcal{M}^{\omega}}-p\). Define \(\sigma(x_{1},x_{2})\coloneqq cx_{1}+(1-c)x_{2}\). Then \(\tau(a\phi(x))=\tau(a)\sigma(x)\) for all \(a\in S\), \(x\in\mathbb{C}^{2}\), and \(\tau\in X^{\omega}\).
(iv)\(\Rightarrow\)(v): Let \(\phi_{1}\colon\mathbb{C}^{2}\to\mathcal{M}^{\omega}\cap S^{\prime}\) be a unital embedding such that \(\tau(a\phi_{1}(x))=\tau(a)\sigma(x)\) for all \(a\in S\), \(x\in\mathbb{C}^{2}\), and \(\tau\in X^{\omega}\). Then define \(S_{1}\coloneqq C^{*}(\phi_{1}(\mathbb{C}^{2})\cup S)\) and use (iv) again to produce a unital embedding \(\phi_{2}\colon\mathbb{C}^{2}\to\mathcal{M}^{\omega}\cap S^{\prime}_{1}\) such that \(\tau(a\phi_{2}(x))=\tau(a)\sigma(x)\) for all \(a\in S_{1}\), \(x\in\mathbb{C}^{2}\), and \(\tau\in X^{\omega}\). Continuing inductively, there are unital embeddings \(\phi_{n}\colon\mathbb{C}^{2}\to\mathcal{M}^{\omega}\cap S^{\prime}\) with commuting ranges such that the induced embedding

\[\phi\colon\bigotimes_{n=1}^{\infty}\mathbb{C}^{2}\to\mathcal{M}^{\omega}\cap S^{\prime} \tag{5.59}\]

satisfies \(\tau(a\phi(x))=\tau(a)\sigma^{\otimes\infty}(x)\) for all \(a\in S\), \(x\in\bigotimes_{n=1}^{\infty}\mathbb{C}^{2}\), and \(\tau\in X^{\omega}\). Since \(\mathcal{M}^{\omega}\cap S^{\prime}\) is tracially complete, \(\phi\) extends to a \({}^{*}\)-homomorphism

\[\bar{\phi}\colon\pi_{\sigma^{\otimes\infty}}\Big{(}\bigotimes_{n=1}^{\infty}\mathbb{C}^{2}\Big{)}^{\prime\prime}\to\mathcal{M}^{\omega}\cap S^{\prime}. \tag{5.60}\]

Writing \(\bar{\sigma}\) for the trace on \(\pi_{\sigma^{\otimes\infty}}\Big{(}\bigotimes_{n=1}^{\infty}\mathbb{C}^{2}\Big{)}^{\prime\prime}\) induced by \(\sigma^{\otimes\infty}\), we have \(\tau(a\bar{\phi}(x))=\tau(a)\bar{\sigma}(x)\) for all \(a\in S\), \(x\in\pi_{\sigma^{\otimes\infty}}\big{(}\bigotimes_{n=1}^{\infty}\mathbb{C}^{2}\big{)}^{\prime\prime}\), and \(\tau\in X^{\omega}\). The uniqueness of the standard probability space then provides an isomorphism

\[\pi_{\sigma^{\otimes\infty}}\Big{(}\bigotimes_{n=1}^{\infty}\mathbb{C}^{2}\Big{)}^{\prime\prime}\cong L^{\infty}[0,1] \tag{5.61}\]

which carries \(\bar{\sigma}\) to the trace on \(L^{\infty}[0,1]\) given by integration with respect to the Lebesgue measure.

(v)\(\Rightarrow\)(i): For \(k\geq 1\) and \(1\leq i\leq k\), set \(p_{i}\coloneqq\phi(\chi_{[(i-1)/k,i/k)})\).

**Remark 5.24**.: Using reindexing, as in Remark 5.15, we can see that Definition 5.19 is equivalent to an _a priori_ stronger definition where \(S\) is allowed to be any \(\|\cdot\|_{2,X^{\omega}}\)-separable subset of \(\mathcal{M}^{\omega}\). Similarly, each of (iii), (iv) and (v) is equivalent to an _a priori_ weaker statement with \(S\subseteq\mathcal{M}\).

The local characterisation of property \(\Gamma\) given in Proposition 5.23(ii) provides the following permanence property. Recall that factoriality is preserved by inductive limits by Proposition 3.34.

**Proposition 5.25**.: _Property \(\Gamma\) is preserved by inductive limits of factorial tracially complete \(C^{*}\)-algebras._

**Remark 5.26**.: It is also true that the reduced product of a sequence of factorial tracially complete \(C^{*}\)-algebras with property \(\Gamma\) is again factorial and has property \(\Gamma\), but we do not yet have the machinery to prove the factorial part of this claim. Once the factorial issue is sorted, it is clear that the condition in Proposition 5.23(ii) is preserved by reduced products. See Corollary 7.7(ii).

The definition of property \(\Gamma\) requires that one can tracially divide all elements of a tracially complete \(C^{*}\)-algebra in an approximately central fashion, whereas for a II\({}_{1}\) factor \(\mathcal{M}\) it suffices just to divide the unit (i.e.
(5.56) only needs to hold for \(a=1_{\mathcal{M}^{\omega}}\)). In [22, Proposition 3.2], several of the present authors observed that such a result is also true for separable \(C^{*}\)-algebras with Bauer tracial simplices. The same holds in the tracially complete setting with a very similar proof.

**Proposition 5.27** (cf. [22, Corollary 3.2]).: _Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra such that \(X\) is a Bauer simplex. Suppose that for any \(\|\cdot\|_{2,X}\)-separable subset \(S\subseteq\mathcal{M}\) and \(k\in\mathbb{N}\), there exist pairwise orthogonal projections \(p_{1},\ldots,p_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) with \(\tau(p_{i})=1/k\) for all \(i=1,\ldots,k\). Then \((\mathcal{M},X)\) has property \(\Gamma\)._

Proof.: First assume that \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable, so we can take \(S\coloneqq\mathcal{M}\) in the statement. Using Remark 5.24, it will suffice to show that we automatically have

\[\tau(ap_{i})=\frac{1}{k}\tau(a),\qquad a\in\mathcal{M},\ \tau\in X^{\omega},\ i=1,\ldots,k. \tag{5.62}\]

By [10, Proposition 3.9], \(\mathcal{M}^{\omega}\) is a \(W^{*}\)-bundle over \((\partial_{e}X)^{\omega}\) (in the notation defined before [10, Proposition 3.9]),46 so it suffices to prove (5.62) for \(\tau\in(\partial_{e}X)^{\omega}\), which means it suffices to assume that \(\tau\) is a limit trace coming from a sequence \((\tau_{n})\) of traces in \(\partial_{e}X\) and an ultrafilter \(\omega^{\prime}\supseteq\omega\). In this case, since \(\partial_{e}X\) is compact, \(\tau|_{\mathcal{M}}\in\partial_{e}X\) (as it is the \(\omega^{\prime}\)-limit of the \(\tau_{n}\)). Define \(\sigma\colon\mathcal{M}\to\mathbb{C}\) by \(\sigma(x)\coloneqq\tau(xp_{i})\). This is a positive tracial functional such that \(\sigma(1_{\mathcal{M}})=\frac{1}{k}\); since \(\tau\) is an extreme point in \(T(\mathcal{M})\), we have \(\sigma=\frac{1}{k}\tau\), which amounts to (5.62).

Footnote 46: In [10], \(\omega\) is taken to be an ultrafilter, but the same considerations apply to any free filter on \(\mathbb{N}\).

In the non-separable case, we can use the fact that factoriality is separably inheritable (Theorem A.3(i)) to see that for any \(\|\cdot\|_{2,X}\)-separable subset \(S\) of \(\mathcal{M}\), there exists a \(\|\cdot\|_{2,X}\)-separable factorial tracially complete subalgebra \(\mathcal{M}_{0}\) of \(\mathcal{M}\) which contains \(S\) and satisfies the hypotheses of the proposition and, therefore, has property \(\Gamma\). Consequently, \(\mathcal{M}\) has property \(\Gamma\), as the projections witnessing property \(\Gamma\) for \(\mathcal{M}_{0}\) also witness property \(\Gamma\) for \(\mathcal{M}\).

In the setting of \(W^{*}\)-bundles, abstracting the results of [110, 97, 68], it turns out that an amenable factorial type II\({}_{1}\) tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) with \(\partial_{e}X\) compact and finite dimensional has property \(\Gamma\). This will be contained in the forthcoming work by the third- and fifth-named authors, which will show that a factorial \(W^{*}\)-bundle over a finite dimensional space has property \(\Gamma\) when viewed as a tracially complete \(C^{*}\)-algebra if and only if each fibre has property \(\Gamma\) as a von Neumann algebra. We show here that this holds in the zero dimensional setting.
**Proposition 5.28**.: _If \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra such that \(\partial_{e}X\) is compact and totally disconnected and \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) has property \(\Gamma\) for each \(\tau\in\partial_{e}X\), then \((\mathcal{M},X)\) has property \(\Gamma\). In particular, if \((\mathcal{M},X)\) is an amenable type II\({}_{1}\) factorial tracially complete \(C^{*}\)-algebra such that \(\partial_{e}X\) is compact and totally disconnected, then \((\mathcal{M},X)\) has property \(\Gamma\)._

Proof.: Note that the second statement follows from the first. Indeed, if \((\mathcal{M},X)\) is as in the second statement, then for all \(\tau\in\partial_{e}X\), \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is semidiscrete by Theorem 1.2 and is a II\({}_{1}\) factor by Proposition 3.14. It then follows from [29, Corollary 2.2] that \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) has property \(\Gamma\).

We now prove the first statement. As \(\partial_{e}X\) is compact, \(\mathcal{M}\) can be viewed as a \(W^{*}\)-bundle over \(K\coloneqq\partial_{e}X\) by Theorem 3.37. Let \(C(K)\subseteq\mathcal{M}\) be the corresponding inclusion. Fix a finite set \(\mathcal{F}\subset\mathcal{M}\), \(\epsilon>0\), and \(k\in\mathbb{N}\). By Proposition 5.27, it suffices to show that there are positive contractions \(p_{1},\dots,p_{k}\in\mathcal{M}\) such that

(i) \(\|[p_{i},a]\|_{2,X}<\epsilon\) for all \(a\in\mathcal{F}\) and \(i=1,\dots,k\),

(ii) \(\|p_{i}-p_{i}^{2}\|_{2,X}<\epsilon\) for all \(i=1,\dots,k\),

(iii) \(\|p_{i}p_{i^{\prime}}\|_{2,X}<\epsilon\) for \(i,i^{\prime}=1,\dots,k\) with \(i\neq i^{\prime}\), and

(iv) \(|\tau(p_{i})-1/k|<\epsilon\) for all \(\tau\in X\) and \(i=1,\dots,k\).

For each \(\tau\in K\), as the II\({}_{1}\) factor \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) satisfies property \(\Gamma\), there are mutually orthogonal projections \(\bar{p}_{1,\tau},\dots,\bar{p}_{k,\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\) with trace \(1/k\) such that

\[\|[\bar{p}_{i,\tau},\pi_{\tau}(a)]\|_{2,\tau}<\epsilon,\qquad a\in\mathcal{F},\ i=1,\dots,k. \tag{5.63}\]

Using Kaplansky's density theorem to approximate each \(\bar{p}_{i,\tau}\) by a positive contraction in \(\mathcal{M}\) in the norm \(\|\cdot\|_{2,\tau}\) together with the compactness of \(K\), there is a finite open cover \(U_{1},\dots,U_{n}\) of \(K\) and positive contractions \(p_{i,j}\in\mathcal{M}\) for \(i=1,\dots,k\) and \(j=1,\dots,n\) such that

(i) \(\|[p_{i,j},a]\|_{2,U_{j}}<\epsilon\) for all \(a\in\mathcal{F}\), \(i=1,\dots,k\), and \(j=1,\dots,n\),

(ii) \(\|p_{i,j}-p_{i,j}^{2}\|_{2,U_{j}}<\epsilon\) for all \(i=1,\dots,k\) and \(j=1,\dots,n\),

(iii) \(\|p_{i,j}p_{i^{\prime},j}\|_{2,U_{j}}<\epsilon\) for \(i,i^{\prime}=1,\dots,k\) with \(i\neq i^{\prime}\) and \(j=1,\dots,n\), and

(iv) \(|\tau(p_{i,j})-1/k|<\epsilon\) for all \(\tau\in U_{j}\) and \(i=1,\dots,k\).

As \(K\) is totally disconnected, after refining the open cover, we may assume that the sets \(U_{1},\dots,U_{n}\) form a clopen partition of \(K\). Then the elements

\[p_{i}\coloneqq\sum_{j=1}^{n}\chi_{U_{j}}p_{i,j}\in\mathcal{M},\qquad i=1,\dots,k, \tag{5.64}\]

are positive contractions satisfying (i)-(iv). Indeed, each \(\tau\in K\) lies in exactly one \(U_{j}\), where \(\chi_{U_{j^{\prime}}}\) takes the value \(\delta_{jj^{\prime}}\), so \(\|p_{i}-p_{i,j}\|_{2,\tau}=0\) and the estimates (i)-(iv) at \(\tau\) reduce to the corresponding local estimates; they pass from \(K\) to all of \(X\) via the Krein-Milman theorem.

It remains open whether the approximately central division of the unit as in Proposition 5.27 implies property \(\Gamma\) outside the setting of Bauer simplices ([22, Question 3.5]). This question is just as valid in the tracially complete framework.
**Question 5.29**.: Does Proposition 5.27 hold for all factorial tracially complete \(C^{*}\)-algebras (i.e. without assuming that \(X\) is a Bauer simplex)?

Immediately following [22, Proposition 5.10], the second-, third-, and last two named authors asserted that all the Villadsen algebras of the first type - namely the examples from [114], which were used by Toms to give striking counterexamples to Elliott's original classification conjecture ([109]) - have property \(\Gamma\). These algebras are so-called diagonal AH algebras (i.e. inductive limits of homogeneous algebras of a particularly nice form), and they provide a fertile testing ground for comparing regularity properties of very different natures, with the analysis in [112] leading to a positive solution to the Toms-Winter conjecture on this class of algebras. Uniform tracial completions of diagonal AH algebras satisfy the central divisibility-of-the-unit hypothesis of Proposition 5.27 (this is [22, Proposition 5.10]), and the assertion that Villadsen algebras of the first type have property \(\Gamma\) was based on [112, Section 8] (where it is suggested, based on computations in [108, Theorem 4.1], that these have Bauer simplices of traces).47 However, recent results of Elliott, Li, and Niu ([36, Theorem 4.5]) show that, in fact, the trace space of a Villadsen algebra of the first type is the Poulsen simplex (whenever the algebra is not \(\mathcal{Z}\)-stable) - maximally far from being Bauer! Accordingly, the argument in [22] is not valid, and it does not seem straightforward to determine whether the Villadsen type I algebras have property \(\Gamma\).

Footnote 47: In [22], the second-, third-, and last two named authors incorrectly referenced the non-existent [109, Theorem 4.1] rather than [108, Theorem 4.1].

**Question 5.30**.: Do the uniform tracial completions of the (non-\(\mathcal{Z}\)-stable) Villadsen algebras of the first type have property \(\Gamma\)?

The connecting maps in the inductive limit construction of a Villadsen type I algebra involve a combination of coordinate projections and point evaluations, the latter of which ensure simplicity of the inductive limit. In the case of a non-\(\mathcal{Z}\)-stable Villadsen type I algebra \(A\), the point evaluations are relatively sparse (see [112, Theorem 3.4 and Lemma 5.1]). As such, they do not affect the structure of the tracial completion of \(A\). For this reason, the heart of Question 5.30 lies in determining whether property \(\Gamma\) holds for the tracially complete inductive limit of the trivial \(W^{*}\)-bundles

\[C([0,1])\to C([0,1]^{2},M_{2})\to C([0,1]^{4},M_{4})\to\cdots, \tag{5.65}\]

where each stage is equipped with all its traces, and the connecting maps are given as the direct sum of the two coordinate projections.

## 6. Complemented partitions of unity

This section formally introduces the notion of CPoU for factorial tracially complete \(C^{*}\)-algebras as discussed in Section 1.4. The definition of CPoU is given in Section 6.1. Some examples of tracially complete \(C^{*}\)-algebras with CPoU can be found in Section 6.2, and permanence properties are discussed in Section 6.3. The proof of Theorem 1.4 (property \(\Gamma\) implies CPoU) will be given in Section 6.4.

### Formal definition and its reformulations

The following definition extends the notion of CPoU introduced in [23] in the \(C^{*}\)-setting to factorial tracially complete \(C^{*}\)-algebras.
More precisely, a separable \(C^{*}\)-algebra \(A\) with \(T(A)\) compact has CPoU in the sense of [23] if and only if its tracial completion with respect to \(T(A)\) has CPoU in the sense of Definition 6.1 below; this is immediate from the definition and Proposition 5.7, which identifies the uniform \(2\)-norm central sequence algebra of \(A\) with that of its uniform tracial completion. Recall our convention that (unless specified otherwise) \(\omega\) denotes a free filter on \(\mathbb{N}\). It will follow from the local characterisation in Proposition 6.2 that Definition 6.1 does not depend on the choice of filter \(\omega\).

**Definition 6.1** (cf. [23, Definition 3.1]).: Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra. We say that \((\mathcal{M},X)\) has _complemented partitions of unity_ (CPoU) if for any \(\|\cdot\|_{2,X}\)-separable subset \(S\subseteq\mathcal{M}\), any family \(a_{1},\ldots,a_{k}\) of positive elements in \(\mathcal{M}\), and any scalar

\[\delta>\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i}), \tag{6.1}\]

there exist orthogonal projections \(p_{1},\ldots,p_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) summing to \(1_{\mathcal{M}^{\omega}}\) such that

\[\tau(a_{i}p_{i})\leq\delta\tau(p_{i}),\qquad\tau\in X^{\omega},\ i=1,\ldots,k. \tag{6.2}\]

We note that when \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable, it suffices to take \(S\coloneqq\mathcal{M}\). As with property \(\Gamma\), we have chosen to restrict the definition to the setting of factorial tracially complete \(C^{*}\)-algebras since it is not clear if this definition is appropriate outside of the factorial setting.

The following proposition provides an equivalent formulation of CPoU that avoids the language of reduced products. Moreover, a reindexing argument shows that Definition 6.1 is equivalent to an _a priori_ stronger definition where \(a_{1},\ldots,a_{k}\in\mathcal{M}^{\omega}\) and \(S\) can be any \(\|\cdot\|_{2,X^{\omega}}\)-separable subspace of \(\mathcal{M}^{\omega}\).

**Proposition 6.2**.: _Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra. Then the following are equivalent:_

(i) _\((\mathcal{M},X)\) has CPoU;_

(ii) _for every finite set_ \(\mathcal{F}\subseteq\mathcal{M}\)_,_ \(\epsilon>0\)_,_ \(a_{1},\ldots,a_{k}\in\mathcal{M}_{+}\)_, and_

\[\delta>\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i}), \tag{6.3}\]

_there exist orthogonal positive contractions_ \(e_{1},\ldots,e_{k}\in\mathcal{M}\) _such that_

\[\begin{split}&\Big{\|}\sum_{j=1}^{k}e_{j}-1_{\mathcal{M}}\Big{\|}_{2,X}<\epsilon,\\ &\max_{x\in\mathcal{F}}\|[e_{i},x]\|_{2,X}<\epsilon,\ \ i=1,\ldots,k,\\ &\tau(a_{i}e_{i})-\delta\tau(e_{i})<\epsilon,\ \ \tau\in X,\ i=1,\ldots,k;\end{split} \tag{6.4}\]

(iii) _for every_ \(\|\cdot\|_{2,X^{\omega}}\)_-separable subset_ \(S\subseteq\mathcal{M}^{\omega}\)_, positive elements_ \(a_{1},\ldots,a_{k}\in\mathcal{M}^{\omega}\)_, and_

\[\delta>\sup_{\tau\in X^{\omega}}\min_{1\leq i\leq k}\tau(a_{i}), \tag{6.5}\]

_there exist orthogonal projections_ \(p_{1},\ldots,p_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) _summing to_ \(1_{\mathcal{M}^{\omega}}\) _such that_

\[\tau(a_{i}p_{i})\leq\delta\tau(p_{i}),\qquad\tau\in X^{\omega},\ i=1,\ldots,k. \tag{6.6}\]

Proof.: (i)\(\Rightarrow\)(ii): Let \(\epsilon>0\), let \(a_{1},\ldots,a_{k}\in\mathcal{M}_{+}\), let \(\mathcal{F}\subseteq\mathcal{M}\) be a finite subset, and let \(\delta>0\) be such that (6.1) holds.
Since \((\mathcal{M},X)\) has CPoU, we obtain orthogonal projections \(p_{1},\ldots,p_{k}\in\mathcal{M}^{\omega}\cap\mathcal{F}^{\prime}\) summing to \(1_{\mathcal{M}^{\omega}}\) such that (6.2) holds. We can lift \(p_{1},\ldots,p_{k}\in\mathcal{M}^{\omega}\) to orthogonal positive contractions \(e^{(1)},\dots,e^{(k)}\in\ell^{\infty}(\mathcal{M})\) by [1, Proposition 2.6] (this is a special case of projectivity of cones over finite dimensional \(C^{*}\)-algebras; see [72, Corollary 3.8]). Write \((e_{j}^{(i)})_{j=1}^{\infty}\) for the sequence of positive contractions representing \(e^{(i)}\) for \(i=1,\dots,k\). Note that for each \(j\in\mathbb{N}\), \(e_{j}^{(1)},\dots,e_{j}^{(k)}\) are orthogonal positive contractions. Since the \(p_{i}\) sum to \(1_{\mathcal{M}^{\omega}}\) and commute with \(\mathcal{F}\), we have

\[\limsup_{j\to\omega}\left\|e_{j}^{(1)}+\dots+e_{j}^{(k)}-1_{\mathcal{M}}\right\|_{2,X}=0, \tag{6.7}\]

and

\[\limsup_{j\to\omega}\|[e_{j}^{(i)},x]\|_{2,X}=0,\quad x\in\mathcal{F},\ i=1,\dots,k. \tag{6.8}\]

Moreover, we must have

\[\limsup_{j\to\omega}\sup_{\tau\in X}\left(\tau\big{(}a_{i}e_{j}^{(i)}\big{)}-\delta\tau\big{(}e_{j}^{(i)}\big{)}\right)\leq 0,\quad i=1,\dots,k, \tag{6.9}\]

as otherwise we could choose a sequence of traces \((\tau_{j})_{j=1}^{\infty}\) in \(X\) and an ultrafilter \(\omega^{\prime}\supseteq\omega\) such that the corresponding limit trace \(\tau\) does not satisfy \(\tau(a_{i}p_{i})\leq\delta\tau(p_{i})\). Therefore, taking \(e_{i}\coloneqq e_{j}^{(i)}\) for a suitable choice of \(j\), the orthogonal positive contractions \(e_{1},\dots,e_{k}\) will satisfy (6.4).

(ii)\(\Rightarrow\)(iii): Let \(S\subseteq\mathcal{M}^{\omega}\) be a \(\|\cdot\|_{2,X^{\omega}}\)-separable subset and fix a countable dense subset \(\{s^{(i)}:i\in\mathbb{N}\}\) of \(S\). For each \(i\in\mathbb{N}\), let \((s_{j}^{(i)})_{j=1}^{\infty}\) be a sequence representing \(s^{(i)}\). Let \(a_{1},\dots,a_{k}\in\mathcal{M}^{\omega}\) be positive and let \(\delta>0\) satisfy (6.5); after rescaling the \(a_{i}\) and \(\delta\) simultaneously, we may assume each \(a_{i}\) is a contraction. Let \((a_{j}^{(i)})_{j=1}^{\infty}\) be a sequence of positive contractions representing \(a_{i}\) for \(i=1,\dots,k\). Then we have

\[\limsup_{j\to\omega}\sup_{\tau\in X}\min_{1\leq i\leq k}\tau\big{(}a_{j}^{(i)}\big{)}<\delta. \tag{6.10}\]

Therefore, there exists a set \(J\in\omega\) such that for all \(j\in J\), we have

\[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau\big{(}a_{j}^{(i)}\big{)}<\delta. \tag{6.11}\]

For each \(j\in\mathbb{N}\), set \(\epsilon_{j}\coloneqq 2^{-j}\) and \(\mathcal{F}_{j}\coloneqq\{s_{j}^{(i)}:i\leq j\}\). Applying (ii) for each \(j\in J\), we obtain orthogonal positive contractions \(e_{j}^{(1)},\dots,e_{j}^{(k)}\) such that

\[\begin{split}&\Big{\|}\sum_{i=1}^{k}e_{j}^{(i)}-1_{\mathcal{M}}\Big{\|}_{2,X}<\epsilon_{j},\\ &\max_{x\in\mathcal{F}_{j}}\big{\|}\big{[}e_{j}^{(i)},x\big{]}\big{\|}_{2,X}<\epsilon_{j},\ \ i=1,\dots,k,\\ &\tau\big{(}a_{j}^{(i)}e_{j}^{(i)}\big{)}-\delta\tau\big{(}e_{j}^{(i)}\big{)}<\epsilon_{j},\ \ \tau\in X,\ i=1,\dots,k.\end{split} \tag{6.12}\]

For \(j\not\in J\), we may choose \(e_{j}^{(1)},\dots,e_{j}^{(k)}\) arbitrarily. Let \(p_{i}\in\mathcal{M}^{\omega}\) be the element represented by \(\big{(}e_{j}^{(i)}\big{)}_{j=1}^{\infty}\). Then (6.12) ensures that the \(p_{i}\) sum to \(1_{\mathcal{M}^{\omega}}\), commute with \(S\), and satisfy (6.6). Since \(p_{1},\dots,p_{k}\) are orthogonal positive contractions summing to the identity, they are in fact projections.
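Spelled out, the final observation is the one-line computation

\[p_{i}=p_{i}\cdot 1_{\mathcal{M}^{\omega}}=p_{i}\sum_{j=1}^{k}p_{j}=p_{i}^{2},\qquad i=1,\dots,k,\]

since the cross terms \(p_{i}p_{j}\) with \(i\neq j\) vanish by orthogonality.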
(iii)\(\Rightarrow\)(i): If \(S\subseteq\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable, then viewed as a subset of \(\mathcal{M}^{\omega}\), it is \(\|\cdot\|_{2,X^{\omega}}\)-separable. If \(a_{1},\dots,a_{k}\in\mathcal{M}_{+}\), we have

\[\sup_{\tau\in X^{\omega}}\min_{1\leq i\leq k}\tau(a_{i})=\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i}). \tag{6.13}\]

Therefore, (i) is a special case of (iii).

### First examples and non-examples

The main source of examples of factorial tracially complete \(C^{*}\)-algebras with CPoU comes from Theorem 1.4, which, as a special case, implies that the uniform tracial completion of a separable \(\mathcal{Z}\)-stable \(C^{*}\)-algebra with respect to a compact face of traces will have CPoU. In this subsection, we shall investigate some more elementary examples of tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) with CPoU in the case when \(X\) is a Bauer simplex. We will also prove a special case of Theorem 1.4 showing that \(\Gamma\) implies CPoU in the case of \(W^{*}\)-bundles (Proposition 6.7), which is easier and more conceptual than the general result taken up in Section 6.4. We also provide examples of type II\({}_{1}\) factorial tracially complete \(C^{*}\)-algebras without CPoU in Example 6.6.

In the setting of \(W^{*}\)-bundles with totally disconnected base space, a strong form of CPoU holds: one can take the partition of unity to be central as opposed to approximately central.

**Proposition 6.3**.: _Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra such that \(\partial_{e}X\) is compact and totally disconnected. Then for all \(a_{1},\dots,a_{k}\in\mathcal{M}_{+}\) and_

\[\delta>\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i}), \tag{6.14}\]

_there are projections \(p_{1},\dots,p_{k}\in Z(\mathcal{M})\) summing to \(1_{\mathcal{M}}\) such that \(\tau(a_{i}p_{i})\leq\delta\tau(p_{i})\) for all \(\tau\in X\) and \(i=1,\dots,k\). In particular, \((\mathcal{M},X)\) has CPoU._

Proof.: It suffices to prove the first part since for any \(\|\cdot\|_{2,X}\)-separable subset \(S\subseteq\mathcal{M}\), we have \(Z(\mathcal{M})\subseteq\mathcal{M}^{\omega}\cap S^{\prime}\). For \(i=1,\dots,k\), define

\[U_{i}\coloneqq\{\tau\in\partial_{e}X:\tau(a_{i})<\delta\}, \tag{6.15}\]

so that \(U_{1},\dots,U_{k}\) is an open cover of \(\partial_{e}X\). By total disconnectedness, we may partition \(\partial_{e}X\) into clopen sets \(V_{1},\dots,V_{k}\) such that \(V_{i}\subseteq U_{i}\) for \(i=1,\dots,k\). As \(\mathcal{M}\) is factorial, \(X\) is a closed face of \(T(\mathcal{M})\) and so is a Choquet simplex in its own right, which is then Bauer from the assumption that \(\partial_{e}X\) is compact. By Theorem 3.37, there is a natural embedding \(C(\partial_{e}X)\subseteq Z(\mathcal{M})\). Let us define

\[p_{i}\coloneqq\chi_{V_{i}}\in C(\partial_{e}X)\subseteq Z(\mathcal{M}). \tag{6.16}\]

For \(\tau\in\partial_{e}X\), the element \(p_{i}\in C(\partial_{e}X)\subseteq\mathcal{M}\) lies in the multiplicative domain of \(\tau\), and hence \(\tau(a_{i}p_{i})=\tau(a_{i})\tau(p_{i})\). Since \(\tau(p_{i})=0\) for \(\tau\in\partial_{e}X\setminus V_{i}\) and \(\tau(a_{i})<\delta\) for \(\tau\in V_{i}\), we have \(\tau(a_{i}p_{i})\leq\delta\tau(p_{i})\) for all \(i=1,\dots,k\) and \(\tau\in\partial_{e}X\). It then follows from the Krein-Milman theorem that \(\tau(a_{i}p_{i})\leq\delta\tau(p_{i})\) for all \(i=1,\dots,k\) and \(\tau\in X\).
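Spelled out, the final step uses that \(\tau\mapsto\tau(a_{i}p_{i})-\delta\tau(p_{i})\) is a weak\({}^{*}\)-continuous affine function on \(X\) which is nonpositive on \(\partial_{e}X\); since \(X=\overline{\operatorname{conv}}(\partial_{e}X)\) by the Krein-Milman theorem, it is nonpositive on all of \(X\).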
A noteworthy special case of Proposition 6.3 is when \(\mathcal{M}\) is a finite dimensional \(C^{*}\)-algebra and \(X=T(\mathcal{M})\). This is easily seen to be a tracially complete \(C^{*}\)-algebra as the norms \(\|\cdot\|_{2,X}\) and \(\|\cdot\|\) are equivalent. Although this example is fairly trivial, it played a notable role in the proof of [23, Theorem 3.8], which could be viewed as bootstrapping CPoU from the finite dimensional setting (where it is easy to verify) to the nuclear setting with the aid of uniform property \(\Gamma\). The proof of Theorem 1.4 will similarly make use of the following special case of Proposition 6.3.

**Corollary 6.4**.: _If \(\mathcal{M}\) is a finite von Neumann algebra, then the factorial tracially complete \(C^{*}\)-algebra \(\big{(}\mathcal{M},T(\mathcal{M})\big{)}\) has CPoU. In fact, if \(a_{1},\dots,a_{k}\in\mathcal{M}_{+}\) and_

\[\delta>\sup_{\tau\in T(\mathcal{M})}\min_{1\leq i\leq k}\tau(a_{i}), \tag{6.17}\]

_then there are projections \(p_{1},\dots,p_{k}\in Z(\mathcal{M})\) summing to \(1_{\mathcal{M}}\) such that \(\tau(a_{i}p_{i})\leq\delta\tau(p_{i})\) for all \(\tau\in T(\mathcal{M})\) and \(i=1,\dots,k\)._

Proof.: As all traces on \(\mathcal{M}\) factor through the centre-valued trace \(\mathcal{M}\to Z(\mathcal{M})\) (Proposition 2.8(i)), composition with the centre-valued trace produces a homeomorphism from the Gelfand spectrum of \(Z(\mathcal{M})\) to \(\partial_{e}T(\mathcal{M})\). In particular, \(\partial_{e}T(\mathcal{M})\) is compact and totally disconnected (in fact hyperstonean) as \(Z(\mathcal{M})\) is a von Neumann algebra, and so the result follows from Proposition 6.3.

In the next proposition, we consider what it means for a trivial \(W^{*}\)-bundle to have CPoU. (See Example 3.7 for the definition of trivial \(W^{*}\)-bundles.) The heuristic idea is that for a \(W^{*}\)-bundle over a space that is not totally disconnected, the approximately central projections in Definition 6.1 cannot come from the centre of the algebra, and hence any construction of approximately central projections must make use of central sequences in the fibres. In particular, if there are no central sequences in the fibres, then the \(W^{*}\)-bundle must not have CPoU.

**Proposition 6.5**.: _Let \((\mathcal{M},X)\) be the trivial \(W^{*}\)-bundle over a compact Hausdorff space \(K\) with fibre a finite factor \(\mathcal{N}\). If \((\mathcal{M},X)\) has CPoU, then either \(K\) is totally disconnected or \(\mathcal{N}\) satisfies property \(\Gamma\)._

Proof.: Suppose \((\mathcal{M},X)\) satisfies CPoU and \(K\) is not totally disconnected. Let \(\tau_{\mathcal{N}}\) be the trace on \(\mathcal{N}\) and let \(S\subseteq\mathcal{N}\) be a \(\|\cdot\|_{2,\tau_{\mathcal{N}}}\)-separable subset. We work to show that there is a projection \(q\in\mathcal{N}^{\omega}\cap S^{\prime}\) with trace \(1/2\). This will show \(\mathcal{N}\) satisfies property \(\Gamma\) - this is a standard von Neumann algebra result; it is also a special case of the implication (iii)\(\Rightarrow\)(i) of Proposition 5.23, for example. Fix a point \(x\in K\) and an open neighbourhood \(U\subseteq K\) of \(x\) such that there is no clopen neighbourhood of \(x\) contained in \(U\). Let \(a_{1}\in C(K)\subseteq\mathcal{M}\) be such that \(0\leq a_{1}\leq 1_{\mathcal{M}}\), \(a_{1}(x)=1\), and \(\operatorname{supp}(a_{1})\subseteq U\), and set \(a_{2}\coloneqq 1_{\mathcal{M}}-a_{1}\). Then

\[\sup_{\tau\in X}\min\{\tau(a_{1}),\tau(a_{2})\}\leq\frac{1}{2}<\frac{2}{3}. \tag{6.18}\]
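Spelled out, the bound \(\frac{1}{2}\) in (6.18) holds because

\[\tau(a_{1})+\tau(a_{2})=\tau(1_{\mathcal{M}})=1,\qquad\tau\in X,\]

so at least one of \(\tau(a_{1})\) and \(\tau(a_{2})\) is at most \(\frac{1}{2}\).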
Let \(\omega\) be a free ultrafilter on \(\mathbb{N}\) and view \(S\subseteq\mathcal{M}\) as constant functions. Since \((\mathcal{M},X)\) has CPoU, there are projections \(p_{1},p_{2}\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \(p_{1}+p_{2}=1_{\mathcal{M}^{\omega}}\) and

\[\tau(a_{i}p_{i})\leq\frac{2}{3}\tau(p_{i}),\qquad i=1,2,\ \tau\in X^{\omega}. \tag{6.19}\]

For \(i=1,2\), let \((p_{i,n})_{n=1}^{\infty}\subseteq\mathcal{M}\) be a sequence of positive contractions representing \(p_{i}\). Let \(\tau\in X^{\omega}\) be the limit trace determined by the constant sequence \((\tau_{\mathcal{N}}\circ\operatorname{ev}_{x})_{n=1}^{\infty}\subseteq X\). Then \(\tau(a_{1})=1\), and hence \(\tau(a_{2})=0\). Since \(a_{2}\geq 0\), the Cauchy-Schwarz inequality implies \(\tau(a_{2}p_{1})=0\). So \(\tau(a_{1}p_{1})=\tau(p_{1})\). Now, (6.19) implies \(\tau(p_{1})=0\). Therefore,

\[\lim_{n\to\omega}\tau_{\mathcal{N}}(p_{1,n}(x))=0. \tag{6.20}\]

Similarly, consider a sequence \((y_{n})_{n=1}^{\infty}\subseteq K\setminus U\), and let \(\tau\in X^{\omega}\) be the limit trace defined by the sequence \((\tau_{\mathcal{N}}\circ\operatorname{ev}_{y_{n}})_{n=1}^{\infty}\subseteq X\). Then \(\tau(a_{2})=1\), and as above, this shows \(\tau(p_{2})=0\). This implies \(\tau(p_{1})=1\). As this holds for all sequences \((y_{n})_{n=1}^{\infty}\subseteq K\setminus U\), it follows that

\[\lim_{n\to\omega}\inf_{y\in K\setminus U}\tau_{\mathcal{N}}(p_{1,n}(y))=1. \tag{6.21}\]

Define

\[N\coloneqq\Big{\{}n\in\mathbb{N}:\tau_{\mathcal{N}}(p_{1,n}(x))<\frac{1}{2},\ \inf_{y\in K\setminus U}\tau_{\mathcal{N}}(p_{1,n}(y))>\frac{1}{2}\Big{\}}, \tag{6.22}\]

and note that \(N\in\omega\) by (6.20) and (6.21). For each \(n\in N\), there is a \(y_{n}\in K\) such that \(\tau_{\mathcal{N}}(p_{1,n}(y_{n}))=1/2\) since otherwise

\[\Big{\{}y\in K:\tau_{\mathcal{N}}(p_{1,n}(y))<\frac{1}{2}\Big{\}}\subseteq K \tag{6.23}\]

is a clopen neighbourhood of \(x\) contained in \(U\). For \(n\in N\), set \(q_{n}\coloneqq p_{1,n}(y_{n})\in\mathcal{N}\) and for \(n\in\mathbb{N}\setminus N\), set \(q_{n}\coloneqq 0\in\mathcal{N}\). Then the sequence \((q_{n})_{n=1}^{\infty}\) defines a projection \(q\in\mathcal{N}^{\omega}\cap S^{\prime}\) with trace \(1/2\).

Proposition 6.5 provides us with explicit examples of tracially complete \(C^{*}\)-algebras that do not have CPoU.

**Example 6.6**.: For any \(n\geq 2\), the trivial \(W^{*}\)-bundle \(C_{\sigma}([0,1],L(\mathbb{F}_{n}))\) does not have CPoU. Indeed, the II\({}_{1}\) factors \(L(\mathbb{F}_{n})\) do not have property \(\Gamma\) ([83]; see also [100, Theorem A.7.2]), and hence the result follows from Proposition 6.5.

The converse of Proposition 6.5 is also true. If \((\mathcal{M},X)\) is a trivial \(W^{*}\)-bundle whose fibre is a II\({}_{1}\) factor with property \(\Gamma\), then it is easy to show \((\mathcal{M},X)\) has property \(\Gamma\), and then Theorem 1.4 will show that \((\mathcal{M},X)\) has CPoU. In fact, in the \(W^{*}\)-bundle case we can give a more direct proof that property \(\Gamma\) implies CPoU. In light of Theorem 3.37, the following result is just the special case of Theorem 1.4 when \(X\) is a Bauer simplex. As the proof is more conceptual in this special case, we include it here.

**Proposition 6.7**.: _Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra coming from a \(W^{*}\)-bundle over the space \(K=\partial_{e}X\). Suppose \((\mathcal{M},X)\) has property \(\Gamma\).
Then \((\mathcal{M},X)\) has CPoU._

Proof.: Fix an ultrafilter \(\omega\) on \(\mathbb{N}\). By the Krein-Milman theorem and Proposition 3.3, we have that \(\|a\|_{2,X}=\|a\|_{2,K}\) for all \(a\in\mathcal{M}\). Moreover, writing \(K^{\omega}\) for the weak\({}^{*}\)-closure of the set of limit traces coming from a sequence of traces in \(K\), we have \(\|a\|_{2,X^{\omega}}=\|a\|_{2,K^{\omega}}\) for all \(a\in\mathcal{M}^{\omega}\). Let \(a_{1},\ldots,a_{k}\in\mathcal{M}_{+}\) and \(\delta>0\) satisfy

\[\delta>\sup_{\tau\in K}\min_{1\leq i\leq k}\tau(a_{i}). \tag{6.24}\]

Let \(S\subseteq\mathcal{M}\) be a \(\|\cdot\|_{2,X}\)-separable subset, which we may assume contains \(a_{1},\ldots,a_{k}\). Let \(U_{i}\coloneqq\{\tau\in K:\tau(a_{i})<\delta\}\) for \(i=1,\ldots,k\). Then \(\{U_{1},\ldots,U_{k}\}\) is an open cover of \(K\). Let \(g_{1},\dots,g_{k}\colon K\to[0,1]\) be a continuous partition of unity subordinate to this open cover. Set \(h_{0}\coloneqq 0\) and \(h_{i}\coloneqq\sum_{j=1}^{i}g_{j}\) for \(i=1,\dots,k\).

Since \(\mathcal{M}\) has property \(\Gamma\), by (i)\(\Rightarrow\)(v) of Proposition 5.23, there exists a \({}^{*}\)-homomorphism \(\phi\colon L^{\infty}[0,1]\to\mathcal{M}^{\omega}\cap S^{\prime}\) such that

\[\tau(a\phi(f))=\tau(a)\mathrm{tr}_{\mathrm{Leb}}(f) \tag{6.25}\]

for all \(a\in S\), \(f\in L^{\infty}[0,1]\) and \(\tau\in X^{\omega}\), where \(\mathrm{tr}_{\mathrm{Leb}}\) is integration with respect to the Lebesgue measure. Since \(\mathcal{M}\) is a \(W^{*}\)-bundle there is a canonical copy of \(C(K)\) in \(Z(\mathcal{M})\). As the image of \(\phi\) commutes with \(C(K)\subseteq Z(\mathcal{M}^{\omega})\), we have an induced \({}^{*}\)-homomorphism \(\psi:C(K)\otimes L^{\infty}[0,1]\to\mathcal{M}^{\omega}\cap S^{\prime}\). Let \(\tau\) be a limit trace given by a sequence of traces \((\tau_{n})_{n=1}^{\infty}\) in \(K\). Then, since every \(\tau_{n}\in K\) restricts to a point evaluation on \(C(K)\), we see that

\[\begin{split}\tau(a\psi(g\otimes f))&=\lim_{n\to\omega}\tau_{n}(a)g(\tau_{n})\mathrm{tr}_{\mathrm{Leb}}(f)\\ &=\lim_{n\to\omega}\tau_{n}(a)\mathrm{tr}_{\mathrm{Leb}}(g(\tau_{n})f)\end{split} \tag{6.26}\]

for all \(a\in S\), \(g\in C(K)\), and \(f\in L^{\infty}[0,1]\). Identifying \(C(K)\otimes L^{\infty}[0,1]\) with \(C(K,L^{\infty}[0,1])\), we have

\[\tau(a\psi(F))=\lim_{n\to\omega}\tau_{n}(a)\mathrm{tr}_{\mathrm{Leb}}(F(\tau_{n})) \tag{6.27}\]

for all \(a\in S\) and \(F\in C(K,L^{\infty}[0,1])\). In particular, we have \(\|\psi(F)\|_{2,K^{\omega}}=\sup_{\tau\in K}\|F(\tau)\|_{2,\tau_{\mathrm{Leb}}}\). It follows that \(\psi\) extends to an embedding \(C_{\sigma}(K,(L^{\infty}[0,1],\tau_{\mathrm{Leb}}))\to\mathcal{M}^{\omega}\cap S^{\prime}\) (which we also denote by \(\psi\)) of the trivial \(W^{*}\)-bundle with fibre \(L^{\infty}[0,1]\) such that (6.27) holds for all \(a\in S\) and \(F\in C_{\sigma}(K,(L^{\infty}[0,1],\tau_{\mathrm{Leb}}))\). Using \(\chi_{[h_{i-1},h_{i}]}\colon K\to L^{\infty}([0,1])\) to denote the function \(\tau\mapsto\chi_{[h_{i-1}(\tau),h_{i}(\tau)]}\), this function is in \(C_{\sigma}(K,(L^{\infty}[0,1],\tau_{\mathrm{Leb}}))\), and so we can define

\[p_{i}\coloneqq\psi(\chi_{[h_{i-1},h_{i}]}),\qquad i=1,\dots,k. \tag{6.28}\]

Then \(p_{1},\dots,p_{k}\) are orthogonal projections in \(\mathcal{M}^{\omega}\cap S^{\prime}\) summing to \(1_{\mathcal{M}^{\omega}}\). Let \(\tau\) be a limit trace given by a sequence of traces \((\tau_{n})_{n=1}^{\infty}\) in \(K\).
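For later use, note that for every \(\tau\in K\),

\[\mathrm{tr}_{\mathrm{Leb}}\big(\chi_{[h_{i-1},h_{i}]}(\tau)\big)=h_{i}(\tau)-h_{i-1}(\tau)=g_{i}(\tau),\qquad i=1,\dots,k;\]

this is the identity used in the second line of the computation below.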
Then

\[\begin{split}\tau(a_{i}p_{i})&=\lim_{n\to\omega}\tau_{n}(a_{i})\mathrm{tr}_{\mathrm{Leb}}(\chi_{[h_{i-1},h_{i}]}(\tau_{n}))\\ &=\lim_{n\to\omega}\tau_{n}(a_{i})g_{i}(\tau_{n})\\ &\leq\lim_{n\to\omega}\delta g_{i}(\tau_{n})\\ &=\delta\tau(p_{i}),\end{split} \tag{6.29}\]

where the inequality in the third line holds as for any \(\tau_{n}\in K\), either \(\tau_{n}(a_{i})<\delta\) or \(g_{i}(\tau_{n})=0\) because \(g_{1},\dots,g_{k}\) is a partition of unity subordinate to \(\{U_{1},\dots,U_{k}\}\). An application of the Krein-Milman theorem shows that (6.29) holds for all \(\tau\in X^{\omega}\). Therefore, \(\mathcal{M}\) has CPoU.

### Permanence properties of CPoU

We will show CPoU is preserved by inductive limits and matrix amplifications. CPoU is also preserved under reduced products of tracially complete \(C^{*}\)-algebras, but the proof will be deferred to Section 7 (see Remark 6.13). We begin by showing that CPoU passes to "quotients" in the sense that restricting to a closed face of traces (and completing in the corresponding uniform 2-norm) preserves CPoU (cf. Corollary 6.10). We isolate the following lemma, which will go into the proof of Proposition 6.9, as well as being recycled in Appendix A.

**Lemma 6.8**.: _If \(\mathcal{M}\) is a unital \(C^{*}\)-algebra, \(Y\subseteq T(\mathcal{M})\) is a closed face, and \(S\subseteq\mathcal{M}\) is \(\|\cdot\|\)-separable, then there is a separable unital \(C^{*}\)-algebra \(A\subseteq\mathcal{M}\) containing \(S\) such that_

\[Y_{A}\coloneqq\{\tau|_{A}:\tau\in Y\}\subseteq T(A) \tag{6.30}\]

_is a closed face in \(T(A)\)._

Proof.: We will construct a sequence \((A_{n})_{n=1}^{\infty}\) of separable unital \(C^{*}\)-subalgebras of \(\mathcal{M}\) containing \(S\) and self-adjoint elements \(a_{n}\in A_{n+1}\) such that for each \(n\geq 1\),

(i) \(A_{n}\subseteq A_{n+1}\),

(ii) every trace on \(A_{n}\) extends to a trace on \(\mathcal{M}\),

(iii) \(\tau(a_{n})\geq 0\) for all \(\tau\in T(\mathcal{M})\), and

(iv) if \(Y_{n}\coloneqq\{\tau|_{A_{n}}:\tau\in Y\}\), then for all \(\tau\in T(\mathcal{M})\), we have \(\tau(a_{n})=0\) if and only if \(\tau|_{A_{n}}\in Y_{n}\).

The construction is by induction on \(n\). Using [86, Lemma 9], there is a separable unital \(C^{*}\)-algebra \(A_{1}\subseteq\mathcal{M}\) which contains \(S\) such that every trace on \(A_{1}\) extends to a trace on \(\mathcal{M}\). With \(Y_{1}\) as in (iv), note that \(Y_{1}\) is closed49 and \(T(A_{1})\) is metrisable, and hence \(T(A_{1})\setminus Y_{1}\) is an \(F_{\sigma}\)-set. By the continuity of the restriction map \(T(\mathcal{M})\to T(A_{1})\), we have that

\[U_{1}\coloneqq\{\tau\in T(\mathcal{M}):\tau|_{A_{1}}\notin Y_{1}\} \tag{6.31}\]

is an \(F_{\sigma}\)-set in \(T(\mathcal{M})\) disjoint from \(Y\).

Footnote 49: Given a net \((\tau_{i}|_{A_{1}})\) in \(Y_{1}\) arising from a net of traces \(\tau_{i}\in Y\) with \(\tau_{i}|_{A_{1}}\to\sigma\in T(A_{1})\), it follows that any weak\({}^{*}\)-limit point \(\tau\in Y\) of the \(\tau_{i}\) is an extension of \(\sigma\), so \(\sigma\in Y_{1}\).

By Theorem 2.4, there is a continuous affine function \(f\colon T(\mathcal{M})\to[0,1]\) such that \(f(\tau)=0\) for all \(\tau\in Y\) and \(f(\tau)>0\) for all \(\tau\in U_{1}\). By Proposition 2.7, there is a self-adjoint \(a_{1}\in\mathcal{M}\) with \(f(\tau)=\tau(a_{1})\) for all \(\tau\in T(\mathcal{M})\). Now repeat the argument with \(A_{1}\cup\{a_{1}\}\) in place of \(S\) to obtain \(A_{2}\) and \(a_{2}\), and continue in this fashion.
Let \(A\subseteq\mathcal{M}\) be the \(\|\cdot\|\)-closure of the union of the \(A_{n}\). Note that every trace \(\tau\in T(A)\) extends to a trace on \(\mathcal{M}\) - indeed, for each \(n\geq 1\), \(\tau|_{A_{n}}\) extends to a trace \(\widetilde{\tau}_{n}\in T(\mathcal{M})\), and then any weak\({}^{*}\)-limit point of the traces \(\widetilde{\tau}_{n}\) will extend \(\tau\).

Now suppose \(\sigma,\rho\in T(A)\) and that \(\tau\coloneqq\frac{1}{2}(\sigma+\rho)\in Y_{A}\). Since \(\sigma\) and \(\rho\) extend to traces on \(\mathcal{M}\), (iii) implies \(\sigma(a_{n}),\rho(a_{n})\geq 0\) for all \(n\geq 1\). Since \(\tau\in Y_{A}\), we have \(\tau|_{A_{n}}\in Y_{n}\) for all \(n\geq 1\), and hence \(\tau(a_{n})=0\) for all \(n\geq 1\). Therefore, \(\sigma(a_{n})=0\) for all \(n\geq 1\). Using again that \(\sigma\) extends to a trace on \(\mathcal{M}\), it follows from (iv) that \(\sigma|_{A_{n}}\in Y_{n}\) for all \(n\geq 1\). Let \(\widetilde{\sigma}_{n}\in Y\) be a trace with \(\widetilde{\sigma}_{n}|_{A_{n}}=\sigma|_{A_{n}}\). Let \(\widetilde{\sigma}\) be a weak\({}^{*}\)-limit point of the \(\widetilde{\sigma}_{n}\); then \(\widetilde{\sigma}\in Y\) and \(\widetilde{\sigma}|_{A}=\sigma\). So \(\sigma\in Y_{A}\). Similarly, \(\rho\in Y_{A}\), and hence \(Y_{A}\) is a face in \(T(A)\); it is closed by the argument of Footnote 49.

**Proposition 6.9**.: _If \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU and \(Y\subseteq X\) is a closed face, then \(\big{(}\overline{\mathcal{M}}^{Y},Y\big{)}\) is factorial and satisfies CPoU._

Proof.: Note that \(\mathcal{N}\coloneqq\overline{\mathcal{M}}^{Y}\) is factorial by Proposition 3.23(iv). Let \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) be the canonical map. Since the range of \(\phi\) is \(\|\cdot\|_{2,Y}\)-dense in \(\mathcal{N}\), it suffices to show that for every \(\delta>0\), positive \(a_{1},\ldots,a_{n}\in\mathcal{M}\), and \(\|\cdot\|\)-separable \(S\subseteq\mathcal{M}\) such that

\[\sup_{\tau\in Y}\min_{1\leq i\leq n}\tau(\phi(a_{i}))<\delta, \tag{6.32}\]

there are projections \(p_{1},\ldots,p_{n}\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that

\[\sum_{j=1}^{n}\phi^{\omega}(p_{j})=1_{\mathcal{N}^{\omega}}\qquad\text{and}\qquad\tau(\phi^{\omega}(a_{i}p_{i}))\leq\delta\tau(\phi^{\omega}(p_{i})) \tag{6.33}\]

for all \(i=1,\ldots,n\) and \(\tau\in Y^{\omega}\). Given \(\delta\), \(S\) and \(a_{1},\ldots,a_{n}\) as above, define

\[C\coloneqq\{\tau\in T(\mathcal{M}):\min_{1\leq i\leq n}\tau(a_{i})\geq\delta\}\subseteq T(\mathcal{M}). \tag{6.34}\]

Then \(C\) is a compact convex subset of \(T(\mathcal{M})\) disjoint from the compact convex set \(Y\subseteq T(\mathcal{M})\). By the Hahn-Banach theorem, there are \(\alpha>0\) and a self-adjoint \(b\in\mathcal{M}\) such that \(\tau(b)>\alpha\) for all \(\tau\in Y\) and \(\tau(b)<\alpha\) for all \(\tau\in C\). Replacing \(b\) with \(b+(\|b\|+1)1_{\mathcal{M}}\) and \(\alpha\) with \(\alpha+\|b\|+1\), we may assume that \(b\in\mathcal{M}_{+}\). Finally, by scaling \(b\), there exists \(a_{0}\in\mathcal{M}_{+}\) such that \(\tau(a_{0})>\delta\) for all \(\tau\in Y\) and \(\tau(a_{0})<\delta\) for all \(\tau\in C\). If \(\tau\in X\) is such that \(\tau(a_{i})\geq\delta\) for all \(i=1,\ldots,n\), then \(\tau\in C\), and hence \(\tau(a_{0})<\delta\). Therefore,

\[\sup_{\tau\in X}\min_{0\leq i\leq n}\tau(a_{i})<\delta. \tag{6.35}\]
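Spelled out, the strict inequality in (6.35) uses compactness: the map \(\tau\mapsto\min_{0\leq i\leq n}\tau(a_{i})\) is weak\({}^{*}\)-continuous on the compact set \(X\), so its supremum is attained, and it has just been checked to be \(<\delta\) at every point of \(X\).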
By Lemma 6.8, there is a separable unital \(C^{*}\)-algebra \(A\subseteq\mathcal{M}\) containing \(S\) and \(a_{0}\) such that the set \(Y_{A}\subseteq T(A)\) defined in (6.30) is a closed face in \(T(A)\). Now, since \((\mathcal{M},X)\) has CPoU, there are projections \(p_{0},p_{1},\ldots,p_{n}\in\mathcal{M}^{\omega}\cap A^{\prime}\) such that

\[\sum_{j=0}^{n}p_{j}=1_{\mathcal{M}^{\omega}}\qquad\text{and}\qquad\tau(a_{i}p_{i})\leq\delta\tau(p_{i}) \tag{6.36}\]

for all \(i=0,\ldots,n\) and \(\tau\in X^{\omega}\). To verify (6.33), it suffices to show \(\phi^{\omega}(p_{0})=0\). Suppose this fails. Then there is a trace \(\tau\in Y^{\omega}\) with \(\tau(\phi^{\omega}(p_{0}))\neq 0\). Define \(\sigma_{0}\colon A\to\mathbb{C}\) by

\[\sigma_{0}(a)\coloneqq\frac{\tau(\phi^{\omega}(ap_{0}))}{\tau(\phi^{\omega}(p_{0}))},\qquad a\in A. \tag{6.37}\]

Then \(\sigma_{0}\) is a trace on \(A\) that is dominated by a multiple of \(\tau\circ\phi|_{A}\in Y_{A}\), and since \(Y_{A}\) is a face in \(T(A)\), we have \(\sigma_{0}\in Y_{A}\). By the definition of \(Y_{A}\), there is a trace \(\sigma\in Y\) such that \(\sigma|_{A}=\sigma_{0}\). Then \(\sigma(a_{0})>\delta\) by the choice of \(a_{0}\). But this contradicts (6.36) with \(i=0\) and with \(\tau\circ\phi^{\omega}\) in place of \(\tau\). Therefore, \(\phi^{\omega}(p_{0})=0\), as required.

As recalled at the beginning of Section 6.1, a \(C^{*}\)-algebra \(A\) with \(T(A)\) compact has CPoU in the sense of [23] if and only if, in our notation, the uniform tracial completion of \(A\) with respect to \(T(A)\) has CPoU as a tracially complete \(C^{*}\)-algebra. As a consequence of Proposition 6.9, CPoU passes to quotients of \(C^{*}\)-algebras.

**Corollary 6.10**.: _Let \(A\) be a \(C^{*}\)-algebra with \(T(A)\) compact and let \(I\unlhd A\) be an ideal. Then \(T(A/I)\) is compact. Further, if \(A\) satisfies CPoU in the sense of [23], then so does \(A/I\)._

Proof.: We may identify \(T(A/I)\) with the closed face \(Y\) of \(T(A)\) consisting of traces on \(A\) vanishing on \(I\), so that \(T(A/I)\) is certainly compact. Apply Proposition 6.9 after making this identification - note that we are implicitly using Proposition 3.23(v) to identify the uniform tracial completion of \(\overline{A}^{T(A)}\) with respect to \(T(A/I)\) with the uniform tracial completion of \(A/I\) with respect to \(T(A/I)\).

Now we show that CPoU is preserved under inductive limits. While the corresponding result for the McDuff property and property \(\Gamma\) followed easily from the approximate characterisations of these properties, this permanence property for CPoU will take more work. The new difficulty arises from the supremum over traces in (6.1) since traces on the finite terms of an inductive system will not generally extend to traces on the limit. We overcome this challenge with the following lemma.

**Lemma 6.11**.: _Let \((\mathcal{M},X)\coloneqq\varinjlim\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)}\) be an inductive limit of tracially complete \(C^{*}\)-algebras and write the inductive limit morphisms as \(\phi_{n}^{\infty}\colon(\mathcal{M}_{n},X_{n})\to(\mathcal{M},X)\). Suppose \(k,m_{0}\geq 1\), \(\delta>0\), and \(b_{1},\dots,b_{k}\in\mathcal{M}_{m_{0}}\) are positive with_

\[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(\phi_{m_{0}}^{\infty}(b_{i}))<\delta. \tag{6.38}\]
_Then there exists \(m\geq m_{0}\) such that_

\[\sup_{\tau\in X_{m}}\min_{1\leq i\leq k}\tau(\phi_{m_{0}}^{m}(b_{i}))<\delta, \tag{6.39}\]

_where \(\phi_{m_{0}}^{m}\coloneqq\phi_{m-1}^{m}\circ\dots\circ\phi_{m_{0}}^{m_{0}+1}\)._

Proof.: Suppose the conclusion of the lemma does not hold. For each \(m\geq m_{0}\), since \(X_{m}\) is compact, the failure of (6.39) implies there exists \(\tau_{m}\in X_{m}\) such that

\[\min_{1\leq i\leq k}\tau_{m}(\phi_{m_{0}}^{m}(b_{i}))\geq\delta. \tag{6.40}\]

Fix a free ultrafilter \(\omega\) on \(\mathbb{N}\), and for every \(m\geq m_{0}\), define

\[\sigma_{m}\coloneqq\lim_{n\to\omega}\tau_{n}\circ\phi_{m}^{n}\in X_{m}, \tag{6.41}\]

which exists as \(X_{m}\) is compact and Hausdorff. If \(m_{2}\geq m_{1}\geq m_{0}\), then \(\tau_{n}\circ\phi_{m_{1}}^{n}=\tau_{n}\circ\phi_{m_{2}}^{n}\circ\phi_{m_{1}}^{m_{2}}\) for all \(n\geq m_{2}\). Since \(\omega\) is a free ultrafilter, we obtain \(\sigma_{m_{1}}=\sigma_{m_{2}}\circ\phi_{m_{1}}^{m_{2}}\). Therefore, the sequence \((\sigma_{m})_{m=m_{0}}^{\infty}\) induces a trace \(\sigma\in X\) such that \(\sigma_{m}=\sigma\circ\phi_{m}^{\infty}\) for all \(m\geq m_{0}\). By (6.40), \(\min_{1\leq i\leq k}\sigma_{m_{0}}(b_{i})\geq\delta\). Hence, \(\min_{1\leq i\leq k}\sigma(\phi_{m_{0}}^{\infty}(b_{i}))\geq\delta\), contrary to (6.38). This completes the proof.

With the above technical lemma in hand, we now proceed with the proof that CPoU passes to inductive limits.

**Proposition 6.12**.: _Let \((\mathcal{M},X)\coloneqq\varinjlim\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)}\) be an inductive limit of factorial tracially complete \(C^{*}\)-algebras. If \((\mathcal{M}_{n},X_{n})\) has CPoU for all \(n\geq 1\), then \((\mathcal{M},X)\) has CPoU as well.50_

Footnote 50: Note that \((\mathcal{M},X)\) is factorial by Proposition 3.34.

Proof.: Write the inductive limit morphisms as \(\phi_{n}^{\infty}\colon(\mathcal{M}_{n},X_{n})\to(\mathcal{M},X)\). We will use (ii)\(\Rightarrow\)(i) of Proposition 6.2. As \(\bigcup_{m=1}^{\infty}\phi_{m}^{\infty}(\mathcal{M}_{m})\) is \(\|\cdot\|_{2,X}\)-dense in \(\mathcal{M}\), we may assume the finite set \(\mathcal{F}\coloneqq\{x_{1},\ldots,x_{l}\}\) in the statement of Proposition 6.2(ii) is contained in \(\bigcup_{m=1}^{\infty}\phi_{m}^{\infty}(\mathcal{M}_{m})\). By rescaling, we may assume that each \(x_{j}\) is a contraction. Fix \(\epsilon>0\) and consider a family \(a_{1},\ldots,a_{k}\) of positive elements in \(\mathcal{M}\) and a scalar

\[\delta>\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i}). \tag{6.42}\]

By the \(\|\cdot\|_{2,X}\)-density of \(\bigcup_{m=1}^{\infty}\phi_{m}^{\infty}(\mathcal{M}_{m})\) in \(\mathcal{M}\), there are an integer \(m_{0}\geq 1\) and positive \(b_{1},\ldots,b_{k}\in\mathcal{M}_{m_{0}}\) such that

\[\|a_{i}-\phi_{m_{0}}^{\infty}(b_{i})\|_{2,X}<\epsilon/3,\qquad i=1,\ldots,k. \tag{6.43}\]

Since \(\tau\circ\phi_{m_{0}}^{\infty}\in X_{m_{0}}\) for all \(\tau\in X\), this implies

\[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(\phi_{m_{0}}^{\infty}(b_{i}))<\delta+\epsilon/3. \tag{6.44}\]

Accordingly, by Lemma 6.11, we can find \(m\geq m_{0}\) large enough so that

\[\sup_{\tau\in X_{m}}\min_{1\leq i\leq k}\tau(\phi_{m_{0}}^{m}(b_{i}))<\delta+\epsilon/3, \tag{6.45}\]

and enlarging \(m\) if necessary, we may find \(y_{1},\ldots,y_{l}\in\mathcal{M}_{m}\) such that \(\phi_{m}^{\infty}(y_{j})=x_{j}\) for \(1\leq j\leq l\).
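Before continuing, we record the estimate used in passing from (6.43) to (6.44): for any \(c\in\mathcal{M}\) and \(\tau\in X\), the Cauchy-Schwarz inequality for the sesquilinear form \((x,y)\mapsto\tau(x^{*}y)\) gives

\[|\tau(c)|=|\tau(1_{\mathcal{M}}^{*}c)|\leq\|1_{\mathcal{M}}\|_{2,\tau}\,\|c\|_{2,\tau}\leq\|c\|_{2,X},\]

applied with \(c\coloneqq a_{i}-\phi_{m_{0}}^{\infty}(b_{i})\).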
Now we apply CPoU in \(\mathcal{M}_{m}\) using Proposition 6.2(ii) to obtain pairwise orthogonal positive contractions \(f_{1},\ldots,f_{k}\in\mathcal{M}_{m}\) such that

\[\begin{split}\|[f_{i},y_{j}]\|_{2,X_{m}}&<\epsilon,\qquad\qquad\qquad\qquad i=1,\ldots,k,\ j=1,\ldots,l,\\ \tau(f_{1}+\cdots+f_{k})&>1-\epsilon,\qquad\qquad\qquad\tau\in X_{m},\ \text{and}\\ \tau(\phi_{m_{0}}^{m}(b_{i})f_{i})&<(\delta+\epsilon/3)\tau(f_{i})+\epsilon/3\\ &\leq\delta\tau(f_{i})+2\epsilon/3,\qquad\qquad\tau\in X_{m},\ i=1,\ldots,k.\end{split} \tag{6.46}\]

Set \(e_{i}\coloneqq\phi_{m}^{\infty}(f_{i})\in\mathcal{M}\). These are pairwise orthogonal positive contractions. By the \(\|\cdot\|_{2,X_{m}}\)-\(\|\cdot\|_{2,X}\)-contractivity of \(\phi_{m}^{\infty}\), we have

\[\|[e_{i},x_{j}]\|_{2,X}<\epsilon,\qquad i=1,\ldots,k,\ j=1,\ldots,l. \tag{6.47}\]

As \((\phi_{m}^{\infty})^{*}(X)\subseteq X_{m}\),

\[\tau(e_{1}+\cdots+e_{k})>1-\epsilon,\qquad\tau\in X, \tag{6.48}\]

and

\[\begin{split}\tau(a_{i}e_{i})&\stackrel{{(6.43)}}{{\leq}}\tau(\phi_{m_{0}}^{\infty}(b_{i})e_{i})+\epsilon/3\\ &=(\phi_{m}^{\infty})^{*}(\tau)(\phi_{m_{0}}^{m}(b_{i})f_{i})+\epsilon/3\\ &\stackrel{{(6.46)}}{{<}}\delta(\phi_{m}^{\infty})^{*}(\tau)(f_{i})+\epsilon=\delta\tau(e_{i})+\epsilon,\qquad\tau\in X.\end{split} \tag{6.49}\]

Therefore, \((\mathcal{M},X)\) has CPoU.

**Remark 6.13** (cf. Remark 5.26).: We will later show that a reduced product of factorial tracially complete \(C^{*}\)-algebras with CPoU is factorial and has CPoU (Corollary 7.7(i)). Once the reduced product is shown to be factorial, the reduced product will have CPoU since the local characterisation of CPoU in Proposition 6.2(ii) is easily seen to be preserved by reduced products.

The property CPoU also passes to matrix algebras.

**Proposition 6.14**.: _If \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU, then so is \(\left(\mathcal{M}\otimes M_{d},X\otimes\{\mathrm{tr}_{d}\}\right)\) for all \(d\in\mathbb{N}\)._

Proof.: First note that \((\mathcal{M}\otimes M_{d},X\otimes\{\mathrm{tr}_{d}\})\) is a factorial tracially complete \(C^{*}\)-algebra by Proposition 3.18. Now fix a \(\|\cdot\|_{2,X\otimes\{\mathrm{tr}_{d}\}}\)-separable set \(S\subseteq\mathcal{M}\otimes M_{d}\) and let \(a_{1},\dots,a_{k}\in(\mathcal{M}\otimes M_{d})_{+}\) and \(\delta>0\) be such that

\[\sup_{\tau\in X}\min_{1\leq i\leq k}(\tau\otimes\mathrm{tr}_{d})(a_{i})<\delta. \tag{6.50}\]

Let \(T\subseteq\mathcal{M}\) be the set of entries of elements of \(S\) and note that \(T\) is \(\|\cdot\|_{2,X}\)-separable. Applying CPoU to the elements \((\mathrm{id}_{\mathcal{M}}\otimes\mathrm{tr}_{d})(a_{i})\in\mathcal{M}_{+}\), there are projections \(q_{1},\dots,q_{k}\in\mathcal{M}^{\omega}\cap T^{\prime}\) such that

\[\sum_{j=1}^{k}q_{j}=1_{\mathcal{M}^{\omega}}\quad\text{and}\quad\tau((\mathrm{id}_{\mathcal{M}}\otimes\mathrm{tr}_{d})(a_{i})q_{i})\leq\delta\tau(q_{i}) \tag{6.51}\]

for all \(i=1,\dots,k\) and \(\tau\in X^{\omega}\). Then \(p_{i}\coloneqq q_{i}\otimes 1_{M_{d}}\), \(1\leq i\leq k\), are projections in \((\mathcal{M}\otimes M_{d})^{\omega}\cap S^{\prime}\) which verify CPoU;51 here one uses the identity \((\tau\otimes\mathrm{tr}_{d})\big{(}a(q_{i}\otimes 1_{M_{d}})\big{)}=\tau\big{(}(\mathrm{id}_{\mathcal{M}}\otimes\mathrm{tr}_{d})(a)q_{i}\big{)}\) for \(a\in\mathcal{M}\otimes M_{d}\) and limit traces \(\tau\).

Footnote 51: Note that we are using Proposition 5.8 to identify \(\mathcal{M}^{\omega}\otimes M_{d}\) and \((\mathcal{M}\otimes M_{d})^{\omega}\).

### Property \(\Gamma\) implies CPoU

We now have the machinery to prove Theorem 1.4, giving a large supply of examples of tracially complete factorial \(C^{*}\)-algebras which satisfy CPoU.
In particular, all (uniform tracial completions of) unital \(\mathcal{Z}\)-stable \(C^{*}\)-algebras satisfy CPoU, removing the nuclearity condition from [23, Theorem 3.8]. In the separable setting, the proof reduces to the strategy outlined towards the end of Section 1.4, which is modelled on the proof of [23, Theorem 3.8]. The following result is the weak form of CPoU discussed in the first step of the outline. This was shown to hold for uniform tracial completions of nuclear \(C^{*}\)-algebras with compact trace simplex in [23, Lemma 3.6] - we extend the result to all factorial tracially complete \(C^{*}\)-algebras.

**Theorem 6.15**.: _Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra such that \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable. If \(a_{1},\dots,a_{k}\in\mathcal{M}_{+}\),_

\[\delta>\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i}), \tag{6.52}\]

_and \(q\in\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime}\) is a projection with \(\tau(q)>0\) for all \(\tau\in X^{\omega}\), then for every \(\|\cdot\|_{2,X^{\omega}}\)-separable set \(S\subseteq\mathcal{M}^{\omega}\), there are positive contractions \(e_{1},\dots,e_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) summing to \(q\) with \(\tau(a_{i}e_{i})\leq\delta\tau(e_{i})\) for all \(\tau\in X^{\omega}\) and \(i=1,\dots,k\)._

Proof.: Let \(\gamma\coloneqq\inf_{\tau\in X^{\omega}}\tau(q)\), and note that \(\gamma>0\) as \(X^{\omega}\) is compact. By Kirchberg's \(\epsilon\)-test (Lemma 5.1), it suffices to show that for any finite set \(\mathcal{F}\subseteq\mathcal{M}\) and \(\epsilon>0\), there are positive contractions \(e_{1},\ldots,e_{k}\in\mathcal{M}^{\omega}\) such that

\[\begin{split}&\Big{\|}\sum_{j=1}^{k}e_{j}-q\Big{\|}_{2,X^{\omega}}<\epsilon,\\ &\max_{x\in\mathcal{F}}\|[x,e_{i}]\|_{2,X^{\omega}}<\epsilon,\qquad\qquad i=1,\ldots,k,\text{ and }\\ &\tau(a_{i}e_{i})-\delta\tau(e_{i})<\gamma^{-1}\epsilon,\qquad\tau\in X^{\omega},\ i=1,\ldots,k.\end{split} \tag{6.53}\]

Fix the finite set \(\mathcal{F}\) and the tolerance \(\epsilon>0\). We will first show that for any \(\tau\in X\), there are

\[e_{1}^{\prime},\ldots,e_{k}^{\prime},b_{1},\ldots,b_{k},c_{1},\ldots,c_{k}\in\mathcal{M} \tag{6.54}\]

with each \(e_{i}^{\prime}\) and \(b_{i}\) positive, \(\|e_{i}^{\prime}\|\leq 1\), and \(c_{i}\) in the span of the commutators in \(\mathcal{M}\) such that

\[\begin{split}&\Big{\|}\sum_{j=1}^{k}e_{j}^{\prime}-1_{\mathcal{M}}\Big{\|}_{2,\tau}<\epsilon,\\ &\max_{x\in\mathcal{F}}\|[x,e_{i}^{\prime}]\|_{2,\tau}<\epsilon,\quad i=1,\ldots,k,\text{ and }\\ &\|(a_{i}-\delta)e_{i}^{\prime}+b_{i}-c_{i}\|_{2,\tau}<\epsilon,\quad i=1,\ldots,k.\end{split} \tag{6.55}\]

Fix \(\tau\in X\). For each \(\sigma\in T(\pi_{\tau}(\mathcal{M})^{\prime\prime})\), we have \(\sigma\circ\pi_{\tau}\in X\) by Lemma 2.10. Therefore,

\[\delta>\sup_{\sigma\in T(\pi_{\tau}(\mathcal{M})^{\prime\prime})}\min_{1\leq i\leq k}\sigma(\pi_{\tau}(a_{i})). \tag{6.56}\]

By Corollary 6.4, there are projections \(\bar{e}_{1},\ldots,\bar{e}_{k}\in Z(\pi_{\tau}(\mathcal{M})^{\prime\prime})\) summing to \(1_{\pi_{\tau}(\mathcal{M})^{\prime\prime}}\) such that \(\sigma(\pi_{\tau}(a_{i})\bar{e}_{i})\leq\delta\sigma(\bar{e}_{i})\) for all \(i=1,\ldots,k\) and \(\sigma\in T(\pi_{\tau}(\mathcal{M})^{\prime\prime})\). For \(i=1,\ldots,k\), define

\[\bar{b}_{i}\coloneqq\operatorname{tr}_{\pi_{\tau}(\mathcal{M})^{\prime\prime}}\big{(}\delta\bar{e}_{i}-\pi_{\tau}(a_{i})\bar{e}_{i}\big{)}\in Z(\pi_{\tau}(\mathcal{M})^{\prime\prime}). \tag{6.57}\]
Then \(\bar{b}_{i}\) is positive (by Proposition 2.8(i), every state of the centre evaluated at \(\bar{b}_{i}\) has the form \(\sigma(\bar{b}_{i})=\delta\sigma(\bar{e}_{i})-\sigma(\pi_{\tau}(a_{i})\bar{e}_{i})\geq 0\) for a trace \(\sigma\)) and

\[\sigma(\pi_{\tau}(a_{i})\bar{e}_{i})+\sigma(\bar{b}_{i})=\delta\sigma(\bar{e}_{i}),\qquad\sigma\in T(\pi_{\tau}(\mathcal{M})^{\prime\prime}). \tag{6.58}\]

Let \(\bar{c}_{i}\coloneqq(\pi_{\tau}(a_{i})-\delta)\bar{e}_{i}+\bar{b}_{i}\). Then \(\bar{c}_{i}\) vanishes on all traces, so \(\bar{c}_{i}\) is a sum of commutators (see [43, Théorème 2.3]).52 The existence of the required \(e_{i}^{\prime}\), \(b_{i}\), and \(c_{i}\) then follows from Kaplansky's density theorem.

Footnote 52: A standard Hahn-Banach argument implies that since \(\bar{c}_{i}\) vanishes on all traces, \(\bar{c}_{i}\) is in the \(\|\cdot\|\)-closed span of the commutators. After adjusting the bounds, this weaker result is sufficient to run the proof without quoting [43, Théorème 2.3].

The pointwise result above, holding for all \(\epsilon>0\), together with Lemma 4.7, implies that for any \(\epsilon>0\) there are positive contractions \(e_{1}^{\prime},\ldots,e_{k}^{\prime}\in\mathcal{M}\), positive elements \(b_{1},\dots,b_{k}\in\mathcal{M}\), and elements \(c_{1},\dots,c_{k}\in\mathcal{M}\) in the span of the commutators in \(\mathcal{M}\) such that

\[\begin{split}&\Big{\|}\sum_{j=1}^{k}e_{j}^{\prime}-1_{\mathcal{M}}\Big{\|}_{2,X}<\epsilon,\\ &\max_{x\in\mathcal{F}}\|[x,e_{i}^{\prime}]\|_{2,X}<\epsilon,\quad i=1,\dots,k,\text{ and }\\ &\|(a_{i}-\delta)e_{i}^{\prime}+b_{i}-c_{i}\|_{2,X}<\epsilon,\quad i=1,\dots,k.\end{split} \tag{6.59}\]

The final inequality in (6.59) implies that for all \(\tau\in X\), we have

\[\tau\big{(}(a_{i}-\delta)e_{i}^{\prime}+b_{i}-c_{i}\big{)}<\epsilon. \tag{6.60}\]

Rearranging, this implies

\[\tau(a_{i}e_{i}^{\prime})<\delta\tau(e_{i}^{\prime})-\tau(b_{i})+\tau(c_{i})+\epsilon\leq\delta\tau(e_{i}^{\prime})+\epsilon, \tag{6.61}\]

where the last inequality holds since \(b_{i}\) is positive and \(c_{i}\) is in the span of the commutators. We now have positive contractions \(e_{1}^{\prime},\dots,e_{k}^{\prime}\in\mathcal{M}\) such that

\[\begin{split}&\Big{\|}\sum_{j=1}^{k}e_{j}^{\prime}-1_{\mathcal{M}}\Big{\|}_{2,X}<\epsilon,\\ &\quad\max_{x\in\mathcal{F}}\|[x,e_{i}^{\prime}]\|_{2,X}<\epsilon,\quad i=1,\dots,k,\text{ and }\\ &\quad\tau(a_{i}e_{i}^{\prime})-\delta\tau(e_{i}^{\prime})<\epsilon,\quad\tau\in X,\ i=1,\dots,k.\end{split} \tag{6.62}\]

For \(i=1,\dots,k\), define a positive contraction \(e_{i}\coloneqq e_{i}^{\prime}q\in\mathcal{M}^{\omega}\) (note that \(e_{i}^{\prime}q\) is positive, as \(q\) commutes with \(\mathcal{M}\)). The first two conditions in (6.53) follow from the corresponding conditions in (6.62). We work to verify the third condition in (6.53). Fix \(\tau\in X^{\omega}\). By hypothesis, \(\tau(q)>0\), so we may define \(\tau_{q}\in\mathcal{M}^{*}\) by \(\tau_{q}(a)\coloneqq\tau(q)^{-1}\tau(aq)\). As \(q\) commutes with \(\mathcal{M}\), \(\tau_{q}\in T(\mathcal{M})\). Further, since \(\tau|_{\mathcal{M}}\in X\), \(\tau_{q}\leq\tau(q)^{-1}\tau|_{\mathcal{M}}\), and \(X\) is a face, we have \(\tau_{q}\in X\). By the third condition in (6.62), we have

\[\tau_{q}(a_{i}e_{i}^{\prime})<\delta\tau_{q}(e_{i}^{\prime})+\epsilon,\qquad i=1,\dots,k, \tag{6.63}\]

or equivalently, multiplying through by \(\tau(q)\),

\[\tau(a_{i}e_{i})<\delta\tau(e_{i})+\epsilon\tau(q),\qquad i=1,\dots,k. \tag{6.64}\]

Since \(\tau(q)\leq 1\leq\gamma^{-1}\), this yields the final condition in (6.53).

The rest of the proof that property \(\Gamma\) implies CPoU proceeds as in [23]. The following proposition is a tracially complete version of the "tracial projectionisation" result in [23, Lemma 2.4] and the proof is essentially identical. For the sake of completeness, we provide a sketch.
**Proposition 6.16** (Projectionisation).: _Suppose \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with property \(\Gamma\). If \(S\subseteq\mathcal{M}^{\omega}\) is a \(\|\cdot\|_{2,X^{\omega}}\)-separable subset and \(e\in\mathcal{M}^{\omega}\cap S^{\prime}\) is a positive contraction, then there is a projection \(p\in\mathcal{M}^{\omega}\cap S^{\prime}\) commuting with \(e\) such that \(\tau(ae)=\tau(ap)\) for all \(a\in S\) and \(\tau\in X^{\omega}\)._ Proof.: Let \(k\geq 1\). Using property \(\Gamma\), we may fix projections \(r_{1},\dots,r_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\cap\{e\}^{\prime}\) partitioning the unit with \[\tau(ar_{i})=\frac{1}{k}\tau(a),\qquad\tau\in X^{\omega},\ a\in C^{*}(S\cup\{e\}),\ i=1,\dots,k. \tag{6.65}\] For \(i=1,\dots,k\), consider the continuous function \(f_{i}\colon[0,1]\to\mathbb{R}\) given by \[f_{i}(t)\coloneqq\begin{cases}0,&0\leq t\leq(i-1)/k,\\ kt-i+1,&(i-1)/k\leq t\leq i/n,\\ 1,&i/k\leq t\leq 1.\end{cases} \tag{6.66}\] Then set \(q\coloneqq\sum_{i=1}^{k}f_{i}(e)r_{i}\in\mathcal{M}^{\omega}\cap S^{\prime} \cap\{e\}^{\prime}\), which is a positive contraction. A computation (as in [23, Equations (2.9) and (2.10)])54 shows Footnote 54: [23, Equation (2.7)] is still correct if the right side is changed to \(\frac{1}{4}1_{C([0,1])}\), and using this, the right side of [23, Equation (2.10)] can be improved to \(1/4n\). \[\tau(ae)=\tau(aq)\quad\text{and}\quad\tau(q-q^{2})<\frac{1}{4k},\quad\tau\in X ^{\omega},\ a\in S. \tag{6.67}\] The result follows from Kirchberg's \(\epsilon\)-test (Lemma 5.1). Applying the above projectionisation to each of the \(e_{i}\) from the conclusion of Theorem 6.15 will produce projections which have the correct behaviour on traces, but fail to be orthogonal. As in [23], this will be addressed via another application of property \(\Gamma\) with a construction deemed "orthogonalisation" in [23]. The orthogonalised projections will no longer sum to the unit, but they will sum to a projection of constant trace \(\frac{1}{k}\). From here, CPoU follows from a maximality argument by repeating the construction in the complementary corner - this is the reason for the projection \(q\) in Theorem 6.15. (A geometric series argument could be used instead of the maximality argument for this last step.) **Theorem 1.4**.: _Let \((\mathcal{M},X)\) be a factorial tracially complete \(C^{*}\)-algebra with property \(\Gamma\). Then \((\mathcal{M},X)\) has CPoU. In particular, unital \(C^{*}\)-algebras with uniform property \(\Gamma\) (e.g. unital \(\mathcal{Z}\)-stable \(C^{*}\)-algebras) have CPoU._ Proof.: The results of Appendix A reduce the theorem to the case when \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable. Namely, Theorem A.3 shows that both factoriality plus property \(\Gamma\) and factoriality plus CPoU are strongly separably inheritable properties; therefore, to show one implies the other, it suffices to do so in the \(\|\cdot\|_{2,X}\)-separable case. For the rest of the proof, we assume \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable. Suppose \(a_{1}\dots,a_{k}\in\mathcal{M}_{+}\) and \(\delta>0\) are given as in the definition of CPoU, i.e. they satisfy \[\delta>\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i}). 
\tag{6.68}\] Let \(I\subseteq[0,1]\) denote the set of \(\alpha\in[0,1]\) such that for all \(\|\cdot\|_{2,X^{\omega}}\)-separable sets \(S\subseteq\mathcal{M}^{\omega}\), there are orthogonal projections \(p_{1},\dots,p_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \[\sum_{j=1}^{k}\tau(p_{j})=\alpha\qquad\text{and}\qquad\tau(a_{i}p_{i})\leq \delta\tau(p_{i}) \tag{6.69}\] for all \(\tau\in X^{\omega}\) and \(i=1,\ldots,k\). Clearly \(I\neq\emptyset\) as \(0\in I\), and \(I\) is closed by Kirchberg's \(\epsilon\)-test. Hence \(I\) contains a maximal element \(\alpha\). It suffices to show \(\alpha=1\) since this forces \(\sum_{j=1}^{k}p_{j}=1_{\mathcal{M}^{\omega}}\). We will assume \(\alpha<1\) and show \(\alpha+\frac{1}{k}(1-\alpha)\in I\), which contradicts the maximality of \(\alpha\). Assume \(\alpha<1\) and let \(S\subseteq\mathcal{M}^{\omega}\) be a \(\|\cdot\|_{2,X^{\omega}}\)-separable set with \(\mathcal{M}\subseteq S\). By the assumption on \(\alpha\), there are mutually orthogonal projections \(p^{\prime}_{1}\ldots,p^{\prime}_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \[\sum_{j=1}^{k}\tau(p^{\prime}_{j})=\alpha\qquad\text{and}\qquad\tau(a_{i}p^{ \prime}_{i})\leq\delta\tau(p^{\prime}_{i}) \tag{6.70}\] for all \(\tau\in X^{\omega}\) and \(i=1,\ldots,k\). Define \[q\coloneqq 1_{\mathcal{M}^{\omega}}-\sum_{j=1}^{k}p^{\prime}_{j}\in\mathcal{M}^ {\omega}\cap S^{\prime}\subseteq\mathcal{M}^{\omega}\cap\mathcal{M}^{\prime} \tag{6.71}\] Note that \(\tau(q)=1-\alpha>0\) for all \(\tau\in X^{\omega}\). Theorem 6.15 provides positive contractions \(e_{1},\ldots,e_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) summing to \(q\) with \(\tau(a_{i}e_{i})\leq\delta\tau(e_{i})\) for all \(\tau\in X^{\omega}\) and \(i=1,\ldots,k\). Let \(q_{1},\ldots,q_{k}\in\mathcal{M}^{\omega}\cap(S\cup\{q,e_{1},\ldots,e_{k}\})^ {\prime}\) be projections as in Proposition 6.16 satisfying \(\tau(ae_{i})=\tau(aq_{i})\) for all \(a\in C^{*}(S\cup\{q\})\) and \(i=1,\ldots,k\). Let \(A\subseteq\mathcal{M}^{\omega}\) denote the \(C^{*}\)-algebra generated by \(S\), the \(e_{i}\), and the \(q_{i}\), and using property \(\Gamma\), let \(r_{1},\ldots,r_{k}\in\mathcal{M}^{\omega}\cap A^{\prime}\) be projections summing to \(1_{\mathcal{M}^{\omega}}\) such that \(\tau(ar_{i})=\frac{1}{k}\tau(a)\) for all \(\tau\in X^{\omega}\), \(i=1,\ldots,k\), and \(a\in A\) Define \(p^{\prime\prime}_{i}\coloneqq q_{i}r_{i}\) for \(i=1,\ldots k\). These are orthogonal projections since \(r_{1},\ldots,r_{k}\) are orthogonal projections and each \(q_{i}\) is a projection commuting with \(q_{i}\). We have \[\sum_{j=1}^{k}\tau(p^{\prime\prime}_{i})=\frac{1}{k}\sum_{j=1}^{k}\tau(q_{i}) =\frac{1}{k}\sum_{j=1}^{k}\tau(e_{i})=\frac{1}{k}\tau(q)=\frac{1}{k}(1-\alpha) \tag{6.72}\] for all \(\tau\in X^{\omega}\). Further, for all \(\tau\in X^{\omega}\) and \(i=1,\ldots,k\), we have \[\tau(a_{i}p^{\prime\prime}_{i})=\frac{1}{k}\tau(a_{i}q_{i})=\frac{1}{k}\tau(a _{i}e_{i})\leq\frac{\delta}{k}\tau(e_{i})=\frac{\delta}{k}\tau(q_{i})=\delta \tau(p^{\prime\prime}_{i}). \tag{6.73}\] Note also that \(e_{i}q=e_{i}\), and hence \(\tau(q_{i}q)=\tau(e_{i}q)=\tau(e_{i})=\tau(q_{i})\) for all \(\tau\in X^{\omega}\). As \(q_{i}\) and \(q\) are commuting positive elements, we have \(q_{i}q\leq q_{i}\), and hence the equality on traces implies \(q_{i}q=q_{i}\). Therefore, \(p^{\prime\prime}_{i}\leq q^{\prime}_{i}\leq q\) for all \(i=1,\ldots,k\). Set \(p_{i}\coloneqq p^{\prime}_{i}+p^{\prime\prime}_{i}\in\mathcal{M}^{\omega}\cap S ^{\prime}\). 
Since \(p^{\prime}_{1},\ldots,p^{\prime}_{k},p^{\prime\prime}_{1},\ldots,p^{\prime \prime}_{k}\) are mutually orthogonal projections, the same is true of \(p_{1},\ldots,p_{k}\). Further, the projections \(p_{i}\) witness \(\alpha+\frac{1}{k}(1-\alpha)\in I\), which contradicts the maximality of \(\alpha\). ## 7. Applications of CPoU The power of CPoU lies in the fact that we can prove many useful structural properties for a reduced product of a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU. That is the topic of this subsection. Our first goal is show in Section 7.1 that a reduced product of factorial tracially complete \(C^{*}\)-algebras with CPoU is factorial. In fact, we will show all traces on the reduced product are in the closed convex hull of the limit traces - solving the trace problem for such reduced products. Along the way, we show that such reduced products have real rank zero and comparison of projections with respect to limit traces. We then analyse the unitary group of factorial tracially complete \(C^{*}\)-algebras with CPoU in Section 7.2 and show that a uniform 2-norm dense set of unitaries are exponentials. This also provides a stability result for unitaries showing that in the uniform 2-norm, every approximate unitary is close to a unitary. Finally, Section 7.3 strengthens the comparison property obtained in Section 7.1 mentioned above and classifies projections in factorial tracially complete \(C^{*}\)-algebras with CPoU via traces. ### Traces on reduced products The following lemma gives an approximate version of the existence of spectral projections in factorial tracially complete \(C^{*}\)-algebras with CPoU, and the proof is a typical application of CPoU. The idea is the following: if \(\mathcal{M}\) is a von Neumann algebra and \(x\in\mathcal{M}\) is self-adjoint, then there is a projection \(q\in\mathcal{M}\) such that \(qx=x_{+}\); indeed, one may take \(q\) to be the spectral projection of \(x\) corresponding to the interval \([0,\infty)\). Then CPoU provides a method for transferring this result from tracial von Neumann algebra completions of a tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) to produce an analogous result in \(\mathcal{M}\) up to a small \(\|\cdot\|_{2,X}\)-error. **Lemma 7.1**.: _Suppose \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU. If \(x\in\mathcal{M}\) is self-adjoint and \(\epsilon>0\), then there is a positive contraction \(q\in\mathcal{M}\) such that_ \[\|q-q^{2}\|_{2,X}<\epsilon\qquad\text{and}\qquad\|qx-x_{+}\|_{2,X}<\epsilon. \tag{7.1}\] Proof.: For each \(\tau\in X\), let \(\bar{q}_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\) be the spectral projection of \(\pi_{\tau}(a)\) corresponding to the interval \([0,\infty)\) so that \(\bar{q}_{\tau}\pi_{\tau}(x)=\pi_{\tau}(x_{+})\). By Kaplansky's density theorem, there is a positive contraction \(q_{\tau}\in\mathcal{M}\) such that \[\|q_{\tau}-q_{\tau}^{2}\|_{2,\tau}<\frac{\epsilon}{\sqrt{3}}\qquad\text{and} \qquad\|q_{\tau}x-x_{+}\|_{2,\tau}<\frac{\epsilon}{\sqrt{3}}. \tag{7.2}\] For all \(\tau\in X\), define \[a_{\tau}\coloneqq|q_{\tau}-q_{\tau}^{2}|^{2}+|q_{\tau}x-x_{+}|^{2}\in \mathcal{M}_{+} \tag{7.3}\] and note that \(\tau(a_{\tau})<2\epsilon^{2}/3\). For \(\tau\in X\), let \[U_{\tau}\coloneqq\Big{\{}\sigma\in X:\sigma(a_{\tau})<\frac{2\epsilon^{2}}{3 }\Big{\}}, \tag{7.4}\] which is an open neighbourhood of \(\tau\). 
As \(X\) is compact, there are traces \(\tau_{1},\dots,\tau_{k}\in X\) such that \((U_{\tau_{i}})_{i=1}^{k}\) is an open cover of \(X\). Therefore, \[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{\tau_{i}})<\frac{2\epsilon^{2}}{3}. \tag{7.5}\] Set \(S\coloneqq\{q_{\tau_{i}}:i=1,\dots,k\}\cup\{x\}\). As \((\mathcal{M},X)\) has CPoU, there are pairwise orthogonal projections \(p_{1},\dots,p_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \[\sum_{i=1}^{k}p_{i}=1_{\mathcal{M}^{\omega}}\qquad\text{and}\qquad\tau(a_{ \tau_{i}}p_{i})\leq\frac{2\epsilon^{2}}{3}\tau(p_{i}) \tag{7.6}\] for all \(1\leq i\leq k\) and \(\tau\in X^{\omega}\). Let \(q\coloneqq\sum_{i=1}^{k}q_{\tau_{i}}p_{i}\in\mathcal{M}_{+}^{\omega}\). Since \(p_{1},\ldots,p_{k}\) are mutually orthogonal projections summing to the identity which commute with \(\{q_{\tau_{i}}:i=1,\ldots,k\}\cup\{x\}\), we have \[\begin{split}|q-q^{2}|^{2}+|qx-x_{+}|^{2}&=\sum_{i= 1}^{k}|q_{\tau_{i}}^{2}-q_{\tau_{i}}|^{2}p_{i}+\sum_{i=1}^{k}|q_{\tau_{i}}x-x_ {+}|^{2}p_{i}\\ &=\sum_{i=1}^{k}a_{\tau_{i}}p_{i}.\end{split} \tag{7.7}\] Now, we compute \[\|q-q^{2}\|_{2,\tau}^{2}+\|qx-x_{+}\|_{2,\tau}^{2}=\sum_{i=1}^{k}\tau(a_{\tau _{i}}p_{i})\leq\frac{2\epsilon^{2}}{3}\sum_{i=1}^{k}\tau(p_{i})<\epsilon^{2} \tag{7.8}\] for all \(\tau\in X^{\omega}\). If \((q_{n})_{n=1}^{\infty}\) is a sequence of positive contractions in \(\mathcal{M}\) representing \(q\), then for \(\omega\)-many \(n\), \[\|q_{n}-q_{n}^{2}\|_{2,X}<\epsilon\qquad\text{and}\qquad\|q_{n}x-x_{+}\|_{2,X} <\epsilon.\qed \tag{7.9}\] We will prove a stronger version of the following result in Corollary 7.15 once we show reduced products of factorial tracially complete \(C^{*}\)-algebras with CPoU are factorial and have CPoU. **Proposition 7.2**.: _Let \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) be a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU. Then \(\prod^{\omega}\mathcal{M}_{n}\) has real rank zero._ Proof.: Define \((\mathcal{M},X)\coloneqq\big{(}\prod^{\omega}\mathcal{M}_{n},\sum^{\omega}X_{ n}\big{)}\) and let \(x\in\mathcal{M}\) be self-adjoint. Let \((x_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}\mathcal{M}_{n}\) be a self-adjoint element lifting \(x\) and then use Lemma 7.1 to produce a positive contraction \((q_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}\mathcal{M}_{n}\) such that for all \(n\geq 1\), \[\|q_{n}-q_{n}^{2}\|_{2,X_{n}}<\frac{1}{n}\qquad\text{and}\qquad\|q_{n}x_{n}-(x _{n})_{+}\|_{2,X_{n}}<\frac{1}{n}. \tag{7.10}\] Let \(q\in\mathcal{M}^{\omega}\) denote the image of \((q_{n})_{n=1}^{\infty}\). Then we have \[\|q-q^{2}\|_{2,X}=0\qquad\text{and}\qquad\|qx-x_{+}\|_{2,X}=0. \tag{7.11}\] Hence, \(q\) is a projection with \(qx=x_{+}\), which also implies that \(x\) and \(q\) commute. Fix \(\epsilon>0\) and note that \[y\coloneqq x+\epsilon(2q-1_{\mathcal{M}})=(x_{+}+\epsilon)q-(x_{-}+\epsilon)q ^{\perp}\in\mathcal{M} \tag{7.12}\] is self-adjoint and invertible with \(\|x-y\|\leq\epsilon\). Hence, the invertible self-adjoint elements in \(\mathcal{M}\) form a dense subset of the self-adjoint elements of \(\mathcal{M}\). Therefore, \(\mathcal{M}\) has real rank zero. Now we turn to comparison of projections in tracially complete \(C^{*}\)-algebras. Just as Murray-von Neumann subequivalence of projections in finite von Neumann algebras is determined by traces, we will show in Theorem 7.18 that a factorial tracial complete \(C^{*}\)-algebra \((\mathcal{M},X)\) with CPoU has comparison of projections with respect to \(X\), i.e. 
that a projection \(p\in\mathcal{M}\) is Murray-von Neumann subequivalent to a projection \(q\in\mathcal{M}\) if and only if \(\tau(p)\leq\tau(q)\) for all \(\tau\in X\). Before doing this, we need the following approximate comparison result, which will lead to comparison of projections in reduced products. **Lemma 7.3**.: _Suppose \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU, \(\epsilon>0\), and \(p,q\in\mathcal{M}\) are positive contractions such that_ \[\tau(p-p^{2})<\epsilon,\quad\tau(q-q^{2})<\epsilon,\quad\text{and}\quad\tau(p )<\tau(q)+\epsilon \tag{7.13}\] _for all \(\tau\in X\). Then there is a contraction \(v\in\mathcal{M}\) such that_ \[\|v^{*}qv-p\|_{2,X}<\big{(}2\sqrt{2}+\sqrt{5}\big{)}\sqrt{\epsilon}. \tag{7.14}\] Proof.: Using that \(X\) is compact, there is an \(\epsilon^{\prime}\in(0,\epsilon)\) with \[\tau(p-p^{2})<\epsilon^{\prime},\quad\tau(q-q^{2})<\epsilon^{\prime},\quad \text{and}\quad\tau(p)<\tau(q)+\epsilon^{\prime} \tag{7.15}\] for all \(\tau\in X\). We first show that for each \(\tau\in X\), there is a partial isometry \(\bar{v}_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\) such that \[\|\bar{v}_{\tau}^{*}\pi_{\tau}(q)\bar{v}_{\tau}-\pi_{\tau}(p)\|_{2,\tau}<\delta \coloneqq\big{(}2\sqrt{2}+\sqrt{5}\big{)}\sqrt{\epsilon^{\prime}}. \tag{7.16}\] Fix \(\tau\in X\) and let \(p_{\tau},q_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\) denote the spectral projections of \(\pi_{\tau}(p)\) and \(\pi_{\tau}(q)\) corresponding to the interval \([1/2,1]\). Then \[|\pi_{\tau}(p)-p_{\tau}|^{2} \leq|\pi_{\tau}(p)-p_{\tau}|\leq 2(\pi_{\tau}(p)-\pi_{\tau}(p)^{2}) \qquad\text{ and }\] \[|\pi_{\tau}(q)-q_{\tau}|^{2} \leq|\pi_{\tau}(q)-q_{\tau}|\leq 2(\pi_{\tau}(q)-\pi_{\tau}(q)^{2}), \tag{7.17}\] so that (7.15) gives \[\|\pi_{\tau}(p)-p_{\tau}\|_{2,\tau}\leq\sqrt{2\epsilon^{\prime}}\text{ and }\|\pi_{\tau}(q)-q_{\tau}\|_{2,\tau}\leq\sqrt{2\epsilon^{\prime}}. \tag{7.18}\] By Lemma 2.10, we have \(\sigma\circ\pi_{\tau}\in X\) for all \(\sigma\in T(\pi_{\tau}(\mathcal{M})^{\prime\prime})\), and so (7.15) and (7.17) imply that \[\sigma(p_{\tau})<\sigma(q_{\tau})+5\epsilon^{\prime},\qquad\sigma\in T(\pi_{ \tau}(\mathcal{M})^{\prime\prime}). \tag{7.19}\] By the general comparison theorem for projections in von Neumann algebras (see for example [104, Theorem V.1.8]), there exists a central projection \(z_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\) with \(z_{\tau}p_{\tau}\precsim z_{\tau}q_{\tau}\) and \(z_{\tau}^{\perp}q_{\tau}\precsim z_{\tau}^{\perp}p_{\tau}\). Fix a partial isometry \(\bar{v}_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\) such that \[z_{\tau}\bar{v}_{\tau}^{*}\bar{v}_{\tau} =z_{\tau}p_{\tau}\quad\text{and}\quad z_{\tau}\bar{v}_{\tau}\bar{ v}_{\tau}^{*}\leq z_{\tau}q_{\tau}, \tag{7.21}\] \[z_{\tau}^{\perp}\bar{v}_{\tau}^{*}\bar{v}_{\tau} \leq z_{\tau}^{\perp}p_{\tau}\quad\text{and}\quad z_{\tau}^{\perp} \bar{v}_{\tau}\bar{v}_{\tau}^{*}=z_{\tau}^{\perp}q_{\tau}. \tag{7.20}\] In the special case where \(z_{\tau}=1_{\mathcal{M}}\), then (7.20) implies that \(p_{\tau}=\bar{v}_{\tau}^{*}q_{\tau}\bar{v}_{\tau}\); hence, the required inequality (7.16) is now a consequence of (7.17) and (7.15). Suppose now that \(z_{\tau}\neq 1_{\mathcal{M}}\). Then \(\tau(z_{\tau}^{\perp})\neq 0\), so we may define \(\sigma\in T(\pi_{\tau}(\mathcal{M})^{\prime\prime})\) by \(\sigma(a)\coloneqq\tau(z_{\tau}^{\perp})^{-1}\tau(z_{\tau}^{\perp}a)\). 
Then (7.20) implies that \(z_{\tau}p_{\tau}=z_{\tau}\bar{v}_{\tau}^{*}q_{\tau}\bar{v}_{\tau}\), and (7.21) implies that \(z_{\tau}^{\perp}p_{\tau}-z_{\tau}^{\perp}\bar{v}_{\tau}^{*}q_{\tau}\bar{v}_{\tau}\) is a projection. Therefore, we have \[\begin{split}\|p_{\tau}-\bar{v}_{\tau}^{*}q_{\tau}\bar{v}_{\tau}\|_{ 2,\tau}^{2}&=\|z_{\tau}(p_{\tau}-\bar{v}_{\tau}^{*}q_{\tau}\bar{v} _{\tau})+z_{\tau}^{\perp}(p_{\tau}-\bar{v}_{\tau}^{*}q_{\tau}\bar{v}_{\tau})\|_{ 2,\tau}^{2}\\ &=\|z_{\tau}^{\perp}p_{\tau}-z_{\tau}^{\perp}\bar{v}_{\tau}^{*}q_{ \tau}\bar{v}_{\tau}\|_{2,\tau}^{2}\\ &=\tau(z_{\tau}^{\perp}p_{\tau}-\bar{v}_{\tau}^{*}z_{\tau}^{\perp }q_{\tau}\bar{v}_{\tau})\\ &=\tau(z_{\tau}^{\perp}p_{\tau})-\tau(z_{\tau}^{\perp}\bar{q}_{ \tau}\bar{v}_{\tau}\bar{v}_{\tau}^{*})\\ &=\tau(z_{\tau}^{\perp})\big{(}\sigma(p_{\tau})-\sigma(q_{\tau}) \big{)}\\ &<5\epsilon^{\prime},\end{split} \tag{7.22}\] where the final estimate uses (7.19). Combining (7.22) with (7.18) proves (7.16). Now we use CPoU to obtain (7.14) from (7.16). By Kaplansky's density theorem, there is a contraction \(v_{\tau}\in\mathcal{M}\) such that \[\|v_{\tau}^{*}qv_{\tau}-p\|_{2,\tau}<\delta. \tag{7.23}\] Define \(a_{\tau}\coloneqq|v_{\tau}^{*}qv_{\tau}-p|^{2}\), and use this to define the open neighbourhood \[U_{\tau}\coloneqq\{\sigma\in X:\sigma(a_{\tau})<\delta^{2}\} \tag{7.24}\] of \(\tau\) in \(X\). As \(X\) is compact, there are \(\tau_{1},\dots,\tau_{k}\in X\) such that \((U_{\tau_{i}})_{i=1}^{k}\) covers \(X\). Therefore, \[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{\tau_{i}})<\delta^{2}. \tag{7.25}\] Set \(S:=\{v_{\tau_{i}}:i=1,\dots,k\}\cup\{p,q\}\). As \((\mathcal{M},X)\) has CPoU, there are mutually orthogonal projections \(e_{1},\dots,e_{k}\in\mathcal{M}^{\omega}\cap S^{\prime}\) such that \[\sum_{i=1}^{k}e_{i}=1_{\mathcal{M}^{\omega}}\qquad\text{and}\qquad\tau(a_{ \tau_{i}}e_{i})\leq\delta^{2}\tau(e_{i}) \tag{7.26}\] for all \(\tau\in X^{\omega}\) and \(i=1,\dots,k\). Define \(v\coloneqq\sum_{i=1}^{k}e_{i}v_{\tau_{i}}\in\mathcal{M}^{\omega}\). Since \(e_{1},\dots,e_{k}\) are mutually orthogonal projections summing to the identity which commute with \(\{v_{\tau_{i}}:i=1,\dots,k\}\cup\{p,q\}\), we have \[|v^{*}qv-p|^{2}=\sum_{i=1}^{k}|v_{\tau_{i}}^{*}qv_{\tau_{i}}-p|^{2}e_{i}=\sum_ {i=1}^{k}a_{\tau_{i}}e_{i}. \tag{7.27}\] Hence, for each \(\tau\in X^{\omega}\), \[\|v^{*}qv-p\|_{2,\tau}^{2}=\tau(|v^{*}qv-p|^{2})=\sum_{i=1}^{k}\tau(a_{\tau_{i }}e_{i})\leq\delta^{2}\sum_{j=1}^{k}\tau(e_{i})=\delta^{2}. \tag{7.28}\] Finally, if \((v_{n})_{n=1}^{\infty}\subseteq\mathcal{M}\) is a sequence representing \(v\), then \[\lim_{n\to\omega}\|v_{n}^{*}qv_{n}-p\|_{2,X}=\|v^{*}qv-p\|_{2,X^{\omega}}\leq \delta<(2\sqrt{2}+\sqrt{5})\sqrt{\epsilon}, \tag{7.29}\] and the result follows. **Proposition 7.4**.: _Suppose \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU and \(p,q\in\prod^{\omega}\mathcal{M}_{n}\) are projections._ * _If_ \(\tau(p)\leq\tau(q)\) _for all_ \(\tau\in\sum^{\omega}X_{n}\)_, then_ \(p\) _is Murray-von Neumann subequivalent to_ \(q\) _._ 2. _If_ \(\tau(p)=\tau(q)\) _for all_ \(\tau\in\sum^{\omega}X_{n}\)_, then_ \(p\) _and_ \(q\) _are unitarily equivalent._ Proof.: Lemma 7.3 immediately implies (i). For (ii), use (i) to obtain \(v\in\prod^{\omega}\mathcal{M}_{n}\) with \(v^{*}v=p\) and \(vv^{*}\leq q\). Since \(q-vv^{*}\geq 0\) and \[\tau(q-vv^{*})=\tau(q)-\tau(p)=0 \tag{7.30}\] for all \(\tau\in\sum^{\omega}X_{n}\), we have \(vv^{*}=q\). 
Repeating this argument with \(p^{\perp}\) and \(q^{\perp}\) in place of \(p\) and \(q\) produces a partial isometry \(w\in\prod^{\omega}\mathcal{M}_{n}\) such that \(w^{*}w=p^{\perp}\) and \(ww^{*}=q^{\perp}\). Then \(u\coloneqq v+w\) is a unitary with \(upu^{*}=q\). Combining Propositions 7.2 and 7.4 resolves the trace problem (Question 1.1) for reduced products of tracially complete \(C^{*}\)-algebras with CPoU. This generalises the case of McDuff \(W^{*}\)-bundles from [10, Proposition 3.32] (taking \(A\coloneqq\mathbb{C}\) in that proposition). **Theorem 7.5**.: _If \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU, then \(T\big{(}\prod^{\omega}\mathcal{M}_{n})=\sum^{\omega}X_{n}\)._ Proof.: Let \((\mathcal{M},X)\coloneqq\big{(}\prod^{\omega}\mathcal{M}_{n},\sum^{\omega}X_{ n}\big{)}\). Then \(\mathcal{M}\) has real rank zero by Proposition 7.2 and comparison of projections with respect to \(X\) by Proposition 7.4. Further, for all \(d\geq 1\), \(\mathcal{M}\otimes M_{d}\) has comparison of projections with respect to \(X\otimes\{\mathrm{tr}_{d}\}\) since Proposition 5.8 identifies \((\mathcal{M}\otimes M_{d},X\otimes\{\mathrm{tr}_{d}\})\) with \(\prod^{\omega}(\mathcal{M}_{n}\otimes M_{d},X_{n}\otimes\{\mathrm{tr}_{d}\})\) and each of the algebras \((\mathcal{M}_{n}\otimes M_{d},X_{n}\otimes\{\mathrm{tr}_{d}\})\) are factorial and have CPoU by Proposition 6.14. Let \(V(\mathcal{M})\) denote the Murray-von Neumann semigroup of \(\mathcal{M}\) and let \([p]\in V(\mathcal{M})\) denote the class of a projection \(p\in\mathcal{M}\otimes M_{d}\) for an integer \(d\geq 1\). For \(\tau\in T(\mathcal{M})\), let \(\hat{\tau}\) denote the induced state on \(V(\mathcal{M})\) given by \[\hat{\tau}([p])\coloneqq(\tau\otimes\mathrm{Tr}_{d})(p) \tag{7.31}\] for a projection \(p\) in \(\mathcal{M}\otimes M_{d}\), where \(\mathrm{Tr}_{d}\) is the unnormalised trace on \(M_{d}\) so that \(\mathrm{Tr}_{d}(1_{M_{d}})=d\). By comparison of projections in matrices over \(\mathcal{M}\), the natural map \[V(\mathcal{M})\to\mathrm{Aff}(X)\colon x\mapsto(\tau\mapsto\hat{\tau}(x)) \tag{7.32}\] is an order embedding. Now suppose \(\sigma\in T(\mathcal{M})\). By [9, Corollary 2.7], the state \(\hat{\sigma}\) on \(V(\mathcal{M})\) extends to a state \(\phi\) on \(\mathrm{Aff}(X)\). Since all states on \(\mathrm{Aff}(X)\) are point evaluations, (see Proposition 2.1(i)), there is a trace \(\tau\in X\) such that \(\phi(f)=f(\tau)\) for all \(f\in\mathrm{Aff}(X)\). Then, by construction, \(\hat{\sigma}=\hat{\tau}\), and hence \(\sigma(p)=\tau(p)\) for all projections \(p\in\mathcal{M}\). As \(\mathcal{M}\) has real rank zero by Proposition 7.2, \(\mathcal{M}\) is the \(\|\cdot\|\)-closed span of its projections, and hence \(\sigma=\tau\in X\). **Remark 7.6**.: Versions of Theorem 7.5 have appeared previously in [10, Proposition 3.22] and [23, Proposition 4.6] in the settings of ultrapowers of factorial McDuff \(W^{*}\)-bundles and uniform tracial ultrapowers of \(C^{*}\)-algebras with CPoU, respectively. The proofs in these references are based on a result of Fack and de la Harpe ([43, Theoreme 2.3]). This style of argument could also be adapted to the setting of tracially complete \(C^{*}\)-algebras, providing an alternative proof of Theorem 7.5. Theorem 7.5 also proves that factoriality passes to reduced products in the presence of CPoU. 
This allows us to prove both property \(\Gamma\) and CPoU pass to reduced products as promised in Remarks 5.26 and 6.13. **Corollary 7.7**.: _Suppose \(\bigl{(}(\mathcal{M}_{n},X_{n})\bigr{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with reduced product \((\mathcal{M},X)\)._ * _If each_ \((\mathcal{M}_{n},X_{n})\) _has CPoU, then_ \((\mathcal{M},X)\) _is factorial and has CPoU._ * _If each_ \((\mathcal{M}_{n},X_{n})\) _satisfies property_ \(\Gamma\)_, then_ \((\mathcal{M},X)\) _is factorial and satisfies property_ \(\Gamma\)_._ Proof.: To see (i), note that \((\mathcal{M},X)\) is factorial by Theorem 7.5. The approximation property in Proposition 6.2(ii) characterising CPoU passes to reduced products, so \((\mathcal{M},X)\) satisfies CPoU. For (ii), Theorem 1.4 implies each \((\mathcal{M}_{n},X_{n})\) satisfies CPoU, so \((\mathcal{M},X)\) is factorial by (i). The approximation property in Proposition 5.23(ii) characterising \(\Gamma\) passes to reduced products, and hence \((\mathcal{M},X)\) satisfies property \(\Gamma\). Corollary 7.7 and Kirchberg's \(\epsilon\)-test (Lemma 5.1) allow for the following variation of CPoU for reduced products. The point is that the projections \(p_{i}\) in the definition of CPoU can be chosen in the reduced product instead of the reduced power of the reduced product. **Corollary 7.8**.: _Suppose \(\bigl{(}(\mathcal{M}_{n},X_{n})\bigr{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU. If \(a_{1},\ldots,a_{k}\in(\prod^{\omega}\mathcal{M}_{n})_{+}\), \(S\subseteq\prod^{\omega}\mathcal{M}_{n}\) is \(\|\cdot\|_{2,X^{\omega}}\)-separable, and_ \[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i})<\delta, \tag{7.33}\] _then there are projections \(p_{1},\ldots,p_{k}\in\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) such that_ \[\sum_{i=1}^{k}p_{i}=1_{\mathcal{M}^{\omega}}\qquad\text{and}\qquad\tau(a_{i}p _{i})\leq\delta\tau(p_{i}) \tag{7.34}\] _for all \(1\leq i\leq k\) and \(\tau\in X^{\omega}\)._ ### Structure of unitaries With Corollary 7.7 in hand, we derive some properties unitaries in tracially complete \(C^{*}\)-algebras with CPoU. The following result and its proof are analogous to [21, Proposition 2.1]. **Proposition 7.9**.: _Suppose \(\bigl{(}(\mathcal{M}_{n},X_{n})\bigr{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU and \(S\subseteq\prod^{\omega}\mathcal{M}_{n}\) is a \(\|\cdot\|_{2,X^{\omega}}\)-separable subset. If \(u\in\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) is a unitary, then there is a self-adjoint \(h\in\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) such that \(u=e^{ih}\) and \(\|h\|\leq\pi\)._ Proof.: Since this theorem involves the numbers \(\pi\) and \(i=\sqrt{-1}\), we shall write \(\sigma_{\tau}\) to denote the GNS representation corresponding to a trace \(\tau\) and use the letter \(j\) for our summation index. For the sake of brevity, we write \((\mathcal{M},X)\coloneqq\bigl{(}\prod^{\omega}\mathcal{M}_{n},\sum^{\omega}X _{n}\bigr{)}\). Fix \(\epsilon>0\) and a finite set \(\mathcal{F}\subseteq\mathcal{M}\). By Kirchberg's \(\epsilon\)-test (Lemma 5.1), it suffices to show that there is a self-adjoint \(h\in\mathcal{M}\) with \(\|h\|\leq\pi\) such that \[\|u-e^{ih}\|_{2,X}\leq\epsilon\qquad\text{and}\qquad\max_{x\in\mathcal{F}}\|[ h,x]\|_{2,X}\leq\epsilon. 
\tag{7.35}\] For each \(\tau\in X\), there is a self-adjoint \(\bar{h}_{\tau}\in\sigma_{\tau}(\mathcal{M})^{\prime\prime}\cap\sigma_{\tau}( \mathcal{F})^{\prime}\) with \(\|\bar{h}_{\tau}\|\leq\pi\) such that \(\sigma_{\tau}(u)=e^{i\bar{h}_{\tau}}\). By Kaplansky's density theorem, there is a self-adjoint \(h_{\tau}\in\mathcal{M}\) with \(\|h_{\tau}\|\leq\pi\) such that \[\|u-e^{ih_{\tau}}\|_{2,\tau} <(|\mathcal{F}|+1)^{-1/2}\epsilon\qquad\text{and}\] \[\max_{x\in\mathcal{F}}\|[h_{\tau},x]\|_{2,\tau} <(|\mathcal{F}|+1)^{-1/2}\epsilon.\] Then define \[a_{\tau}\coloneqq|u-e^{ih_{\tau}}|^{2}+\sum_{x\in\mathcal{F}}|[h_{\tau},x]|^{2 }\in\mathcal{M}_{+}, \tag{7.36}\] so that \(\tau(a_{\tau})<\epsilon^{2}\). By the compactness of \(X\), there are \(\tau_{1},\dots,\tau_{k}\in X\) such that \[\sup_{\tau\in X}\min_{1\leq j\leq k}\tau(a_{\tau_{j}})<\epsilon^{2}. \tag{7.37}\] By CPoU (in the form of Corollary 7.8), there are mutually orthogonal projections \[p_{1},\dots,p_{k}\in\mathcal{M}\cap\mathcal{F}^{\prime}\cap\{u\}^{\prime}\cap \{h_{\tau_{1}},\dots,h_{\tau_{k}}\}^{\prime} \tag{7.38}\] such that \[\sum_{j=1}^{k}p_{j}=1_{\mathcal{M}}\qquad\text{and}\qquad\tau(a_{\tau_{j}}p_{j })\leq\epsilon^{2}\tau(p_{j}) \tag{7.39}\] for all \(1\leq j\leq k\) and \(\tau\in X\). Define \(h\coloneqq\sum_{j=1}^{k}h_{\tau_{j}}p_{j}\in\mathcal{M}\) and note that \(\|h\|\leq\pi\) as \(\|h_{\tau_{j}}\|\leq\pi\) for all \(1\leq j\leq k\) as the \(p_{j}\) are mutually orthogonal projections commuting with the \(h_{\tau_{j}}\). Also, for each \(\tau\in X\), we have \[\|u-e^{ih}\|_{2,\tau}^{2}\leq\sum_{j=1}^{k}\tau(a_{\tau_{j}}p_{j})\leq \epsilon^{2}\sum_{j=1}^{k}\tau(p_{j})=\epsilon^{2}, \tag{7.40}\] and for each \(\tau\in X\) and \(x\in\mathcal{F}\), \[\|[h,x]\|_{2,\tau}^{2}\leq\sum_{j=1}^{k}\tau(a_{\tau_{j}}p_{j})\leq\epsilon^{ 2}\sum_{j=1}^{k}\tau(p_{j})=\epsilon^{2}. \tag{7.41}\] This verifies (7.35). Applying Proposition 7.9 to matrix algebras (with \(S=\emptyset\)) yields the following \(K\)-theoretic computation. **Corollary 7.10**.: _If \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU, then \(K_{1}\big{(}\prod^{\omega}\mathcal{M}_{n}\big{)}=0\)._ Proof.: Proposition 7.9 implies that the unitary group of \(\prod^{\omega}\mathcal{M}_{n}\) is path-connected, and Propositions 6.14 and 5.8 imply the same result for matrices over \(\prod^{\omega}\mathcal{M}_{n}\). The following result facilitates the use of Theorem 5.11 in the presence of CPoU. **Corollary 7.11**.: _If \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU and \(u\in\prod^{\omega}\mathcal{M}_{n}\) is a unitary, then there is a sequence of unitaries \((u_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}\mathcal{M}_{n}\) lifting \(u\)._ Using a sequence of counterexamples argument (see for example [71]), Corollary 7.11 also provides a uniform \(2\)-norm stability result for unitaries. **Corollary 7.12**.: _For all \(\epsilon,c>0\), there is a \(\delta>0\) such that if \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU and \(v\in\mathcal{M}\) with \(\|v\|\leq c\) and \(\|v^{*}v-1\|_{2,X}<\delta\), there is a unitary \(u\in\mathcal{M}\) such that \(\|u-v\|_{2,X}<\epsilon\)._ Proof.: Suppose the result is false and that \(\epsilon,c>0\) provide a counterexample. 
For each \(n\in\mathbb{N}\), let \((\mathcal{M}_{n},X_{n})\) be a tracially complete \(C^{*}\)-algebra with CPoU and let \(v_{n}\in\mathcal{M}_{n}\) be such that \(\|v_{n}\|\leq c\), \(\|v_{n}^{*}v_{n}-1\|_{2,X_{n}}<\frac{1}{n}\), but for all unitaries \(u\in\mathcal{M}_{n}\), we have \[\|u-v_{n}\|_{2,X_{n}}\geq\epsilon. \tag{7.42}\] The sequence \((v_{n})_{n=1}^{\infty}\) induces an element \(v\in\prod^{\omega}\mathcal{M}_{n}\). We have \(\|v^{*}v-1\|_{2,\sum^{\omega}X_{n}}=0\), and hence \(v\) is an isometry. Since \(1-vv^{*}\geq 0\) and \(\tau(1-vv^{*})=0\) for all \(\tau\in X\), we further have that \(v\) is unitary. By Corollary 7.11, there is a sequence of unitaries \((u_{n})_{n=1}^{\infty}\in\prod_{n=1}^{\infty}\mathcal{M}_{n}\) lifting \(v\). Now, \[\lim_{n\to\omega}\|u_{n}-v_{n}\|_{2,X_{n}}=0, \tag{7.43}\] which contradicts (7.42). Proposition 7.9 also implies that if \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU, then there is a \(\|\cdot\|_{2,X}\)-dense set of unitaries in \(\mathcal{M}\) that have the form \(e^{ih}\) for a self-adjoint \(h\in\mathcal{M}\) with \(\|h\|\leq\pi\). We do not know if every unitary in \(\mathcal{M}\) has this form. Without CPoU, there are certainly commutative counterexamples such as \(\big{(}C(\mathbb{T}),T(C(\mathbb{T}))\big{)}\), but we do not know of any counterexamples among type II\({}_{1}\) tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) - with or without CPoU -, or even among tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) for which \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is diffuse for each \(\tau\in X\). **Question 7.13**.: If \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU, is every unitary in \(\mathcal{M}\) an exponential? Slightly less ambitiously, is the unitary group of \(\mathcal{M}\) path connected in the \(C^{*}\)-norm topology? In every finite von Neumann algebra, every element has a unitary polar decomposition. The same holds in reduced products of factorial tracially complete \(C^{*}\)-algebras with CPoU. In fact, this can be done in an approximately central way. **Proposition 7.14**.: _Suppose \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU and \(S\subseteq\prod^{\omega}\mathcal{M}_{n}\) is a \(\|\cdot\|_{2,X^{\omega}}\)-separable subset. If \(a\in\mathcal{M}\cap S^{\prime}\), then there is a unitary \(u\in\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) such that \(a=u|a|\). Further, if \(a=a^{*}\), we may arrange that \(u=u^{*}\)._ Proof.: For the sake of brevity, define \((\mathcal{M},X)\coloneqq\big{(}\prod^{\omega}\mathcal{M}_{n},\sum^{\omega}X_ {n}\big{)}\). We may assume \(S\) is a set of contractions. Fix \(\epsilon>0\) and a finite set \(\mathcal{F}\subseteq S\). For the first part, by Kirchberg's \(\epsilon\)-test, it suffices to show that if \(a\in\mathcal{M}\), then there is a unitary \(u\in\mathcal{M}\) such that \[\|a-u|a|\|_{2,X}\leq\epsilon\qquad\text{and}\qquad\max_{b\in\mathcal{F}}\|[u,b ]\|_{2,X}\leq\epsilon. \tag{7.44}\] By the existence of unitary polar decompositions in finite von Neumann algebras, for each \(\tau\in X\), there is a unitary \(\bar{u}_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\cap\pi_{\tau}(\mathcal{ F})^{\prime}\) such that \[\pi_{\tau}(a)=\bar{u}_{\tau}\pi_{\tau}(|a|). 
\tag{7.45}\] Since \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\cap\pi_{\tau}(\mathcal{F})^{\prime}\) is a von Neumann algebra, \(\bar{u}_{\tau}=e^{i\bar{h}_{\tau}}\) for some self-adjoint \(\bar{h}_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\cap\pi_{\tau}( \mathcal{F})^{\prime}\). By Kaplansky's density theorem, we can approximate \(\bar{h}_{\tau}\) by a self-adjoint in \(\pi_{\tau}(\mathcal{M})\) and lift to a self-adjoint element \(h_{\tau}\in\mathcal{M}\) such that unitary \(u_{\tau}\coloneqq e^{ih_{\tau}}\in\mathcal{M}\) satisfies \[\begin{split}\|a-u_{\tau}|a|\|_{2,\tau}&<(2+| \mathcal{F}|)^{-1/2}\epsilon,\\ \max_{b\in\mathcal{F}}\|[u_{\tau},b]\|_{2,\tau}&<(2+| \mathcal{F}|)^{-1/2}\epsilon.\end{split} \tag{7.46}\] For \(\tau\in X\), define \[c_{\tau}\coloneqq\big{|}a-u_{\tau}|a|\big{|}^{2}+\sum_{b\in\mathcal{F}}\big{|} [u_{\tau},b]\big{|}^{2}\in\mathcal{M}_{+}, \tag{7.47}\] and note that \(\tau(c_{\tau})<\epsilon^{2}\) for all \(\tau\in X\). By the compactness of \(X\), there are \(\tau_{1},\dots,\tau_{k}\in X\) such that \[\sup_{\tau\in X}\min_{1\leq j\leq k}\tau(c_{\tau_{j}})<\epsilon^{2}. \tag{7.48}\] By CPoU (in the form of Corollary 7.8), there are mutually orthogonal projections \[p_{1},\dots,p_{k}\in\mathcal{M}\cap\mathcal{F}^{\prime}\cap\{a\}^{\prime}\cap \{u_{\tau_{1}},\dots,u_{\tau_{k}}\}^{\prime} \tag{7.49}\] such that \[\sum_{j=1}^{k}p_{j}=1_{\mathcal{M}}\qquad\text{and}\qquad\tau(c_{\tau_{j}}p_{ j})\leq\epsilon^{2}\tau(p_{j}), \tag{7.50}\] for all \(1\leq j\leq k\) and \(\tau\in X\). Define \(u\coloneqq\sum_{i=j}^{n}u_{\tau_{j}}p_{j}\). As the \(u_{\tau_{j}}\) are unitaries commuting with the \(p_{j}\), we have that \(u\) is a unitary. We compute that for all \(\tau\in X\) and \(b\in\mathcal{F}\), \[\|a-u|a|\|_{2,\tau}^{2}\leq\sum_{j=1}^{k}\tau(c_{\tau_{j}}p_{j})\leq\epsilon^{ 2}\sum_{j=1}^{k}\tau(p_{j})=\epsilon^{2} \tag{7.51}\] and \[\|[u,b]\|_{2,\tau}^{2}\leq\sum_{j=1}^{k}\tau(c_{\tau_{j}}p_{j})\leq\epsilon^{ 2}\sum_{j=1}^{k}\tau(p_{j})=\epsilon^{2} \tag{7.52}\] which implies (7.44), as required. In the case that \(a^{*}=a\), the unitaries \(\bar{u}_{\tau}\) can be taken to be self-adjoint. For this, by ensuring the unitaries \(u_{\tau}\) are chosen with \(\|u_{\tau}-u_{\tau}^{*}\|_{2,\tau}\) sufficiently small, and replacing \(c_{\tau}\) with \(c_{\tau}+|u_{\tau}-u_{\tau}^{*}|^{2}\) in the above proof, one may also arrange \(\|u-u^{*}\|_{2,X^{\omega}}\leq\epsilon\) in (7.44). As promised, we have the following strengthened version of Proposition 7.2. **Corollary 7.15**.: _If \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of factorial tracially complete \(C^{*}\)-algebras with CPoU and \(S\subseteq\prod^{\omega}\mathcal{M}_{n}\) is a \(\|\cdot\|_{2,X^{\omega}}\)-separable subset, then \(\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) has real rank zero and stable rank one._ Proof.: Suppose \(a\in\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) and write \(a=u|a|\) for a unitary \(u\in\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) as in Proposition 7.14. Then for each \(\epsilon>0\), the element \(b\coloneqq u(|a|+\epsilon)\) is invertible and \(\|a-b\|\leq\epsilon\). Therefore, \(\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) has stable rank one. If \(a\) is self-adjoint and we take \(u=u^{*}\), then \(u\) and \(|a|\) commute. It follows that \(b\) is self-adjoint, and this shows \(\prod^{\omega}\mathcal{M}_{n}\cap S^{\prime}\) has real rank zero. 
In the same spirit as the comments after Corollary 7.11, an approximate version of stable rank one can be obtained for all factorial tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) with CPoU. Indeed, combining Proposition 7.14 (with \(S=\emptyset\)) and Corollary 7.11 shows that a \(\|\cdot\|_{2,X}\)-dense set of elements \(a\in\mathcal{M}\) have the form \(a=u|a|\) for some unitary \(u\in\mathcal{M}\). Then the proof of Corollary 7.15 shows that a \(\|\cdot\|_{2,X}\)-dense set of elements in \(\mathcal{M}\) are invertible. The analogous statement for self-adjoint elements is less clear - this would require a version of Corollary 7.11 for self-adjoint unitaries (or, equivalently, for projections). **Question 7.16**.: If \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU, does \(\mathcal{M}\) have real rank zero or stable rank one? ### Classification of projections Questions 7.13 and 7.16 expose a common drawback of the CPoU technique - even when exact results or \(\|\cdot\|\)-approximate results are possible in finite von Neumann algebras, a direct application of CPoU will always introduce a uniform 2-norm error. In the case of classification of projections, we can overcome this defect by controlling the 2-norm distance from the unitary implementing the equivalence to the unit. The following lemma and its proof is analogous to [105, Lemma XIV.2.1] where the result is shown for finite von Neumann algebras. It is possible to prove this result by combining the von Neumann algebra result with a CPoU argument similar to those above, but with the structural results of reduced products already obtained, a more direct proof is possible. **Lemma 7.17**.: _Suppose \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU. If \(p,q\in\mathcal{M}\) are projections with \(\tau(p)=\tau(q)\) for all \(\tau\in X\) and \(\epsilon>0\), then there is a unitary \(u\in\mathcal{M}\) such that_ \[\|upu^{*}-q\|_{2,X}<\epsilon\qquad\text{and}\qquad\|u-1_{\mathcal{M}}\|_{2,X} <2\sqrt{2}\|p-q\|_{2,X}+\epsilon. \tag{7.53}\] Proof.: By the liftability of unitaries in reduced products (Corollary 7.11), it suffices to show there is a unitary \(u\in\mathcal{M}^{\omega}\) such that \[upu^{*}=q\qquad\text{and}\qquad\|u-1_{\mathcal{M}^{\omega}}\|_{2,X^{\omega}} \leq 2\sqrt{2}\|p-q\|_{2,X}. \tag{7.54}\] By Proposition 7.4, there is a unitary \(v\in\mathcal{M}^{\omega}\) such that \(vpv^{*}=q\). Set \(a\coloneqq pvp+p^{\perp}vp^{\perp}\). As \(a\in\mathcal{M}^{\omega}\cap\{p\}^{\prime}\), Proposition 7.14 implies there is a unitary \(w\in\mathcal{M}^{\omega}\cap\{p\}^{\prime}\) such that \(a=w|a|\). Then \(u\coloneqq vw^{*}\) is a unitary with \(upu^{*}=q\). Note that \((pvp-pv)^{*}(p^{\perp}vp^{\perp}-p^{\perp}v)=0\). Therefore, by the Pythagorean identity, \[\begin{split}\|a-v\|_{2,X^{\omega}}^{2}&=\|pvp-pv\|_{ 2,X^{\omega}}^{2}+\|p^{\perp}vp^{\perp}-p^{\perp}v\|_{2,X^{\omega}}^{2}\\ &=\|p(q-p)v\|_{2,X^{\omega}}^{2}+\|p^{\perp}(q^{\perp}-p^{\perp}) v\|_{2,X^{\omega}}^{2}\\ &\leq 2\|p-q\|_{2,X}^{2}.\end{split} \tag{7.55}\] Further, since \(\|a\|\leq 1\), we have \((1_{\mathcal{M}^{\omega}}-|a|)^{2}\leq(1_{\mathcal{M}^{\omega}}-|a|^{2})^{2}\) in \(\mathcal{M}^{\omega}\). 
Therefore, \[\begin{split}\|w-a\|_{2,X^{\omega}}^{2}&=\|1-|a| \|_{2,X^{\omega}}^{2}\\ &\leq\|1-|a|^{2}\|_{2,X^{\omega}}^{2}\\ &=\|p-pv^{*}pvp\|_{2,X^{\omega}}^{2}+\|p^{\perp}-p^{\perp}v^{*}p^ {\perp}vp^{\perp}\|_{2,X^{\omega}}^{2}\\ &=\|pv^{*}(q-p)vp\|_{2,X^{\omega}}^{2}+\|p^{\perp}v^{*}(q^{\perp}- p^{\perp})vp\|_{2,X^{\omega}}^{2}\\ &\leq 2\|p-q\|_{2,X}^{2}.\end{split} \tag{7.56}\] Combining (7.55) and (7.56) shows \[\|u-1_{\mathcal{M}^{\omega}}\|_{2,X^{\omega}}=\|v-w\|_{2,X^{\omega}}\leq 2 \sqrt{2}\|p-q\|_{2,X}.\qed \tag{7.57}\] The second part of the following result is Theorem 1.3(ii) from the introduction. Note also that taking \((\mathcal{M},X)\) to be a trivial \(W^{*}\)-bundle over a compact Hausdorff space \(K\) with fibre being a II\({}_{1}\) factor with property \(\Gamma\) in the following result proves Theorem E from the overview, using Theorem 1.4 to verify the CPoU hypothesis. **Theorem 7.18**.: _Suppose \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU and \(p,q\in\mathcal{M}\) are projections._ 1. _If_ \(\tau(p)\leq\tau(q)\) _for all_ \(\tau\in X\)_, then_ \(p\) _is Murray-von Neumann subequivalent to_ \(q\)_._ 2. _If_ \(\tau(p)=\tau(q)\) _for all_ \(\tau\in X\)_, then_ \(p\) _and_ \(q\) _are unitarily equivalent._ Proof.: First we show (ii). Let \((\epsilon_{n})_{n=1}^{\infty}\subseteq(0,\infty)\) be a decreasing sequence with \(\sum_{n=1}^{\infty}\epsilon_{n}<\infty\). We will construct a sequence of unitaries \((u_{n})_{n=1}^{\infty}\subseteq\mathcal{M}\) such that \[\begin{split}\|u_{n}pu_{n}^{*}-q\|_{2,X}&<\epsilon_{n },\quad\quad\quad n\geq 1,\\ \|u_{n+1}-u_{n}\|_{2,X}&<4\epsilon_{n-1},\quad\quad n \geq 2.\end{split} \tag{7.58}\] Then \((u_{n})_{n=1}^{\infty}\subseteq\mathcal{M}\) is a \(\|\cdot\|\)-bounded, \(\|\cdot\|_{2,X}\)-Cauchy sequence, and hence converges to some \(u\in\mathcal{M}\). Since multiplication is \(\|\cdot\|_{2,X}\)-continuous on \(\|\cdot\|\)-bounded sets, it follows that \(u\) is a unitary and \(upu^{*}=q\). The sequence \((u_{n})_{n=1}^{\infty}\) will be constructed inductively. Lemma 7.17 implies there is a unitary \(u_{1}\in\mathcal{M}\) such that \[\|u_{1}pu_{1}^{*}-q\|_{2,X}<\epsilon_{1}. \tag{7.59}\] Assuming \(u_{n}\) has been constructed, applying Lemma 7.17 to the projections \(u_{n}pu_{n}^{*}\) and \(q\), there is a unitary \(v_{n}\in\mathcal{M}\) such that \[\begin{split}\|v_{n}u_{n}pu_{n}^{*}v_{n}^{*}-q\|_{2,X}& <\epsilon_{n+1},\\ \|v_{n}-1_{\mathcal{M}}\|_{2,X}&<2\sqrt{2}\epsilon_{ n}+\epsilon_{n+1}.\end{split} \tag{7.60}\] Now set \(u_{n+1}\coloneqq v_{n}u_{n}\). Then \(\|u_{n+1}pu_{n+1}-q\|_{2,X}<\epsilon_{n+1}\) and \[\begin{split}\|u_{n+1}-u_{n}\|_{2,X}&\leq\|u_{n}\| \|v_{n}-1\|_{2,X}\\ &<2\sqrt{2}\epsilon_{n}+\epsilon_{n+1}\\ &\leq 3\epsilon_{n}+\epsilon_{n}\\ &=4\epsilon_{n},\end{split} \tag{7.61}\] as required. Now we show (i). By Proposition 7.4, there is a partial isometry \(v\in\mathcal{M}^{\infty}\) such that \(v^{*}v=p\) and \(vv^{*}\leq q\). Suppose \(r\colon\mathbb{N}\to\mathbb{N}\) is a function with \(\lim_{n\to\infty}r(n)=\infty\) and consider the reparameterisation morphism \(r^{*}\colon\mathcal{M}^{\infty}\to\mathcal{M}^{\infty}\) as defined just before Theorem 5.11. Since \(p,q\in\mathcal{M}\), we have \(r^{*}(p)=p\) and \(r^{*}(q)=q\). For all \(\tau\in X^{\infty}\), \[\tau(r^{*}(q-vv^{*}))=\tau(r^{*}(q))-\tau(r^{*}(p))=\tau(q)-\tau(p)=\tau(q-vv^{ *}). \tag{7.62}\] By Proposition 7.4, \(q-vv^{*}\) and \(r^{*}(q-vv^{*})\) are unitarily equivalent in \(\mathcal{M}^{\infty}\). 
By Corollary 7.11, unitaries in \(\mathcal{M}^{\infty}\) lift to unitaries in \(\ell^{\infty}(\mathcal{M})\), so we may apply intertwining through reparameterisation (Theorem 5.11) in the case where \(S\) is a one point space, and obtain a projection \(p^{\prime}\in\mathcal{M}\) that is unitarily equivalent to \(q-vv^{*}\) in \(\mathcal{M}^{\infty}\). The projections \(p\oplus p^{\prime}\) and \(q\oplus 0\) in \(M_{2}(\mathcal{M})\) agree on traces. By Proposition 6.14, \(M_{2}(\mathcal{M})\) has CPoU. Hence, \(p\oplus p^{\prime}\) and \(q\oplus 0\) are unitarily equivalent by (ii). Therefore, \(p\precsim q\) in \(M_{2}(\mathcal{M})\) and so also in \(\mathcal{M}\). The uniqueness theorem for projections is complemented by a corresponding existence theorem building projections with prescribed behaviour on traces. This is obtained by applying CPoU to Proposition 2.8(iv). Note that the following result is precisely Theorem 1.3(ii) from the introduction. **Theorem 7.19**.: _Suppose \((\mathcal{M},X)\) is a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra with CPoU. If \(f\in\mathrm{Aff}(X)\) with \(0\leq f\leq 1\), then there is a projection \(p\in\mathcal{M}\) such that \(\tau(p)=f(\tau)\) for all \(\tau\in T(\mathcal{M})\)._ Proof.: It suffices to construct a projection \(p\in\mathcal{M}^{\infty}\) such that \[\tau(p)=f(\tau|_{\mathcal{M}}),\qquad\tau\in X^{\infty}. \tag{7.63}\] Indeed, assuming this has been done, for every function \(r\colon\mathbb{N}\to\mathbb{N}\) with \(\lim_{n\to\infty}r(n)=\infty\), we have \[\tau(r^{*}(p))=f((\tau\circ r^{*})|_{\mathcal{M}})=f(\tau|_{\mathcal{M}})=\tau (p) \tag{7.64}\] for all \(\tau\in X^{\infty}\), and hence by Lemma 7.3, \(r^{*}(p)\) and \(p\) are unitarily equivalent. Since unitaries in \(\mathcal{M}^{\infty}\) lift to unitaries in \(\ell^{\infty}(\mathcal{M})\) (Corollary 7.11), intertwining through reparameterisation (Theorem 5.11) implies that there is a projection \(p_{0}\in\mathcal{M}\) that is unitarily equivalent to \(p\). Then \(\tau(p_{0})=f(\tau)\) for all \(\tau\in X\). We work to build a projection \(p\in\mathcal{M}^{\infty}\) satisfying (7.63). First, we extend \(f\) to all of \(T(\mathcal{M})\) using Theorem 2.3. Then, by Proposition 2.7, there is a self-adjoint \(c\in\mathcal{M}\) such that \(\tau(c)=f(\tau)\) for all \(\tau\in T(\mathcal{M})\). Fix \(\tau\in X\) and \(\epsilon>0\). Since \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is type \(\mathrm{II}_{1}\), Proposition 2.8(iv) implies there is a projection \(\bar{p}_{\tau}\in\pi_{\tau}(\mathcal{M})^{\prime\prime}\) such that \(\sigma(\bar{p}_{\tau})=\sigma(\pi_{\tau}(c))\) for all \(\sigma\in T(\pi_{\tau}(\mathcal{M})^{\prime\prime})\). Therefore, by [43, Theoreme 3.2] and Kaplansky's density theorem, there are a positive contraction \(p_{\tau}\in\mathcal{M}\) and elements \(x_{j,\tau},y_{j,\tau}\in\mathcal{M}\), for \(j=1,\ldots,10\), of norm at most \(12(\|c\|+1)\) such that \[\left\|p_{\tau}-c-\sum_{j=1}^{10}[x_{j,\tau},y_{j,\tau}]\right\|_{2,\tau}^{2}< \epsilon\quad\text{and}\quad\tau(p_{\tau}-p_{\tau}^{2})<\epsilon \tag{7.65}\] for all \(\tau\in X\). Define \[a_{\tau}\coloneqq p_{\tau}-p_{\tau}^{2}+\left|p_{\tau}-c-\sum_{j=1}^{10}[x_{j, \tau},y_{j,\tau}]\right|^{2}\in\mathcal{M}_{+} \tag{7.66}\] and note that \(\tau(a_{\tau})<2\epsilon\). 
As \(X\) is compact, there are \(\tau_{1},\ldots,\tau_{k}\in X\) such that \[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{\tau_{i}})<2\epsilon.\] By CPoU, there are projections \(q_{1},\ldots,q_{k}\in\mathcal{M}^{\infty}\) summing to \(1_{\mathcal{M}^{\infty}}\) and commuting with \(c\), \(p_{\tau_{i}}\), \(x_{j,\tau_{i}}\) and \(y_{j,\tau_{i}}\) for all \(i=1,\ldots,k\) and \(j=1,\ldots,10\), such that \[\tau(q_{i}a_{\tau_{i}})\leq 2\epsilon\tau(q_{i}),\qquad\tau\in X^{\infty}, \quad i=1,\ldots,k. \tag{7.67}\] Define \[p\coloneqq\sum_{i=1}^{k}q_{i}p_{\tau_{i}},\quad x_{j}\coloneqq\sum_{i=1}^{k}q _{i}x_{j,\tau_{i}},\quad\text{and}\quad y_{j}\coloneqq\sum_{i=1}^{k}q_{i}y_{j,\tau_{i}} \tag{7.68}\] for \(j=1,\ldots,10\). Then we have \[\left\|p-c-\sum_{i=1}^{10}[x_{j},y_{j}]\right\|_{2,X^{\infty}}\leq\sqrt{2 \epsilon}\quad\text{and}\quad\sup_{\tau\in X^{\infty}}\tau(p-p^{2})\leq\sqrt{ 2}\epsilon \tag{7.69}\] The result of the previous paragraph and Kirchberg's \(\epsilon\)-test implies there are a projection \(p\in\mathcal{M}^{\infty}\) and elements \(x_{j},y_{j}\in\mathcal{M}^{\infty}\) such that \[p=c+\sum_{j=1}^{10}[x_{j},y_{j}]. \tag{7.70}\] Then for all \(\tau\in X^{\infty}\), \(\tau(p)=\tau(c)=f(\tau|_{\mathcal{M}})\) as required in (7.63). The classification of projections given in Theorems 7.18 and 7.19 allows us to compute the Murray-von Neumann semigroup and the \(K_{0}\)-group for factorial tracially complete \(C^{*}\)-algebra with CPoU. This is immediate from the previous two results and Propositions 5.8 and 6.14, which allow us to apply the previous two results to projections in matrix amplifications. **Corollary 7.20**.: _Suppose \((\mathcal{M},X)\) is a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra with CPoU. The natural maps_ \[V(\mathcal{M})\to\mathrm{Aff}(X)_{+}\qquad\text{and}\qquad K_{0}(\mathcal{M}) \to\mathrm{Aff}(X) \tag{7.71}\] _are isomorphisms of ordered monoids and ordered groups, respectively._ ## 8. Hyperfiniteness There is a natural notion of hyperfiniteness for tracially complete \(C^{*}\)-algebras which is modelled on the Murray-von Neumann notion of hyperfiniteness for \(\mathrm{II}_{1}\) factors from [83]. It is defined by a local approximation condition, asking that every finite set is approximately contained in a finite dimensional \(C^{*}\)-subalgebra and the existence of an inductive limit decomposition. Section 8.1 contains the definition of hyperfiniteness along with the statements of all the results needed to prove the regularity theorem: namely, hyperfiniteness implies amenability and CPoU. The proof of the latter implication occupies most of this section. En route to proving CPoU, we prove an inductive limit decomposition for hyperfinite tracially complete \(C^{*}\)-algebras in the separable setting (Theorem 8.4). Section 8.2 requires the finite dimensional perturbation lemmas required to obtain the inductive limit decomposition. The proofs of Theorems 8.3 and 8.4 are given in Section 8.3. ### Definition and main properties We define hyperfiniteness analogously to Murray and von Neumann's notion for \(\mathrm{II}_{1}\) factors in [83]. **Definition 8.1**.: We say that a tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) is _hyperfinite_ if for every finite set \(\mathcal{F}\subseteq\mathcal{M}\) and \(\epsilon>0\), there is a finite dimensional \(C^{*}\)-algebra \(F\subseteq\mathcal{M}\) such that for every \(a\in\mathcal{F}\), there is a \(b\in F\) with \(\|a-b\|_{2,X}<\epsilon\). 
The passage from hyperfiniteness to amenability is a little subtle. The obvious approach is to verify the completely positive approximation property directly by taking the "downward" maps to be conditional expectations onto finite dimensional subalgebras and the "upward" maps to be the inclusions. For the estimates to match up correctly, one must arrange for the conditional expectations to be continuous in the uniform 2-norms, and it is not clear if this can be done. This issue does not arise in the setting of tracial von Neumann algebras (as the trace-preserving conditional expectation onto a subalgebra is necessarily 2-norm contractive), so we can obtain amenability by working locally. **Theorem 8.2**.: _Every hyperfinite tracially complete \(C^{*}\)-algebra is amenable._ Proof.: Suppose \((\mathcal{M},X)\) is a hyperfinite tracially complete \(C^{*}\)-algebras. For all \(\tau\in X\), the tracial von Neumann algebra \(\pi_{\tau}(\mathcal{M})^{\prime\prime}\) is hyperfinite and hence is semidiscrete. The result follows from Theorem 1.2 (which was proved in Theorem 4.9). The following result obtaining CPoU from hyperfiniteness is more subtle yet. **Theorem 8.3**.: _Every hyperfinite factorial tracially complete \(C^{*}\)-algebra satisfies CPoU._ It is easy to see that finite dimensional \(C^{*}\)-algebras satisfy CPoU in the sense that if \(F\) is a finite dimensional \(C^{*}\)-algebra, then \(\big{(}F,T(F)\big{)}\) is a factorial tracially complete \(C^{*}\)-algebras with CPoU (this is a very special case of Corollary 6.4). If one strengthens the definition of hyperfiniteness (Definition 8.1) to require that the finite dimensional subalgebras \(F\subseteq\mathcal{M}\) is factorial when viewed as a tracially complete \(C^{*}\)-algebra with the traces inherited from \((\mathcal{M},X)\) (or equivalently, that every trace on \(F\) extends to a trace in \(X\)), then CPoU for \((\mathcal{M},X)\) follows directly from CPoU from \(\big{(}F,T(F)\big{)}\). For a general finite dimensional \(C^{*}\)-subalgebra \(F\subseteq\mathcal{M}\), we do not have a way of relating general traces in \(T(F)\) to traces of the form \(\tau|_{F}\) for \(\tau\in X\), so the above argument does not work. A somewhat related issue was addressed by Murray and von Neumann in the setting of \(\mathrm{II}_{1}\) factors in [83]. They defined a \(\mathrm{II}_{1}\) factor \(\mathcal{M}\) to be "approximately finite (B)" if it satisfies for all finite set \(\mathcal{F}\subseteq\mathcal{M}\) and \(\epsilon>0\), there is a finite dimensional \(C^{*}\)-algebra \(F\subseteq\mathcal{M}\) such that for all \(a\in\mathcal{F}\), there is a \(b\in F\) such that \(\|a-b\|_{2}<\epsilon\), and they define \(\mathcal{M}\) to be "approximately finite (A)" if one can further arrange for \(F\) to be a factor. It is a non-trivial result that these two notions coincide (see [83, Lemma 4.6.2]).55 Footnote 55: In the end, for type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebras, the two analogous notions of hyperfiniteness are equivalent as a consequence of Theorem 9.15 using that the tracially complete \(C^{*}\)-algebras \((\mathcal{R}_{X},X)\) of Example 3.35 satisfy the stronger notion of hyperfiniteness. However, this is circular since Theorem 9.15 depends on Theorem 8.3. 
We circumvent the need to establish the equivalence discussed in the previous paragraph by showing that (in the separable setting) hyperfiniteness is equivalent to the a-priori stronger property of being a sequential inductive limit of finite dimensional algebras. Murray and von Neumann established such a result for separably acting hyperfinite \(\mathrm{II}_{1}\) factors (using the terminology "approximately finite (C)" for sequential inductive limits), and Bratteli proved a \(C^{*}\)-version of this result in his seminal paper on AF \(C^{*}\)-algebras, ([11]). **Theorem 8.4**.: _If \((\mathcal{M},X)\) is a hyperfinite tracially complete \(C^{*}\)-algebra such that \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable, then \((\mathcal{M},X)\) is an inductive limit of tracially complete \(C^{*}\)-algebras \((\mathcal{M}_{n},X_{n})\), \(n\geq 1\), such that each \(\mathcal{M}_{n}\) is finite dimensional._ The proof has the same flavour as the corresponding results for \(C^{*}\)-algebras and tracial von Neumann algebras. We will prove Theorem 8.4 in Section 8.3 after we set out the relevant uniform \(2\)-norm perturbation and near-containment lemmas in the next section. Once, Theorem 8.4 is in place, it will allow us to realise a separable factorial hyperfinite \(C^{*}\)-algebras as a tracial completion of the tracial completion of an AF algebra with respect to a closed face of its traces. Such tracial completions have CPoU by the permanence properties of Section 6.3. This will prove Theorem 8.3 in the separable case, and we will deduce the non-separable case from there. ### Perturbation lemmas The goal of this subsection is to prove a uniform \(2\)-norm 'near-containment' result (Proposition 8.8) for finite dimensional subalgebras. Similar results were used by Murray and von Neumann in the von Neumann algebra setting, and Glimm and Bratteli for \(C^{*}\)-algebras (cf. [83, 51, 11]). The main observation is that although the Borel functional calculus used in the von Neumann algebraic results does not exist in a tracially complete \(C^{*}\)-algebra, it is defined in all finite dimensional subalgebras. We start with a perturbation result for almost orthogonal projections. **Lemma 8.5**.: _For all \(\epsilon>0\) and \(n\in\mathbb{N}\), there is a \(\delta>0\) such that for all finite dimensional tracially complete \(C^{*}\)-algebras \((F,X)\) and all self-adjoint contractions \(q_{1},\dots,q_{n}\in F\) with_ \[\|q_{i}-q_{i}^{2}\|_{2,X}<\delta\qquad\text{and}\qquad\|q_{i}q_{j}\|_{2,X}<\delta, \tag{8.1}\] _for all \(i,j=1,\dots,n\) with \(i\neq j\), there are mutually orthogonal projections \(p_{1},\dots,p_{n}\in F\) such that \(\|p_{i}-q_{i}\|_{2,X}<\epsilon\)._ Proof.: We will prove the result by induction on \(n\). When \(n=1\), set \(\delta=\epsilon/2\), let \(f\colon\mathbb{R}\to\mathbb{R}\) denote the characteristic function of \([1/2,\infty)\), and define \(p_{1}\coloneqq f(q_{1})\). Then \[|f(t)-t|\leq 2|t-t^{2}|,\qquad t\in\mathbb{R}, \tag{8.2}\] and so \(\|p_{1}-q_{1}\|_{2,X}\leq 2\|q_{1}-q_{1}^{2}\|_{2,X}<\epsilon\). Assuming the result has been proven for \(n\in\mathbb{N}\), let \(\delta^{\prime}>0\) be given by applying the lemma with this \(n\) and with \(\epsilon/(8n)\) in place of \(\epsilon\). Define \[\delta\coloneqq\min\Big{\{}\delta^{\prime},\frac{\epsilon}{8n+4}\Big{\}}. \tag{8.3}\] Let \(q_{1},\dots,q_{n+1}\in F\) be given as in the statement. 
As \(\delta\leq\delta^{\prime}\), the choice of \(\delta^{\prime}\) implies there are mutually orthogonal projections \(p_{1},\dots,p_{n}\in F\) such that \(\|p_{i}-q_{i}\|_{2,X}<\epsilon/(8n)\) for \(i=1,\dots,n\). Define \(p\coloneqq\sum_{i=1}^{n}p_{i}\), \(q\coloneqq q_{n+1}\), and \(p_{n+1}\coloneqq f(p^{\perp}qp^{\perp})\), with \(f\) as in the case \(n=1\). Then \(p_{n+1}\) is a projection and is orthogonal to each of \(p_{1},\dots,p_{n}\). Using the bimodule property of (3.2), we can estimate \[\|p_{n+1}-q_{n+1}\|_{2,X} \leq\|f(p^{\perp}qp^{\perp})-p^{\perp}qp^{\perp}\|_{2,X}+\|p^{ \perp}qp^{\perp}-q\|_{2,X}\] \[\leq 2\|p^{\perp}qp^{\perp}-(p^{\perp}qp^{\perp})^{2}\|_{2,X}+2 \|pq\|_{2,X}\] \[\leq 2\|q-qp^{\perp}q\|_{2,X}+2\|pq\|_{2,X}\] \[\leq 2\|q-q^{2}\|_{2,X}+4\|pq\|_{2,X}\] \[<2\delta+4\sum_{i=1}^{n}\|p_{i}q\|_{2,X}\] \[\leq 2\delta+4\sum_{i=1}^{n}\big{(}\|p_{i}-q_{i}\|_{2,X}+\|q_{i}q \|_{2,X}\big{)}\] \[<2\delta+4n\Big{(}\frac{\epsilon}{8n}+\delta\Big{)}\leq\epsilon.\] The following perturbation result for partial isometries will be used to perturb the off-diagonal matrix units of a near inclusion of finite dimensional \(C^{*}\)-algebras. **Lemma 8.6**.: _Suppose \((F,X)\) is a finite dimensional tracially complete \(C^{*}\)-algebra and \(p,q\in F\) are projections. If \(w\in F\) is a contraction, then there is a partial isometry \(v\in F\) such that \(vp=v=qv\) and_ \[\|v-w\|_{2,X}\leq 6\max\big{\{}\|w^{*}w-p\|_{2,X}^{1/2},\|ww^{*}-q\|_{2,X}^{1/2} \big{\}}. \tag{8.4}\] Proof.: Define \[\delta:=\max\big{\{}\|w^{*}w-p\|_{2,X},\|ww^{*}-q\|_{2,X}\big{\}}. \tag{8.5}\] Let \(u:=qwp\) and note that \(u\) is a contraction and \[\max\big{\{}\|u^{*}u-p\|_{2,X},\|uu^{*}-q\|_{2,X}\big{\}}\leq 3\delta. \tag{8.6}\] Indeed, \[\|u^{*}u-p\|_{2,X} =\|pw^{*}qwp-p\|_{2,X}\] \[\leq\|pw^{*}(q-ww^{*})wp\|_{2,X}+\|pw^{*}w(w^{*}w-p)p\|_{2,X}\] \[\qquad+\|p(w^{*}w-p)p\|_{2,X}\] \[\leq 3\delta, \tag{8.7}\] where the last inequality uses (8.5) and the bimodule property of (3.2). The other inequality in (8.6) follows similarly. Define \(g\colon[0,1]\to\mathbb{R}\) by \[g(t):=\begin{cases}0&0\leq t\leq\frac{1}{4}\\ t^{-1/2}&\frac{1}{4}<t\leq 1\end{cases} \tag{8.8}\] and define \(v\coloneqq ug(u^{*}u)\). Then \(v^{*}v\) is a projection, and hence \(v\) is a partial isometry. As \(up=u=qu\), we also have \(vp=v=qv\). After solving some polynomial inequalities (checking the cases \(t\in[0,1/4]\) and \(t\in(1/4,1]\) separately), we get that for \(t\in[0,1]\), we have \[0\leq|t-tg(t)|\leq\frac{4}{3}(t-t^{2})\qquad\text{and}\qquad 0\leq g(t)\leq 2. \tag{8.9}\] Therefore, for each \(\tau\in X\), \[\|u-v\|_{2,\tau}^{2} = \tau\big{(}(u-ug(u^{*}u))^{*}(u-ug(u^{*}u))\big{)}\] \[= \tau\big{(}u^{*}u-2u^{*}ug(u^{*}u)+u^{*}ug(u^{*}u)^{2}\big{)}\] \[= \tau\big{(}u^{*}u-u^{*}ug(u^{*}u)\big{)}+\tau\big{(}(u^{*}ug(u^{* }u)-u^{*}u)g(u^{*}u)\big{)}\] \[\leq \tau\big{(}|u^{*}u-u^{*}ug(u^{*}u)|\big{)}+2\tau\big{(}|u^{*}u-u^ {*}ug(u^{*}u)|\big{)}\] \[\overset{(8.9)}{\leq} 4\tau(u^{*}u-(u^{*}u)^{2})\] \[\leq 4(\|u^{*}u-p\|_{2,\tau}+\|p-(u^{*}u)^{2}\|_{2,\tau})\] \[\overset{(8.6)}{\leq} 36\delta, \tag{8.10}\] using Cauchy-Schwarz, and the fact that \(p\) is a projection. The perturbations of the previous two lemmas may change the trace of the diagonal matrix units. The following is used to account for this. 
**Lemma 8.7**.: _If \((F,X)\) is a finite dimensional tracially complete \(C^{*}\)-algebra and \(p_{1},\dots,p_{n},p\in F\) are projections with \(p_{i}\leq p\) for all \(i=1,\dots,n\), then_ \[\Big{\|}p-\bigwedge_{i=1}^{n}p_{i}\Big{\|}_{2,X}\leq\Big{(}\sum_{i=1}^{n}\|p- p_{i}\|_{2,X}^{2}\Big{)}^{1/2}\leq\sum_{i=1}^{n}\|p-p_{i}\|_{2,X}. \tag{8.11}\] Proof.: The second inequality amounts to \(\sum_{i=1}^{n}\lambda_{i}^{2}\leq\big{(}\sum_{i=1}^{n}\lambda_{i}\big{)}^{2}\) for non-negative \(\lambda_{i}\), so is immediate. We prove the first inequality by induction on \(n\) with the case \(n=1\) being trivial. Let \(q_{1}\coloneqq\bigwedge_{i=1}^{n}p_{i}\) and \(q_{2}\coloneqq p_{n+1}\). By [7, Proposition III.1.1.3], \[(q_{1}-(q_{1}\wedge q_{2}))\text{ and }((q_{1}\lor q_{2})-q_{2}), \tag{8.12}\] are Murray-von Neumann equivalent. Therefore, for each \(\tau\in X\), \[\begin{split}\|p-(q_{1}\wedge q_{2})\|_{2,\tau}^{2}&= \tau(p-(q_{1}\wedge q_{2}))\\ &=\tau(p-q_{1})+\tau(q_{1}-(q_{1}\wedge q_{2}))\\ &=\tau(p-q_{1})+\tau((q_{1}\lor q_{2})-q_{2})\\ &\leq\tau(p-q_{1})+\tau(p-q_{2})\\ &=\|p-q_{1}\|_{2,\tau}^{2}+\|p-q_{2}\|_{2,\tau}^{2},\end{split} \tag{8.13}\] where the inequality uses that \(q_{1}\lor q_{2}\leq p\). Hence, we have \[\left\|p-\bigwedge_{i=1}^{n+1}p_{i}\right\|_{2,X}^{2}\leq\left\|p-\bigwedge_{i= 1}^{n}p_{i}\right\|_{2,X}^{2}+\|p-p_{n+1}\|_{2,X}^{2}. \tag{8.14}\] The first inequality in (8.11) now follows by induction. The last three lemmas combine to prove the following stability result for near inclusions of finite dimensional tracially complete \(C^{*}\)-algebras. **Proposition 8.8**.: _Suppose \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra, let \(F\) be a finite dimensional \(C^{*}\)-subalgebra of \(\mathcal{M}\), and let \(\mathcal{F}\subseteq F\) be a system of matrix units. For all \(\epsilon>0\), there exists \(\delta>0\) such that if \(G\subseteq\mathcal{M}\) is a finite dimensional \(C^{*}\)-algebra and for all \(a\in\mathcal{F}\), there exists \(b\in G\) with \(\|a-b\|_{2,X}<\delta\), then there is a (not necessarily unital) \({}^{*}\)-homomorphism \(\phi\colon F\to G\) such that_ \[\|\phi(a)-a\|_{2,X}\leq\epsilon\|a\|,\qquad a\in F. \tag{8.15}\] Proof.: Let \[\left(e_{i,j}^{(k)}\right)_{i,j=1,\dots,d_{k}}^{k=1,\dots,m}\subseteq F \tag{8.16}\] be a system of matrix units for \(F\). After replacing \(\epsilon\) with a scalar multiple (depending only on the dimension of \(F\)), it suffices to construct a corresponding system of matrix units \[\left(f_{i,j}^{(k)}\right)_{i,j=1,\dots,d_{k}}^{k=1,\dots,m}\subseteq G \tag{8.17}\] such that \(\left\|e_{i,j}^{(k)}-f_{i,j}^{(k)}\right\|_{2,X}<\epsilon\) for all \(i,j,k\). Let \(d=\max\{d_{1},\dots,d_{m}\}\) and choose \(\epsilon^{\prime}>0\) with \[(4\epsilon^{\prime}+24(\epsilon^{\prime})^{1/2})(d+1)<\epsilon. \tag{8.18}\] Apply Lemma 8.5 with \(\epsilon^{\prime}\) in place of \(\epsilon\) and with \(\sum_{k=1}^{m}d_{k}\) in place of \(n\) to obtain \(\delta^{\prime}>0\), and define \(\delta\coloneqq\min\{\epsilon^{\prime}/3,\delta^{\prime}/9\}>0\). By assumption and Lemma 3.27, for each \(i=1,\dots,d_{k}\) and \(k=1,\dots,m\), there are contractions \(q_{i}^{(k)},w_{i}^{(k)}\in G\) such that \[\left\|q_{i}^{(k)}-e_{i,i}^{(k)}\right\|_{2,X}<3\delta\leq\epsilon^{\prime} \quad\text{and}\quad\left\|w_{i}^{(k)}-e_{i,1}^{(k)}\right\|_{2,X}<3\delta \leq\epsilon^{\prime}. 
\tag{8.19}\] For all \(k,k^{\prime}=1,\dots,m\), \(i=1,\dots,d_{k}\), and \(i^{\prime}=1,\dots,d_{k^{\prime}}\) such that \((i,k)\neq(i^{\prime},k^{\prime})\), we have \[\big{\|}\big{(}q_{i}^{(k)}\big{)}^{2}-q_{i}^{(k)}\big{\|}_{2,X}<9\delta\leq\delta^{\prime},\quad\text{and }\quad\|q_{i}^{(k)}q_{i^{\prime}}^{(k^{\prime})}\|_{2,X}<6\delta<\delta^{ \prime}. \tag{8.20}\] By the choice of \(\delta^{\prime}\), we may invoke the conclusion of Lemma 8.5 to produce mutually orthogonal projections \(p_{i}^{(k)}\in G\) such that for all \(i\) and \(k\), \[\|p_{i}^{(k)}-q_{i}^{(k)}\|_{2,X}<\epsilon^{\prime}. \tag{8.21}\] Now for each \(i\) and \(k\), we have \[w_{i}^{(k)*}w_{i}^{(k)}\overset{(8.19)}{\approx}_{2\epsilon^{\prime}} e_{1,i}^{(k)}e_{i,1}^{(k)}=e_{1,1}^{(k)}\overset{(8.19)}{\approx}_{ \epsilon^{\prime}}q_{1}^{(k)}\overset{(8.21)}{\approx}_{\epsilon^{ \prime}}p_{1}^{(k)}, \tag{8.22}\] and similarly \(w_{i}^{(k)}w_{i}^{(k)*}\approx_{4\epsilon^{\prime}}p_{i}^{(k)}\), where the approximations are in \(\|\cdot\|_{2,X}\). By Lemma 8.6, for each \(i\) and \(k\), there is a partial isometry \(v_{i}^{(k)}\in G\) with source contained in \(p_{1}^{(k)}\) and range contained in \(p_{i}^{(k)}\) and such that \[\|v_{i}^{(k)}-w_{i}^{(k)}\|_{2,X}\leq 12(\epsilon^{\prime})^{1/2}. \tag{8.23}\] For \(k=1,\ldots,m\) and \(i,j=1,\ldots,d_{k}\), define \[r^{(k)}\coloneqq\bigwedge_{i=1}^{d_{k}}v_{i}^{(k)*}v_{i}^{(k)}\leq p_{1}^{(k) }\quad\text{and}\quad f_{i,j}^{(k)}\coloneqq v_{i}^{(k)}r^{(k)}v_{j}^{(k)*}. \tag{8.24}\] Then the \(f_{i,j}^{(k)}\in G\) satisfy the matrix unit relations, so it suffices to show that \(\left\|e_{i,j}^{(k)}-f_{i,j}^{(k)}\right\|_{2,X}<\epsilon\) for all \(i,j,k\). For each \(k=1,\ldots,m\) and \(i=1,\ldots,d_{k}\), we have \[v_{i}^{(k)*}v_{i}^{(k)}\overset{(8.23)}{\approx}_{24(\epsilon^{\prime })^{1/2}}w_{i}^{(k)*}w_{i}^{(k)}\overset{(8.22)}{\approx}_{4\epsilon^{ \prime}}p_{1}^{(k)} \tag{8.25}\] where the approximations are in the \(\|\cdot\|_{2,X}\)-norm. Therefore, Lemma 8.7 implies \[\|p_{1}^{(k)}-r^{(k)}\|_{2,X}<(4\epsilon^{\prime}+24(\epsilon^{\prime})^{1/2} )d_{k}. \tag{8.26}\] For all \(k=1,\ldots,m\) and \(i,j=1,\ldots,d_{k}\), we have \[e_{i,j}^{(k)}=e_{i,1}^{(k)}e_{1,j}^{(k)}\overset{(8.19)}{\approx}_{2 \epsilon^{\prime}}w_{i}^{(k)}w_{j}^{(k)*}\overset{(8.23)}{\approx}_{2 4(\epsilon^{\prime})^{1/2}}v_{i}^{(k)}v_{j}^{(k)*}=v_{i}^{(k)}p_{1}^{(k)}v_{j} ^{(k)*}. \tag{8.27}\] Combining (8.26) and (8.27), we have \[\left\|e_{i,j}^{(k)}-f_{i,j}^{(k)}\right\|_{2,X}\leq(4\epsilon^{\prime}+24( \epsilon^{\prime})^{1/2})(d_{k}+1)\overset{(8.18)}{<}\epsilon.\qed \tag{8.28}\] ### Hyperfinite implies CPoU The perturbation results of the previous subsection allow us to prove Theorem 8.4, from which we will deduce Theorem 8.3. Proof of Theorem 8.4.: Suppose \((\mathcal{M},X)\) is hyperfinite and \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable. Let \(\mathcal{G}_{n}\) be an increasing sequence of finite subsets of \(\mathcal{M}\) with \(\|\cdot\|_{2,X}\)-dense union and let \(\epsilon_{n}>0\) be such that \(\sum_{n=1}^{\infty}\epsilon_{n}<\infty\). We will inductively construct sequences of finite dimensional subalgebras \(F_{n}\subseteq\mathcal{M}\) and \({}^{*}\)-homomorphisms \(\phi_{n}^{n+1}\colon F_{n}\to F_{n+1}\) such that for all \(n\geq 1\), 1. for all \(a\in\mathcal{G}_{n}\), there is a \(b\in F_{n}\) such that \(\|a-b\|_{2,X}<\epsilon_{n}\), and 2. for all \(b\in F_{n}\), \(\|\phi_{n}^{n+1}(b)-b\|_{2,X}<\epsilon_{n}\|b\|\). 
We can find \(F_{1}\) satisfying (i) by the definition of hyperfiniteness. Assuming that \(F_{n}\) has been constructed, let \(\mathcal{F}_{n}\) be a system of matrix units for \(F_{n}\), and let \(\delta_{n}>0\) be given by applying Proposition 8.8 with \(F_{n}\) and \(\epsilon_{n}\) in place of \(F\) and \(\epsilon\). By the definition of hyperfiniteness, there is a finite dimensional \(C^{*}\)-algebra \(F_{n+1}\subseteq\mathcal{M}\) such that (i) holds and for all \(a\in\mathcal{F}_{n}\), there is a \(b\in F_{n+1}\) such that \(\|a-b\|_{2,X}<\delta_{n}\). Then Proposition 8.8 yields the \({}^{*}\)-homomorphism \(\phi_{n}^{n+1}\colon F_{n}\to F_{n+1}\) required in (ii), which completes the construction. For \(n,m\in\mathbb{N}\) with \(m<n\), let \(\phi_{m}^{n}\colon F_{m}\to F_{n}\) be given by composing the maps \(\phi_{k}^{k+1}\) for \(m\leq k<n\). For \(m\in\mathbb{N}\) and \(b\in F_{m}\), condition (ii) above together with the choice of \(\epsilon_{n}\) implies that the bounded sequence \((\phi_{m}^{n}(b))_{n=m}^{\infty}\) is \(\|\cdot\|_{2,X}\)-Cauchy. Define \(\psi_{m}\colon F_{m}\to\mathcal{M}\) by \[\psi_{m}(b)\coloneqq\lim_{n\to\infty}\phi_{m}^{n}(b),\qquad b\in F_{m}. \tag{8.29}\] Note that \(\psi_{n}\circ\phi_{m}^{n}=\psi_{m}\) for all \(m,n\in\mathbb{N}\) with \(m<n\). If \(A\coloneqq\varinjlim(F_{n},\phi_{n}^{n+1})\) is the \(C^{*}\)-algebraic inductive limit of the algebras \(F_{n}\) and \(\phi_{\infty,n}\colon F_{n}\to A\) are the natural maps, then there is an induced \({}^{*}\)-homomorphism \(\psi\colon A\to\mathcal{M}\) such that \(\psi\circ\phi_{\infty,n}=\psi_{n}\) for all \(n\in\mathbb{N}\). Since \(A\) is an AF algebra and quotients of AF algebras are also AF algebras (see [31, Theorem III.4.4], for example), we have that \(\psi(A)\) is an AF subalgebra of \(\mathcal{M}\). It suffices to show that \(\psi(A)\subseteq\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-dense. For each \(n\in\mathbb{N}\), condition (ii) and the definition of \(\psi_{n}\) imply \[\|\psi_{n}(b)-b\|_{2,X}\leq\sum_{m=n}^{\infty}\epsilon_{m}\|b\|,\qquad b\in F_ {n}. \tag{8.30}\] As \(\sum_{n=1}^{\infty}\epsilon_{n}<\infty\) and the sets \(\mathcal{G}_{n}\) are increasing, the above inequality and condition (i) imply that \(\mathcal{G}_{n}\) is contained in the \(\|\cdot\|_{2,X}\)-closure of \(\psi(A)\) for all \(n\geq 1\). As the sets \(\mathcal{G}_{n}\) have \(\|\cdot\|_{2,X}\)-dense union in \(\mathcal{M}\), this completes the proof that \((\mathcal{M},X)\) is an inductive limit of finite dimensional tracially complete \(C^{*}\)-algebras. (The converse, that such an inductive limit is hyperfinite, is straightforward.) We now have the pieces to show hyperfinite implies CPoU in the factorial setting. Proof of Theorem 8.3.: Let \((\mathcal{M},X)\) be a hyperfinite factorial tracially complete \(C^{*}\)-algebra. Suppose first that \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable. Then, by Theorem 8.4, there is an increasing sequence \((F_{n})_{n=1}^{\infty}\) of finite dimensional unital \(C^{*}\)-subalgebras of \(\mathcal{M}\) with \(\|\cdot\|_{2,X}\)-dense union. Let \(A\subseteq\mathcal{M}\) denote the norm closure of the union of the \(F_{n}\) so that \(A\) is an AF \(C^{*}\)-algebra. If \(X_{A}\coloneqq\{\tau|_{A}:\tau\in X\}\), then Corollary 3.29 implies that \(X_{A}\) is a closed face in \(T(A)\) and \(\big{(}\overline{A}^{X_{A}},X_{A}\big{)}\cong(\mathcal{M},X)\). 
Note that \((\overline{A}^{T(A)},T(A))\) is the inductive limit of the factorial tracially complete \(C^{*}\)-algebras \(\big{(}F_{n},T(F_{n})\big{)}\) and each of these has CPoU by Corollary 6.4. Therefore \((\overline{A}^{T(A)},T(A))\) has CPoU by Proposition 6.12. Further, note that the tracial completion of \(\overline{A}^{T(A)}\) with respect to \(X_{A}\) is \((\overline{A}^{X_{A}},X_{A})\cong(\mathcal{M},X)\). This has CPoU by Proposition 6.9. The general case follows from the \(\|\cdot\|_{2,X}\)-separable case as hyperfiniteness is separably inheritable and the conjunction of factoriality and CPoU is strongly separably inheritable (see the discussion immediately following Definition A.1 in Appendix A). The proofs of the (strong) separable inheritability are given in Theorem A.3. ## 9. Classification and Regularity In this final section, we put the pieces together to obtain our main results. The classification theorem (Theorem C) will follow from the existence and uniqueness results for morphisms obtained in Sections 9.2 and 9.1, respectively, via standard intertwining arguments. In more detail, we will show that for all suitable tracially complete \(C^{*}\)-algebras \((\mathcal{M},X)\) and \((\mathcal{N},Y)\), such as those covered in the classification theorem, any continuous affine map \(Y\to X\) is induced by a morphism \((\mathcal{M},X)\to(\mathcal{N},Y)\), and this morphism is unique up to approximate unitary equivalence. From here, if \(X\cong Y\), then the existence theorem can be applied twice to produce morphisms \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) and \(\psi\colon(\mathcal{N},Y)\to(\mathcal{M},X)\) which are inverses of each other on traces. Then, two applications of the uniqueness theorem allow us to show that \(\psi\circ\phi\) and \(\phi\circ\psi\) are approximately unitarily equivalent to the identity maps on \(\mathcal{M}\) and \(\mathcal{N}\), respectively. An application of (a tracially complete version of) Elliott's intertwining argument will then imply \((\mathcal{M},X)\cong(\mathcal{N},Y)\). The uniqueness result is covered in Section 9.1 and follows from a mostly standard uniqueness theorem for weakly nuclear \({}^{*}\)-homomorphisms into finite von Neumann algebras (Proposition 9.2) and a CPoU argument. The existence result in Section 9.2 is more subtle. On the first pass, we will only be able to obtain an "approximate existence" result showing the existence of approximately multiplicative maps \(\mathcal{M}\to\mathcal{N}\) approximately implementing a given continuous affine map \(Y\to X\); a discussion of the strategy behind this can be found at the beginning of Section 9.2. As is standard in the \(C^{*}\)-algebra classification literature, this approximate existence result can be paired with an analogous uniqueness result for approximately multiplicative maps via an approximate intertwining argument to strengthen our existence result and produce a genuine morphism \((\mathcal{M},X)\to(\mathcal{N},Y)\) with prescribed behaviour on traces (Theorem 9.12). In the proof of Theorem 9.12, we will avoid explicitly using an approximate uniqueness theorem and this final intertwining argument by making use of the intertwining via reparameterisation technique given in Theorem 5.11. More precisely, the approximate existence result will provide a \({}^{*}\)-homomorphism \((\mathcal{M},X)\to(\mathcal{N}^{\infty},Y^{\infty})\). 
Our uniqueness result will apply to morphisms into \((\mathcal{N}^{\infty},Y^{\infty})\) since the hypotheses on the codomain of the existence theorem are preserved by reduced powers. This then allows us to use Theorem 5.11 to obtain the final existence result. With the classification theorem in hand, the structure theorem for separable type II\({}_{1}\) factorial amenable tracially complete \(C^{*}\)-algebras with property \(\Gamma\) (Theorem B) will follow. ### Uniqueness results for morphisms This subsection gives a uniqueness result for tracially nuclear \({}^{*}\)-homomorphisms into factorial tracially complete \(C^{*}\)-algebras with CPoU. Uniqueness will be up to approximate unitary equivalence in uniform 2-norm, as follows. **Definition 9.1**.: If \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra and \(\phi,\psi\colon A\to\mathcal{M}\) are functions, we say that \(\phi\) and \(\psi\) are _approximately unitarily equivalent_ if there is a net of unitaries \((u_{\lambda})\subseteq\mathcal{M}\) such that \[\|u_{\lambda}\phi(a)u_{\lambda}^{*}-\psi(a)\|_{2,X}\to 0,\qquad a\in A. \tag{9.1}\] Note that when \(A\) is a separable \(C^{*}\)-algebra and \(\phi\) and \(\psi\) are \(\|\cdot\|\)-continuous, we may arrange for the net \((u_{\lambda})\) to be a sequence. We begin by recording the following uniqueness result for von Neumann algebras, which is a consequence of Connes' theorem. Several variations and special cases of this can be found in the literature (see [28, Proposition 2.1] or [99, Proposition 1.1] for example); the result most often appears when \(A\) is nuclear and the codomain is a II\({}_{1}\) factor. Most, if not all, variations of this argument (including the one below) follow the same strategy: 1. use Connes' theorem to replace the domain with a hyperfinite von Neumann algebra; 2. use an \(\epsilon/3\)-argument to replace the domain with a finite dimensional algebra; 3. use the Murray-von Neumann classification of projections by traces to compare matrix units in the codomain. **Proposition 9.2**.: _Suppose \(A\) is a \(C^{*}\)-algebra, \(\mathcal{N}\) is a finite von Neumann algebra, and \(\phi,\psi\colon A\to\mathcal{N}\) are weakly nuclear \({}^{*}\)-homomorphisms. Then there is a net of unitaries \((u_{\lambda})\subseteq\mathcal{N}\) such that_ \[\sigma\text{-strong}^{*}\text{-}\lim_{\lambda}\,u_{\lambda}\phi(a)u_{\lambda}^{ *}=\psi(a),\qquad a\in A, \tag{9.2}\] _if and only if \(\tau\circ\phi=\tau\circ\psi\) for all \(\tau\in T(\mathcal{N})\)._ Proof.: Suppose that \(\tau\circ\phi=\tau\circ\psi\) for all \(\tau\in T(\mathcal{N})\). Let \(\mathcal{M}\subseteq\mathcal{N}\) denote the von Neumann subalgebra generated by \(\phi(A)\). Recall that since \(\mathcal{N}\) is finite, \(T(\mathcal{N})\) forms a faithful set of traces, i.e. if \(x\in\mathcal{N}\) is non-zero, then there exists \(\tau\in T(\mathcal{N})\) with \(\tau(x^{*}x)\neq 0\). Accordingly, the hypothesis ensures \(\ker(\phi)=\ker(\psi)\), and so \(\psi\) factorises through \(\phi(A)\). Let \(\bar{\phi}\colon\mathcal{M}\to\mathcal{N}\) denote the inclusion map and let \(\bar{\psi}\colon\mathcal{M}\to\mathcal{N}\) denote the unique normal \({}^{*}\)-homomorphism with \(\bar{\psi}\circ\phi=\psi\). Then \(\tau\circ\bar{\phi}=\tau\circ\bar{\psi}\) for all normal traces \(\tau\) on \(\mathcal{N}\) (and hence also for all \(\tau\in T(\mathcal{N})\)). 
As \(\mathcal{N}\) is finite, there is a normal conditional expectation \(\mathcal{N}\to\mathcal{M}\) (see [15, Lemma 1.5.10], for example), and so, the corestriction of \(\phi\) to a \({}^{*}\)-homomorphism \(A\to\mathcal{M}\) is weakly nuclear. Then \(\mathcal{M}\) is hyperfinite by the proof of (6) implies (5) of [12, Theorem 3.2.2].56 Footnote 56: This reference handles the case that \(\mathcal{M}=\pi_{\tau}(A)^{\prime\prime}\) for a trace \(\tau\in T(A)\) and \(\phi=\pi_{\tau}\). To prove the claim in our setting, fix a faithful normal \({}^{*}\)-homomorphism \(\pi\colon\mathcal{M}\to\mathcal{B}(H)\) so that \(\mathcal{M}\cong\pi(\mathcal{M})=\pi(\phi(A))^{\prime\prime}\), and in the proof in [12], replace \(\pi_{\tau}\) with \(\pi\circ\phi\). Let \(F\subseteq\mathcal{M}\) be a finite dimensional \(C^{*}\)-algebra, so that \(\tau\circ\bar{\phi}|_{F}=\tau\circ\bar{\psi}|_{F}\). It is a standard consequence of the classification of projections in the finite von Neumann algebra \(\mathcal{N}\) by traces that \(\bar{\phi}|_{F}\) and \(\bar{\psi}|_{F}\) are unitarily equivalent.57 As \(\mathcal{M}\) is hyperfinite, an \(\epsilon/3\)-argument now shows that \(\bar{\phi}\) and \(\bar{\psi}\) are strong\({}^{*}\)-approximately unitarily equivalent, and hence so too are \(\phi\) and \(\psi\). The converse is immediate. Our main uniqueness result is obtained by a CPoU argument to pass from local uniqueness at the fibres to a global statement. This works in essentially the same way as [21, Theorem 2.2] (which handles the case when \(A\) is nuclear).58 Footnote 58: The setup in [21] considers uniform tracial sequence algebras \(B^{\infty}\) as codomains when \(B\) is a separable \(C^{*}\)-algebra with CPoU, but there are no additional difficulties in working in the abstract framework of tracially complete \(C^{*}\)-algebras. **Theorem 9.3**.: _Suppose \(A\) is a \(C^{*}\)-algebra, \((\mathcal{N},Y)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU, and \(\phi,\psi\colon A\to\mathcal{N}\) are tracially nuclear \({}^{*}\)-homomorphisms. Then \(\phi\) and \(\psi\) are approximately unitarily equivalent if and only if \(\tau\circ\phi=\tau\circ\psi\) for all \(\tau\in Y\)._ Proof.: Fix a finite set \(\mathcal{F}\subseteq A\) and \(\epsilon>0\). For each pair of traces \(\tau\in Y\) and \(\sigma\in T(\pi_{\tau}(\mathcal{N})^{\prime\prime})\), we have \(\sigma\circ\pi_{\tau}\in Y\) by Lemma 2.10, and hence \[\sigma\circ\pi_{\tau}\circ\phi=\sigma\circ\pi_{\tau}\circ\psi. \tag{9.3}\] As \(\pi_{\tau}\circ\phi\) and \(\pi_{\tau}\circ\psi\) are weakly nuclear, Proposition 9.2 implies there is a unitary \(\bar{u}_{\tau}\in\pi_{\tau}(\mathcal{N})^{\prime\prime}\) such that \[\max_{a\in\mathcal{F}}\|\bar{u}_{\tau}\pi_{\tau}(\phi(a))\bar{u}_{\tau}^{*}- \pi_{\tau}(\psi(a))\|_{2,\tau}<\epsilon,\qquad\tau\in Y. \tag{9.4}\] As \(\pi_{\tau}(\mathcal{N})^{\prime\prime}\) is a von Neumann algebra, we know that \(\bar{u}_{\tau}=e^{i\bar{h}_{\tau}}\) for some self-adjoint \(\bar{h}_{\tau}\in\pi_{\tau}(\mathcal{N})^{\prime\prime}\). Applying Kaplansky's density theorem to \(\bar{h}_{\tau}\in\pi_{\tau}(\mathcal{N})^{\prime\prime}\) and making use of the existence of self-adjoint lifts, we deduce that for each \(\tau\in Y\), the unitary \(\bar{u}_{\tau}\) is a \(\|\cdot\|_{2,\tau}\)-limit of unitaries of the form \(\pi_{\tau}(u_{\tau})\) for unitaries \(u_{\tau}=e^{ih_{\tau}}\in\mathcal{N}\), and in particular, we may find unitaries \(u_{\tau}\in\mathcal{N}\) so that \[\max_{a\in\mathcal{F}}\|u_{\tau}\phi(a)u_{\tau}^{*}-\psi(a)\|_{2,\tau}< \epsilon,\qquad\tau\in Y. 
\tag{9.5}\] For \(\tau\in Y\), define \[a_{\tau}\coloneqq\sum_{a\in\mathcal{F}}\left|u_{\tau}\phi(a)u_{\tau}^{*}-\psi (a)\right|^{2}\in\mathcal{N}_{+} \tag{9.6}\] and note that \(\tau(a_{\tau})<|\mathcal{F}|\epsilon^{2}\) for all \(\tau\in Y\). As \(Y\) is compact, there are traces \(\tau_{1},\dots,\tau_{n}\in Y\) such that \[\sup_{\tau\in Y}\min_{1\leq i\leq n}\tau(a_{\tau_{i}})<|\mathcal{F}|\epsilon^{ 2}. \tag{9.7}\] Define \(S\coloneqq\phi(\mathcal{F})\cup\psi(\mathcal{F})\cup\{u_{\tau_{1}},\dots,u_{ \tau_{n}}\}\subseteq\mathcal{N}\). As \((\mathcal{N},Y)\) has CPoU, there are projections \(p_{1},\dots,p_{n}\in\mathcal{N}^{\omega}\cap S^{\prime}\) such that \[\sum_{j=1}^{n}p_{j}=1_{\mathcal{N}^{\omega}}\qquad\text{and}\qquad\tau(a_{\tau _{i}}p_{i})\leq|\mathcal{F}|\epsilon^{2} \tag{9.8}\] for all \(i=1,\ldots,n\) and \(\tau\in Y^{\omega}\). Then, using the fact that the \(p_{i}\) commute with \(S\) and are orthogonal, \(u\coloneqq\sum_{i=1}^{n}p_{i}u_{\tau_{i}}\in\mathcal{N}^{\omega}\) is a unitary and \[\max_{a\in\mathcal{F}}\|u\phi(a)u^{*}-\psi(a)\|_{2,Y^{\omega}}\leq|\mathcal{F} |^{1/2}\epsilon. \tag{9.9}\] By Corollary 7.11, there is a sequence of unitaries \((u_{n})_{n=1}^{\infty}\subseteq\mathcal{N}\) representing \(u\). Then \[\limsup_{n\to\omega}\max_{a\in\mathcal{F}}\|u_{n}\phi(a)u_{n}^{*}-\psi(a)\|_{2,Y}\leq|\mathcal{F}|^{1/2}\epsilon, \tag{9.10}\] which shows that \(\phi\) and \(\psi\) are approximately unitarily equivalent; the converse implication is immediate. In the special case of reduced products of factorial tracially complete \(C^{*}\)-algebras with CPoU, the uniqueness theorem gives on-the-nose unitary equivalence. **Corollary 9.4** (cf. [21, Theorem 2.2]).: _Let \(A\) be a separable \(C^{*}\)-algebra and let \((\mathcal{N},Y)\) be a factorial tracially complete \(C^{*}\)-algebra with CPoU. If \(\phi,\psi\colon A\to\mathcal{N}^{\omega}\) are tracially nuclear \({}^{*}\)-homomorphisms, then \(\phi\) and \(\psi\) are unitarily equivalent if and only if \(\tau\circ\phi=\tau\circ\psi\) for all \(\tau\in Y^{\omega}\)._ Proof.: As \((\mathcal{N},Y)\) is a factorial tracially complete \(C^{*}\)-algebra with CPoU, the same is true for \((\mathcal{N}^{\omega},Y^{\omega})\) by Corollary 7.7. Hence, applying Theorem 9.3, we get that \(\phi\) and \(\psi\) are \(\|\cdot\|_{2,Y^{\omega}}\)-approximately unitarily equivalent if and only if \(\tau\circ\phi=\tau\circ\psi\) for all \(\tau\in Y^{\omega}\). By the separability of \(A\) and a standard application of Kirchberg's \(\epsilon\)-test (Lemma 5.1), \(\phi,\psi\colon A\to\mathcal{N}^{\omega}\) are \(\|\cdot\|_{2,Y^{\omega}}\)-approximately unitarily equivalent if and only if they are unitarily equivalent. ### Existence results for morphisms We will give a general existence result showing that morphisms can be constructed from amenable tracially complete \(C^{*}\)-algebras to factorial tracially complete \(C^{*}\)-algebras with CPoU with prescribed tracial information (see Corollary 9.9 below). By construction, the approximate morphisms we produce will approximately factor through finite dimensional \(C^{*}\)-algebras. In fact, given a separable \(C^{*}\)-algebra \(A\), a factorial tracially complete \(C^{*}\)-algebra \((\mathcal{N},Y)\), and a continuous affine map \(\gamma\colon Y\to T(A)\), we will produce approximate factorisations of \(\gamma\) through the trace simplices \(T(F_{n})\) of finite dimensional \(C^{*}\)-algebras \(F_{n}\). If \(\gamma(\tau)\) satisfies a suitable approximation property for all \(\tau\in Y\) (e.g. 
amenability), the maps \(T(F_{n})\to T(A)\) may be approximately implemented by approximate morphisms \(A\to F_{n}\). Further, when \((\mathcal{N},Y)\) has CPoU, the classification of projections in \(\mathcal{N}\) (Theorems 7.18 and 7.19) allows us to show the maps \(Y\to T(F_{n})\) are implemented by morphisms \(F_{n}\to\mathcal{N}\). The compositions \(A\to F_{n}\to\mathcal{N}\) provide the desired approximate morphism. The following lemma gives the required existence result for morphisms out of finite dimensional \(C^{*}\)-algebras. When \(\mathcal{N}\) is a II\({}_{1}\) factor, this is a standard result, and the proof here is essentially identical using Theorems 7.18 and 7.19 in place of the classification of projections in II\({}_{1}\) factors. **Lemma 9.5**.: _If \(F\) is a finite dimensional \(C^{*}\)-algebra, \((\mathcal{N},Y)\) is a type II\({}_{1}\) factorial tracially complete \(C^{*}\)-algebra with CPoU, and \(\gamma\colon Y\to T(F)\) is a continuous affine map, then there is a unital \({}^{*}\)-homomorphism \(\phi\colon F\to\mathcal{N}\) with \(\tau\circ\phi=\gamma(\tau)\) for all \(\tau\in Y\)._ Proof.: We first assume that \(F\) is commutative and let \(e_{1},\ldots,e_{m}\) denote the minimal projections of \(F\). It suffices to construct pairwise orthogonal projections \(p_{1},\ldots,p_{m}\in\mathcal{N}\) such that \(\tau(p_{i})=\gamma(\tau)(e_{i})\) for all \(\tau\in Y\) and \(i=1,\ldots,m\), since we may then define a \({}^{*}\)-homomorphism \(\phi\colon F\to\mathcal{N}\) by \(\phi(e_{i})\coloneqq p_{i}\). Further, since \(\tau(\phi(1_{F}))=1\) for all \(\tau\in Y\), we will have \(\phi(1_{F})=1_{\mathcal{N}}\). The existence of \(p_{1}\) follows immediately from Theorem 7.19. Assuming \(p_{1},\ldots,p_{k}\) have been constructed for some \(k<m\), Theorem 7.19 provides a projection \(p^{\prime}_{k+1}\in\mathcal{N}\) such that \(\tau(p^{\prime}_{k+1})=\gamma(\tau)(e_{k+1})\) for all \(\tau\in Y\). Note that \[\tau(p^{\prime}_{k+1})=\gamma(\tau)(e_{k+1})\leq 1-\sum_{i=1}^{k}\gamma(\tau) (e_{i})=\tau\Big{(}1_{\mathcal{N}}-\sum_{i=1}^{k}p_{i}\Big{)} \tag{9.11}\] for all \(\tau\in Y\). Then Theorem 7.18 implies there is a partial isometry \(v\) with \(p^{\prime}_{k+1}=v^{*}v\) and such that \(p_{k+1}\coloneqq vv^{*}\) is orthogonal to each of \(p_{1},\ldots,p_{k}\). This completes the proof when \(F\) is commutative. Next consider a general finite dimensional \(C^{*}\)-algebra \(F\) with a system of matrix units \[\big{(}e^{(k)}_{i,j}\big{)}_{1\leq i,j\leq d_{k}}^{1\leq k\leq m}\subseteq F. \tag{9.12}\] By the first part of the proof, there are mutually orthogonal projections \(p^{(k)}_{i}\in\mathcal{N}\) for \(k=1,\ldots,m\) and \(i=1,\ldots,d_{k}\) such that \[\tau(p^{(k)}_{i})=\gamma(\tau)\big{(}e^{(k)}_{i,i}\big{)}\qquad\text{for all }\tau\in Y. \tag{9.13}\] By Theorem 7.18, for each \(k=1,\ldots,m\) and \(i=1,\ldots,d_{k}\), there is a partial isometry \(v^{(k)}_{i}\in\mathcal{N}\) such that \[v^{(k)*}_{i}v^{(k)}_{i}=p^{(k)}_{i}\qquad\text{and}\qquad v^{(k)}_{i}v^{ (k)*}_{i}=p^{(k)}_{1}. \tag{9.14}\] Then we may define a \({}^{*}\)-homomorphism \(\phi\colon F\to\mathcal{N}\) by \[\phi\big{(}e^{(k)}_{i,j}\big{)}\coloneqq v^{(k)*}_{i}v^{(k)}_{j} \tag{9.15}\] for all \(k=1,\ldots,m\) and \(i,j=1,\ldots,d_{k}\). Then \(\tau\circ\phi=\gamma(\tau)\) for all \(\tau\in Y\), and since \(\tau(\phi(1_{F}))=1\) for all \(\tau\in Y\), we have \(\phi(1_{F})=1_{\mathcal{N}}\). 
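To see that (9.15) indeed defines a \({}^{*}\)-homomorphism, one can verify the matrix unit relations directly; the following routine computation (recorded here for convenience) uses only (9.14) and the resulting identity \(p^{(k)}_{1}v^{(k)}_{l}=v^{(k)}_{l}\): \[\phi\big{(}e^{(k)}_{i,j}\big{)}\phi\big{(}e^{(k)}_{j,l}\big{)}=v^{(k)*}_{i}v^{(k)}_{j}v^{(k)*}_{j}v^{(k)}_{l}=v^{(k)*}_{i}p^{(k)}_{1}v^{(k)}_{l}=v^{(k)*}_{i}v^{(k)}_{l}=\phi\big{(}e^{(k)}_{i,l}\big{)},\] while products of matrix units from different blocks are sent to \(0\) by the mutual orthogonality of the projections \(p^{(k)}_{i}\), and \(\phi\big{(}e^{(k)}_{i,j}\big{)}^{*}=\phi\big{(}e^{(k)}_{j,i}\big{)}\). Since \(\phi\big{(}e^{(k)}_{i,i}\big{)}=p^{(k)}_{i}\) and a trace on \(F\) is determined by its values on the diagonal matrix units, (9.13) then yields \(\tau\circ\phi=\gamma(\tau)\) for all \(\tau\in Y\). 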
We now work towards an existence result for constructing morphisms \(A\to\mathcal{N}^{\omega}\) with prescribed tracial data. One of the most influential embedding results is Connes' theorem from [29] that every separably acting injective \(\mathrm{II}_{1}\) factor embeds into \(\mathcal{R}^{\omega}\).59 This is a key ingredient in showing that such factors are isomorphic to \(\mathcal{R}\). In the same paper Connes poses his eponymous embedding problem asking if every separably acting \(\mathrm{II}_{1}\) factor embeds into \(\mathcal{R}^{\omega}\) (see the paragraph above [29, Notation 5.6]). Note that since every \(\mathrm{II}_{1}\) factor contains \(\mathcal{R}\), we have that every \(\mathrm{II}_{1}\) factor which embeds into \(\mathcal{R}^{\omega}\) also embeds into \(\mathcal{N}^{\omega}\) for each \(\mathrm{II}_{1}\) factor \(\mathcal{N}\). Footnote 59: This statement does not appear explicitly in [29], but follows by Lemma 5.2 and \(7\Rightarrow 6\) of Theorem 5.1. Condition 7 of Theorem 5.1 is verified by composing the trace on \(\mathcal{N}\) with a conditional expectation onto \(\mathcal{N}\). Our embedding result will be along these lines, showing that if \(A\) is a separable \(C^{*}\)-algebra and \((\mathcal{N},Y)\) is a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra with CPoU, then there is a \({}^{*}\)-homomorphism \(A\to\mathcal{N}^{\omega}\) with prescribed tracial data in the case that all the relevant traces on \(A\) factorise through \(\mathcal{R}^{\omega}\). We first provide the following local characterisation of traces factorising through \(\mathcal{R}^{\omega}\). **Definition 9.6**.: A trace \(\tau\) on a \(C^{*}\)-algebra \(A\) is _hyperlinear_ if there is a net of self-adjoint linear maps \((\psi_{\lambda}\colon A\to M_{d(\lambda)})\) such that for all \(a,b\in A\), 1. \(\|\psi_{\lambda}(ab)-\psi_{\lambda}(a)\psi_{\lambda}(b)\|_{2}\to 0\), 2. \(\operatorname{tr}_{d(\lambda)}(\psi_{\lambda}(a))\to\tau(a)\), and 3. \(\limsup_{\lambda}\|\psi_{\lambda}(a)\|<\infty\). For applications to classification, the hyperlinear traces of interest will be amenable, so that the maps \(\psi_{\lambda}\) can be taken to be c.p.c. However, we will prove the existence theorem in terms of hyperlinear traces since it takes little extra effort. **Remark 9.7**.: We collect here a few general observations about hyperlinear traces. * As usual, when \(A\) is unital, we may arrange for each \(\psi_{\lambda}\) to be unital, and when \(A\) is separable, we can arrange for the net \((\psi_{\lambda})\) to be a sequence. * If \(A\) is separable, we choose a sequence \((\psi_{n})_{n=1}^{\infty}\) as above and view \(M_{d(n)}\) as a unital subalgebra of \(\mathcal{R}\). In this way, the \(\psi_{n}\) induce a \({}^{*}\)-homomorphism \(\psi\colon A\to\mathcal{R}^{\omega}\) with \(\operatorname{tr}_{\mathcal{R}^{\omega}}\circ\psi=\tau\). Note that part (iii) of Definition 9.6 is needed to guarantee that \(\psi\) is well-defined. * If \(G\) is a discrete group, then \(G\) is hyperlinear if and only if the canonical trace on the reduced group \(C^{*}\)-algebra \(C^{*}_{\lambda}(G)\) is hyperlinear. This is the reason for the terminology. 
* Definition 9.6(iii) is equivalent to \[\limsup_{\lambda}\|\psi_{\lambda}(a)\|\leq\|a\|,\qquad a\in A. \tag{9.16}\] Indeed, if \(\Lambda\) is the index set of the net and we view each \(M_{d(\lambda)}\) as a subalgebra of \(\mathcal{R}\), then the \(\psi_{\lambda}\) induce a \({}^{*}\)-homomorphism \(\psi\colon A\to\ell^{\infty}(\Lambda,\mathcal{R})/c_{0}(\Lambda,\mathcal{R})\), where \(c_{0}(\Lambda,\mathcal{R})\) consists of all bounded strongly null nets. As \({}^{*}\)-homomorphisms are contractive, (9.16) follows. The following is our main existence result for \({}^{*}\)-homomorphisms, although in the classification theorem, it will be accessed through Corollary 9.9, which gives a restatement in terms of reduced products when the domain is separable. We keep track of the additional details regarding factorisations through finite dimensional \(C^{*}\)-algebras as it is conceivable that these will play a role, for example, in subsequent nuclear dimension calculations. **Theorem 9.8**.: _Suppose \(A\) is a \(C^{*}\)-algebra, \((\mathcal{N},Y)\) is a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra with CPoU, and \(\gamma\colon Y\to T(A)\) is a continuous affine map such that \(\gamma(\tau)\) is hyperlinear for all \(\tau\in Y\). For every finite set \(\mathcal{F}\subseteq A\) and \(\epsilon>0\), there are a finite dimensional \(C^{*}\)-algebra \(F\), a self-adjoint linear map \(\psi\colon A\to F\), and a unital \({}^{*}\)-homomorphism \(\phi\colon F\to\mathcal{N}\) such that for all \(a,b\in\mathcal{F}\):_ 1. \(\|\psi(ab)-\psi(a)\psi(b)\|_{2,T(F)}<\epsilon\)_,_ 2. \(|\tau(\phi(\psi(a)))-\gamma(\tau)(a)|<\epsilon\) _for all_ \(\tau\in Y\)_, and_ 3. \(\|\psi(a)\|<\|a\|+\epsilon\)_._ _Moreover, if \(A\) is unital, we may arrange for \(\psi\) to be unital, if \(\gamma(\tau)\) is amenable for all \(\tau\in Y\), we may arrange for \(\psi\) to be c.p.c., and if \(\gamma(\tau)\) is quasidiagonal for all \(\tau\in Y\), we may arrange for_ 4. \(\|\psi(ab)-\psi(a)\psi(b)\|<\epsilon\) _for all \(a,b\in\mathcal{F}\). Finally, the modifications in the last three statements can be made simultaneously._ Proof.: Since \(Y\) is a closed face in \(T(\mathcal{N})\), we have that \(Y\) is a Choquet simplex (Theorem 2.6). Applying Theorem 2.2 to the continuous affine maps \(\mathrm{ev}_{a}\circ\gamma\) for \(a\in\mathcal{F}\), there is a finite dimensional Choquet simplex \(Z\) together with continuous affine maps \(\alpha^{\prime}\colon Z\to Y\) and \(\beta\colon Y\to Z\) so that \[|\gamma(\alpha^{\prime}(\beta(\tau)))(a)-\gamma(\tau)(a)|<\epsilon,\qquad a\in \mathcal{F},\ \tau\in Y. \tag{9.17}\] Define \(\alpha\coloneqq\gamma\circ\alpha^{\prime}\colon Z\to T(A)\) so that \(\alpha(\tau)\) is hyperlinear for all \(\tau\in Z\) and \[|\alpha(\beta(\tau))(a)-\gamma(\tau)(a)|<\epsilon,\qquad a\in\mathcal{F},\ \tau\in Y. \tag{9.18}\] Let \(\rho_{1},\ldots,\rho_{n}\in Z\) denote the extreme points of \(Z\). For \(i=1,\ldots,n\), since \(\alpha(\rho_{i})\in T(A)\) is hyperlinear (by the assumption on \(\gamma\)), there are \(d_{i}\in\mathbb{N}\) and self-adjoint linear maps \(\psi_{i}\colon A\to M_{d_{i}}\) such that for all \(a,b\in\mathcal{F}\), \[\begin{split}\|\psi_{i}(ab)-\psi_{i}(a)\psi_{i}(b)\|_{2,\mathrm{ tr}_{d_{i}}}&<\epsilon,\\ |\mathrm{tr}_{d_{i}}(\psi_{i}(a))-\alpha(\rho_{i})(a)|& <\epsilon,\quad\text{and}\\ \|\psi_{i}(a)\|-\|a\|&<\epsilon.\end{split} \tag{9.19}\] Define \(F\coloneqq\bigoplus_{i=1}^{n}M_{d_{i}}\) and let \(\psi\coloneqq\bigoplus_{i=1}^{n}\psi_{i}\colon A\to F\). 
We identify \(Z\) with \(T(F)\) via the affine map given by \[Z\to T(F)\colon\rho_{i}\mapsto\mathrm{tr}_{d_{i}}\circ\pi_{i}, \tag{9.20}\] where \(\pi_{i}\colon F\to M_{d_{i}}\) denotes the projection map for \(i=1,\ldots,n\). By Lemma 9.5, there is a unital \({}^{*}\)-homomorphism \(\phi\colon F\to\mathcal{N}\) such that \(\tau\circ\phi=\beta(\tau)\) for all \(\tau\in Y\). Then all three conditions in the theorem follow from the corresponding conditions in (9.19). The additional claims in the theorem follow by choosing the \(\psi_{i}\) with the appropriate properties in the second paragraph of the proof. **Corollary 9.9**.: _Suppose \(A\) is a separable \(C^{*}\)-algebra and \((\mathcal{N},Y)\) is a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra with CPoU. Given a continuous affine map \(\gamma\colon Y\to T(A)\) such that \(\gamma(\tau)\) is hyperlinear for all \(\tau\in Y\), there is a unital \({}^{*}\)-homomorphism \(\theta\colon A\to\mathcal{N}^{\omega}\) such that \(\tau\circ\theta=\gamma(\tau|_{\mathcal{N}})\) for all \(\tau\in Y^{\omega}\)._ Proof.: Since \(A\) is separable, we may choose a countable dense \(\mathbb{Q}[i]\)-subalgebra \(A_{0}\subseteq A\). By Theorem 9.8, there are sequences \((F_{n})_{n=1}^{\infty}\) of finite dimensional \(C^{*}\)-algebras, \((\psi_{n}\colon A_{0}\to F_{n})_{n=1}^{\infty}\) of self-adjoint linear maps, and \((\phi_{n}\colon F_{n}\to\mathcal{N})_{n=1}^{\infty}\) of unital \({}^{*}\)-homomorphisms such that for all \(a,b\in A_{0}\) and \(\tau\in Y\), 1. \(\|\psi_{n}(ab)-\psi_{n}(a)\psi_{n}(b)\|_{2,T(F_{n})}\to 0\), 2. \(|\tau(\phi_{n}(\psi_{n}(a)))-\gamma(\tau)(a)|\to 0\), and 3. \(\limsup_{n\to\infty}\|\psi_{n}(a)\|\leq\|a\|\). The sequences \((\phi_{n})_{n=1}^{\infty}\) and \((\psi_{n})_{n=1}^{\infty}\) induce \({}^{*}\)-homomorphisms \[A_{0}\xrightarrow{\psi}\prod\limits^{\omega}\left(F_{n},T(F_{n})\right) \xrightarrow{\phi}\mathcal{N}^{\omega}. \tag{9.21}\] Define \(\theta_{0}\coloneqq\phi\circ\psi\). By (iii), \(\theta_{0}\) is contractive, and hence extends to a \({}^{*}\)-homomorphism \(\theta\colon A\to\mathcal{N}^{\omega}\) with the required property. If the range of \(\gamma\) is contained in the uniformly amenable traces on \(A\), then \(\theta\) will necessarily be tracially nuclear by Theorem 4.9. In the next subsection, we will use this observation together with the uniqueness result in Corollary 9.4 to strengthen this existence theorem: namely, \(\theta\) can be chosen to take values in \(\mathcal{N}\) (Theorem 9.12). As a first application of Theorem 9.8, we establish a \(W^{*}\)-bundle version of the Connes embedding problem assuming a positive solution in each fibre. In the following result, we assume \(\omega\) is a free ultrafilter on \(\mathbb{N}\) (as opposed to a free filter on \(\mathbb{N}\) as above). **Corollary 9.10**.: _Suppose \(\mathcal{M}\) is a \(W^{*}\)-bundle over a compact metrisable space \(K\) with separably acting fibres. If each fibre of \(\mathcal{M}\) admits a trace-preserving embedding into \(\mathcal{R}^{\omega}\), then there is an embedding \(\mathcal{M}\hookrightarrow C_{\sigma}(K,\mathcal{R})^{\omega}\) which restricts to the identity on \(C(K)\)._ Proof.: As in Proposition 3.6, we may view \(\mathcal{N}\coloneqq C_{\sigma}(K,\mathcal{R})\) as a factorial tracially complete \(C^{*}\)-algebra \((\mathcal{N},Y)\), where \(Y\cong\operatorname{Prob}(K)\) is the set of traces given by integrating the trace on \(\mathcal{R}\) over \(K\). 
It is easy to see that \(\mathcal{N}\) is McDuff, and hence \((\mathcal{N},Y)\) has CPoU by Theorem 1.4. Similarly, we may view \(\mathcal{M}\) as a tracially complete \(C^{*}\)-algebra \((\mathcal{M},X)\) with \(X\cong\operatorname{Prob}(K)\); again, a Radon probability measure on \(K\) induces a trace on \(\mathcal{M}\) by integrating the traces on the fibres of \(\mathcal{M}\). Let \(\gamma\colon Y\to X\) be the continuous affine map induced by the identity map on \(\operatorname{Prob}(K)\). By hypothesis, \(\gamma(\tau)\in X\) is hyperlinear for every \(\tau\in Y\) corresponding to a point mass in \(\operatorname{Prob}(K)\). It is easy to show that the hyperlinear traces on a unital \(C^{*}\)-algebra form a closed convex set, and hence \(\gamma(\tau)\in X\) is hyperlinear for all \(\tau\in Y\). Since \(K\) is metrisable and each fibre of \(\mathcal{M}\) is separably acting, we have that \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable. Let \(A\subseteq\mathcal{M}\) be a \(\|\cdot\|\)-separable unital \(C^{*}\)-subalgebra which is \(\|\cdot\|_{2,X}\)-dense in \(\mathcal{M}\). By Corollary 9.9, there is a \({}^{*}\)-homomorphism \(\theta_{0}\colon A\to\mathcal{N}^{\omega}\) such that \(\gamma(\tau|_{\mathcal{N}})|_{A}=\tau\circ\theta_{0}\) for all \(\tau\in Y^{\omega}\). Then \(\theta_{0}\) extends by continuity to a morphism \(\theta\colon(\mathcal{M},X)\to(\mathcal{N}^{\omega},Y^{\omega})\). By construction, the inclusion map \(C(K)\hookrightarrow\mathcal{N}^{\omega}\) and \(\theta|_{C(K)}\colon C(K)\hookrightarrow\mathcal{N}^{\omega}\) agree on traces, and hence are unitarily equivalent by Corollary 9.4. If \(u\in\mathcal{N}^{\omega}\) is a unitary conjugating \(\theta|_{C(K)}\) to the inclusion map, then \(\operatorname{ad}(u)\circ\theta\) is the desired embedding. **Question 9.11**.: Can Corollary 9.10 be improved to provide an embedding \(\mathcal{M}\to C_{\sigma}(K,\mathcal{R}^{\omega})\) (which restricts to the identity on \(C(K)\))? ### Classification and consequences The existence and uniqueness results for morphisms in the last sections can be combined with tracially complete versions of standard intertwining arguments to produce our main classification theorems. First, we use the intertwining via reparameterisation technique (Theorem 5.11) to obtain a classification result for tracially nuclear morphisms. **Theorem 9.12**.: _Let \((\mathcal{N},Y)\) be a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra with CPoU._ 1. _Let_ \(A\) _be a separable_ \(C^{*}\)_-algebra. If_ \(\gamma\colon Y\to T(A)\) _is a continuous affine map such that_ \(\gamma(\tau)\) _is uniformly amenable for all_ \(\tau\in Y\)_, then there is a tracially nuclear_ \({}^{*}\)_-homomorphism_ \(\phi\colon A\to\mathcal{N}\) _such that_ \(\tau\circ\phi=\gamma(\tau)\) _for all_ \(\tau\in Y\)_, and this_ \(\phi\) _is unique up to approximate unitary equivalence._ 2. _Let_ \((\mathcal{M},X)\) _be a_ \(\|\cdot\|_{2,X}\)_-separable tracially complete_ \(C^{*}\)_-algebra. If_ \(\gamma\colon Y\to X\) _is a continuous affine function such that_ \(\gamma(Y)\) _is contained in the uniformly amenable traces on_ \(\mathcal{M}\)_, then there is a tracially nuclear_ \({}^{*}\)_-homomorphism_ \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) _such that_ \(\tau\circ\phi=\gamma(\tau)\) _for all_ \(\tau\in Y\)_, and this_ \(\phi\) _is unique up to approximate unitary equivalence._ 3. _Let_ \((\mathcal{M},X)\) _be a_ \(\|\cdot\|_{2,X}\)_-separable amenable tracially complete \(C^{*}\)-algebra. 
If_ \(\gamma\colon Y\to X\) _is a continuous affine function, then there is a tracially nuclear_ \({}^{*}\)_-homomorphism_ \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) _such that_ \(\tau\circ\phi=\gamma(\tau)\) _for all_ \(\tau\in Y\)_, and this_ \(\phi\) _is unique up to approximate unitary equivalence._ Proof.: (i). The uniqueness part of (i) is Theorem 9.3. For the existence part of (i), since uniformly amenable traces are hyperlinear, Corollary 9.9 implies there is a \({}^{*}\)-homomorphism \(\theta_{\infty}\colon A\to\mathcal{N}^{\infty}\) with \(\tau\circ\theta_{\infty}=\gamma(\tau|_{\mathcal{N}})\) for all \(\tau\in Y^{\infty}\). By Theorem 4.9, \(\theta_{\infty}\) is tracially nuclear. Note that if \(r\colon\mathbb{N}\to\mathbb{N}\) is a function with \(\lim_{n\to\infty}r(n)=\infty\) and \(r^{*}\colon\mathcal{N}^{\infty}\to\mathcal{N}^{\infty}\) is the reparameterisation map as in Theorem 5.11, then \[\tau\circ\theta_{\infty}=\gamma(\tau|_{\mathcal{N}})=\gamma((\tau\circ r^{*} )|_{\mathcal{N}})=\tau\circ r^{*}\circ\theta_{\infty},\qquad\tau\in Y^{ \infty}. \tag{9.22}\] By Corollary 9.4, \(r^{*}\circ\theta_{\infty}\) and \(\theta_{\infty}\) are unitarily equivalent. By Corollary 7.11, unitaries in \(\mathcal{N}^{\infty}\) lift to unitaries in \(\ell^{\infty}(\mathcal{N})\), and hence Theorem 5.11 provides a \({}^{*}\)-homomorphism \(\theta\colon A\to\mathcal{N}\) such that \(\iota_{\mathcal{N}}\circ\theta\) and \(\theta_{\infty}\) are unitarily equivalent. Then \(\theta\) satisfies \(\tau\circ\theta=\gamma(\tau)\) for all \(\tau\in Y\), and Theorem 4.9 implies that \(\theta\) is tracially nuclear. (ii). Let \(A\subseteq\mathcal{M}\) be a separable \(\|\cdot\|_{2,X}\)-dense \(C^{*}\)-subalgebra. Then uniqueness follows from uniqueness in (i) as all morphisms between tracially complete \(C^{*}\)-algebras are automatically contractive between the uniform \(2\)-norms. Part (i), applied to the continuous affine map \(\tilde{\gamma}\colon Y\to T(A)\) given by \(\tilde{\gamma}(\tau)\coloneqq\gamma(\tau)|_{A}\), gives a (tracially nuclear) \({}^{*}\)-homomorphism \(\tilde{\phi}\colon A\to\mathcal{N}\) with \(\tau\circ\tilde{\phi}=\tilde{\gamma}(\tau)\) for \(\tau\in Y\), which extends uniquely to \(\phi\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) by Corollary 3.29(iii). The extension is tracially nuclear by Lemma 4.12. As \(A\) is \(\|\cdot\|_{2,X}\)-dense in \(\mathcal{M}\), \(\tau\circ\phi=\gamma(\tau)\) for \(\tau\in Y\). (iii). This is immediate from (ii) as amenability of \(\mathcal{M}\) forces all traces in \(X\) to be uniformly amenable (by Theorem 4.9(iii), taking \(\theta\) to be \(\mathrm{id}_{\mathcal{M}}\)). The classification theorem for regular amenable tracially complete \(C^{*}\)-algebras is obtained from a tracially complete version of the two-sided Elliott intertwining argument (the \(C^{*}\)-version of which can be found as [93, Corollary 2.3.4], for example; note that [38, Theorem 3] abstracts this \(C^{*}\)-version to an abstract intertwining result for categories with notions of inner automorphisms and metric structure on the morphism sets, and one could apply it to the category of separable tracially complete \(C^{*}\)-algebras). As with \(C^{*}\)-classification results, any isomorphism at the level of the invariants (in this case, the designated traces) lifts to an isomorphism of tracially complete \(C^{*}\)-algebras. The second part of Theorem D from the overview follows from part (ii) below, with the first part of Theorem D having already been proven in Theorem 4.9. 
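Schematically (this display is only an illustration of the intertwining pattern, with \(\phi_{n}\) and \(\psi_{n}\) as constructed in the proof below), one builds a chain \[\mathcal{M}\xrightarrow{\ \phi_{1}\ }\mathcal{N}\xrightarrow{\ \psi_{1}\ }\mathcal{M}\xrightarrow{\ \phi_{2}\ }\mathcal{N}\xrightarrow{\ \psi_{2}\ }\cdots\] in which consecutive compositions agree with the relevant identity maps up to \(2^{-n}\) in the appropriate uniform \(2\)-norms on larger and larger finite sets; the maps \(\phi_{n}\) and \(\psi_{n}\) then converge pointwise in uniform \(2\)-norm to mutually inverse isomorphisms. 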
**Theorem 9.13**.: _Suppose that \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) are type \(\mathrm{II}_{1}\) semidiscrete factorial tracially complete \(C^{*}\)-algebras with property \(\Gamma\) such that \(\mathcal{M}\) is \(\|\cdot\|_{2,X}\)-separable and \(\mathcal{N}\) is \(\|\cdot\|_{2,Y}\)-separable._ 1. \((\mathcal{M},X)\cong(\mathcal{N},Y)\) _if and only if_ \(X\cong Y\)_, and moreover,_ 2. _For any affine homeomorphism_ \(\gamma\colon Y\to X\)_, there is an isomorphism_ \(\theta\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) _such that_ \(\tau\circ\theta=\gamma(\tau)\) _for all_ \(\tau\in Y\)_._ Proof.: The first part follows from the second, so fix an affine homeomorphism \(\gamma\) as in (ii). Both \((\mathcal{M},X)\) and \((\mathcal{N},Y)\) have CPoU by Theorem 1.4. So, by two applications of the existence portion of Theorem 9.12(ii), there are tracially nuclear morphisms \(\phi_{0}\colon(\mathcal{M},X)\to(\mathcal{N},Y)\) and \(\psi_{0}\colon(\mathcal{N},Y)\to(\mathcal{M},X)\) such that \[\tau\circ\phi_{0}=\gamma(\tau)\qquad\text{and}\qquad\sigma\circ\psi_{0}= \gamma^{-1}(\sigma) \tag{9.23}\] for all \(\tau\in Y\) and \(\sigma\in X\). In particular, \[\tau\circ\phi_{0}\circ\psi_{0}=\tau\qquad\text{and}\qquad\sigma\circ\psi_{0} \circ\phi_{0}=\sigma \tag{9.24}\] for all \(\tau\in Y\) and \(\sigma\in X\). Using the uniqueness portion of Theorem 9.12(ii) twice, we have \(\phi_{0}\circ\psi_{0}\) is approximately unitarily equivalent to \(\mathrm{id}_{\mathcal{N}}\) and \(\psi_{0}\circ\phi_{0}\) is approximately unitarily equivalent to \(\mathrm{id}_{\mathcal{M}}\). Let \((\mathcal{F}^{\prime}_{n})_{n=1}^{\infty}\) and \((\mathcal{G}^{\prime}_{n})_{n=1}^{\infty}\) be increasing sequences of finite subsets of \(\mathcal{M}\) and \(\mathcal{N}\), respectively, whose unions are dense in the respective uniform \(2\)-norms. We will inductively construct increasing sequences \((\mathcal{F}_{n})_{n=1}^{\infty}\) and \((\mathcal{G}_{n})_{n=1}^{\infty}\) of finite subsets of \(\mathcal{M}\) and \(\mathcal{N}\), respectively, and sequences of \({}^{*}\)-homomorphisms \((\phi_{n}\colon\mathcal{M}\to\mathcal{N})_{n=1}^{\infty}\) and \((\psi_{n}\colon\mathcal{N}\to\mathcal{M})_{n=1}^{\infty}\) such that for all \(n\geq 1\), 1. \(\mathcal{F}^{\prime}_{n}\subseteq\mathcal{F}_{n}\) and \(\mathcal{G}^{\prime}_{n}\subseteq\mathcal{G}_{n}\), 2. \(\phi_{n}\) and \(\psi_{n}\) are unitarily equivalent to \(\phi_{0}\) and \(\psi_{0}\), respectively, 3. \(\phi_{n}(\mathcal{F}_{n})\subseteq\mathcal{G}_{n}\) and \(\psi_{n}(\mathcal{G}_{n})\subseteq\mathcal{F}_{n+1}\), and 4. for all \(a\in\mathcal{F}_{n}\) and \(b\in\mathcal{G}_{n}\), we have (9.25) \[\|\psi_{n}(\phi_{n}(a))-a\|_{2,X}<2^{-n}\] and (9.26) \[\|\phi_{n}(\psi_{n-1}(b))-b\|_{2,Y}<2^{-n}.\] Assuming that \(\mathcal{F}_{n-1}\), \(\mathcal{G}_{n-1}\), \(\phi_{n-1}\), and \(\psi_{n-1}\) have been constructed, let \(\mathcal{F}_{n}\coloneqq\mathcal{F}^{\prime}_{n}\cup\mathcal{F}_{n-1}\cup \psi_{n-1}(\mathcal{G}_{n-1})\). Since \(\phi_{n-1}\) and \(\psi_{n-1}\) are unitarily equivalent to \(\phi_{0}\) and \(\psi_{0}\), respectively, and \(\phi_{0}\circ\psi_{0}\) is approximately unitarily equivalent to \(\mathrm{id}_{\mathcal{N}}\), we have \(\phi_{n-1}\circ\psi_{n-1}\) approximately unitarily equivalent to \(\mathrm{id}_{\mathcal{N}}\). Therefore, there is a \({}^{*}\)-homomorphism \(\phi_{n}\colon\mathcal{M}\to\mathcal{N}\) which satisfies (9.26) and is unitarily equivalent to \(\phi_{n-1}\), and hence also to \(\phi_{0}\). 
The construction of \(\mathcal{G}_{n}\) and \(\psi_{n}\) is similar. If \(m\geq n\geq 1\) and \(a\in\mathcal{F}_{n}\), then \[\begin{split}\|\phi_{m+1}(a)-\phi_{m}(a)\|_{2,Y}&\leq \|\phi_{m+1}(a-\psi_{m}(\phi_{m}(a)))\|_{2,Y}\\ &\quad+\|\phi_{m+1}(\psi_{m}(\phi_{m}(a)))-\phi_{m}(a)\|_{2,Y}\\ &<2^{-m}+2^{-m-1}.\end{split} \tag{9.27}\] In particular, \((\phi_{m}(a))_{m=1}^{\infty}\) is norm-bounded and \(\|\cdot\|_{2,Y}\)-Cauchy for all \(a\in\mathcal{F}_{n}\). So \((\phi_{n}(a))_{n=1}^{\infty}\) is norm-bounded and \(\|\cdot\|_{2,Y}\)-Cauchy for all \(a\in\bigcup_{n=1}^{\infty}\mathcal{F}_{n}\), and hence also for all \(a\in\mathcal{M}\). Similarly, \((\psi_{n}(b))_{n=1}^{\infty}\) is norm-bounded and \(\|\cdot\|_{2,X}\)-Cauchy for all \(b\in\mathcal{N}\). Define \(\phi\colon\mathcal{M}\to\mathcal{N}\) and \(\psi\colon\mathcal{N}\to\mathcal{M}\) by \[\phi(a)\coloneqq\lim_{n\to\infty}\phi_{n}(a)\qquad\text{and}\qquad\psi(a) \coloneqq\lim_{n\to\infty}\psi_{n}(a). \tag{9.28}\] By construction, \(\phi\) and \(\psi\) are approximately unitarily equivalent to \(\phi_{0}\) and \(\psi_{0}\), respectively, and in particular, \(\tau\circ\phi=\tau\circ\phi_{0}\) for all \(\tau\in Y\) and \(\sigma\circ\psi=\sigma\circ\psi_{0}\) for all \(\sigma\in X\). To complete the proof, note that \(\phi\) and \(\psi\) are mutual inverses using (9.25) and (9.26). Theorem A from the overview is an immediate consequence of Theorem 9.13, provided (as set out in Footnote 1) we interpret it to mean that for unital separable nuclear \(\mathcal{Z}\)-stable \(C^{*}\)-algebras \(A\) and \(B\), one has \(\big{(}\overline{A}^{T(A)},T(A)\big{)}\cong\big{(}\overline{B}^{T(B)},T(B)\big{)}\) if and only if \(T(A)\) and \(T(B)\) are affinely homeomorphic. The point is that the 'easy direction' (recovering the trace space from an isomorphism) is a tautology when the isomorphism is in the category of tracially complete \(C^{*}\)-algebras.60 To see the 'hard direction', note that if \(A\) is a unital separable nuclear \(\mathcal{Z}\)-stable \(C^{*}\)-algebra, then \(\big{(}\overline{A}^{T(A)},T(A)\big{)}\) is \(\|\cdot\|_{2,T(A)}\)-separable, semidiscrete by Corollary 4.10, factorial by Proposition 3.23(iv), and satisfies property \(\Gamma\) by Propositions 5.17 and 5.22. The result then follows from Theorem 9.13(i). Footnote 60: To recover \(T(A)\) from \(\overline{A}^{T(A)}\) as a \(C^{*}\)-algebra requires the forthcoming solution to the trace problem (Question 1.1) for tracially complete \(C^{*}\)-algebras with CPoU due to the third-named author. The classification result for morphisms provides the following strengthened version of the completely positive approximation property in which the 'upward maps' can be taken to be \({}^{*}\)-homomorphisms (and the downward maps are approximately multiplicative). Such a result holds for injective von Neumann algebras, and this was the starting point to obtaining strengthened forms of the completely positive approximation property involving _decomposable approximations_ in the \(C^{*}\)-algebraic setting ([14, 18, 59]).61 Footnote 61: In [59, Section 1] a local reflexivity argument is given which shows how to use hyperfiniteness of a von Neumann algebra \(\mathcal{M}\) to obtain nets of finite dimensional algebras \(F_{i}\) together with u.c.p. maps \(\psi_{i}\colon\mathcal{M}\to F_{i}\) and \(\phi_{i}\colon F_{i}\to\mathcal{M}\) such that each \(\phi_{i}\) is a \({}^{*}\)-homomorphism and \(\phi_{i}(\psi_{i}(x))\to x\) in the weak\({}^{*}\)-topology for all \(x\in\mathcal{M}\). 
In fact, all of Connes', Popa's, and Haagerup's proofs that injectivity implies hyperfiniteness ([29, 92, 57]) output such approximations in the case of factors. **Theorem 9.14**.: _Suppose \(A\) is a \(C^{*}\)-algebra, \((\mathcal{N},Y)\) is a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra, and \(\theta\colon A\to\mathcal{N}\) is a tracially nuclear \({}^{*}\)-homomorphism. Then there are nets_ \[A\xrightarrow{\psi_{\lambda}}F_{\lambda}\xrightarrow{\phi_{\lambda}}\mathcal{N} \tag{9.29}\] _of finite dimensional \(C^{*}\)-algebras \(F_{\lambda}\) and c.p.c. maps \(\phi_{\lambda}\) and \(\psi_{\lambda}\) such that_ * \(\|\phi_{\lambda}(\psi_{\lambda}(a))-\theta(a)\|_{2,Y}\to 0\) _for all_ \(a\in A\)_,_ * \(\|\psi_{\lambda}(ab)-\psi_{\lambda}(a)\psi_{\lambda}(b)\|_{2,T(F_{\lambda})}\to 0\) _for all_ \(a,b\in A\)_,_ * _each_ \(\phi_{\lambda}\) _is a unital_ \({}^{*}\)_-homomorphism._ _If \(\|\tau\circ\theta\|^{-1}(\tau\circ\theta)\) is quasidiagonal whenever \(\tau\in Y\) with \(\tau\circ\theta\neq 0\), we may further arrange for_ * \(\|\psi_{\lambda}(ab)-\psi_{\lambda}(a)\psi_{\lambda}(b)\|\to 0\) _for all_ \(a,b\in A\)_._ _In either case, when \(A\) and \(\theta\) are unital, we may arrange for each \(\psi_{\lambda}\) to be unital._ Proof.: Adding a unit to \(A\), we may assume that \(A\) is unital (Lemma 4.3). Further, as the conclusion of Theorem 9.14 is a local condition, it suffices to prove it when \(A\) is separable. By Theorem 4.9, \(\tau\circ\theta\) is uniformly amenable for each \(\tau\in Y\). By Theorem 9.8 (applied to the affine map \(\tau\mapsto\tau\circ\theta\)), there are finite dimensional \(C^{*}\)-algebras \(F_{n}\) and u.c.p. maps \[A\stackrel{{\psi_{n}}}{{\longrightarrow}}F_{n}\stackrel{{ \phi^{\prime}_{n}}}{{\longrightarrow}}\mathcal{N} \tag{9.30}\] so that for all \(a,b\in A\) and \(\tau\in Y\), we have \[\|\psi_{n}(ab)-\psi_{n}(a)\psi_{n}(b)\|_{2,T(F_{n})}\to 0, \tag{9.31}\] and \[|\tau(\phi^{\prime}_{n}(\psi_{n}(a)))-\tau(\theta(a))|\to 0, \tag{9.32}\] and such that each \(\phi^{\prime}_{n}\) is a unital \({}^{*}\)-homomorphism. Further, if \(\tau\circ\theta\) is quasidiagonal for all \(\tau\in Y\), we may replace (9.31) with \[\|\psi_{n}(ab)-\psi_{n}(a)\psi_{n}(b)\|\to 0 \tag{9.33}\] for all \(a,b\in A\). The maps \((\phi^{\prime}_{n}\circ\psi_{n})_{n=1}^{\infty}\) induce a \({}^{*}\)-homomorphism \(\theta^{\prime}\colon A\to\mathcal{N}^{\infty}\) with \(\tau\circ\theta^{\prime}=\tau\circ\theta\) for all \(\tau\in Y^{\infty}\). By Theorem 4.9, \(\theta^{\prime}\) is tracially nuclear, and hence by Corollary 9.4, \(\theta\) and \(\theta^{\prime}\) are unitarily equivalent. If \(u\in\mathcal{N}^{\infty}\) is a unitary with \(\operatorname{ad}(u)\circ\theta^{\prime}=\theta\), then by Corollary 7.11, there is a sequence of unitaries \((u_{n})_{n=1}^{\infty}\subseteq\mathcal{N}\) lifting \(u\). The result follows by setting \(\phi_{n}\coloneqq\operatorname{ad}(u_{n})\circ\phi^{\prime}_{n}\). We end by proving the structure and classification theorems from the overview (Theorems B and C). Most of the implications involved are already in place, and it remains to use the classification theorem to pass back from hyperfiniteness to the McDuff property via CPoU. **Theorem 9.15**.: _Let \((\mathcal{M},X)\) be a type \(\mathrm{II}_{1}\) factorial tracially complete \(C^{*}\)-algebra. 
Then the following conditions are equivalent:_ * (i) \((\mathcal{M},X)\) _is hyperfinite;_ * (ii) \((\mathcal{M},X)\) _is amenable and has CPoU;_ * (iii) \((\mathcal{M},X)\) _is amenable and satisfies property_ \(\Gamma\)_;_ * (iv) \((\mathcal{M},X)\) _is amenable and McDuff._ _In this setting, if \(\mathcal{M}\) is also assumed to be \(\|\cdot\|_{2,X}\)-separable, then \((\mathcal{M},X)\cong(\mathcal{R}_{X},X)\) (see Example 3.35)._ Proof.: The implications (iv) \(\Longrightarrow\) (iii), (iii) \(\Longrightarrow\) (ii), and (i) \(\Longrightarrow\) (ii) hold by Proposition 5.22, Theorem 1.4, and Theorem 8.3, respectively. Further, Theorem 9.14 applied to \(\theta=\operatorname{id}_{\mathcal{M}}\) shows (ii) \(\Longrightarrow\) (i). Suppose (ii) holds and \((\mathcal{M},X)\) is \(\|\cdot\|_{2,X}\)-separable. Both \((\mathcal{M},X)\) and \((\mathcal{R}_{X},X)\) satisfy the conditions of Theorem 9.13, and hence \((\mathcal{M},X)\cong(\mathcal{R}_{X},X)\). As \((\mathcal{R}_{X},X)\) is McDuff, this also shows \((\mathcal{M},X)\) is McDuff and finishes the proof in the separable setting. It remains to show (ii) implies (iv) without separability. Assume \((\mathcal{M},X)\) is amenable and satisfies CPoU. By the local characterisation of McDuff's property (Proposition 5.14(ii)), it suffices to show that every finite set \(\mathcal{F}\subseteq\mathcal{M}\) is contained in a unital \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra of \(\mathcal{M}\) which is factorial and McDuff as a tracially complete \(C^{*}\)-algebra. Using the separable inheritability of the conditions in (ii) (Theorem A.3(ii) and (vi), together with Proposition A.2), this follows from the fact that (ii) implies (iv) in the separable setting. ## Appendix A Separabilisation In this appendix we collect the machinery needed to reduce our main structural results - Theorems B and 1.4 - to the case of separable tracially complete \(C^{*}\)-algebras, and prove the non-metrisable version of Theorem 2.2. ### Separabilising tracially complete \(C^{*}\)-algebras The following definition is modelled on Blackadar's notion of separable inheritability for \(C^{*}\)-algebras ([7, Section II.8.5]). Recall from Section 3.1 (before Definition 3.11) that if \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra then a tracially complete \(C^{*}\)-subalgebra of \((\mathcal{M},X)\) is \((\mathcal{M}_{0},X_{0})\) where \(\mathcal{M}_{0}\subseteq\mathcal{M}\) is a unital \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra and \(X_{0}\subseteq T(\mathcal{M}_{0})\) is the set of traces arising as restrictions of traces in \(X\) to \(\mathcal{M}_{0}\). **Definition A.1** (cf. [7, Section II.8.5]).: We say that a property \((P)\) of tracially complete \(C^{*}\)-algebras is _separably inheritable_ if * (i) whenever \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra satisfying \((P)\) and \(S\subseteq\mathcal{M}\) is a \(\|\cdot\|_{2,X}\)-separable subset of \(\mathcal{M}\), there is a tracially complete \(C^{*}\)-subalgebra \((\mathcal{M}_{0},X_{0})\) which is \(\|\cdot\|_{2,X_{0}}\)-separable, satisfies \((P)\) and such that \(S\subseteq\mathcal{M}_{0}\), * (ii) if \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of tracially complete \(C^{*}\)-algebras satisfying \((P)\) and \(\phi_{n}^{n+1}\colon(\mathcal{M}_{n},X_{n})\to(\mathcal{M}_{n+1},X_{n+1})\) is an embedding for each \(n\geq 1\), then \(\varinjlim\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+1}\big{)}\) also satisfies \((P)\). We say that \((P)\) is _strongly separably inheritable_ if, in addition, 
* (iii) if \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra such that for every \(\|\cdot\|_{2,X}\)-separable subset \(S\) of \(\mathcal{M}\), there exists a tracially complete \(C^{*}\)-subalgebra \((\mathcal{M}_{0},X_{0})\) satisfying \((P)\) and containing \(S\), then \((\mathcal{M},X)\) satisfies \((P)\). Adapting the language of [99, Definition 1.4] to tracially complete \(C^{*}\)-algebras, condition (iii) asks that \((\mathcal{M},X)\) satisfies \((P)\) whenever it separably satisfies \((P)\). If \((P)\) is a separably inheritable property and \((Q)\) satisfies condition (iii), then in order to prove \((P)\implies(Q)\) for all tracially complete \(C^{*}\)-algebras, it is enough to prove it for \(\|\cdot\|_{2,X}\)-separable tracially complete \(C^{*}\)-algebras. The following will allow us to construct separable subalgebras that satisfy several separably inheritable properties simultaneously. The proof is an easy modification of the corresponding result for \(C^{*}\)-algebras given in [7, Proposition II.8.5.3]. **Proposition A.2**.: _The conjunction of countably many (strongly) separably inheritable properties is (strongly) separably inheritable._ Proof.: Let \(\big{(}(P_{\lambda})\big{)}_{\lambda\in\Lambda}\) be a collection of separably inheritable properties of tracially complete \(C^{*}\)-algebras indexed by some countable set \(\Lambda\), and let \((P)\) be the conjunction of the \((P_{\lambda})\). It is clear that Definition A.1(ii) holds for \((P)\) as it holds for each \((P_{\lambda})\). To see Definition A.1(i), let \((\mathcal{M},X)\) be a tracially complete \(C^{*}\)-algebra satisfying \((P)\) and let \(S\subseteq\mathcal{M}\) be a \(\|\cdot\|_{2,X}\)-separable subset. Fix a surjective map \(f\colon\mathbb{N}\to\Lambda\) such that each \(\lambda\in\Lambda\) has infinitely many preimages. Using Definition A.1(i) for the properties \((P_{\lambda})\), inductively construct an increasing sequence of \(\|\cdot\|_{2,X}\)-closed and \(\|\cdot\|_{2,X}\)-separable \(C^{*}\)-subalgebras \(\mathcal{M}_{n}\subseteq\mathcal{M}\) such that \(S\subseteq\mathcal{M}_{1}\) and \(\mathcal{M}_{n}\) satisfies \((P_{f(n)})\). As each \((P_{\lambda})\) satisfies Definition A.1(ii), the \(\|\cdot\|_{2,X}\)-closed union of the \(\mathcal{M}_{n}\) satisfies \((P)\). For strong separable inheritability, it is clear that Definition A.1(iii) is closed under (arbitrary) conjunctions. We will show that many of the properties of tracially complete \(C^{*}\)-algebras defined in this paper are strongly separably inheritable. **Theorem A.3**.: _The following properties are strongly separably inheritable:_ * (i) _factoriality,_ * (ii) _amenability,_ * (iii) _hyperfiniteness,_ * (iv) _McDuff's property,_ * (v) _factoriality and property_ \(\Gamma\)_, and_ * (vi) _factoriality and CPoU._ In the case of the last two conditions, we include the factoriality condition as we have not defined property \(\Gamma\) or CPoU in the non-factorial setting. The rest of this subsection is devoted to the proof. We will show each condition separately. For factoriality, most of the work is contained in Lemma 6.8. Proof of Theorem A.3(i).: A sequential direct limit of factorial tracially complete \(C^{*}\)-algebras is factorial by Proposition 3.34. Now suppose \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra and \(S\subseteq\mathcal{M}\) is a \(\|\cdot\|_{2,X}\)-separable subset of \(\mathcal{M}\). 
By Lemma 6.8, there is a \(\|\cdot\|\)-separable unital \(C^{*}\)-algebra \(A\subseteq\mathcal{M}\) containing a \(\|\cdot\|_{2,X}\)-dense subset of \(S\) such that (A.1) \[X_{A}\coloneqq\{\tau|_{A}:\tau\in X\}\subseteq T(A)\] is a closed face. Let \((\mathcal{N},Y)\) denote the tracial completion of \(A\) with respect to \(X_{A}\) and note that \((\mathcal{N},Y)\) is factorial by Proposition 3.23(iv). The inclusion \(A\to\mathcal{M}\) extends to a morphism \(\phi\colon(\mathcal{N},Y)\to(\mathcal{M},X)\) by Proposition 3.25. Note that \(\phi\) is an embedding whose range contains \(S\). Also, the unit ball of \(\phi(\mathcal{N})\) is \(\|\cdot\|_{2,X}\)-closed in the unit ball of \(\mathcal{M}\), and it follows from Lemma 3.27 that \(\phi(\mathcal{N})\) is \(\|\cdot\|_{2,X}\)-closed in \(\mathcal{M}\). Finally, suppose that \((\mathcal{M},X)\) is a tracially complete \(C^{*}\)-algebra such that every \(\|\cdot\|_{2,X}\)-separable subset \(S\) is contained in a factorial tracially complete \(C^{*}\)-subalgebra of \((\mathcal{M},X)\). To show that \((\mathcal{M},X)\) is factorial, let \(\tau_{1},\tau_{2},\tau\in T(\mathcal{M})\) be such that \(\tau\in X\) and \(\tau\) is a non-trivial convex combination of \(\tau_{1}\) and \(\tau_{2}\), and we will check that \(\tau_{1}\in X\). For any finite subset \(\mathcal{F}\) of \(\mathcal{M}\), we can find a factorial tracially complete \(C^{*}\)-subalgebra \((\mathcal{M}_{0},X_{0})\) such that \(\mathcal{F}\subseteq\mathcal{M}_{0}\). Since \(\tau|_{\mathcal{M}_{0}}\) is a non-trivial convex combination of \(\tau_{1}|_{\mathcal{M}_{0}}\) and \(\tau_{2}|_{\mathcal{M}_{0}}\), and \(X_{0}\) is a face, it follows that \(\tau_{1}|_{\mathcal{M}_{0}}\in X_{0}\). By definition of \(X_{0}\), this means that there exists \(\sigma_{\mathcal{F}}\in X\) such that \(\tau_{1}|_{\mathcal{M}_{0}}=\sigma_{\mathcal{F}}|_{\mathcal{M}_{0}}\), so in particular, \(\tau_{1}|_{\mathcal{F}}=\sigma_{\mathcal{F}}|_{\mathcal{F}}\). By doing this over all finite subsets \(\mathcal{F}\), we see that \(\tau_{1}\) is a weak\({}^{*}\)-limit of traces in \(X\), and therefore \(\tau_{1}\in X\) as required. It is somewhat subtle to show that amenability is preserved by inductive limits. Even in the tracial von Neumann algebra setting, the only known proof that the weak closure of an increasing union of semidiscrete von Neumann algebras is semidiscrete relies on the equivalence of semidiscreteness and injectivity from Connes' work ([29]). Our argument for parts (i) and (ii) of Definition A.1 goes via the extension result for tracially nuclear morphisms (Lemma 4.12) proved using the local-to-global characterisations of amenability in Section 4. Proof of Theorem A.3(ii).: For condition (iii) of Definition A.1, fix a finite subset \(\mathcal{F}\) of \(\mathcal{M}\) and \(\epsilon>0\). Use the hypothesis to find an amenable tracially complete \(C^{*}\)-subalgebra \((\mathcal{M}_{0},X_{0})\) of \(\mathcal{M}\) containing \(\mathcal{F}\). By Arveson's extension theorem, the maps witnessing amenability of \((\mathcal{M}_{0},X_{0})\) can be extended to give a finite dimensional \(C^{*}\)-algebra \(F\) and c.p.c. maps \(\mathcal{M}\stackrel{{\psi}}{{\to}}F\stackrel{{ \phi}}{{\to}}\mathcal{M}\) with \(\|\phi(\psi(x))-x\|<\epsilon\) for \(x\in\mathcal{F}\). Working with a net indexed over finite sets \(\mathcal{F}\) and \(\epsilon>0\) gives amenability of \(\mathcal{M}\). 
Suppose \(\big{(}(\mathcal{M}_{n},X_{n})\big{)}_{n=1}^{\infty}\) is a sequence of amenable tracially complete \(C^{*}\)-algebras and \(\phi_{n}^{n+1}\colon(\mathcal{M}_{n},X_{n})\to(\mathcal{M}_{n+1},X_{n+1})\) is an embedding for each \(n\geq 1\), and consider \((\mathcal{M},X)\coloneqq\varinjlim\big{(}(\mathcal{M}_{n},X_{n}),\phi_{n}^{n+ 1}\big{)}\). If \(A\subseteq\mathcal{M}\) is the \(C^{*}\)-algebraic direct limit of the \(\mathcal{M}_{n}\), then the inclusion \(A\hookrightarrow\mathcal{M}\) is tracially nuclear (using Arveson's extension theorem as above). By Lemma 4.12, \((\mathcal{M},X)\) is amenable, and this shows Definition A.1(ii). Suppose now that \((\mathcal{M},X)\) is an amenable tracially complete \(C^{*}\)-algebra. We will first show that for every unital \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra \(\mathcal{M}_{0}\subseteq\mathcal{M}\), there is a \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra \(\mathcal{N}\subseteq\mathcal{M}\) containing \(\mathcal{M}_{0}\) so that the inclusion \(\mathcal{M}_{0}\hookrightarrow\mathcal{N}\) is tracially nuclear. Fix a \(\|\cdot\|\)-separable, \(\|\cdot\|_{2,X}\)-dense \(C^{*}\)-algebra \(A\subseteq\mathcal{M}_{0}\) and an increasing sequence \((\mathcal{F}_{n})\) of finite subsets of \(A\) with \(\|\cdot\|\)-dense union, and for each \(n\geq 1\), use Proposition 4.4 to construct a finite dimensional \(C^{*}\)-algebra \(F_{n}\) and c.p.c. maps (A.2) \[A\stackrel{{\psi_{n}}}{{\longrightarrow}}F_{n}\stackrel{{ \phi_{n}}}{{\longrightarrow}}\mathcal{M}\] so that (A.3) \[\left\|\phi_{n}(\psi_{n}(a))-a\right\|_{2,X}<\frac{1}{n},\qquad a\in\mathcal{F}_{n}.\] Let \(\mathcal{N}\) denote the tracially complete \(C^{*}\)-subalgebra of \(\mathcal{M}\) generated by \(\mathcal{M}_{0}\cup\bigcup_{n=1}^{\infty}\phi_{n}(F_{n})\). By construction, \(\mathcal{N}\) is \(\|\cdot\|_{2,X}\)-separable and the inclusion \(A\hookrightarrow\mathcal{N}\) is tracially nuclear, and by Lemma 4.12, so is the inclusion \(\mathcal{M}_{0}\hookrightarrow\mathcal{N}\). Applying this result inductively, given a \(\|\cdot\|_{2,X}\)-separable subset \(S\subseteq\mathcal{M}\), we may construct an increasing sequence \((\mathcal{M}_{n})_{n=1}^{\infty}\) of \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed subalgebras of \(\mathcal{M}\) such that \(S\subseteq\mathcal{M}_{1}\) and the inclusions \(\mathcal{M}_{n}\hookrightarrow\mathcal{M}_{n+1}\) are tracially nuclear. If \(\mathcal{N}\) is the \(\|\cdot\|_{2,X}\)-closure of the union of the \(\mathcal{M}_{n}\), then \(\mathcal{N}\) is \(\|\cdot\|_{2,X}\)-separable. If \(A\subseteq\mathcal{N}\) is the \(\|\cdot\|\)-closure of the union of the \(\mathcal{M}_{n}\), then the inclusion \(A\hookrightarrow\mathcal{N}\) is tracially nuclear (by the same application of Arveson's extension theorem used in the previous parts), and hence \(\mathcal{N}\) is amenable by Lemma 4.12. This shows Definition A.1(i). The proof that hyperfiniteness is separably inheritable is standard. Proof of Theorem A.3(iii).: Verifying Definition A.1(ii) and (iii) is a standard \(\epsilon/3\) argument. To see Definition A.1(i), suppose \((\mathcal{M},X)\) is a hyperfinite tracially complete \(C^{*}\)-algebra, let \(S\) be a \(\|\cdot\|_{2,X}\)-separable subset of \(\mathcal{M}\), and let \(\mathcal{M}_{1}\) denote the tracially complete \(C^{*}\)-subalgebra of \(\mathcal{M}\) generated by \(S\). 
As \(\mathcal{M}_{1}\) is \(\|\cdot\|_{2,X}\)-separable, fix an increasing sequence \((\mathcal{F}_{k})\) of finite sets with dense union in \(\mathcal{M}_{1}\). For each \(k\geq 1\), let \(F_{k}\subseteq\mathcal{M}\) be a finite dimensional \(C^{*}\)-algebra such that for every \(a\in\mathcal{F}_{k}\), there is a \(b\in F_{k}\) with \(\|a-b\|_{2,X}<\frac{1}{k}\). Let \(\mathcal{M}_{2}\) denote the tracially complete \(C^{*}\)-subalgebra generated by \(\mathcal{M}_{1}\) and the \(F_{k}\). Iterating this construction, there is an increasing sequence of \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-algebras \(\mathcal{M}_{n}\subseteq\mathcal{M}\) such that for every \(n\geq 1\), finite set \(\mathcal{F}\subseteq\mathcal{M}_{n}\), and \(\epsilon>0\), there is a finite dimensional \(C^{*}\)-algebra \(F\subseteq\mathcal{M}_{n+1}\) such that for all \(a\in\mathcal{F}\) there is a \(b\in F\) with \(\|a-b\|_{2,X}<\epsilon\). Then the \(\|\cdot\|_{2,X}\)-closure of the union of the \(\mathcal{M}_{n}\) is a tracially complete hyperfinite subalgebra of \(\mathcal{M}\) containing \(S\). The separable inheritability of McDuff's property and property \(\Gamma\) are easy consequences of the local characterisation of these properties in Proposition 5.14(ii) and Proposition 5.25 respectively. Proof of Theorem A.3(iv) and (v).: McDuff's property is preserved by sequential inductive limits by Corollary 5.16, and the combination of factoriality and property \(\Gamma\) is preserved under sequential inductive limits by Propositions 3.34 and 5.25, respectively. In particular, Definition A.1(ii) holds for both conditions. Likewise, Proposition 5.14(ii) shows that Definition A.1(iii) holds for McDuff's property, while Proposition 5.23(ii) (together with the fact that Definition A.1(iii) holds for factoriality) handles this condition for factorial tracially complete \(C^{*}\)-algebras with property \(\Gamma\). For Definition A.1(i), let \((\mathcal{M},X)\) be a McDuff tracially complete \(C^{*}\)-algebra and let \(S\subseteq\mathcal{M}\) be a \(\|\cdot\|_{2,X}\)-separable set; write \(\mathcal{M}_{1}\) for the \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra of \(\mathcal{M}\) generated by \(S\), which is necessarily \(\|\cdot\|_{2,X}\)-separable. Inductively, given a \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra \(\mathcal{M}_{n}\) of \(\mathcal{M}\), Proposition 5.14(ii) provides a sequence \((v_{m})_{m=1}^{\infty}\subseteq\mathcal{M}\) such that for any finite subset \(\mathcal{F}\subseteq\mathcal{M}_{n}\) and \(\epsilon>0\), there is \(m\geq 1\) such that for all \(a\in\mathcal{F}\), (A.4) \[\|[v_{m},a]\|_{2,X}<\epsilon,\ \|v_{m}^{*}v_{m}+v_{m}v_{m}^{*}-1_{\mathcal{M}} \|_{2,X}<\epsilon,\ \text{and}\ \|v_{m}^{2}\|_{2,X}<\epsilon.\] Let \(\mathcal{M}_{n+1}\) be the \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra of \(\mathcal{M}\) generated by \(\mathcal{M}_{n}\cup\{v_{m}:m\in\mathbb{N}\}\). Let \(\mathcal{N}\) denote the \(\|\cdot\|_{2,X}\)-closure of the union of the \(\mathcal{M}_{n}\). Then \(\mathcal{N}\) is a \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra of \(\mathcal{M}\) containing \(S\) and satisfying the approximation property in Proposition 5.14(ii). So \(\mathcal{N}\) is McDuff, and Definition A.1(i) holds. The argument for Definition A.1(i) works very similarly for the combination of factoriality and property \(\Gamma\). 
For the inductive step, given a \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra \(\mathcal{M}_{n}\) of \(\mathcal{M}\) which is factorial with the tracially complete structure induced from \(\mathcal{M}\), we replace the sequence \((v_{m})_{m=1}^{\infty}\) with a sequence \((p_{m})_{m=1}^{\infty}\) of contractions such that for any finite subset \(\mathcal{F}\subset\mathcal{M}_{n}\) and \(\epsilon>0\), there is \(m\) with (A.5) \[\|p_{m}-p_{m}^{2}\|_{2,X}<\epsilon,\ \|[p_{m},a]\|_{2,X}<\epsilon\ \text{and}\ \sup_{\tau\in X}\big{|}\tau(ap_{m})-\frac{1}{2}\tau(a)\big{|}<\epsilon\] for all \(a\in\mathcal{F}\); such a sequence is provided by the characterisation of property \(\Gamma\) in Proposition 5.23(ii). Now use the corresponding part of Theorem A.3(i) to obtain a \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra \(\mathcal{M}_{n+1}\) of \(\mathcal{M}\) containing \(\mathcal{M}_{n}\cup\{p_{m}:m\in\mathbb{N}\}\) which is factorial in the induced tracially complete structure. Just as in the previous paragraph, the \(\|\cdot\|_{2,X}\)-closure of the union of the \(\mathcal{M}_{n}\) is factorial (by Theorem A.3(i)) with property \(\Gamma\) (via Proposition 5.23(ii)). Finally, we establish that CPoU is strongly separably inheritable for factorial tracially complete \(C^{*}\)-algebras. This works in a very similar fashion to separable inheritability of the McDuff property and property \(\Gamma\). Proof of Theorem A.3(vi).: The combination of factoriality and CPoU is preserved under inductive limits by Propositions 3.34 and 6.12. Similarly, Proposition 6.2(ii) (together with the fact that Definition A.1(iii) holds for factoriality) shows that factoriality and CPoU satisfies Definition A.1(iii). Suppose now that \((\mathcal{M},X)\) is a factorial tracially complete \(C^{*}\)-algebra satisfying CPoU. Let \(S\subseteq\mathcal{M}\) be a \(\|\cdot\|_{2,X}\)-separable subset of \(\mathcal{M}\). We start by constructing an increasing sequence \((\mathcal{N}_{n})_{n=1}^{\infty}\) of \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebras of \(\mathcal{M}\) such that * \(S\subseteq\mathcal{N}_{1}\), * each \(\mathcal{N}_{n}\) is factorial as a tracially complete \(C^{*}\)-algebra, and * for each \(k,n\geq 1\), if \(a_{1},\dots,a_{k}\in(\mathcal{N}_{n})_{+}\) and \(\delta>0\) with (A.6) \[\sup_{\tau\in X}\min_{1\leq i\leq k}\tau(a_{i})<\delta,\] then there are projections \(p_{1},\dots,p_{k}\in(\mathcal{N}_{n+1})^{\omega}\cap\mathcal{N}_{n}^{\prime}\) such that (A.7) \[\sum_{j=1}^{k}p_{j}=1_{\mathcal{M}^{\omega}}\qquad\text{and}\qquad\tau(a_{i}p_{ i})\leq\delta\tau(p_{i})\] for all \(i=1,\dots,k\) and \(\tau\in X^{\omega}\). We will construct the algebras \(\mathcal{N}_{n}\) by induction starting with \(\mathcal{N}_{1}\) being any unital factorial \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra of \(\mathcal{M}\) containing \(S\), which exists as factoriality is separably inheritable (part (i) of Theorem A.3). Suppose \(n\geq 1\) and \(\mathcal{N}_{n}\) has been constructed. Let \(S_{n}\) be a countable, \(\|\cdot\|_{2,X}\)-dense subset of \((\mathcal{N}_{n})_{+}\), and let \(K_{n}\) be the set of all pairs \(\kappa=(\mathcal{G}_{\kappa},\delta_{\kappa})\) where \(\mathcal{G}_{\kappa}\subseteq S_{n}\) is a finite set and \(\delta_{\kappa}>0\) is rational with (A.8) \[\sup_{\tau\in X}\min_{a\in\mathcal{G}_{\kappa}}\tau(a)<\delta_{\kappa}.\] Note that \(K_{n}\) is a countable set. 
Since \((\mathcal{M},X)\) satisfies CPoU, for each \(\kappa\in K_{n}\) there are projections \(p_{\kappa,a}\in\mathcal{M}^{\omega}\cap\mathcal{N}_{n}^{\prime}\), \(a\in\mathcal{G}_{\kappa}\), such that (A.9) \[\sum_{b\in\mathcal{G}_{\kappa}}p_{\kappa,b}=1_{\mathcal{M}^{\omega}}\qquad \text{and}\qquad\tau(ap_{\kappa,a})\leq\delta_{\kappa}\tau(p_{\kappa,a})\] for all \(a\in\mathcal{G}_{\kappa}\) and \(\tau\in X^{\omega}\). For each \(\kappa\in K_{n}\) and \(a\in\mathcal{G}_{\kappa}\), let \((p_{\kappa,a,m})_{m=1}^{\infty}\subseteq\mathcal{M}\) be a sequence in \(\mathcal{M}\) representing \(p_{\kappa,a}\). As factoriality is separably inheritable (part (i) of the theorem), there is a unital \(\|\cdot\|_{2,X}\)-separable, \(\|\cdot\|_{2,X}\)-closed \(C^{*}\)-subalgebra \(\mathcal{N}_{n+1}\subseteq\mathcal{M}\) which contains \(\mathcal{N}_{n}\) and each \(p_{\kappa,a,m}\) for all \(m\geq 1\), \(\kappa\in K_{n}\) and \(a\in\mathcal{G}_{\kappa}\), and is factorial as a tracially complete \(C^{*}\)-algebra. Then \(\mathcal{N}_{n+1}\) satisfies the required properties. Let \(\mathcal{N}\subseteq\mathcal{M}\) be the \(\|\cdot\|_{2,X}\)-closure of the union of the algebras \(\mathcal{N}_{n}\). Then \(\mathcal{N}\) is a \(\|\cdot\|_{2,X}\)-closed, \(\|\cdot\|_{2,X}\)-separable \(C^{*}\)-subalgebra of \(\mathcal{M}\) which contains \(S\). As each \(\mathcal{N}_{n}\) is factorial, so too is \(\mathcal{N}\) by Proposition 3.34. The third condition on the \(\mathcal{N}_{n}\) ensures that \(\mathcal{N}\) satisfies CPoU, and this completes the proof. ### Proof of Theorem 2.2 The final section of the appendix is devoted to finite dimensional approximations of non-metrisable Choquet simplices. We show how to reduce Theorem 2.2 to the metrisable case ([40, Lemma 2.8], which is a corollary of the fundamental work of Lazar and Lindenstrauss in [70]). The strategy is another variation of Blackadar's separable inheritability, this time for compact convex sets; roughly, we want to know that the property of being a Choquet simplex is "metrisably inheritable" among compact convex sets. We find it conceptually easier to do this in the dual picture and consider separably inheritable properties of Archimedean order unit spaces. Choquet simplices can be characterised in terms of Archimedean order unit spaces using Kadison duality as follows. An ordered vector space \(V\) satisfies _Riesz interpolation_ if for all \(f_{1},f_{2},g_{1},g_{2}\in V\) with \(f_{i}\leq g_{j}\) for \(i,j=1,2\), there is an \(h\in V\) with \(f_{i}\leq h\leq g_{j}\) for \(i,j=1,2\). **Theorem A.4** ([3, Corollary II.3.11]).: _Suppose \(X\) is a compact convex set and \(V\) is an Archimedean order unit space._ * (i) \(X\) _is a Choquet simplex if and only if_ \(\operatorname{Aff}(X)\) _has Riesz interpolation._ * (ii) \(V\) _has Riesz interpolation if and only if_ \(S(V)\) _is a Choquet simplex._ Proof.: The first statement is [3, Corollary II.3.11], and the second statement follows from the first by Kadison duality (see Section 2.1). We will show Riesz interpolation is a separably inheritable property in Proposition A.6. To facilitate this, we show that Riesz interpolation can be detected on a dense set. **Lemma A.5**.: _Let \(V\) be an Archimedean order unit space. Then \(V\) satisfies Riesz interpolation if and only if there exists a unital dense rational subspace \(V_{0}\subseteq V\) with Riesz interpolation._ Proof.: The forward direction is clear by taking the subspace to be \(V\). 
In the other direction, suppose \(V_{0}\subseteq V\) is a unital dense rational subspace of \(V\) with Riesz interpolation. Since all states (i.e. positive and unital functionals) on both \(V_{0}\) and \(V\) are bounded and \(V_{0}\) is dense in \(V\), the restriction map \(S(V)\to S(V_{0})\) is an affine homeomorphism. As \(V_{0}\) has Riesz interpolation, \(S(V_{0})\) is a Choquet simplex by [55, Corollary 10.6], and hence \(S(V)\) is a Choquet simplex. So \(V\) has Riesz interpolation. Finally, we arrive at the main separabilisation result needed in this subsection. The proof is similar to the proofs in the previous subsection, and also to the proofs of separabilisation results in other settings, such as those in [7, Section II.8.5]. **Proposition A.6**.: _Riesz interpolation is a separably inheritable property of Archimedean order unit spaces in the following sense._ * (i) _If_ \(V\) _is an Archimedean order unit space with Riesz interpolation and_ \(S\) _is a separable subset of_ \(V\)_, then there is a separable unital closed subspace_ \(V_{0}\subseteq V\) _that contains_ \(S\) _and has Riesz interpolation._ * (ii) _For a sequence_ \((V_{n})_{n=1}^{\infty}\) _of separable Archimedean order unit spaces with Riesz interpolation and unital order embeddings_ \(V_{n}\hookrightarrow V_{n+1}\)_, the inductive limit of the_ \(V_{n}\) _also has Riesz interpolation._ Proof.: Part (ii) follows from Lemma A.5 since the non-closed union of the \(V_{n}\) will have Riesz interpolation. Assume in (i) that \(V\) and \(S\) are given, and without loss of generality, assume \(S\) is countable and \(1\in S\). Let \(V_{1}\) be the rational vector space spanned by \(S\), so that \(V_{1}\) is countable. We can inductively construct an increasing sequence of countable rational subspaces \((V_{n})_{n=1}^{\infty}\) of \(V\) such that for all \(n\geq 1\) and \(f_{1},f_{2},g_{1},g_{2}\in V_{n}\) with \(f_{i}\leq g_{j}\) for \(1\leq i,j\leq 2\), there is an \(h\in V_{n+1}\) such that \(f_{i}\leq h\leq g_{j}\) for all \(1\leq i,j\leq 2\). Indeed, as \(V\) has Riesz interpolation, for any such quadruple \((f_{1},f_{2},g_{1},g_{2})\) from \(V_{n}\), we may choose a corresponding \(h\in V\). As the set of all such quadruples in \(V_{n}\) is countable, we may define \(V_{n+1}\) as the rational span of \(V_{n}\) and the functions \(h\) chosen for these quadruples. Now, \(\bigcup_{n=1}^{\infty}V_{n}\) has Riesz interpolation. Let \(V_{0}\) be the closure of this union and note that \(V_{0}\) has Riesz interpolation by Lemma A.5. With these results in hand, we prove Theorem 2.2 by reducing it to the metrisable setting. We restate the theorem for the convenience of the reader. **Theorem 2.2**.: _If \(X\) is a Choquet simplex, then there are nets of finite dimensional Choquet simplices \(Z_{\lambda}\) and continuous affine maps_ \[X\xrightarrow{\beta_{\lambda}}Z_{\lambda}\xrightarrow{\alpha_{\lambda}}X \tag{2.5}\] _such that \(\lim_{\lambda}\|f\circ\alpha_{\lambda}\circ\beta_{\lambda}-f\|=0\) for all \(f\in\operatorname{Aff}(X)\)._ Proof of Theorem 2.2.: View \(\mathbb{R}^{d}\) as an ordered vector space with the coordinatewise order and with order unit \((1,\ldots,1)\). By Kadison duality, it suffices to show that for every finite set \(\mathcal{F}\subseteq\operatorname{Aff}(X)\) and \(\epsilon>0\), there are an integer \(d\) and unital positive linear maps (A.10) \[\operatorname{Aff}(X)\xrightarrow{\psi}\mathbb{R}^{d}\xrightarrow{\phi} \operatorname{Aff}(X)\] such that \(\|\phi(\psi(f))-f\|<\epsilon\) for all \(f\in\mathcal{F}\). 
By Proposition A.6, there is a separable unital closed subspace \(V_{0}\) of \(\operatorname{Aff}(X)\) that contains \(\mathcal{F}\) and satisfies Riesz interpolation. By the Hahn-Banach theorem (applied in each coordinate), any unital positive map \(V_{0}\to\mathbb{R}^{d}\) extends to a unital positive map \(\operatorname{Aff}(X)\to\mathbb{R}^{d}\) (since unital functionals are positive exactly when they have norm 1). Therefore, it suffices to find an integer \(d\) and unital positive linear maps (A.11) \[V_{0}\xrightarrow{\psi}\mathbb{R}^{d}\xrightarrow{\phi}V_{0}\] such that \(\|\phi(\psi(f))-f\|<\epsilon\) for all \(f\in\mathcal{F}\). Since \(V_{0}\) has Riesz interpolation, \(S(V_{0})\) is a Choquet simplex by Theorem A.4(ii). Using Kadison duality to identify \(V_{0}\) with \(\operatorname{Aff}(S(V_{0}))\), the result follows from the separable case, [40, Lemma 2.8].
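To illustrate Theorem 2.2 in the simplest non-trivial situation (an example we add for the reader; it plays no role in the proofs above), let \(X=P(K)\) be the Bauer simplex of probability measures on a compact metric space \(K\), so that \(\operatorname{Aff}(X)\cong C(K)\). Given \(f_{1},\dots,f_{m}\in C(K)\) and \(\epsilon>0\), choose a finite open cover \(U_{1},\dots,U_{d}\) of \(K\) on each member of which every \(f_{j}\) oscillates by less than \(\epsilon\), a partition of unity \(h_{1},\dots,h_{d}\) subordinate to this cover, and points \(x_{i}\in U_{i}\). Taking \(Z=\Delta^{d-1}\), the standard \((d-1)\)-simplex, define \[\beta(\mu)\coloneqq\Big{(}\int_{K}h_{1}\,d\mu,\dots,\int_{K}h_{d}\,d\mu\Big{)}\qquad\text{and}\qquad\alpha(t)\coloneqq\sum_{i=1}^{d}t_{i}\delta_{x_{i}}.\] Then \(\beta\) and \(\alpha\) are continuous and affine, and for every \(\mu\in P(K)\), \[\Big{|}\int_{K}f_{j}\,d\big{(}\alpha(\beta(\mu))\big{)}-\int_{K}f_{j}\,d\mu\Big{|}\leq\sum_{i=1}^{d}\int_{K}h_{i}\,|f_{j}(x_{i})-f_{j}|\,d\mu<\epsilon,\] which is exactly the approximation required in (2.5) for \(\mathcal{F}=\{f_{1},\dots,f_{m}\}\).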
2310.00122
Dimension bounds for escape on average in homogeneous spaces
Let $X = G/\Gamma$, where $G$ is a Lie group and $\Gamma$ is a uniform lattice in $G$, and let $O$ be an open subset of $X$. We give an upper estimate for the Hausdorff dimension of the set of points whose trajectories escape $O$ on average with frequency $\delta$, where $0 < \delta \le 1$.
Dmitry Kleinbock, Shahriar Mirzadeh
2023-09-29T20:27:26Z
http://arxiv.org/abs/2310.00122v1
# Dimension bounds for escape on average in homogeneous spaces ###### Abstract. Let \(X=G/\Gamma\), where \(G\) is a Lie group and \(\Gamma\) is a uniform lattice in \(G\), and let \(O\) be an open subset of \(X\). We give an upper estimate for the Hausdorff dimension of the set of points whose trajectories escape \(O\) on average with frequency \(\delta\), where \(0<\delta\leq 1\). The first named author was supported by NSF grant DMS-2155111 and by a grant from the Simons Foundation (922989, Kleinbock). ## 1. Introduction Let \(G\) be a Lie group, \(\Gamma\) a uniform lattice in \(G\), \(X=G/\Gamma\), and let \(F^{+}\) be a one-parameter \(\operatorname{Ad}\)-diagonalizable subsemigroup of \(G\) whose action on \(X\) is exponentially mixing. Then there exists \(r_{1}>0\) such that for any \(O\subset X\) and any \(0<r<r_{1}\) _one has2_ Footnote 2: The result in [9] actually involved the slightly smaller set \[E(F^{+},O):=\{x\in X:g_{t}x\notin O\ \forall\,t\geq 0\}\] of points whose trajectories avoid \(O\), but it is easy to see that \(\dim E(F^{+},O)=\dim\widetilde{E}(F^{+},O)\). In the special case \(\delta=1\), using (1.5) we obtain the following immediate corollary: **Corollary 1.3**.: _Let \(G\), \(\Gamma\), \(X\), \(F^{+}\) and \(r_{1}\) be as in Theorem 1.2. Then for any \(0<r<r_{1}\) we have_ \[\operatorname{codim}E_{1}(F^{+},O)\gg\frac{\mu(\sigma_{r}O)\cdot\log\frac{1}{ 1-\frac{\mu(\sigma_{r}O)}{2}}}{\log\frac{1}{r}}\gg\frac{\mu(\sigma_{r}O)^{2}} {\log\frac{1}{r}}.\] One sees that the above corollary does not produce any improvement of (1.2); on the contrary, our new dimension bound for escape on average happens to be worse by a factor of \(\mu(\sigma_{r}O)\) than the bound for the eventual escape. In the sequel to the present paper, using some ideas from the work [1] where a similar problem was studied for the Teichmüller geodesic flow, the authors plan to improve the existing estimates as well as extend the methods to treat non-compact homogeneous spaces. We also remark that when \(X\) is not compact, one can consider the set \[E_{\delta}^{comp}(F^{+}):=\bigcap_{C\subset X\text{ compact}}E_{\delta}(F^{+},C)\] of points in \(X\) with trajectories \(\delta\)-escaping all compact subsets of \(X\). Equivalently one can define \(E_{\delta}^{comp}(F^{+})\) as the set of \(x\in X\) such that there exists a sequence \(T_{k}\to\infty\) and a weak-\(*\) limit \(\nu\) of the sequence of probability measures \(f\mapsto\frac{1}{T_{k}}\int_{0}^{T_{k}}f(g_{t}x)\,dt\) such that \(\nu(X)\leq 1-\delta\). It is proved in [13, Theorem 1.3] that whenever \(G\) is a connected semisimple Lie group, \(\Gamma\) a lattice in \(G\) and \(F^{+}\) a one-parameter semigroup contained in one of the simple factors of \(G\), one has \[\operatorname{codim}\left(E_{\delta}^{comp}(F^{+})\right)\geq c\delta\text{ for all }0<\delta<1,\] where \(c\) depends only on \(G\) and \(\Gamma\). See also [6, Remark 2.1] for a special case. From now on let \(G\), \(\Gamma\), \(X\) and \(F^{+}\) be as in Theorems 1.1 and 1.2. Similarly to the argument in [9, 10, 11], our main theorem is deduced from a result that estimates \[\dim E_{\delta}(F^{+},O)\cap Hx,\] where \(x\in X\) and \(H\) is the unstable horospherical subgroup with respect to \(F^{+}\), defined as \[H:=\{g\in G:\operatorname{dist}(g_{t}gg_{-t},e)\to 0\ \text{ as }\ t\to-\infty\}. 
\tag{1.9}\] More generally, in the theorem below we estimate \[\dim E_{\delta}(F^{+},O)\cap Px\] for \(x\in X\) and some proper subgroups \(P\) of \(H\), namely those which have the so-called Effective Equidistribution Property (EEP) with respect to the flow \((X,F^{+})\). The latter property was motivated by [8] and introduced in [9], where it was shown to hold for \(H\) as above under the assumption of exponential mixing. Namely, we have the following **Definition 1.4**.: _Say that a subgroup \(P\) of \(G\) has Effective Equidistribution Property (EEP) with respect to the flow \((X,F^{+})\) if \(P\) is normalized by \(F^{+}\), and there exist \(\lambda>0\), \(t_{0}>0\) and \(\ell\in\mathbb{N}\) such that for any \(x\in X\), \(t\geq t_{0}\), \(f\in C^{\infty}(P)\) with \(\operatorname{supp}f\subset B^{P}(1)\) and \(\psi\in C^{\infty}(X)\) it holds that_ \[\left|\int_{P}f(h)\psi(g_{t}hx)\,d\nu(h)-\int_{P}f\,d\nu\int_{X}\psi\,d\mu \right|\ll\max\big{(}\|\psi\|_{C^{1}},\|\psi\|_{\ell}\big{)}\cdot\|f\|_{C^{\ell}} \cdot e^{-\lambda t}. \tag{1.10}\] We remark that in [9, 10, 11] this property was defined and used in the more general set-up of \(X\) being non-compact, with additional constraints on the injectivity radius at points \(x\in X\) for which (1.10) is satisfied (see §3.2 for more detail). However when \(X\) is compact the injectivity radius is uniformly bounded from below, hence the definition can be simplified. **Theorem 1.5**.: _Let \(G\), \(\Gamma\), \(X\), \(F^{+}\) be as above. Then there exists \(r_{2}>0\) such that whenever a connected subgroup \(P\) of \(H\) has property (EEP) with respect to the flow \((X,F^{+})\), for any non-empty open subset \(O\) of \(X\), any \(0<r\leq r_{2}\), any \(0<\delta<1\) and any \(x\in X\) one has_ \[\operatorname{codim}\big{(}E_{\delta}(F^{+},O)\cap Px\big{)}\gg\frac{\mu( \sigma_{r}O)\cdot\phi\big{(}\mu(\sigma_{r}O),\sqrt{1-\delta}\big{)}}{\log \frac{1}{r}}.\] In the next section we derive Theorem 1.2 from Theorem 1.5, and the rest of the paper is dedicated to proving Theorem 1.5. Section 3 contains a discussion of basic technical constructions needed for the proof, such as Hausdorff dimension estimates for \(\limsup\) sets, tessellations of nilpotent Lie groups and Bowen boxes. In Section 4 we use property (EEP) to, given a subset \(S\) of \(X\), a large \(T>0\) and \(0<\delta<1\), write down a measure estimate for the set of \(h\in P\) such that the Birkhoff average \(\frac{1}{T}\int_{0}^{T}1_{S}(g_{t}hx)\,dt\) is at least \(\delta\). In the subsequent section this estimate is used to bound the number of Bowen boxes that can cover certain exceptional sets. Finally, Section 6 contains the conclusion of the proof. ## 2. Theorem 1.5\(\Rightarrow\) Theorem 1.2 The reduction of Theorem 1.2 to Theorem 1.5 is fairly standard. Let \(\mathfrak{g}\) be the Lie algebra of \(G\), \(\mathfrak{g}_{\mathbb{C}}\) its complexification, and for \(\lambda\in\mathbb{C}\), let \(E_{\lambda}\) be the eigenspace of \(\operatorname{Ad}g_{1}\) corresponding to \(\lambda\). Let \(\mathfrak{h}\), \(\mathfrak{h}^{0}\), \(\mathfrak{h}^{-}\) be the subalgebras of \(\mathfrak{g}\) with complexifications: \[\mathfrak{h}_{\mathbb{C}}=\operatorname{span}(E_{\lambda}:|\lambda|>1),\ \mathfrak{h}_{ \mathbb{C}}^{0}=\operatorname{span}(E_{\lambda}:|\lambda|=1),\ \mathfrak{h}_{\mathbb{C}}^{-}= \operatorname{span}(E_{\lambda}:|\lambda|<1).\] Let \(H\), \(H^{0}\), \(H^{-}\) be the corresponding subgroups of \(G\). 
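As a standard illustration of this decomposition (a textbook example included here for concreteness; it is not part of the original argument), take \(G=\operatorname{SL}_{2}(\mathbb{R})\) and \(g_{t}=\operatorname{diag}(e^{t/2},e^{-t/2})\). Then \(\operatorname{Ad}g_{1}\) acts on \(\mathfrak{sl}_{2}(\mathbb{R})\) with eigenvalues \(e\), \(1\) and \(e^{-1}\), and \[H=\left\{\begin{pmatrix}1&s\\ 0&1\end{pmatrix}:s\in\mathbb{R}\right\},\qquad H^{0}=\left\{\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}:a>0\right\},\qquad H^{-}=\left\{\begin{pmatrix}1&0\\ s&1\end{pmatrix}:s\in\mathbb{R}\right\}.\] Here \(H\) and \(H^{-}\) are respectively the expanding and contracting horospherical subgroups for \(g_{t}\), in accordance with the discussion that follows.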
Note that \(H\) is precisely the unstable horospherical subgroup with respect to \(F^{+}\) (defined in (1.9)) and \(H^{-}\) is the stable horospherical subgroup defined by: \[H^{-}=\{h\in G:g_{t}hg_{-t}\to e\ \text{ as }\ t\to+\infty\}.\] Since \(\operatorname{Ad}g_{1}\) is assumed to be diagonalizable over \(\mathbb{C}\), \(\mathfrak{g}\) is the direct sum of \(\mathfrak{h}\), \(\mathfrak{h}^{0}\) and \(\mathfrak{h}^{-}\). Therefore \(G\) is locally (at a neighborhood of identity) a direct product of the subgroups \(H\), \(H^{0}\) and \(H^{-}\). Denote the group \(H^{-}H^{0}\) by \(\tilde{H}\), and fix \(0<\rho<1\) with the following properties: \[\text{the multiplication map }\tilde{H}\times H\to G\text{ is one-to-one on }B^{\tilde{H}}(\rho)\times B^{H}(\rho), \tag{2.1}\] and \[g_{t}B^{\tilde{H}}(r)g_{-t}\subset B^{\tilde{H}}(2r)\text{ for any }0<r\leq\rho \text{ and }t\geq 0 \tag{2.2}\] (the latter can be done since \(F\) is \(\operatorname{Ad}\)-diagonalizable and the restriction of the map \(g\mapsto g_{t}gg_{-t}\), \(t>0\), to the subgroup \(\tilde{H}\) is non-expanding). Proof of Theorem 1.2 assuming Theorem 1.5.: Let \(\rho\) be as in (2.1), (2.2) and define \[r_{1}:=\min(\rho,r_{2}).\] For any \(0<r<r_{1}\) choose \(s\) such that \[B^{G}(s)\subset B^{\tilde{H}}(r/4)B^{H}(r/4). \tag{2.3}\] Now take \(O\subset X\), and for \(x\in X\) denote \[E_{\delta,x,s}:=\left\{g\in B^{G}(s):gx\in E_{\delta}(F^{+},O)\right\}. \tag{2.4}\] In view of the countable stability of Hausdorff dimension, in order to prove the theorem it suffices to show that for any \(x\in X\), \[\operatorname{codim}E_{\delta,x,s}\gg\frac{\mu(\sigma_{r}O)\cdot\phi\big{(} \mu(\sigma_{r}O),\sqrt{1-\delta}\big{)}}{\log\frac{1}{r}}. \tag{2.5}\] Indeed, \(E_{\delta}(F^{+},O)\) can be covered by countably many sets \(\{gx:g\in E_{\delta,x,s}\}\), with the quotient maps \(\pi_{x}:B^{G}(s)\to X\) being Lipschitz and one-to-one. Since every \(g\in B^{G}(s)\) can be written as \(g=\tilde{h}h\), where \(\tilde{h}\in B^{\tilde{H}}(r/4)\) and \(h\in B^{H}(r/4)\), for any \(y\in X\) we can write \[\begin{split}\operatorname{dist}(g_{t}gx,y)&\leq \operatorname{dist}(g_{t}\tilde{h}hx,g_{t}hx)+\operatorname{dist}(g_{t}hx,y)\\ &=\operatorname{dist}\bigl{(}g_{t}\tilde{h}g_{-t}g_{t}hx,g_{t}hx \bigr{)}+\operatorname{dist}(g_{t}hx,y).\end{split} \tag{2.6}\] Hence in view of (2.2) and (2.3), \(g\in E_{\delta,x,s}\) implies that \(hx\) belongs to \(E_{\delta}(F^{+},\sigma_{r/2}O)\), and by using Wegmann's Product Theorem [15] we conclude that: \[\begin{split}\dim E_{\delta,x,s}&\leq\dim\left(\{h \in B^{H}(r/4):hx\in E_{\delta}(F^{+},\sigma_{r/2}O)\}\times B^{\tilde{H}}(r/4) \right)\\ &\leq\dim\bigl{(}\{h\in B^{H}(r/4):hx\in E_{\delta}(F^{+},\sigma _{r/2}O)\}\bigr{)}+\dim\tilde{H}.\end{split} \tag{2.7}\] Since \((X,F^{+})\) is exponentially mixing, by [9, Theorem 2.6], \(H\) has (EEP) with respect to \((X,F^{+})\). 
Therefore, by Theorem 1.5 applied with \(O\) replaced by \(\sigma_{r/2}O\) and \(r\) replaced by \(r/4\), we have for any \(0<r<r_{1}\) \[\begin{split}&\operatorname{codim}\{h\in B^{H}(r/4):hx\in E_{\delta} (F^{+},\sigma_{r/2}O)\}\\ &\gg\frac{\mu(\sigma_{r/4}\sigma_{r/2}O)\cdot\phi\big{(}\mu( \sigma_{r/4}\sigma_{r/2}O),\sqrt{1-\delta}\big{)}}{\log\frac{4}{r}}\\ &\gg\frac{\mu(\sigma_{r}O)\cdot\phi\big{(}\mu(\sigma_{r}O), \sqrt{1-\delta}\big{)}}{\log\frac{4}{r}}\gg\frac{\mu(\sigma_{r}O)\cdot\phi\big{(} \mu(\sigma_{r}O),\sqrt{1-\delta}\big{)}}{\log\frac{1}{r}}.\end{split} \tag{2.8}\] Now from (2.7) and (2.8) we conclude that (2.5) is satisfied, which finishes the proof. ## 3. Auxiliary facts ### Hausdorff dimension of limsup sets The exceptional sets we study in this paper are of the form \(A=\limsup\limits_{N\to\infty}A_{N}\), that is \[A=\bigcap_{N\geq 1}\bigcup_{n\geq N}A_{n}\] for a sequence of subsets \(A_{N}\). First, we recall the definition of the Hausdorff dimension. Let \(A\) be a subset of a metric space \(Y\). For any \(\rho,\beta>0\), we define \[\mathcal{H}_{\rho}^{\beta}(A)=\inf\left\{\sum_{I\in\mathcal{U}}\operatorname{ diam}(I)^{\beta}:\mathcal{U}\text{ is a cover of }A\text{ by balls of diameter }<\rho\right\}.\] Then, the \(\beta\)-dimensional Hausdorff measure of \(A\) is defined to be \[\mathcal{H}^{\beta}(A)=\lim_{\rho\to 0}\mathcal{H}_{\rho}^{\beta}(A).\] **Definition 3.1**.: _The Hausdorff dimension of a subset \(A\) of a metric space \(Y\) is equal to_ \[\dim(A)=\inf\left\{\beta\geq 0:\mathcal{H}^{\beta}(A)=0\right\}=\sup\left\{ \beta\geq 0:\mathcal{H}^{\beta}(A)=\infty\right\}.\] **Lemma 3.2**.: _Let \(\left\{A_{N}\right\}_{N\geq 1}\) be a collection of subsets of \(Y\). Suppose there exist constants \(C>0\), \(0<\rho<1\), \(N_{0}\in\mathbb{N}\), and a sequence \(\{\alpha_{N}\}_{N\geq 1}\) such that for each \(N\geq N_{0}\), the set \(A_{N}\) can be covered with at most \(\rho^{-\alpha_{N}N}\) balls of radius \(C\rho^{N}\). Then_ \[\dim\left(\limsup\limits_{N\to\infty}A_{N}\right)\leq\liminf\limits_{N\to \infty}\alpha_{N}.\] Proof.: Let \(A:=\limsup\limits_{N\to\infty}A_{N}\) and \(\alpha:=\liminf\limits_{N\to\infty}\alpha_{N}\); without loss of generality we can assume that \(\alpha<\infty\) and \(\alpha_{N}\to\alpha\) as \(N\to\infty\). Take \(\beta>\alpha\); we will show that \(\mathcal{H}^{\beta}(A)=0\), which will imply the lemma. For any \(\xi\in(0,1)\), let \(N_{\xi}\geq N_{0}\) be a natural number such that \(\rho^{N}<\xi\) and \(|\alpha_{N}-\alpha|<\frac{\beta-\alpha}{2}\) for all \(N\geq N_{\xi}\). Notice that \(N_{\xi}\) tends to infinity as \(\xi\) goes to \(0\). Take \(N\geq N_{\xi}\) and denote by \(\mathcal{O}_{N}\) a cover of the set \(A_{N}\) by balls of radius \(C\rho^{N}\) such that \(\#\mathcal{O}_{N}\), the number of balls in the cover \(\mathcal{O}_{N}\), is at most \(\rho^{-\alpha_{N}N}\). Then \(\mathcal{O}=\bigcup_{N\geq N_{\xi}}\mathcal{O}_{N}\) is a cover of \(A\) for which the following holds: \[\sum_{B\in\mathcal{O}}\operatorname{diam}(B)^{\beta} =\sum_{N\geq N_{\xi}}\sum_{B\in\mathcal{O}_{N}}\operatorname{diam}(B )^{\beta}\leq(2C)^{\beta}\sum_{N\geq N_{\xi}}\#\mathcal{O}_{N}\cdot\rho^{\beta N}\] \[\leq(2C)^{\beta}\sum_{N\geq N_{\xi}}\rho^{(\beta-\alpha_{N})N} \leq(2C)^{\beta}\sum_{N\geq N_{\xi}}\rho^{\frac{\beta-\alpha}{2}N}\] \[\leq(2C)^{\beta}\frac{\rho^{\frac{\beta-\alpha}{2}N_{\xi}}}{1- \rho^{\frac{\beta-\alpha}{2}}}\xrightarrow{\xi\to 0}0.\] This implies that \(\mathcal{H}^{\beta}(A)=0\), and the conclusion of the lemma follows. 
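In other words (a reformulation added for the reader's convenience, not a new statement), Lemma 3.2 is the usual box-counting bound along the sequence of scales \(\rho^{N}\): writing \(\#\mathcal{O}_{N}\) for the minimal number of balls of radius \(C\rho^{N}\) needed to cover \(A_{N}\), the hypothesis \(\#\mathcal{O}_{N}\leq\rho^{-\alpha_{N}N}\) amounts to \(\alpha_{N}\geq\frac{\log\#\mathcal{O}_{N}}{N\log(1/\rho)}\), and the conclusion reads \[\dim\Big{(}\limsup_{N\to\infty}A_{N}\Big{)}\leq\liminf_{N\to\infty}\frac{\log\#\mathcal{O}_{N}}{\log\left(\rho^{-N}\right)},\] that is, the exponent \(\log(\text{number of balls})/\log(1/\text{radius})\) familiar from upper box dimension estimates.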
### Choosing \(r_{2}\) Recall that as part of the proof of Theorem 1.5 we need to define a bound \(r_{2}\) for possible values of \(r\). This bound will come from two ingredients. Namely, we define \[r_{2}:=\frac{1}{2}\min(r_{0},r^{\prime}), \tag{3.1}\] where * \(0<r^{\prime}<1/4\) is chosen so that for any Lie subalgebra \(\mathfrak{p}\) of \(\mathfrak{g}\) the exponential map from \(\mathfrak{p}\) to \(P=\exp(\mathfrak{p})\) is \(2\)-bi-Lipschitz on \(B^{\mathfrak{p}}(r^{\prime})\); in particular, we will have \[B^{P}(2r)\supset\exp\left(B^{\mathfrak{p}}(r)\right)\supset B^{P}(r/2)\text{ for any }0<r\leq r^{\prime}.\] (3.2) * \(r_{0}:=r_{0}(X)=\inf\{r_{0}(x):x\in X\}>0\), where \[r_{0}(x):=\sup\{r>0:\text{the map }\pi_{x}:g\mapsto gx\text{ is injective on }B^{G}(r)\}\] (the injectivity radius at \(x\)). ### Tessellations of \(P\) Now let \(P\) be a connected subgroup of \(H\) normalized by \(F^{+}\). Following [7], say that an open subset \(V\) of \(P\) is a tessellation domain for \(P\) relative to a countable subset \(\Lambda\) of \(P\) if * \(\nu(\partial V)=0\); * \(V\gamma_{1}\cap V\gamma_{2}=\varnothing\) for different \(\gamma_{1},\gamma_{2}\in\Lambda\); * \(P=\bigcup\limits_{\gamma\in\Lambda}\overline{V}\gamma\). Note that \(P\) is a connected simply connected nilpotent Lie group. Denote \(\mathfrak{p}:=\operatorname{Lie}(P)\) and \(L:=\dim P\), and fix a Haar measure \(\nu\) on \(P\). As shown in [7, Proposition 3.3], one can choose a basis of \(\mathfrak{p}\) such that for any \(r>0\), \(\exp\left(rI_{\mathfrak{p}}\right)\), where \(I_{\mathfrak{p}}\subset\mathfrak{p}\) is the cube centered at \(0\) with side length \(1\) with respect to that basis, is a tessellation domain. Let us denote \[V_{r}:=\exp\left(\frac{r}{4\sqrt{L}}I_{\mathfrak{p}}\right) \tag{3.3}\] and choose a countable \(\Lambda_{r}\subset P\) such that \(V_{r}\) is a tessellation domain relative to \(\Lambda_{r}\). Then it follows from (3.2) that for any \(0<r\leq r^{\prime}\) one has \[B^{P}\Big{(}\frac{r}{16\sqrt{L}}\Big{)}\subset V_{r}\subset B^{P}\left(\frac{r }{4}\right). \tag{3.4}\] ### Bowen boxes Note that the measure \(\nu\) and the pushforward of the Lebesgue measure \(\operatorname{Leb}\) on \(\mathfrak{p}\) by the exponential map are absolutely continuous with respect to each other with locally bounded Radon-Nikodym derivative. This implies that there exist \(0<c_{1}<c_{2}\) such that \[c_{1}\operatorname{Leb}(A)\leq\nu\big{(}\exp(A)\big{)}\leq c_{2}\operatorname{ Leb}(A)\quad\forall\,\text{measurable }A\subset B^{\mathfrak{p}}(1). \tag{3.5}\] Define \[\lambda_{\min}:=\min\{|\lambda|:\,\lambda\text{ is an eigenvalue of }\operatorname{ad}_{g_{1}}|_{\mathfrak{p}}\} \tag{3.6}\] and \[\lambda_{\max}:=\max\{|\lambda|:\,\lambda\text{ is an eigenvalue of }\operatorname{ad}_{g_{1}}|_{ \mathfrak{p}}\}. \tag{3.7}\] Using the bi-Lipschitz property of \(\exp\), we can conclude that for any \(0<r\leq r^{\prime}\) and any \(t>0\) one has \[\operatorname{diam}(g_{-t}V_{r}g_{t})<2re^{-\lambda_{\min}t}. \tag{3.8}\] Also let \(\eta:=\operatorname{Tr}\operatorname{ad}_{g_{1}}|_{\mathfrak{p}}\); clearly one then has \[\nu(g_{-t}Ag_{t})=e^{-\eta t}\nu(A)\text{ for any measurable }A\subset P. \tag{3.9}\] Let us now define a Bowen \((t,r)\)-box in \(P\) to be a set of the form \(g_{-t}V_{r}\gamma g_{t}\) for some \(\gamma\in P\) and \(t>0\). Our approach to estimating the Hausdorff dimension of various subsets of \(P\) will be through covering them by Bowen boxes. We are going to need three results proved in [11]. 
The first one, a slight modification of [7, Proposition 3.4], gives an upper bound for the number of \(\gamma\in\Lambda_{r}\) such that the Bowen box \(g_{-t}V_{r}\gamma g_{t}\) has non-empty intersection with \(V_{r}\): **Lemma 3.3**.: _[_11_, Lemma 4.1]_ _There exists \(c_{0}>0\) such that for any \(0<r\leq r^{\prime}\) and for any \(t>\frac{\log 8}{\lambda_{\min}}\)_ \[\#\{\gamma\in\Lambda_{r}:g_{-t}\overline{V_{r}}\gamma g_{t}\cap V_{r}\neq \varnothing\}\leq e^{\eta t}\left(1+c_{0}e^{-\lambda_{\min}t}\right).\] The second one gives us an upper bound for the number of balls of radius \(re^{-\lambda_{\max}t}\) needed to cover a Bowen box. **Lemma 3.4**.: _[_11_, Lemma 7.4]_ _There exists \(C_{1}>0\) such that for any \(0<r<1\) and any \(t>0\), any Bowen \((t,r)\)-box in \(P\) can be covered with at most \(C_{1}e^{(\lambda_{\max}L-\eta)t}\) balls in \(P\) of radius \(re^{-\lambda_{\max}t}\)._ The third result is a direct consequence of property (EEP); it is a simplified version of [11, Proposition 4.4]. **Proposition 3.5**.: _Let \(F^{+}\) be a one-parameter subsemigroup of \(G\) and \(P\) a subgroup of \(G\) with property \((\operatorname{EEP})\) with respect to \(F^{+}\). Then there exist \(t_{1}\geq 1\) and \(\lambda^{\prime}>0\) such that for any open \(O\subset X\), any \(x\in X\), any \(r\leq r_{2}\) and any \(t\geq t_{1}\) one has_ \[\nu\left(\left\{h\in V_{r}:g_{t}hx\in O\right\}\right)\geq\nu\left(V_{r} \right)\mu(\sigma_{e^{-\lambda^{\prime}t}}O)-e^{-\lambda^{\prime}t}.\] ## 4. Effective equidistribution of translates and a measure estimate From now on we will work with \(F^{+}\) and \(P\) as in Theorem 1.5, and for the rest of the paper fix a positive \(r\leq r_{2}\), where \(r_{2}\) is as in (3.1). Note that by the countable stability of Hausdorff dimension, in order to prove Theorem 1.5 it suffices, for a subset \(O\) of \(X\) and \(0<\delta<1\), to get a uniform (in \(x\in X\)) upper bound for the Hausdorff dimension of the set \[\begin{split} S_{x,\delta}(O)&:=\left\{h\in V_{r}: hx\in E_{\delta}(F^{+},O)\right\}\\ &=\left\{h\in V_{r}:\limsup_{T\to\infty}\frac{1}{T}\int_{0}^{T}1_{O^{ c}}(g_{t}hx)\,dt\geq\delta\right\}.\end{split} \tag{4.1}\] For this it will be convenient to discretize the above definition. Namely let us introduce the following notation: given \(T>0\), a subset \(S\) of \(X\), \(x\in X\) and \(0<\delta<1\), let us define \[A_{x,\delta}(T,S):=\left\{h\in V_{r}:\frac{1}{T}\int_{0}^{T}1_{S}(g_{t}hx)\,dt \geq\delta\right\}. \tag{4.2}\] In the following proposition we find the relation between the set \(S_{x,\delta}(O)\) and the family of sets \(\{A_{x,\delta}(NT,O^{c})\}_{N\in\mathbb{N}}\). **Proposition 4.1**.: _For any \(T>0\) and any \(x\in X\) we have_ \[S_{x,\delta}(O)=\limsup_{N\in\mathbb{N},\,N\to\infty}A_{x,\delta}(NT,O^{c}).\] Proof.: In view of the definition of \(S_{x,\delta}(O)\), it suffices, for a fixed \(T>0\), to prove that \[\limsup_{R\to\infty}\frac{1}{R}\int_{0}^{R}1_{O^{c}}(g_{t}hx)\,dt =\limsup_{N\in\mathbb{N},\,N\to\infty}\frac{1}{NT}\int_{0}^{NT}1 _{O^{c}}(g_{t}hx)\,dt\] \[=\lim_{N_{0}\to\infty}\sup_{N\geq N_{0}}\frac{1}{NT}\int_{0}^{NT} 1_{O^{c}}(g_{t}hx)\,dt.\] Let \(N_{0}\in\mathbb{N}\), and let \(R\geq N_{0}T\). Then one can find \(N\geq N_{0}\) and \(0\leq R^{\prime}<T\) such that \(R=NT+R^{\prime}\). 
Hence \[\frac{1}{R}\int_{0}^{R}1_{O^{c}}(g_{t}hx)\,dt= \frac{1}{NT+R^{\prime}}\int_{0}^{NT+R^{\prime}}1_{O^{c}}(g_{t}hx) \,dt\] \[\leq \frac{1}{NT}\int_{0}^{NT+R^{\prime}}1_{O^{c}}(g_{t}hx)\,dt\] \[= \frac{1}{NT}\left(\int_{0}^{NT}1_{O^{c}}(g_{t}hx)\,dt+\int_{NT}^{ NT+R^{\prime}}1_{O^{c}}(g_{t}hx)\,dt\right)\] \[\underset{(1_{O^{c}}\leq 1,\,R^{\prime}\leq T)}{\leq} \frac{1}{NT}\int_{0}^{NT}1_{O^{c}}(g_{t}hx)\,dt+\frac{1}{N}\] \[\underset{(N\geq N_{0})}{\leq} \frac{1}{NT}\int_{0}^{NT}1_{O^{c}}(g_{t}hx)\,dt+\frac{1}{N_{0}}.\] Therefore, we get \[\limsup_{R\to\infty}\frac{1}{R}\int_{0}^{R}1_{O^{c}}(g_{t}hx)\,dt\leq\lim_{N_ {0}\to\infty}\sup_{N\geq N_{0}}\frac{1}{NT}\int_{0}^{NT}1_{O^{c}}(g_{t}hx)\,dt.\] The reverse inequality is obvious. The next proposition gives an upper estimate for the measure of \(A_{x,\delta}(T,O^{c})\). We will use it in §5 to obtain an upper bound for the number of Bowen \((T,r)\)-boxes in \(P\) needed to cover this set (see Corollary 5.2). **Proposition 4.2**.: _For all \(x\in X\), \(0<\varepsilon<1\), for any open \(O\subset X\) and for all_ \[T>T_{r}:=\max\left(t_{1},\,\frac{1}{\lambda^{\prime}}\log\frac{2}{r},\,\frac{ 1}{\lambda^{\prime}}\log\frac{(4\sqrt{L})^{L}}{c_{1}\lambda^{\prime}r^{L}}, \frac{\log 8}{\lambda_{\min}}\right), \tag{4.3}\] _where \(c_{1}\) is as in (3.5) and \(t_{1},\lambda^{\prime}\) are as in Proposition 3.5, one has_ \[\nu\big{(}A_{x,1-\varepsilon}(T,O)\big{)}\geq\nu\left(V_{r}\right)\left(1- \frac{1}{\varepsilon}\Big{(}1-\mu(\sigma_{r/2}O)+\frac{T_{r}+1}{T}\Big{)} \right). \tag{4.4}\] Proof.: Let \(x,\varepsilon,O\) and \(T\) be as above. Then by definition we have: \[\begin{split}\nu\big{(}A_{x,\,1-\varepsilon}(T,O)\big{)}& =\nu\left(\left\{h\in V_{r}:\frac{1}{T}\int_{0}^{T}1_{O}(g_{t} hx)\,dt\geq 1-\varepsilon\right\}\right)\\ &=\nu\left(V_{r}\right)-\nu\left(\left\{h\in V_{r}:\frac{1}{T} \int_{0}^{T}1_{O^{c}}(g_{t}hx)\,dt\!>\!\varepsilon\right\}\right)\\ &\geq\nu\left(V_{r}\right)-\nu\big{(}A_{x,\varepsilon}(T,O^{c}) \big{)}.\end{split} \tag{4.5}\] Our goal is to estimate the right-hand side of (4.5) from below. 
We have: \[\begin{split}\nu\big{(}A_{x,\varepsilon}(T,O^{c})\big{)}& =\nu\left(\left\{h\in V_{r}:\frac{1}{T}\int_{0}^{T}1_{O^{c}}(g_{ t}hx)\,dt\geq\varepsilon\right\}\right)\\ \underset{\text{(Markov's inequality)}}{\leq}& \frac{1}{\varepsilon T}\int_{0}^{T}\int_{V_{r}}1_{O^{c}}(g_{t} hx)\,d\nu(h)\,dt\\ =&\frac{1}{\varepsilon T}\int_{0}^{T}\nu\left(\left\{ h\in V_{r}:g_{t}hx\in O^{c}\right\}\right)dt\\ =&\frac{1}{\varepsilon T}\left(\int_{0}^{T_{r}}\nu \left(\left\{h\in V_{r}:g_{t}hx\in O^{c}\right\}\right)dt\right.\\ &\hskip 142.26378pt+\int_{T_{r}}^{T}\nu\left(\left\{h\in V_{r}:g_{ t}hx\in O^{c}\right\}\right)dt\right).\end{split} \tag{4.6}\] Note that \[\int_{0}^{T_{r}}\nu\left(\left\{h\in V_{r}:g_{t}hx\in O^{c}\right\}\right)dt \leq T_{r}\cdot\nu\left(V_{r}\right), \tag{4.7}\] and, since \(T_{r}\geq t_{1}\), by applying Proposition 3.5 we get \[\begin{split}&\int_{T_{r}}^{T}\nu\left(\left\{h\in V_{r}:g_{t}hx \in O^{c}\right\}\right)\,dt\\ =&\int_{T_{r}}^{T}\left(\nu(V_{r})-\nu\left(\left\{h\in V _{r}:g_{t}hx\in O\right\}\right)\right)dt\\ \leq&\int_{T_{r}}^{T}\left(\nu\left(V_{r}\right) \left(1-\mu(\sigma_{e^{-\lambda^{\prime}T_{r}}}O)\right)+e^{-\lambda^{\prime} t}\right)\,dt\\ \leq& T\cdot\nu\left(V_{r}\right)\left(1-\mu(\sigma _{e^{-\lambda^{\prime}T_{r}}}O)\right)+\frac{e^{-\lambda^{\prime}T_{r}}}{ \lambda^{\prime}}\\ \underset{\text{(4.3)}}{\leq}& T\cdot\nu\left(V_{r} \right)\left(1-\mu(\sigma_{r/2}O)\right)+\frac{e^{-\lambda^{\prime}T_{r}}}{ \lambda^{\prime}}.\end{split} \tag{4.8}\] It remains to observe that (4.3), in combination with (3.3) and (3.5), also implies that \(\frac{e^{-\lambda^{\prime}T_{r}}}{\lambda^{\prime}}\leq\nu(V_{r})\). Hence (4.4) follows from (4.5), (4.6), (4.7) and (4.8). ## 5. A covering result In this section we will prove a covering result for the sets of the form \(A_{x,\delta}(NT,O^{c})\). Then, by using Proposition 4.1 and Lemma 3.2, in the next section we will obtain an upper estimate for the Hausdorff dimension of \(S_{x,\delta}(O)\). We start with the following lemma: **Lemma 5.1**.: _Let \(O\) be an open subset of \(X\), and let \(0<\varepsilon<1\). Then for any \(T>0\), \(\gamma\in\Lambda_{r}\), \(x\in X\), any \(0<\alpha<\varepsilon\) and for any Bowen \((T,r)\)-box \(B=g_{-T}V_{r}\gamma g_{T}\) which _has non-empty intersection with the set \(A_{x,1-\alpha}(T,\sigma_{4r}O)\), we have_ \[B\cap\,A_{x,\varepsilon}(T,O^{c})=\varnothing.\] Proof.: Let \(O\) be an open subset of \(X\) and let \(\gamma\in\Lambda_{r}\). Consider the Bowen \((T,r)\)-box \(B=g_{-T}V_{r}\gamma g_{T}\). Let \(x\in X\), take \(p\in B\), and assume that \(g_{t}px\in\sigma_{4r}O\) for some \(0\leq t\leq T\). Any \(p^{\prime}\in B\) is of the form \(p^{\prime}=hp\) where \(h\in g_{-T}(V_{r}\cdot V_{r}^{-1})g_{T}\). Thus we have \[g_{t}p^{\prime}x=g_{t}hg_{-t}g_{t}px \in g_{-(T-t)}(V_{r}\cdot V_{r}^{-1})g_{(T-t)}g_{t}px\] \[\underset{\text{(3.8)}}{\in}B^{G}(2re^{-\lambda_{\min}(T-t)}) \cdot B^{G}(2re^{-\lambda_{\min}(T-t)})g_{t}px\] \[\in B^{G}(4r)g_{t}px\] which, in view of (3.4), implies that \(\operatorname{dist}(g_{t}p^{\prime}x,g_{t}px)\leq 4r\). Hence \(g_{t}p^{\prime}x\in O\). 
Now assume in addition that \(0<\alpha<\varepsilon<1\) and \(p\in A_{x,1-\alpha}(T,\sigma_{4r}O)\); then \[\frac{1}{T}\int_{0}^{T}1_{O^{c}}(g_{t}p^{\prime}x)\,dt =1-\frac{1}{T}\int_{0}^{T}1_{O}(g_{t}p^{\prime}x)\,dt\] \[\leq 1-\frac{1}{T}\int_{0}^{T}1_{\sigma_{4r}O}(g_{t}px)\,dt\leq 1 -(1-\alpha)=\alpha<\varepsilon.\] Therefore, if \(B\) has non-empty intersection with \(A_{x,1-\alpha}(T,\sigma_{4r}O)\), then for any \(p^{\prime}\in B\) we have \(p^{\prime}\notin A_{x,\varepsilon}(T,O^{c})\). This ends the proof. Now, by combining the previous lemma with Lemma 3.3 and Proposition 4.2 we obtain the following corollary which is a covering result for the sets of type \(A_{x,\varepsilon}(T,O^{c})\): **Corollary 5.2**.: _Let \(T_{r}\) be as in (4.3). Then for any \(0<\varepsilon<1\), any \(T>T_{r}\), any \(x\in X\), and for any open subset \(O\) of \(X\), the set \(A_{x,\varepsilon}(T,O^{c})\) can be covered with at most \(\frac{e^{\eta T}}{\varepsilon}C(T,O)\) Bowen \((T,r)\)-boxes in \(P\), where_ \[C(T,O):=1-\mu(\sigma_{5r}O)+\frac{T_{r}+1}{T}+c_{0}e^{-\lambda_{\min}T}. \tag{5.1}\] Proof.: Let \(0<\varepsilon<1\), \(0<\alpha<\varepsilon\), \(T>T_{r}\), \(x\in X\), and let \(O\) be an open subset of \(X\). Then we have: \[\#\{\gamma\in\Lambda_{r}:g_{-T}V_{r}\gamma g_{T}\,\cap\,A_{x, \varepsilon}(T,O^{c})\neq\varnothing\}\] \[\underset{\text{Lemma 5.1}}{\leq} \#\{\gamma\in\Lambda_{r}:g_{-T}V_{r}\gamma g_{T}\,\cap\,V_{r}\neq \varnothing\}-\#\{\gamma\in\Lambda_{r}:g_{-T}V_{r}\gamma g_{T}\,\subset\,A_{x,1 -\alpha}(T,\sigma_{4r}O)\}\] \[\underset{\text{Lemma 3.3}}{\leq} e^{\eta T}\left(1+c_{0}e^{-\lambda_{\min}T}\right)-\#\{\gamma\in \Lambda_{r}:g_{-T}V_{r}\gamma g_{T}\,\subset\,A_{x,1-\alpha}(T,\sigma_{4r}O)\}\] \[\leq e^{\eta T}\left(1+c_{0}e^{-\lambda_{\min}T}\right)-\frac{\nu \left(A_{x,1-\alpha}(T,\sigma_{4r}O)\right)}{\nu(g_{-T}V_{r}g_{T})}\] \[\underset{\text{Proposition 4.2}}{\leq} e^{\eta T}\left(1+c_{0}e^{-\lambda_{\min}T}\right)-e^{\eta T}\left(1- \alpha^{-1}\cdot\left(1-\mu(\sigma_{r/2}\sigma_{4r}O)+\frac{T_{r}+1}{T}\right)\right)\] \[\underset{\text{(5.1)}}{\leq} \frac{e^{\eta T}}{\alpha}C(T,O).\] Now since \(0<\alpha<\varepsilon\) was arbitrary, by letting \(\alpha\) approach \(\varepsilon\), we get that \(A_{x,\varepsilon}(T,O^{c})\) can be covered with at most \(\frac{e^{\eta T}}{\varepsilon}C(T,O)\) Bowen \((T,r)\)-boxes in \(P\), as desired. Next we will need a generalized version of the definition (4.2) of sets \(A_{x,\delta}(T,S)\). Namely, given \(S\subset X\), \(x\in X\), \(T>0\), \(0<\delta<1\) and \(J\subset\mathbb{N}\), let us define \[A_{x,\,\delta}(T,S,J):=\left\{h\in V_{r}:\frac{1}{T}\int_{(i-1)T}^{iT}1_{S}(g_{ t}hx)\,dt\geq\delta\ \ \ \forall\,i\in J\right\}. \tag{5.2}\] Clearly \(A_{x,\delta}(T,S)=A_{x,\,\delta}(T,S,\{1\})\). Using the above corollary inductively, in the following proposition we obtain a covering result for the sets of the form (5.2). **Proposition 5.3**.: _Let \(O\) be a non-empty open subset of \(X\), and let \(T_{r}\) be as in (4.3). Then for all \(0<\varepsilon<1\), \(T>T_{r}\), \(N\in\mathbb{Z}_{+}\), \(J\subset\{1,\ldots,N\}\), and for all \(x\in X\), the set \(A_{x,\varepsilon}(T,O^{c},J)\) can be covered with at most_ \[e^{\eta NT}\bigg{(}\frac{C(T,O)}{\varepsilon}\bigg{)}^{|J|}\left(1+c_{0}e^{- \lambda_{\min}T}\right)^{N-|J|} \tag{5.3}\] _Bowen \((NT,r)\)-boxes in \(P\)._ Proof.: Let \(0<\varepsilon<1\), let \(T>T_{r}\), and let \(x\in X\). 
We argue by induction on \(N\); the basis is given by \(N=0\) and \(J=\varnothing\), which makes the quantity (5.3) equal to \(1\). This makes sense, since \(A_{x,\varepsilon}(T,O^{c},J)=V_{r}\), which is precisely a Bowen \((0,r)\)-box. Now take an arbitrary \(N\in\mathbb{N}\) and let \(J^{\prime}:=J\smallsetminus\{N\}\). By the induction assumption, the set \(A_{x,\varepsilon}\big((N-1)T,O^{c},J^{\prime}\big)\) can be covered with at most
\[e^{\eta(N-1)T}\cdot\bigg(\frac{C(T,O)}{\varepsilon}\bigg)^{|J^{\prime}|}\left(1+c_{0}e^{-\lambda_{\min}T}\right)^{N-1-|J^{\prime}|}\tag{5.4}\]
Bowen \(\big((N-1)T,r\big)\)-boxes in \(P\). Now let \(g_{-(N-1)T}V_{r}\gamma g_{(N-1)T}\) be one of the Bowen \(\big((N-1)T,r\big)\)-boxes in the above cover which has non-empty intersection with \(A_{x,\varepsilon}\big((N-1)T,O^{c},J^{\prime}\big)\). Take any \(q=g_{-(N-1)T}h\gamma g_{(N-1)T}\in g_{-(N-1)T}V_{r}\gamma g_{(N-1)T}\), and consider two cases.

* If \(N\in J\), so that \(|J^{\prime}|=|J|-1\) and \(N-1-|J^{\prime}|=N-|J|\), write
\[\frac{1}{T}\int_{(N-1)T}^{NT}1_{O^{c}}(g_{t}qx)\,dt=\frac{1}{T}\int_{(N-1)T}^{NT}1_{O^{c}}\big(g_{t}(g_{-(N-1)T}h\gamma g_{(N-1)T})x\big)\,dt=\frac{1}{T}\int_{0}^{T}1_{O^{c}}\big(g_{t}h(\gamma g_{(N-1)T}x)\big)\,dt.\tag{5.5}\]
Consequently,
\[\begin{split}&\left\{q\in g_{-(N-1)T}V_{r}\gamma g_{(N-1)T}:\frac{1}{T}\int_{(N-1)T}^{NT}1_{O^{c}}(g_{t}qx)\,dt\geq\varepsilon\right\}\\ &=g_{-(N-1)T}\left\{h\in V_{r}:\frac{1}{T}\int_{0}^{T}1_{O^{c}}\big(g_{t}h(\gamma g_{(N-1)T}x)\big)\,dt\geq\varepsilon\right\}\gamma g_{(N-1)T}\\ &=g_{-(N-1)T}A_{\gamma g_{(N-1)T}x,\varepsilon}(T,O^{c})\gamma g_{(N-1)T}.\end{split}\tag{5.6}\]
Hence, by applying Corollary 5.2 with \(\gamma g_{(N-1)T}x\) in place of \(x\), we can cover the set in the left hand side of (5.6) with at most \(\frac{e^{\eta T}}{\varepsilon}C(T,O)\) Bowen \((NT,r)\)-boxes in \(P\). Therefore the number of Bowen \((NT,r)\)-boxes needed to cover \(A_{x,\varepsilon}(T,O^{c},J)\) is at most \(\frac{e^{\eta T}}{\varepsilon}C(T,O)\) times the quantity in (5.4), which is precisely (5.3).

* If \(N\notin J\), so that \(|J^{\prime}|=|J|\) and \(N-1-|J^{\prime}|=N-1-|J|\), the argument is even simpler. By Lemma 3.3, \(g_{-(N-1)T}V_{r}\gamma g_{(N-1)T}\) can be covered by at most \(e^{\eta T}\left(1+c_{0}e^{-\lambda_{\min}T}\right)\) Bowen \((NT,r)\)-boxes. Hence the number of Bowen \((NT,r)\)-boxes needed to cover \(A_{x,\varepsilon}(T,O^{c},J)\) is at most \(e^{\eta T}\left(1+c_{0}e^{-\lambda_{\min}T}\right)\) times the quantity in (5.4), which again is precisely (5.3).

Recall that our goal in this section is to find a covering result for the sets of the form \(A_{x,\delta}(NT,O^{c})\). The following lemma reduces this task to a covering result for the sets of the form \(A_{x,\varepsilon}(T,O^{c},J)\) for \(0<\varepsilon<\delta\) and \(J\subset\{1,\ldots,N\}\).

**Lemma 5.4**.: _For any \(S\subset X\), \(N\in\mathbb{N}\), \(T>0\), \(x\in X\), \(0<\delta<1\), and for any \(0<\varepsilon<\delta\),_
\[A_{x,\delta}(NT,S)\subset\bigcup_{J\subset\{1,\ldots,N\}:|J|\geq\lceil\left(1-\frac{1-\delta}{1-\varepsilon}\right)N\rceil}A_{x,\varepsilon}(T,S,J).\]

Proof.: Let \(N\in\mathbb{N}\), \(r>0\), \(T>0\), \(x\in X\) and \(0<\varepsilon<\delta<1\).
Also let \(h\in A_{x,\delta}(NT,S)\), and define
\[E:=\big\{j\in\{1,\ldots,N\}:h\notin A_{x,\varepsilon}(T,S,\{j\})\big\}.\]
Then
\[\begin{split}\delta&\leq\frac{1}{NT}\int_{0}^{NT}1_{S}(g_{t}hx)\,dt\\ &=\frac{1}{N}\sum_{j\in E}\frac{1}{T}\int_{(j-1)T}^{jT}1_{S}(g_{t}hx)\,dt+\frac{1}{N}\sum_{j\in\{1,\ldots,N\}\smallsetminus E}\frac{1}{T}\int_{(j-1)T}^{jT}1_{S}(g_{t}hx)\,dt\\ &\leq\frac{1}{N}\cdot|E|\cdot\varepsilon+\frac{1}{N}\cdot|\{1,\ldots,N\}\smallsetminus E|=\frac{1}{N}\left(|E|\cdot\varepsilon+N-|E|\right).\end{split}\]
This implies
\[|E|\leq\frac{1-\delta}{1-\varepsilon}N.\]
Note that it follows immediately from the definition of \(E\) that \(h\) is an element of \(A_{x,\varepsilon}(T,S,\{1,\ldots,N\}\smallsetminus E)\). Hence, in view of the above inequality, we conclude that
\[h\in\bigcup_{J\subset\{1,\ldots,N\}:|J|\geq\lceil\left(1-\frac{1-\delta}{1-\varepsilon}\right)N\rceil}A_{x,\varepsilon}(T,S,J),\]
finishing the proof of the lemma.

From the above lemma combined with Proposition 5.3 we get the following crucial covering result:

**Corollary 5.5**.: _Let \(O\) be an open subset of \(X\), let \(C_{1}\) be as in Lemma 3.4, \(0<\delta<1\), \(0<r\leq r_{2}\), and let \(T_{r}\) be as in (4.3). Then for any \(x\in X\), any \(N\in\mathbb{N}\), any \(T>T_{r}\), and for any \(0<\varepsilon<\delta\) the set \(A_{x,\delta}(NT,O^{c})\) can be covered with at most_
\[C_{1}e^{\lambda_{\max}LNT}\binom{N}{\lceil\left(1-\frac{1-\delta}{1-\varepsilon}\right)N\rceil}\cdot\left(\frac{C(T,O)}{\varepsilon}\right)^{\lceil\left(1-\frac{1-\delta}{1-\varepsilon}\right)N\rceil}\cdot\left(1+c_{0}e^{-\lambda_{\min}T}\right)^{N-\lceil\left(1-\frac{1-\delta}{1-\varepsilon}\right)N\rceil}\tag{5.7}\]
_balls in \(P\) of radius \(re^{-\lambda_{\max}NT}\)._

Proof.: Let \(x\in X\), \(N\in\mathbb{N}\), \(T>T_{r}\), and let \(0<\varepsilon<\delta\). By the above lemma we have:
\[A_{x,\delta}(NT,O^{c})\subset\bigcup_{J\subset\{1,\dots,N\}:|J|\geq\lceil\left(1-\frac{1-\delta}{1-\varepsilon}\right)N\rceil}A_{x,\varepsilon}(T,O^{c},J).\]
Now note that for any \(J\subset\{1,\dots,N\}\), if we take any subset \(J^{\prime}\) of \(J\), then it follows immediately that \(A_{x,\varepsilon}(T,O^{c},J)\subset A_{x,\varepsilon}(T,O^{c},J^{\prime})\). Therefore, the above inclusion yields the following inclusion:
\[A_{x,\delta}(NT,O^{c})\subset\bigcup_{J\subset\{1,\dots,N\}:|J|=\lceil\left(1-\frac{1-\delta}{1-\varepsilon}\right)N\rceil}A_{x,\varepsilon}(T,O^{c},J).\]
Also, by Lemma 3.4, every Bowen \((NT,r)\)-box in \(P\) can be covered with at most \(C_{1}e^{(\lambda_{\max}L-\eta)NT}\) balls in \(P\) of radius \(re^{-\lambda_{\max}NT}\). From this, combined with Proposition 5.3 and the above inclusion, we can conclude the proof.

## 6. Proof of Theorem 1.5

Proof of Theorem 1.5.: Let \(O\) be an open subset of \(X\), \(x\in X\), and let \(\delta>0\). In view of countable stability of Hausdorff dimension, it suffices to show that for any \(0<r\leq r_{2}\), we have
\[\operatorname{codim}S_{x,\delta}(O)\gg\frac{\mu(\sigma_{r}O)\cdot\phi\left(\mu(\sigma_{r}O),\sqrt{1-\delta}\right)}{\log\frac{1}{r}}\]
where \(S_{x,\delta}(O)\) is as in (4.1) and \(\phi\) is as in (1.4). In order to prove the above statement, it is evident that it suffices to demonstrate that for any \(0<r\leq r_{2}/5\) we have
\[\operatorname{codim}S_{x,\delta}(O)\gg\frac{\mu(\sigma_{5r}O)\cdot\phi\left(\mu(\sigma_{5r}O),\sqrt{1-\delta}\right)}{\log\frac{1}{5r}}.\tag{6.1}\]
If \(\mu(\sigma_{5r}O)=0\), then the above statement follows immediately. So in this proof we always assume that \(\mu(\sigma_{5r}O)>0\).
We start with the following combinatorial lemma:

**Lemma 6.1**.: _Let \(m=m(n)\leq n\) be a function of \(n\) such that \(\lim_{n\to\infty}\frac{m}{n}=z\) for some fixed constant \(0<z<1\). Then_
\[\binom{n}{m}=o(1)B\bigl(\tfrac{m}{n}\bigr)^{n},\]
_where_
\[B(z):=\left(\frac{1}{z}\right)^{z}\left(\frac{1}{1-z}\right)^{1-z}.\tag{6.2}\]

Proof.: Note that \(\lim_{n\to\infty}\frac{m}{n}=z<1\) implies that both \(m\) and \(n-m\) tend to infinity as \(n\) goes to infinity; moreover, \(\lim_{n\to\infty}\frac{n-m}{n}=1-z\). Hence, by using Stirling's approximation we have:
\[\begin{split}\binom{n}{m}=\frac{n!}{m!(n-m)!}&=(1+o(1))\frac{\sqrt{2\pi n}\,(\tfrac{n}{e})^{n}}{\sqrt{2\pi m}\,(\tfrac{m}{e})^{m}\cdot\sqrt{2\pi(n-m)}\,(\tfrac{n-m}{e})^{n-m}}\\ &=(1+o(1))\sqrt{\frac{n}{2\pi m(n-m)}}\left(\frac{n}{m}\right)^{m}\left(\frac{n}{n-m}\right)^{n-m}\\ &=o(1)\left(\frac{n}{m}\right)^{m}\left(\frac{n}{n-m}\right)^{n-m}\\ &=o(1)B\bigl(\tfrac{m}{n}\bigr)^{n},\end{split}\]
where the third equality above follows from the fact that
\[\lim_{n\to\infty}\frac{n}{2\pi m(n-m)}=\lim_{n\to\infty}\frac{1}{2\pi n}\cdot\lim_{n\to\infty}\frac{n}{m}\cdot\lim_{n\to\infty}\frac{n}{n-m}=0\cdot\frac{1}{z}\cdot\frac{1}{1-z}=0.\]

Now given \(0<\varepsilon<\delta\) set
\[z:=1-\frac{1-\delta}{1-\varepsilon}\quad\Longleftrightarrow\quad\varepsilon=1-\frac{1-\delta}{1-z}=\frac{\delta-z}{1-z}.\tag{6.3}\]
Lemma 6.1, applied with \(n\) replaced with \(N\) and \(m\) replaced with \(\lceil zN\rceil\), then implies that there exists \(N_{0}=N_{0}(z)\in\mathbb{N}\) such that
\[\binom{N}{\lceil zN\rceil}\leq B\left(\frac{\lceil zN\rceil}{N}\right)^{N}\quad\text{ for all }N\geq N_{0}.\tag{6.4}\]
Take \(0<r\leq r_{2}/5\) and \(T_{r}\) as in (4.3), and let \(T>T_{r}\). By combining Corollary 5.5 with (6.4) we get that for any \(N\geq N_{0}\) and any \(0<\varepsilon<\delta\),
\[A_{x,\delta}(NT,O^{c})\text{ can be covered with at most }C_{1}e^{LN\lambda_{\max}T}\cdot\beta_{N}^{N}\text{ balls in }P\text{ of radius }re^{-\lambda_{\max}NT},\tag{6.5}\]
where
\[\beta_{N}:=B\left(\frac{\lceil zN\rceil}{N}\right)\cdot\left(\frac{C(T,O)}{\varepsilon}\right)^{\frac{\lceil zN\rceil}{N}}\left(1+c_{0}e^{-\lambda_{\min}T}\right)^{1-\frac{\lceil zN\rceil}{N}}.\]
Note that we have
\[\lim_{N\to\infty}\beta_{N}=B(z)\cdot\left(\frac{C(T,O)}{\varepsilon}\right)^{z}\left(1+c_{0}e^{-\lambda_{\min}T}\right)^{1-z}.\tag{6.6}\]
In view of (6.3), (6.5), (6.6) and Proposition 4.1, by applying Lemma 3.2 with \(e^{-\lambda_{\max}T}\) in place of \(\rho\), \(L+\frac{\log\beta_{N}}{\lambda_{\max}T}\) in place of \(\alpha_{N}\) and \(r\) in place of \(C\), we conclude that for any \(0<z<\delta\) the Hausdorff dimension of the set \(S_{x,\delta}(O)\) is bounded from above by
\[L+\frac{1}{\lambda_{\max}T}\log\left(B(z)\left(\frac{C(T,O)(1-z)}{\delta-z}\right)^{z}\left(1+c_{0}e^{-\lambda_{\min}T}\right)^{1-z}\right)\underset{\text{(6.2)}}{=}L+\frac{1}{\lambda_{\max}T}\log\left(\left(\frac{C(T,O)(1-z)}{z(\delta-z)}\right)^{z}\left(\frac{1+c_{0}e^{-\lambda_{\min}T}}{1-z}\right)^{1-z}\right).\]
This shows that our objective should be to find \(z\in(0,\delta)\) and \(T>T_{r}\) such that the value of
\[\frac{1}{T}\left(z\log\left(\frac{z(\delta-z)}{C(T,O)(1-z)}\right)+(1-z)\log\left(\frac{1-z}{1+c_{0}e^{-\lambda_{\min}T}}\right)\right)\]
is the largest possible. We are going to approximate the maximum by first choosing \(T\) in a convenient way.
Take \(T_{0}\geq 1\) sufficiently large so that for any \(T\geq T_{0}\) one has
\[c_{0}e^{-\lambda_{\min}T}<\frac{1}{T}\tag{6.7}\]
(note that \(T_{0}\) depends only on \(G\), \(F_{+}\) and \(P\)), and set
\[T:=\max\left(\frac{8T_{r}}{\mu(\sigma_{5r}O)},T_{0}\right).\tag{6.8}\]
Then
\[\frac{1}{T}<\frac{1+T_{r}}{T}\leq\frac{2T_{r}}{T}\leq\frac{\mu(\sigma_{5r}O)}{4},\]
which, in combination with (6.7), yields
\[1+c_{0}e^{-\lambda_{\min}T}\leq 1+\frac{1}{T}<1+\frac{\mu(\sigma_{5r}O)}{4}\]
and
\[C(T,O)=1-\mu(\sigma_{5r}O)+\frac{T_{r}+1}{T}+c_{0}e^{-\lambda_{\min}T}\leq 1-\mu(\sigma_{5r}O)+\frac{\mu(\sigma_{5r}O)}{4}+\frac{\mu(\sigma_{5r}O)}{4}=1-\frac{\mu(\sigma_{5r}O)}{2}.\]
Therefore, for \(T\) as in (6.8), the codimension of \(S_{x,\delta}(O)\) is, for any \(0<z<\delta\), bounded from below by
\[\frac{1}{\lambda_{\max}T}\left(z\log\Biggl(\frac{z(\delta-z)}{\left(1-\frac{\mu(\sigma_{5r}O)}{2}\right)(1-z)}\Biggr)+(1-z)\log\left(\frac{1-z}{1+\frac{\mu(\sigma_{5r}O)}{4}}\right)\right).\]
Note that the second summand in the above expression is always negative; thus it makes sense to choose \(0<z<\delta\) so that the first summand is maximized and, if possible, positive (this may fail for every \(0<z<\delta\), depending on \(\mu(\sigma_{5r}O)\) and \(\delta\); in that case we do not obtain a dimension drop). We will solve the latter problem approximately by finding \(0<z<\delta\) which maximizes the ratio \(\frac{z(\delta-z)}{1-z}\). An elementary calculus exercise (the derivative of \(z\mapsto\frac{z(\delta-z)}{1-z}\) has numerator \(z^{2}-2z+\delta\), whose root in \((0,\delta)\) is \(1-\sqrt{1-\delta}\)) shows that one should take \(z=1-\sqrt{1-\delta}\), so that \(\frac{z(\delta-z)}{1-z}=(1-\sqrt{1-\delta})^{2}\). Denoting \(s=\sqrt{1-\delta}\) and \(y=\mu(\sigma_{5r}O)\), we get an estimate
\[\operatorname{codim}S_{x,\delta}(O)\geq\frac{1}{\lambda_{\max}T}\left((1-s)\log\biggl(\frac{(1-s)^{2}}{1-\frac{y}{2}}\biggr)+s\log\left(\frac{s}{1+\frac{y}{4}}\right)\right)\underset{\text{(1.4)}}{=}\frac{1}{\lambda_{\max}T}\phi(y,s).\]
It remains to observe that
\[T\underset{\text{(6.8)}}{\ll}\frac{8T_{r}}{\mu(\sigma_{5r}O)}\underset{\text{(4.3)}}{\ll}\frac{\log\frac{1}{r}}{\mu(\sigma_{5r}O)}\underset{r\leq r_{2}/5<1/40}{\ll}\frac{\log\frac{1}{5r}}{\mu(\sigma_{5r}O)};\]
thus (6.1) follows, which ends the proof of Theorem 1.5.
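As a quick numerical sanity check of the two elementary ingredients used above — the binomial bound (6.4) and the choice \(z=1-\sqrt{1-\delta}\) — the following short Python snippet (ours, not part of the original argument; the sample values of \(z\), \(\delta\) and \(N\) are arbitrary) verifies both claims:

```python
import math

def B(z):
    # B(z) = (1/z)^z * (1/(1-z))^(1-z), as defined in (6.2)
    return (1.0 / z) ** z * (1.0 / (1.0 - z)) ** (1.0 - z)

# (6.4): binom(N, ceil(zN)) <= B(ceil(zN)/N)^N.
z = 0.3
for N in (50, 200, 1000):
    m = math.ceil(z * N)
    assert math.comb(N, m) <= B(m / N) ** N

# z* = 1 - sqrt(1-delta) maximizes f(z) = z(delta-z)/(1-z) on (0, delta),
# and the maximum value is (1 - sqrt(1-delta))^2.
delta = 0.7
f = lambda t: t * (delta - t) / (1.0 - t)
z_star = 1.0 - math.sqrt(1.0 - delta)
assert all(f(z_star) >= f(k * delta / 1000) for k in range(1, 1000))
assert abs(f(z_star) - (1.0 - math.sqrt(1.0 - delta)) ** 2) < 1e-12
```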
2309.05139
A Skeleton-based Approach For Rock Crack Detection Towards A Climbing Robot Application
Conventional wheeled robots are unable to traverse scientifically interesting, but dangerous, cave environments. Multi-limbed climbing robot designs, such as ReachBot, are able to grasp irregular surface features and execute climbing motions to overcome obstacles, given suitable grasp locations. To support grasp site identification, we present a method for detecting rock cracks and edges, the SKeleton Intersection Loss (SKIL). SKIL is a loss designed for thin object segmentation that leverages the skeleton of the label. A dataset of rock face images was collected, manually annotated, and augmented with generated data. A new group of metrics, LineAcc, has been proposed for thin object segmentation such that the impact of the object width on the score is minimized. In addition, the metric is less sensitive to translation which can often lead to a score of zero when computing classical metrics such as Dice on thin objects. Our fine-tuned models outperform previous methods on similar thin object segmentation tasks such as blood vessel segmentation and show promise for integration onto a robotic system.
Josselin Somerville Roberts, Paul-Emile Giacomelli, Yoni Gozlan, Julia Di
2023-09-10T21:16:56Z
http://arxiv.org/abs/2309.05139v2
# A Skeleton-based Approach For Rock Crack Detection Towards A Climbing Robot Application

###### Abstract

Conventional wheeled robots are unable to traverse scientifically interesting, but dangerous, cave environments. Multi-limbed climbing robot designs, such as ReachBot, are able to grasp irregular surface features and execute climbing motions to overcome obstacles, given suitable grasp locations. To support grasp site identification, we present a method for detecting rock cracks and edges, the SKeleton Intersection Loss (SKIL). SKIL is a loss designed for thin object segmentation that leverages the skeleton of the label. A dataset of rock face images was collected, manually annotated, and augmented with generated data. A new group of metrics, LineAcc, has been proposed for thin object segmentation such that the impact of the object width on the score is minimized. In addition, the metric is less sensitive to translation, which can often lead to a score of zero when computing classical metrics such as Dice on thin objects. Our fine-tuned models outperform previous methods on similar thin object segmentation tasks such as blood vessel segmentation and show promise for integration onto a robotic system.

computer vision, segmentation, thin object, crack detection, blood vessel detection, climbing robotics

## I Introduction

In exploration tasks, robots offer versatility and robustness for navigating novel environments. However, challenging terrain, like the caverns and steep slopes that are of high astrobiological and geological interest, has often hampered traditional approaches such as wheeled rovers. Aerial vehicles, meanwhile, may be suited for rough terrain exploration, but have limited flight time and ability to traverse caves. To address this mobility gap, ReachBot is a novel robot concept that uses microspine grippers at the end of extendable booms to achieve a large workspace and wrench capability. This results in higher navigation capability across the rocky scrambles that one may find in lava tubes and caves on the Moon and Mars [1, 2, 3]. Of crucial importance is a perception system that identifies a good grasp site on a realistic rock surface. While one may identify grasp sites through basic primitives, such as trial and error, we note that in climbing robot applications, misplaced holds may lead to mission failure and robot damage. Furthermore, ReachBot's large workspace encourages a grasp site identification method that works from a distance. As an example, we consider cracks and edges as promising rock features for microspine grippers, because microspines may engage well with the asperities of a crack or edge, leading to a strong grasp [4]. Yet identifying rock cracks from a distance can prove complex because cracks are typically long and thin features that are susceptible to lighting conditions and the angle of the camera with respect to the rock wall.

_Statement of Contributions:_ This work focuses on improving rock crack detection towards a climbing robot application, with the following contributions:

* We present a new segmentation loss, SKIL (SKeleton Intersection Loss), to identify thin features such as cracks on rock walls, blood vessels, or other thin objects.
* We propose novel metrics for evaluating the detection results of fine and longitudinal objects, LineAcc. These metrics are based on the position (LineAcc-pos) and sizes (LineAcc-width/length) of the cracks, and are more robust to ground truth labeling errors than classical metrics.
* We manually collected and generated a new dataset of rock wall images, which is open-sourced to support the development of other rock climbing-related applications. All code, commands, and datasets are included in this Github repo, as well as instructions to recreate results.

To assess the added value of SKIL and our new metric, we evaluated and compared performance on blood vessel segmentation benchmark datasets, an existing and related task in the medical imaging community.

Fig. 1: Workflow of the ViT-B based model discussed in this work (Section IV-A) with crop size of \(512\) and patch size of \(16\).

_Paper Organization:_ The rest of the paper is organized as follows. Section II provides an overview of related works for the task. Section III discusses the robot design and important considerations for crack detection. Section IV explains the proposed SKIL loss and new metrics. Section V describes the experiments and results. Finally, Section VI summarizes our contributions and explores future work.

## II Related Works

### _Existing Datasets_

We examined the possibility of leveraging existing datasets for this work. Rock segmentation datasets for indoor climbing or climbing robots generally feature artificially colored rocks with indoor lighting, which are not representative of the outdoor volcanic rock walls that ReachBot will encounter on a potential mission to a Martian lava tube [5, 6, 7]. Rock classification datasets are collected outdoors and can be specified by sediment type (including volcanic), but rock type alone does not necessarily correspond to whether a rock is graspable, and many datasets are only of standalone rocks and not natural rock faces as one would encounter in a cave or lava tube [8, 9, 10, 11]. Datasets on infrastructural cracks are also potential candidates for crack detection, but most man-made structures like concrete walls and bridges are macroscopically smooth with very fine cracks, and do not represent realistic surfaces for this robot mission [12, 13].

### _Existing Methods_

Because ReachBot does not have an anthropomorphic gripper, existing grasp site identification approaches developed for human climbing do not fit the kinematics of the gripper [7]. Alternatively, there are models for the microspine grippers on a variety of rock-climbing robots [14, 4, 15], and past analyses have developed a stochastic wrench limit surface, which depends on both the gripper design and surface topography [16, 17, 18]. While these model-based approaches work for analyzing microspine grasps, they often require detailed ground truth topographical knowledge, which prevents deployment on a real robot. Therefore, we are interested in learning to detect rock _features_ for grasping, such as cracks and edges. As a first step, crack detection methods exist in other related applications such as infrastructural monitoring, but these tend to target extremely fine cracks [19, 20]. A similar task can be found in the medical imaging space with blood vessel segmentation [21, 22]. Several losses have been proposed for the blood vessel segmentation task (e.g. imbalanced [23], connected [24] and thin [25]), and some state-of-the-art methods for vessel segmentation, such as U-Net [26, 27, 28], appear promising for rock crack detection. We further discuss their performance for this use case in Section V, investigate a modified skeleton-based approach for detecting crack and edge features for a robotics application, and benchmark performance with related blood vessel datasets.
### _Existing Metrics_

For image segmentation, IoU and the Dice coefficient are commonly used metrics [29]. These metrics measure the pixel-wise agreement between prediction and ground truth, which penalizes any small misalignments between the predicted crack pixels and the true crack (which may often occur with manual annotation). Further, adjustments to the Dice score to make the prediction less sensitive to small position misalignments, such as a diffused prediction, would still not help us understand model performance on graspable cracks. Many alternative metrics have been proposed for image segmentation [30]. In the context of crack detection, spatial distance-based metrics may be relevant, such as the Hausdorff distance [31]. However, they are sensitive to the thickness of the thin objects, which is uncertain in our case. One proposed metric is to only compute the Dice score around thin zones of the ground truth [32]. This metric does not reflect performance on larger zones of ground truth and does not mitigate the translation sensitivity.

## III Gripper Discussion

The ReachBot design concept allows for navigation through cave-like environments with the use of microspine grippers on the end of extendable booms, as shown in Fig. 2. ReachBot's microspine gripper, inset in Fig. 2, grasps onto rocks successfully when a sufficient number of microspines are engaged on the surface asperities (such as those around the rock cracks). Because microspine grasping is inherently stochastic, there is no formal guarantee for grasp success. However, several parameters influence the envelope of graspable features, including the distribution of millimeter-scale asperities of the surface, the geometry of the surface based on the gripper design, and the sharpness of the microspines (which dull after too much continuous use) [33]. Because obtaining micrometer-level surface characterization of the asperities is not practical on the robot, we instead investigate macroscale rock _features_ for grasping.

Fig. 2: The ReachBot concept uses microspine grippers at the end of extendable booms to reach a large workspace inside a Martian lava tube. Inset is a microspine gripper prototype grasping real lava rock in a Mojave Lava Tube.

Akin to how a trained rock climber may look for certain rock shapes to fit their hand, previous work has already investigated rock geometry analysis for a ReachBot-specific gripper [33]. Classical shape-fitting techniques were used, which work best at identifying symmetrically convex shapes but miss potential irregular grasp sites such as cracks, ledges, and edges. It is important to identify as many grasp sites as possible because ReachBot's unique design enables navigation even when grasp sites are sparse, but only if the robot knows they exist.

## IV Proposed Approach

### _Model Architecture_

All the experiments were run with a ViT-B [34] backbone with a crop size of \(512\times 512\) and a patch size of \(16\times 16\) using the UPerNet [35] method. This model was chosen qualitatively as it offered good performance after a few hundred epochs. Other models considered included SAM [36], ResNet [37], and DeiT [38], but were deemed too complex for a resource-constrained hardware context. As the goal is to compare losses and eventually run on a field robot in a cave, we wanted to keep the model as simple as possible.
### _LineAcc Segmentation Metrics_

We propose several metrics, grouped under the name of **LineAcc**, to measure different aspects of the segmentation that classic metrics do not capture, as discussed earlier.

#### IV-B1 Center line position

The first metric is **the center line position**, which is referred to as \(\text{LineAcc}_{\text{pos}}\). The goal is to measure if the predicted cracks are well positioned, regardless of the width or length of the crack. Given a prediction \(\mathcal{P}\) and label \(\mathcal{L}\), we introduce their skeletons \(\mathcal{S}_{\mathcal{P}}\) and \(\mathcal{S}_{\mathcal{L}}\). The metric \(\text{LineAcc}_{\text{pos}}\) is defined as:
\[\begin{split}\text{LineAcc}_{\text{pos}}(\mathcal{P},\mathcal{L})&=\frac{\sum\mathcal{S}_{\mathcal{P}}\times e^{-d(\mathcal{S}_{\mathcal{L}})^{2}/2\sigma^{2}}}{\sum\mathcal{S}_{\mathcal{P}}}\\ &\times\frac{\sum\mathcal{S}_{\mathcal{L}}\times e^{-d(\mathcal{S}_{\mathcal{P}})^{2}/2\sigma^{2}}}{\sum\mathcal{S}_{\mathcal{L}}}\end{split}\tag{1}\]
with \(d\) a distance function such that for \(I\in\mathbb{R}^{H\times W}\),
\[d(I)_{x,y}=\min_{u,v,I(u,v)=1}|x-u|+|y-v|\tag{2}\]
This metric returns a score illustrating how close the two skeletons are (with a Gaussian decrease).

#### IV-B2 Width and length ratios

We further introduce two ratio-based metrics: the **width** and **length** ratios. These two metrics illustrate how close the width and length of a predicted crack are to the ground truth, independently of the position of the crack.
\[\text{LineAcc}_{\text{length}}(\mathcal{P},\mathcal{L})=\exp\left(-\left|\frac{\sum\mathcal{S}_{\mathcal{P}}+\epsilon}{\sum\mathcal{S}_{\mathcal{L}}+\epsilon}-1\right|\right)\tag{3}\]
\[\text{LineAcc}_{\text{width}}(\mathcal{P},\mathcal{L})=\exp\left(-\left|\frac{(\sum\mathcal{L})\left(\sum\mathcal{S}_{\mathcal{P}}\right)+\epsilon}{(\sum\mathcal{P})\left(\sum\mathcal{S}_{\mathcal{L}}\right)+\epsilon}-1\right|\right)\tag{4}\]
These two metrics are also not correlated, as the length ratio \(\text{LineAcc}_{\text{length}}\) relies on the skeletons (independent of width) and the width ratio \(\text{LineAcc}_{\text{width}}\) uses the inverse of the length ratio to balance its effect. A constant \(\epsilon=10^{-3}\) is added to avoid dividing by zero.

### _SKIL: a new loss_

We investigated new losses that optimize for the new metrics. Although the CL-Dice [24] loss appeared promising (it computes the Dice score between the skeleton of the prediction and the ground truth label), the skeleton intersection is dependent on the width of the labels, and the Dice score does not optimize the center line position well. As we plan to use a combination of Dice and a new loss, this was not desirable.

#### IV-C1 SKIL-Dice, a loss adapted to the new metrics

The core idea of SKIL-Dice is a skeleton-based approach that abstracts the width of the crack. Another improvement that was necessary was to integrate the distance between the two center lines. As an example, with CL-Dice, if the ground truth is a thin vertical rectangle of width \(w\), then whether the prediction is shifted by a distance \(D\) of 1 pixel or \(w/2-1\), the score would be the same. However, the \(\text{LineAcc}_{\text{pos}}\) score would be proportional to \(e^{-D^{2}/2\sigma^{2}}\). In order to solve this issue, we propose to first compute the soft skeleton (first introduced in [24]), then apply a smooth diffusion, and finally compute the Dice score between the two diffused skeletons.
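Before turning to the diffusion procedure itself, the LineAcc metrics of Eqs. (1)-(4) above translate directly into code. The following NumPy/scikit-image sketch is our illustration, not the authors' released implementation; it assumes non-empty binary masks, and \(\sigma\) is left as a free parameter since the text does not fix a value here:

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt
from skimage.morphology import skeletonize

def _skel(mask):
    return skeletonize(mask.astype(bool))

def _dist_to(skel):
    # d of Eq. (2): taxicab distance from every pixel to the nearest skeleton
    # pixel; distance_transform_cdt measures the distance to the nearest zero,
    # so the complement of the skeleton is passed in.
    return distance_transform_cdt(~skel, metric='taxicab')

def lineacc_pos(pred, label, sigma=5.0):          # Eq. (1)
    sp, sl = _skel(pred), _skel(label)
    a = (sp * np.exp(-_dist_to(sl) ** 2 / (2 * sigma ** 2))).sum() / sp.sum()
    b = (sl * np.exp(-_dist_to(sp) ** 2 / (2 * sigma ** 2))).sum() / sl.sum()
    return a * b

def lineacc_length(pred, label, eps=1e-3):        # Eq. (3)
    return np.exp(-abs((_skel(pred).sum() + eps)
                       / (_skel(label).sum() + eps) - 1))

def lineacc_width(pred, label, eps=1e-3):         # Eq. (4)
    num = label.sum() * _skel(pred).sum() + eps
    den = pred.sum() * _skel(label).sum() + eps
    return np.exp(-abs(num / den - 1))
```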
Given a binary image \(I\), our smooth diffusion computes the diffused image \(\text{Dif}(I)\) such that the pixels equal to 1 in \(I\) remain equal to 1 in \(\text{Dif}(I)\), while the pixels equal to 0 in \(I\) receive in \(\text{Dif}(I)\) a value that depends on their distance to the closest pixel equal to 1 in \(I\). One could think of using the distance function \(d\) defined in Eq. 2, but this is not differentiable. We propose the following algorithm:

```
procedure smooth_dilate(\(I,s_{\text{border}},n_{\text{iter}}^{\text{max}},f\))
    \(s_{\text{dilate}}\leftarrow\max\left(1,s_{\text{border}}/n_{\text{iter}}^{\text{max}}\right)\)
    \(I_{\text{enlarged}}\leftarrow\text{copy}(I)\)
    for \(d\in\{1,\ldots,\min\left(s_{\text{border}},s_{\text{dilate}}\right)\}\) do
        \(I_{\text{enlarged}}^{\text{new}}\leftarrow\text{soft\_dilate}(I_{\text{enlarged}},s_{\text{dilate}})\)
        \(I_{\text{enlarged}}\leftarrow f*I_{\text{enlarged}}+(1-f)*I_{\text{enlarged}}^{\text{new}}\)
    end for
    return \(I_{\text{enlarged}}\)
end procedure
```
**Algorithm 1** Smooth binary mask diffusion

where \(\text{soft\_dilate}(I,d)\) is a maxpool with a kernel of size \((1+2d)\times(1+2d)\) with the corners removed. This introduces 3 parameters of the loss: \(s_{\text{border}}\), the size of the diffused skeleton; \(n_{\text{iter}}^{\text{max}}\), the maximum number of iterations for the diffusion (more iterations take more memory); and \(f\), the factor that controls the decrease. Finally, SKIL-Dice is defined as (for the sake of simplicity, parameters are omitted):
\[\text{SKIL}^{\text{Dice}}(P,L)=\text{Dice}(\text{Dif}(S_{\tilde{P}}),\text{Dif}(S_{L}))\tag{5}\]
with \(\tilde{P}=T(P,s)\) the thresholded prediction. We use a smooth threshold \(T(I,s)\) with \(s\) the sharpness:
\[T(P,s)=\text{sigmoid}\left(s(P-0.5)\right)\tag{6}\]

#### IV-C2 SKIL-Product

We also propose another variant, SKIL-Product. This version is as similar as possible to the CL-Dice loss. The predictions and labels are skeletonized, then diffused, and finally the product of the predicted skeleton and the diffused skeleton of the label is computed. This is done instead of using the Dice score in order to have a normalization factor of 1.
\[g(A,B)=\frac{\sum\left(S_{A}*\text{Dif}(S_{B})\right)+\epsilon}{\sum S_{A}+\epsilon}\tag{7}\]
\[\text{SKIL}^{\text{Prod}}(P,L)=1-\sqrt{g(\tilde{P},L)*g(L,\tilde{P})}\tag{8}\]

## V Results

We discuss the following questions:

* Are our new metrics visually compelling? For example, given two predictions of identical Dice scores, does LineAcc capture if one is visually better than the other?
* Does SKIL improve performance and, if yes, which variant is better: SKIL-Dice or SKIL-Product?
* Is SKIL useful in similar segmentation tasks?
* Does SKIL help on poorly annotated images?

### _Dataset collection_

The **Cracks Reachbot** dataset has been open-sourced and is available here.

#### V-A1 Real data

We have collected an initial dataset of 234 rock wall photographs at Pinnacles National Park, Monument Valley, and Castle Rock using an Intel RealSense D435 camera. From these 234 images, we selected and annotated 100 images that presented interesting cracks and edges. Figure 3 illustrates some of the images.

#### V-A2 Generated data

In order to add more diverse data, we also generated some images using DALL-E 2 [39]. The prompt used was, _"A natural rock face with no apparent edges, cracks or holds."_.
We asked for no visible edges to have more natural-looking rock faces (Figure 4). We generated and annotated an additional 136 images, which we combined into a dataset of real and generated images composed of 50 real images for evaluation and 186 images for training, including 50 real and 136 generated ones. In the rest of this paper, we only consider this augmented dataset, as our dataset of 100 real images was too small for evaluation.

Fig. 3: Crack dataset images. The first row shows raw images of cracks, and the second row shows annotated images.

Fig. 4: Examples of images generated using DALL-E 2 [39].

### _Metrics Evaluation_

For comparison purposes, we define the combined metric, \(\text{LineAcc}_{\text{comb.}}\):
\[\begin{split}\text{LineAcc}_{\text{comb.}}&=2\text{LineAcc}_{\text{pos.}}+0.5\text{LineAcc}_{\text{width}}\\ &+0.5\text{LineAcc}_{\text{length}}+0.5\text{Dice}+0.5\text{IOU}\end{split}\tag{9}\]
The coefficients were chosen manually to prioritize detecting cracks in the correct locations, but not necessarily with very precise width or length.

To evaluate our new metrics, we generated two predictions with the same Dice score but different \(\text{LineAcc}_{\text{comb.}}\) to check which one was visually better. The predictions were generated using random deformations as described in Section V-D1. Figure 5 shows that \(\text{LineAcc}_{\text{comb.}}\) is less sensitive to the prediction width (first row), to small translations (second row), and rewards predictions with the correct line length (last row).

Fig. 5: **Qualitative results:** We show two generated predictions with the same Dice score (absolute error of 1%) but significantly different performance on \(\text{LineAcc}_{\text{comb.}}\). From top to bottom, the Dice scores are 0.6, 0.4, and 0.2. The second prediction has a \(\text{LineAcc}_{\text{comb.}}\) score that is \(0.23\), \(0.29\), and \(0.22\) higher than the first, respectively.

### _Loss performance_

To answer the question of whether SKIL improves training, we trained a ViT-B network on the crack dataset and a combined dataset of blood vessels (Section V-A). The training was run on an NVIDIA RTX A6000 and averaged over 20 runs each for better precision. We used a linear learning rate scheduler that reaches its maximum at 25% of the training. The model was trained for 100 epochs with a maximum learning rate of \(1\times 10^{-5}\) for the cracks dataset, while it was trained for 200 epochs with a maximum learning rate of \(5\times 10^{-5}\) for the blood vessels dataset. All models were fine-tuned with the Adam [40] optimizer from a pre-trained model. For all experiments, the parameters used for the SKIL loss (both Dice and Product) were \(s_{\text{border}}=20\), \(n_{\text{iter}}^{\text{max}}=50\), \(f=0.82\) and \(s=10\). These parameters were qualitatively chosen and manually tuned to offer the behavior we wanted, and parameter search optimization will be part of future work.

#### V-C1 Quantitative results

For each training, we used a mixture of the Dice loss with a coefficient of 3 and the studied loss with a coefficient of 1 (for the Dice-only training we therefore had a coefficient of 4). In the tables below, we only reference the studied loss even though the Dice loss was also used for all of them.
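As a concrete reading of Algorithm 1 and Eqs. (5)-(6), here is a differentiable PyTorch sketch of SKIL-Dice. This is our illustration, not the authors' code: the soft skeleton is adapted from the clDice reference implementation [24], inputs are assumed to be \((N,1,H,W)\) tensors in \([0,1]\), and for simplicity the number of diffusion steps is exposed directly instead of being derived from \(s_{\text{border}}\) and \(n_{\text{iter}}^{\text{max}}\) as in Algorithm 1:

```python
import torch
import torch.nn.functional as F

def soft_erode(img):
    # Min-pool over the 4-neighbourhood, realized with two negated max-pools.
    p1 = -F.max_pool2d(-img, (3, 1), 1, (1, 0))
    p2 = -F.max_pool2d(-img, (1, 3), 1, (0, 1))
    return torch.min(p1, p2)

def soft_dilate(img):
    # "Maxpool with the corners removed": a plus-shaped (diamond) dilation.
    return torch.max(F.max_pool2d(img, (3, 1), 1, (1, 0)),
                     F.max_pool2d(img, (1, 3), 1, (0, 1)))

def soft_skel(img, iters=10):
    # Soft skeleton in the style of [24]: erode repeatedly and accumulate
    # what a soft opening would remove at each scale.
    skel = F.relu(img - soft_dilate(soft_erode(img)))
    for _ in range(iters):
        img = soft_erode(img)
        delta = F.relu(img - soft_dilate(soft_erode(img)))
        skel = skel + F.relu(delta - skel * delta)
    return skel

def smooth_diffuse(skel, n_steps=20, f=0.82):
    # Smooth diffusion in the spirit of Algorithm 1: blend each map with its
    # dilation so values decay geometrically away from the skeleton.
    out = skel
    for _ in range(n_steps):
        out = f * out + (1 - f) * soft_dilate(out)
    return out

def skil_dice_loss(pred, label, sharp=10.0):
    p = torch.sigmoid(sharp * (pred - 0.5))            # Eq. (6)
    d_p = smooth_diffuse(soft_skel(p))
    d_l = smooth_diffuse(soft_skel(label))
    inter = (d_p * d_l).sum()
    dice = (2 * inter + 1e-3) / (d_p.sum() + d_l.sum() + 1e-3)
    return 1 - dice                                    # Eq. (5), as a loss
```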
#### V-C2 SKIL with U-Net

The quantitative results in Table II are not state of the art, so we further investigate the choice of the ViT-B [41] architecture for our crack detection problem, given that the U-Net [26] architecture performs better for blood vessel segmentation (see Table IV). We show that the U-Net architecture does not perform as well on the crack segmentation task across the four losses. We believe this is because, while rock crack detection and blood vessel detection are similar tasks for benchmarking, they still have nuanced differences, such as blood vessels having a more radial configuration as they emerge from a central optic nerve [42]. Figure 8 and Table III report U-Net results. For completeness, a comparison of the losses is reported in Tables III and IV. The networks are trained from scratch with a maximum learning rate of \(1\times 10^{-2}\) for \(40000\) steps (approximately \(2000\) epochs depending on the size of the dataset) using the Adam [40] optimizer. While the results slightly favor SKIL, it is not possible to conclude statistical significance given the relative error.

### _Generalization on poor quality annotations_

One of the motivations of SKIL and \(\text{LineAcc}_{\text{pos}}\) was to abstract the width of the cracks from the training and evaluation. This was important as the true width of a graspable crack is subjective (to the point that it was not always consistent between labelers). More generally, given the stochasticity of grasping with microspine grippers, crack detection for robotic grasping may always be poorly annotated. To test SKIL's performance on poorly annotated data, we created random deformations of the blood vessel labels, trained on them, and evaluated on clean labels.

#### V-D1 Generation of poorly annotated blood vessels

We created 3 deformations (Figure 9): a simple **shift**, a **random width change** that changes the width of the vessels but does not create or remove any vessels, and a **random branch cutter** which removes vessels with a probability inversely proportional to the thickness. Below we explain the latter two.

To change the width of the vessels, given a binary image \(I\), we first compute the skeleton \(\mathcal{S}_{I}\) as well as the distance to the closest pixel equal to 0 at each pixel, noted \(D_{I}^{0}\) (analogous to \(d\) in Section IV-B1). Then we multiply \(D_{I}^{0}\) by some Perlin noise [43] \(\mathcal{N}^{P}\) and finally apply a **decreasing dilation**, which consists of iteratively applying a maxpool of size \(3\times 3\) and subtracting 1 each time from the pixels that were equal to 0 before the pooling. This way, a pixel of value \(x\) will generate a diamond of width \(x\).
\[\mathcal{D}_{\text{width}}(I)=\big(\textbf{decrease\_dilate}(D_{I}^{0}*\mathcal{S}_{I}*\mathcal{N}^{P})\big)>0\tag{10}\]
The parameters chosen for the Perlin noise were: amplitude between \(0.1\) and \(2.0\), resolution of \(3\), \(4\) octaves with a persistence of \(0.5\), and a lacunarity of \(3\).

The random branch cutter relies on the same idea; however, instead of multiplying the distance values \(D_{I}^{0}\) by some Perlin noise, the values are changed to 0 based on a probability \(\mathcal{P}\) computed as follows:
\[\mathcal{P}(I)=\mathcal{N}^{P}*\left(\frac{\sum\mathcal{S}_{I}*D_{I}^{0}}{\sum\mathcal{S}_{I}}*\frac{1}{\epsilon+D_{I}^{0}}\right)^{\alpha}\]
where \(\epsilon=1\times 10^{-3}\) is a constant to avoid dividing by zero and \(\alpha\) is the selectiveness, set to \(0.2\) in our experiments.
A pixel is then set to 0 if \(\mathcal{P}(I)>1-p\) with \(p=0.35\). All the parameters for the Perlin noise are identical.

#### V-D2 Quantitative results

Table V summarizes the results obtained with the four different deformations and shows the performance of each loss when the model is trained on the deformed data, and Table VI helps to interpret these results by reporting the relative performance between the network trained on the clean data and the one trained on the deformed data. We see that SKIL substantially improves performance on the _shifted labels_ deformation due to its soft dilation. The results for the _random width_ deformation show that, even with the clean data, the network does not perform well regarding the width prediction, as the deformation does not deteriorate this metric at all. The _cropped branches_ deformation seems to indicate that SKIL performs slightly better when some annotations are missing, but the difference is too small to generalize. With combined deformations, SKIL outperforms CL-Dice and Dice, so we conclude that SKIL does adapt to poorly annotated data.

## VI Conclusion

In this work, we present SKIL, a skeleton-based segmentation loss to identify cracks on rock walls, and evaluate its performance on both rock cracks and a similar task, blood vessel detection. We also present three novel metrics (LineAcc) for evaluating this method. Finally, we publish a new dataset containing 236 images of rocks with graspable edges and cracks annotated. Experiments show that SKIL and the LineAcc metrics perform well on our dataset of rock cracks and on several existing datasets for blood vessel detection. In this work, we take the first step to developing an accurate learning-based approach for visual rock crack detection to enable a climbing robot. Next steps include parameter fine-tuning, deployment on hardware for field testing with a partial robot system, and interfacing with motion planning systems to enable full autonomy.

## Acknowledgements

Support for this work was provided by NASA under the Innovative Advanced Concepts (NIAC) program.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**Classical**} & \multicolumn{3}{c}{**LineAcc**} \\ \cline{2-7} & IOU & Dice & Pos.
& Width & Length & Combined \\ \hline \multicolumn{7}{l}{_Shifted labels_} \\ Dice & 0.215 & 0.349 \(\pm\) 0.002 & 0.485 & 0.490 & 0.504 & 0.444 \(\pm\) 0.002 \\ CL-Dice & **0.221** & **0.359 \(\pm\) 0.002** & 0.420 & 0.441 & 0.481 & 0.407 \(\pm\) 0.002 \\ SKIL-Dice & 0.210 & 0.340 \(\pm\) 0.003 & **0.751** & **0.564** & **0.700** & **0.618** \(\pm\) **0.002** \\ _Random width_ & & & & & & \\ \hline Dice & **0.276** & **0.419 \(\pm\) 0.001** & 0.778 & 0.547 & 0.678 & 0.638 \(\pm\) 0.001 \\ CL-Dice & 0.274 & 0.418 \(\pm\) 0.001 & 0.776 & 0.541 & 0.688 & 0.635 \(\pm\) 0.001 \\ SKIL-Dice & 0.268 & 0.406 \(\pm\) 0.001 & **0.831** & **0.616** & **0.783** & **0.687** \(\pm\) **0.001** \\ _Cropped branches_ & & & & & & \\ \hline Dice & 0.267 & 0.407 \(\pm\) 0.001 & 0.682 & 0.581 & 0.599 & 0.576 \(\pm\) 0.001 \\ CL-Dice & **0.274** & **0.417 \(\pm\) 0.001** & 0.671 & 0.549 & 0.605 & 0.571 \(\pm\) 0.001 \\ SKIL-Dice & 0.264 & 0.401 \(\pm\) 0.001 & **0.795** & **0.637** & **0.689** & **0.651** \(\pm\) **0.001** \\ _Combined deformations_ & & & & & & \\ \hline Dice & **0.271** & **0.419 \(\pm\) 0.001** & 0.653 & 0.507 & 0.577 & 0.551 \(\pm\) 0.001 \\ CL-Dice & 0.269 & 0.418 \(\pm\) 0.001 & 0.623 & 0.475 & 0.568 & 0.532 \(\pm\) 0.001 \\ SKIL-Dice & 0.266 & 0.410 \(\pm\) 0.001 & **0.798** & **0.558** & **0.718** & **0.651** \(\pm\) **0.001** \\ \hline \hline \end{tabular} \end{table} TABLE V: Different metrics on the **vessel datasets** (aggregation of DRIVE, HRF, STARE, CHASE-DB1) _(averaged over 10 runs)_ when training for 200 epochs with 3 losses: Dice, CL-Dice and SKIL _(This Work)_. Standard error is reported. The experiments were run on 4 types of deformations. Combined deformation is a combination of shift, random width, and branch crop with a probability of 0.75 for each deformation. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{**Classical**} & \multicolumn{3}{c}{**LineAcc**} \\ \cline{2-7} & IOU & Dice & Pos. & Width & Length & Combined \\ \hline \multicolumn{7}{l}{_Shifted labels_} \\ Dice & 81.4\% & 87.3 \(\pm\) 0.7\% & 65.0\% & 77.7\% & 78.4\% & 71.8 \(\pm\) 0.3\% \\ CL-Dice & **81.9**\% & **87.6 \(\pm\) 0.4\%** & 56.4\% & 75.5\% & 73.2\% & 65.9 \(\pm\) 0.3\% \\ SKIL-Dice & **81.7**\% & 87.4 \(\pm\) 0.7\% & **91.25**\% & **82.3\%** & **93.8\%** & **91.0 \(\pm\) 0.2\%** \\ _Random width_ & & & & & \\ \hline Dice & **104.5**\% & **104.8 \(\pm\) 0.2\%** & **104.3**\% & 91.0\% & **105.4\%** & **103.2 \(\pm\) 0.1\%** \\ CL-Dice & 101.5\% & 120.0 \(\pm\) 0.1\% & 104.2\% & **92.6**\% & 104.7\% & 126.8 \(\pm\) 0.1\% \\ SKIL-Dice & 104.3\% & 104.4 \(\pm\) 1.0\% & 101.0\% & 89.9\% & 105.0\% & 101.2 \(\pm\) 0.1\% \\ _Cropped branches_ & & & & & \\ \hline Dice & 101.1\% & 101.8 \(\pm\) 0.1\% & 91.4\% & 92.1\% & **93.2\%** & 93.2 \(\pm\) 0.2\% \\ CL-Dice & 101.5\% & 101.7 \(\pm\) 0.1\% & 90.1\% & **94.0\%** & 92.1\% & 92.4 \(\pm\) 0.1\% \\ SKIL-Dice & **102.7**\% & **103.1 \(\pm\) 0.1\%** & **96.6\%** & 93.0\% & 92.4\% & **95.9 \(\pm\) 0.1\%** \\ _Combined deformations_ & & & & & \\ \hline Dice & 102.7\% & 104.8 \(\pm\) 0.2\% & 87.5\% & 80.3\% & 89.7\% & 89.2 \(\pm\) 0.2\% \\ CL-Dice & 99.3\% & 102.0 \(\pm\) 0.2\% & 83.6\% & 81.3\% & 86.5\% & 86.1 \(\pm\) 0.2\% \\ SKIL-Dice & **103.5**\% & **105.4 \(\pm\) 0.2\%** & **97.0\%** & **81.5\%** & **96.2\%** & **95.9 \(\pm\) 0.1\%** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Relative performance of Table V.
For a given method and metric, the score trained on the deformed data is divided by the score trained on the clean data. Fig. 9: Deformed annotations on the DRIVE dataset. The first row shows the original annotation; following rows show deformation examples (ordered by column). The last column is a combination of the shift, random width, and branch crop deformations, with a probability of 0.5 for each deformation. Examples of visual changes are marked by red arrows. _(The last column is unmarked as the changes are visible enough.)_
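For readers who want to reproduce the label deformations of Section V-D1, here is a NumPy/SciPy sketch of the random width change (Eq. 10). It is our reading of the text: a generic random field `noise` stands in for the Perlin noise \(\mathcal{N}^{P}\), and the diamond footprint implements a \(3\times 3\) pooling with the corners removed:

```python
import numpy as np
from scipy.ndimage import grey_dilation, distance_transform_cdt
from skimage.morphology import skeletonize

# 3x3 footprint with the corners removed (diamond / 4-neighbourhood).
DIAMOND = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)

def decrease_dilate(img):
    # Iterated max-pool that loses 1 per ring, so a pixel of value x grows
    # into a diamond whose size is governed by x; pixels that are already
    # positive are kept as they are, which guarantees termination.
    out = img.astype(float)
    while True:
        pooled = grey_dilation(out, footprint=DIAMOND)
        grown = np.where(out > 0, out, np.maximum(pooled - 1.0, 0.0))
        if np.array_equal(grown, out):
            return out
        out = grown

def random_width_change(label, noise):
    # Eq. (10): D_width(I) = (decrease_dilate(D_I^0 * S_I * N^P)) > 0.
    mask = label.astype(bool)
    skel = skeletonize(mask)
    d0 = distance_transform_cdt(mask, metric='taxicab')  # D_I^0
    return decrease_dilate(d0 * skel * noise) > 0

# Example with a plain uniform field in place of Perlin noise:
# noise = np.random.uniform(0.1, 2.0, label.shape)
```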
2309.14034
A unified worst case for classical simplex and policy iteration pivot rules
We construct a family of Markov decision processes for which the policy iteration algorithm needs an exponential number of improving switches with Dantzig's rule, with Bland's rule, and with the Largest Increase pivot rule. This immediately translates to a family of linear programs for which the simplex algorithm needs an exponential number of pivot steps with the same three pivot rules. Our results yield a unified construction that simultaneously reproduces well-known lower bounds for these classical pivot rules, and we are able to infer that any (deterministic or randomized) combination of them cannot avoid an exponential worst-case behavior. Regarding the policy iteration algorithm, pivot rules typically switch multiple edges simultaneously, and our lower bound for Dantzig's rule and the Largest Increase rule, which perform only single switches, seems novel. Regarding the simplex algorithm, the individual lower bounds were previously obtained separately via deformed hypercube constructions. In contrast to previous bounds for the simplex algorithm via Markov decision processes, our rigorous analysis is reasonably concise.
Yann Disser, Nils Mosis
2023-09-25T11:01:20Z
http://arxiv.org/abs/2309.14034v1
# A unified worst case for classical simplex and policy iteration pivot rules

###### Abstract

We construct a family of Markov decision processes for which the policy iteration algorithm needs an exponential number of improving switches with Dantzig's rule, with Bland's rule, and with the Largest Increase pivot rule. This immediately translates to a family of linear programs for which the simplex algorithm needs an exponential number of pivot steps with the same three pivot rules. Our results yield a unified construction that simultaneously reproduces well-known lower bounds for these classical pivot rules, and we are able to infer that any (deterministic or randomized) combination of them cannot avoid an exponential worst-case behavior. Regarding the policy iteration algorithm, pivot rules typically switch multiple edges simultaneously, and our lower bound for Dantzig's rule and the Largest Increase rule, which perform only single switches, seems novel. Regarding the simplex algorithm, the individual lower bounds were previously obtained separately via deformed hypercube constructions. In contrast to previous bounds for the simplex algorithm via Markov decision processes, our rigorous analysis is reasonably concise.

Bland's pivot rule, Dantzig's pivot rule, Largest Increase pivot rule, Markov decision process, policy iteration, simplex algorithm

## 1 Introduction

Since the simplex algorithm for linear programming was proposed by Dantzig in 1951 [12], it has been a central question in discrete optimization whether it admits a polynomial time pivot rule. A positive answer to this question would yield an efficient combinatorial algorithm for solving linear programs, and thus resolve an open problem on Smale's list of mathematical problems for the 21st century [41]. It would also resolve the polynomial Hirsch conjecture [11], which states that every two vertices of every polyhedron with \(n\) facets are connected via a path of \(\mathcal{O}(\mathrm{poly}(n))\) edges. At this point, the best known pivot rules are randomized and achieve subexponential running times in expectation [21, 27, 31, 36]. For the most natural, memoryless and deterministic, pivot rules, exponential worst-case examples based on distorted hypercubes were constructed early on [4, 25, 30, 35, 38]. Amenta and Ziegler [3] introduced the notion of deformed products to unify several of these constructions. However, while this unification defines a class of polytopes that generalizes distorted hypercubes, it does not yield a unified exponential worst-case construction to exclude all pivot rules based on these deformed products, and neither does it yield new lower bounds for additional pivot rules. Randomized and history-based pivot rules resisted similar approaches, and it was a major breakthrough in 2011 when Friedmann et al. were able to prove the first subexponential lower bound for several randomized pivot rules [20, 21, 26]. They introduced a new technique based on a connection [39] between Howard's policy iteration algorithm [28] for Markov decision processes (MDPs) and the simplex algorithm for linear programs (LPs). The same technique was later used to prove exponential lower bounds for history-based pivot rules that had been candidates for polynomial time rules for a long time [5, 15]. While the approach via MDPs has proven powerful, the resulting analyses are often very technical (the full version of [15] with all details of the proof has 197 pages).
In this paper, we apply the MDP-based technique to classical (memoryless and deterministic) pivot rules and obtain a unified construction that excludes several pivot rules at the same time, and any combination of them, while being relatively simple.

**Our results.** We give a unified worst-case construction for the policy iteration algorithm for MDPs that simultaneously applies to three of the most classical pivot rules. The rigorous analysis of the resulting MDPs is reasonably concise. We note that the exponential lower bounds for Dantzig's rule and the Largest Increase rule seem novel for the considered version of the policy iteration algorithm, while the result for Bland's rule is known [37].

There is a family \((\mathcal{D}_{n})_{n\in\mathbb{N}}\) of Markov decision processes \(\mathcal{D}_{n}\) of size \(\mathcal{O}(n)\) such that policy iteration performs \(\Omega(2^{n})\) improving switches with Dantzig's rule, Bland's rule, and the Largest Increase pivot rule.

In fact, all three pivot rules apply the same set of improving switches, with only slight differences in the order in which they get applied. Because of this, the result still holds if we allow changing pivot rules during the course of the algorithm.

For any (deterministic or randomized) combination of Dantzig's, Bland's, or the Largest Increase rule, the policy iteration algorithm has an exponential running time.

A well-known connection between policy iteration and the simplex method allows us to immediately translate our result to the simplex algorithm with the same pivot rules. In particular, we obtain an exponential lower bound construction that holds even if, in every step, the entering variable is selected independently according to Dantzig's rule, Bland's rule, or the Largest Increase pivot rule, i.e., even if we change pivot rules during the course of the algorithm. In other words, we obtain a lower bound for a family of pivot rules that results from combining these three rules.

There is a family \((\mathcal{L}_{n})_{n\in\mathbb{N}}\) of linear programs \(\mathcal{L}_{n}\) of size \(\mathcal{O}(n)\) such that the simplex algorithm performs \(\Omega(2^{n})\) pivot operations for any (deterministic or randomized) combination of Dantzig's, Bland's, or the Largest Increase pivot rule.

**Related work.** Policy iteration for MDPs has been studied extensively for a variety of pivot rules. In its original version [28], the algorithm applies improving switches to the current policy in all states simultaneously in every step. Fearnley [18] showed an exponential lower bound for a greedy pivot rule that selects the best improvement in every switchable state. In this paper, we focus on pivot rules that only apply a single switch in each iteration. Most of the MDP constructions for randomized or history-based pivot rules [5, 15, 21, 26] consider this case, and Melekopoglou and Condon [37] gave exponential lower bounds for several such deterministic pivot rules. We emphasize that their constructions already include an exponential lower bound for Bland's rule [8]. Since policy iteration is traditionally considered with simultaneous switches, to the best of our knowledge, no exponential lower bounds are known for Dantzig's rule [12] and the Largest Increase rule [11] in the setting of single switches. There is a strong connection between policy iteration and the simplex algorithm, which, under certain conditions (see below), yields that worst-case results for policy iteration carry over to the simplex method [39].
This connection was used to establish subexponential lower bounds for randomized pivot rules, namely Randomized Bland [26] and Random-Edge, RaisingTheBar and Random-Facet [21]. It also led to exponential lower bounds for history-based rules, namely Cunningham's rule [5] and Zadeh's rule [15]. Conversely, lower bounds for the simplex algorithm with classical pivot rules were obtained via deformed hypercubes [3] and do not transfer to MDPs. Such results include lower bounds for Dantzig's rule [35], the Largest Increase rule [30], Bland's rule [4], the Steepest Edge rule [25], and the Shadow Vertex rule [38]. We provide an alternate lower bound construction for the first three of these rules via a family of MDPs. As far as we can tell, as a by-product, this yields the first exponential lower bound for policy iteration with Dantzig's rule and the Largest Increase rule.

While it remains open whether LPs can be solved in strongly polynomial time, there are several, both deterministic [32, 34] and randomized [7, 17, 33], algorithms that solve LPs in weakly polynomial time. A (strongly) polynomial time pivot rule for the simplex algorithm would immediately yield a strongly polynomial algorithm. There have been different attempts to deal with the worst-case behavior of the simplex method from a theoretical perspective. For example, the excessive running time was justified by showing that the simplex algorithm with Dantzig's original pivot rule is _NP-mighty_ [16], which means that it can be used to solve NP-hard problems. This result was subsequently strengthened by establishing that deciding which solution is computed and whether a given basis will occur is PSPACE-complete [2, 19]. On the positive side, there are different results explaining the efficiency of the simplex method in practice, such as average-case analyses [1, 9, 44]. Spielman and Teng [42] introduced smoothed analysis as a way of bridging the gap between average-case and worst-case analysis. They showed that the simplex algorithm with the shadow vertex pivot rule [24] has a polynomial smoothed complexity, and their results were further improved later [10, 14, 29, 45]. Another approach to derive stronger lower bounds on pivot rules is to consider combinatorial abstractions of LPs, such as Unique Sink Orientations (USOs) [23]. There is still a large gap between the best known deterministic algorithm for finding the unique sink, which is exponential [43], and the almost quadratic lower bound [40]. Considering randomized rules, the Random-Facet pivot rule, which is the best known simplex rule [27], is also the best known pivot rule for acyclic USOs [22], achieving a subexponential running time in both settings.

## 2 Preliminaries

### Markov Decision Processes

A _Markov decision process_ is an infinite duration one-player game on a finite directed graph \(G=(V_{A},V_{R},E_{A},E_{R},r,p)\). The vertex set \(V=V_{A}\cup V_{R}\) of the graph is divided into _agent vertices_ \(V_{A}\) and _randomization vertices_ \(V_{R}\). Every agent edge \(e\in E_{A}\subseteq V_{A}\times V\) is assigned a _reward_ \(r(e)\in\mathbb{R}\), while every randomization edge \(\hat{e}\in E_{R}\subseteq V_{R}\times V\) is assigned a _transition probability_ \(p(\hat{e})\in[0,1]\). Outgoing transition probabilities sum to one in every randomization vertex. A process starts in an arbitrary starting vertex.
If this is an agent vertex, the agent moves along one of the outgoing edges of this vertex (we assume that all vertices have at least one outgoing edge) and collects the corresponding reward. Otherwise, it gets randomly moved along one of the outgoing edges according to the transition probabilities. The process continues in this manner ad infinitum. An agent vertex \(s\in V_{A}\) whose only outgoing edge is a self-loop with reward zero is called _sink_ of \(G\) if it is reachable from all vertices. A _policy_ for \(G\) is a function \(\pi\colon V_{A}\to V\) with \((v,\pi(v))\in E_{A}\) for all \(v\in V_{A}\), determining the behavior of the process in agent vertices. A policy \(\pi\) for \(G\) is called _weak unichain_ if \(G\) has a sink \(s\) such that \(\pi\) reaches \(s\) with a probability of one from every starting vertex.

The _value_ of a vertex \(v\) w.r.t. a policy \(\pi\) for a Markov decision process \(G\) is given by the expected total reward that the agent collects with policy \(\pi\) when the process starts in \(v\). More formally, the value function \(\operatorname{Val}_{\pi,G}\colon V\to\mathbb{R}\) is defined by the following system of Bellman [6] equations
\[\operatorname{Val}_{\pi,G}(u)=\left\{\begin{array}{ll}r((u,\pi(u)))+\operatorname{Val}_{\pi,G}(\pi(u)),&\text{if $u\in V_{A}$},\\ \sum\limits_{v\in\Gamma^{+}(u)}p((u,v))\operatorname{Val}_{\pi,G}(v),&\text{if $u\in V_{R}$},\end{array}\right.\]
together with \(\operatorname{Val}_{\pi,G}(s)=0\) if \(G\) has a sink \(s\). The policy \(\pi\) is optimal (w.r.t. the _expected total reward criterion_) if \(\operatorname{Val}_{\pi,G}(v)\geq\operatorname{Val}_{\tilde{\pi},G}(v)\) for all \(v\in V_{A}\) and all policies \(\tilde{\pi}\) for \(G\). Whenever the underlying process \(G\) is clear from the context, we write \(\operatorname{Val}_{\pi}\) instead of \(\operatorname{Val}_{\pi,G}\).

We say that the agent edge \((u,v)\in E_{A}\) is an _improving switch_ for the policy \(\pi\) for process \(G\) if it satisfies \(z_{\pi,G}(u,v)\coloneqq r((u,v))+\operatorname{Val}_{\pi,G}(v)-\operatorname{Val}_{\pi,G}(u)>0\), where \(z_{\pi,G}(u,v)\) are the _reduced costs_ of \((u,v)\) with respect to \(\pi\). Again, we usually write \(z_{\pi}\) instead of \(z_{\pi,G}\). If we _apply_ an improving switch \(s=(u,v)\in E_{A}\) to a policy \(\pi\), we obtain a new policy \(\pi^{s}\) which is given by \(\pi^{s}(u)=v\) and \(\pi^{s}(w)=\pi(w)\) for all \(w\in V_{A}\setminus\{u\}\). The improving switch \(s\) increases the value of \(u\) without decreasing the value of any other vertex.

### Policy Iteration for Markov Decision Processes

Howard's [28] policy iteration algorithm receives as input a finite Markov decision process \(G\) and a weak unichain policy \(\pi\) for \(G\). It then iteratively applies a set of improving switches to the current policy until there are none left. In the remainder of this paper, we consider a version of this algorithm that applies a single switch in every iteration, cf. Algorithm 1. Due to monotonicity of the vertex values, this procedure visits every policy at most once. As there are only finitely many policies, the algorithm thus terminates after a finite number of iterations for every initial policy.
``` input: a weak unichain policy \(\pi\) for a Markov decision process \(G\) while\(\pi\) admits an improving switch: \(\bar{s}\leftarrow\) improving switch for \(\pi\) \(\pi\leftarrow\pi^{\bar{s}}\) return\(\pi\) ``` **Algorithm 1** PolicyIteration(\(G,\pi\)) We know that the policy iteration algorithm returns an optimal policy if there is an optimal policy which is weak unichain. [[20]] Let \(\pi\) be a weak unichain policy for a Markov decision process \(G\). If \(G\) admits a weak unichain, optimal policy, then PolicyIteration(\(G,\pi\)) only visits weak unichain policies and returns an optimal policy w.r.t. the expected total reward criterion. In this paper, we consider the following three _pivot rules_, i.e., rules that determine the choice of \(\textsc{PolicyIteration}(G,\pi)\) in each iteration: * _Bland's pivot rule_ assigns a unique number to every agent edge of \(G\). Then, in every iteration, it chooses the improving switch with the smallest number. * _Dantzig's pivot rule_ chooses an improving switch \(\bar{s}\) maximizing the reduced costs \(z_{\pi}(\bar{s})\). * The _Largest Increase rule_ chooses an improving switch \(\bar{s}\) maximizing \(\sum_{v\in V_{A}}\operatorname{Val}_{\pi^{s}}(v)\). ## Appendix A Connection between Policy Iteration and the Simplex Method Given a Markov decision process, we can formulate a linear program such that the application of the simplex method is in some sense equivalent to the application of policy iteration. We refer to [20] for more details and the derivation of the following result. [[20]] Let \(\pi\) be a weak unichain policy for a Markov decision process \(G\). Assume that there is an optimal, weak unichain policy for \(G\) and that \(\textsc{PolicyIteration}(G,\pi)\) with a given pivot rule takes \(N\) iterations. Then, there is an LP of linear size such that the simplex algorithm with the same pivot rule takes \(N\) iterations. In terms of the simplex method, Bland's pivot rule chooses the entering variable of smallest index [8], Dantzig's rule chooses an entering variable maximizing the reduced costs [12], and the Largest Increase rule greedily maximizes the objective function value. The linear program in the previous theorem has one variable for every agent edge of the Markov decision process such that the reduced costs of a given edge equal the reduced costs of the corresponding variable, and the objective function equals the sum over all vertex values as given in the Largest Increase rule for policy iteration [5, 15, 26]. Therefore, the choices of each pivot rule in the two settings are consistent. Additionally, we want to mention that the linear program from Theorem 3 is always non-degenerate. Therefore, we cannot reduce the number of required iterations on these programs by combining a given pivot rule with the Lexicographic pivot rule [13]. ### Notation Let \(n\in\mathbb{N}\) be fixed. We write \([n]=\{1,2,\ldots,n\}\) and \([n]_{0}=\{0,1,\ldots,n\}\). Then, the set of all numbers that can be represented with \(n\) bits is \([2^{n}-1]_{0}\). For every \(x\in[2^{n}-1]_{0}\) and \(i\in[n]\), let \(x_{i}\) denote the \(i\)-th bit of \(x\), i.e., \(x=\sum_{i\in[n]}x_{i}2^{i-1}\), and let \(L(i,x)=\max\{j\in[i-2]\mid x_{j}=1\text{ or }j=1\}\) for \(i\geq 3\). Finally, for \(x\in[2^{n}-1]\), we denote the _least significant set bit_ of \(x\) by \(\ell_{1}(x)=\min\{i\in[n]:x_{i}=1\}\), and the _most significant set bit_ of \(x\) by \(\operatorname{m}_{1}(x)=\max\{i\in[n]:x_{i}=1\}\). 
Let \(G=(V_{A},V_{R},E_{A},E_{R},r,p)\) be a Markov decision process. For \(v\in V_{A}\cup V_{R}\), we write \(\Gamma^{+}_{G}(v)=\{w\in V_{A}\cup V_{R}\colon(v,w)\in E_{A}\cup E_{R}\}\). If the underlying process is clear from the context, we just write \(\Gamma^{+}(v)\). ## Appendix B An Exponential Lower Bound for Bland's pivot rule In this section, we consider a family \((\mathcal{B}_{n}=(V_{\mathcal{B}_{n}},E_{\mathcal{B}_{n}},r_{\mathcal{B}_{n} }))_{n\in\mathbb{N}}\) of Markov decision processes, which do not involve any randomization. Consider Figure 0(a) for a drawing of \(\mathcal{B}_{4}\). Every process \(\mathcal{B}_{n}\) consists of \(n\) separate levels, together with a global _transportation vertex_\(t\), a sink \(s\), and a _dummy vertex_\(d\). Each level \(\ell\in[n]\) comprises two vertices, called \(a_{\ell}\) and \(b_{\ell}\). For convenience, we sometimes denote the sink by \(a_{n+1}\) and the dummy vertex by \(b_{n+1}\). In vertex \(a_{\ell}\), the agent can either _enter_ level \(\ell\) by going to vertex \(b_{\ell}\), _skip_ this level by going to vertex \(a_{\ell+1}\), or _board_ the transportation vertex by going to \(t\). From the transportation vertex, the agent _travels_ to one of the vertices \(a_{i}\) with \(i\in[n]\). In \(b_{\ell}\), the agent can decide between _leaving_ the set \(\bigcup_{i\in[n+1]}\{b_{i}\}\) by going to \(a_{\ell+1}\) and _staying_ in this set by going to \(b_{\ell+1}\). We will simply say that the agent leaves level \(\ell\) or stays in level \(\ell\), respectively. Finally, when the agent reaches the dummy vertex \(d\), it must go to the sink, and the only outgoing edge of the sink \(s\) is the self-loop \((s,s)\). The function \(r_{\mathcal{B}_{n}}\) grants the agent a reward of \(2^{\ell}\) for entering level \(\ell\), a reward of \(0.75\) for staying in level \(\ell\), and a (negative) reward of \((-2^{\ell}+1.25)\) for boarding \(t\) from \(a_{\ell}\); all other rewards are zero. The _Bland numbering_\(\mathcal{N}_{\mathcal{B}_{n}}\colon E_{\mathcal{B}_{n}}\to|E_{\mathcal{B}_{n}}|\) of the edges of \(\mathcal{B}_{n}\) is defined in Table 1, together with \(\mathcal{N}_{\mathcal{B}_{n}}((d,s))=6n+1\) and \(\mathcal{N}_{\mathcal{B}_{n}}((s,s))=6n+2(=|E_{\mathcal{B}_{n}}|)\). This table also contains alternative names for the edges, which match the description above and which we will use to simplify the exposition. Consider Figure 0(b) for the Bland numbering of \(\mathcal{B}_{4}\). In the following, consider \(\mathcal{B}_{n}\) for some arbitrary but fixed \(n\in\mathbb{N}\). The aim of this section is to show that PolicyIteration with Bland's pivot rule, cf. Algorithm 2, applies \(\Omega(2^{n})\) improving switches when given \(\mathcal{B}_{n}\), a suitable initial policy, and \(\mathcal{N}_{\mathcal{B}_{n}}\) as input. ``` input: Markov decision process \(G\), weak unichain policy \(\pi\), edge numbering \(\mathcal{N}\) while\(\pi\) admits an improving switch: \(\bar{s}\leftarrow\) the improving switch \(s\) for \(\pi\) that minimizes \(\mathcal{N}(s)\) \(\pi\leftarrow\pi^{\bar{s}}\) return\(\pi\) ``` **Algorithm 2**\(\textsc{Bland}(G,\pi,\mathcal{N})\) More precisely, we will see that the algorithm visits all of the following policies. **Definition 6**.: The policy \(\pi_{0}\) for \(\mathcal{B}_{n}\) such that travel(1) is active, and skip(\(i\)) and leave(\(i\)) are active for all \(i\in[n]\) is the _canonical policy_ for \(0\). 
For \(x\in[2^{n}-1]\), the policy \(\pi_{x}\) for \(\mathcal{B}_{n}\) is the _canonical policy_ for \(x\) if it satisfies the following conditions: Figure 1: Two drawings of the Markov decision process \(\mathcal{B}_{4}\). In (a), edge labels denote rewards and unlabeled edges have a reward of zero. In (b), edge labels define the Bland numbering \(\mathcal{N}_{\mathcal{B}_{4}}\). * The policy travels from \(t\) to the least significant set bit, i.e., travel(\(\ell_{1}(x)\)) is active. * It collects no reward above the most significant set bit, i.e., leave(m\({}_{1}(x)\)), skip(\(i\)), and leave(\(i\)) are active for all m\({}_{1}(x)<i\leq n\). * Every set bit \(x_{i}=1\) determines the behavior of the policy down to the next, less significant set bit or, if \(i=\ell_{1}(x)\), down to the first bit: * \(\text{\rm{enter}}(i)\) is active. * \(\text{\rm{b}}\text{\rm{>}}\) if \(i=2\), then leave(1) is active. If additionally \(x_{1}=0\), then skip(1) is active. * \(\text{\rm{c}}\text{\rm{>}}\) if \(i\geq 3\) and \(x_{i-1}=1\), then leave(\(i-1\)) is active. * \(\text{\rm{d}}\text{\rm{>}}\) if \(i\geq 3\) and \(x_{i-1}=0\): * \(\text{\rm{stay}}(i-1)\), skip(\(i-1\)), and leave(\(i-2\)) are active. * \(\text{\rm{<}}\text{\rm{d}}_{2}\text{\rm{>}}\) if \(L(i,x)<i-2\), then for all \(j\in\{L(i,x)+1,\ldots,i-2\}\), the edges board(\(j\)) and stay(\(j-1\)) are active; if \(L(i,x)=1\) and \(x_{1}=0\), then board(1) is active. Consider Figure 1(a) and Figure 1(d) for examples of canonical policies. Note that canonical policies exist and are unique as the definition contains precisely one condition on every agent vertex with more than one outgoing edge. Further, the \(2^{n}\) canonical policies are pairwise different as enter(\(i\)) is active in \(\pi_{x}\) if and only if \(x_{i}=1\). We will now analyze the behavior of \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\), i.e., we choose the canonical policy for zero as our initial policy. Since this policy visits every vertex except the sink only once, it is weak unichain. The canonical policy \(\pi_{0}\) is a weak unichain policy for \(\mathcal{B}_{n}\). Thus, according to Theorem 3.2, the following result will allow us to transfer our results for the policy iteration algorithm to the simplex method. Let the policy \(\pi_{*}\) for \(\mathcal{B}_{n}\) be determined as follows: \(\text{\rm{stay}}(n)\) and \(\text{\rm{travel}}(1)\) are active, \(\text{\rm{enter}}(i)\) is active for all \(i\in[n]\), and \(\text{\rm{leave}}(j)\) is active for all \(j\in[n-1]\). Then, \(\pi_{*}\) is weak unichain and optimal for \(\mathcal{B}_{n}\). Proof.: Since \(\pi_{*}\) visits every vertex, besides the sink, only once, it is weak unichain. For optimality, note that \(t\) travels to \(a_{1}\) and that, when starting in a vertex \(a_{\ell}\), policy \(\pi_{*}\) enters level \(\ell\) and all levels above and collects the reward of stay(\(n\)). The policy is thus clearly optimal among the set of policies that do not use boarding edges. Further, we have \(r_{\mathcal{B}_{n}}(\text{\rm{board}}(\ell))=-2^{\ell}+1.25=-(\sum_{i=1}^{\ell -1}2^{i}+0.75)\). That is, the costs of board(\(\ell\)) equal the maximum reward that can be collected in the first \(\ell-1\) levels. Thus, we cannot increase vertex values by using boarding edges, which yields that \(\pi_{*}\) is optimal. The following technical result will be helpful in the upcoming proofs. Let \(x\in[2^{n}-1]_{0}\) and \(i\in[n]\). Then, \(\text{\rm{travel}}(i)\) is not improving for \(\pi_{x}\). 
Proof.: All vertex values with respect to \(\pi_{0}\) are zero, and \(r_{\mathcal{B}_{n}}(\text{\rm{travel}}(i))=0\). Thus, the claim holds for \(x=0\), so we assume \(x\in[2^{n}-1]\) in the following. Let the vertices \(a_{k}\) and \(a_{\ell}\) either correspond to successive set bits, i.e., \(x_{k}=x_{\ell}=1\) and \(x_{j}=0\) for all \(k<j<\ell\), or let \(k=\text{m}_{1}(x)\) and \(\ell=n+1\). Either way, Definition 3.2 implies that \(\pi_{x}\) includes a path from \(a_{k}\) to \(a_{\ell}\), which does not contain any boarding edge. Hence, we have \(\text{\rm{Val}}_{\pi_{x}}(a_{\alpha})\geq\text{\rm{Val}}_{\pi_{x}}(a_{\beta})\geq 0\) for all set bits \(x_{\alpha}=x_{\beta}=1\) with \(\alpha\leq\beta\). Since the transportation vertex chooses the least significant set bit in \(\pi_{x}\), this yields that travel(\(i\)) is not improving if \(x_{i}=1\). Further, Definition 6 yields that \(x_{j}=1\) if and only if \(\text{enter}(j)\) is active in \(\pi_{x}\). Thus, when starting in some vertex \(a_{i}\) with \(x_{i}=0\), policy \(\pi_{x}\) either boards \(t\) from \(a_{i}\) or it skips levels until reaching a node that boards \(t\), a level corresponding to a set bit, or the sink. In all four cases, \(\text{travel}(i)\) is not improving. This completes the proof. We will show in two steps that, when using the initial policy \(\pi_{0}\), Bland visits all of the other canonical policies. Firstly, given the canonical policy for an arbitrary even integer \(x\), we see that the algorithm applies improving switches until reaching the canonical policy \(\pi_{x+1}\). Let \(x\in[2^{n}-2]_{0}\) be even. Then, \(\textsc{\textsc{\textsc{\textsc{\textsc{\textsc{\textsc{\textsc{\textsc{\textsc{\textsc{ \textsc{\textsc{\textsc{\textsc{\textsctextsctextsctextsctextsctextsctextsctextsctextsctextsc\textsc{\textsctextsctextsctextsctextsc \cdot **Lemma 12**: _Let \(\pi_{x}\) be the canonical policy for some odd \(x\in[2^{n}-3]\). Applying the canonical phases with respect to \(x\) to \(\pi_{x}\) results in the canonical policy \(\pi_{x+1}\)._ Let \(x\in[2^{n}-3]\) be odd, and let \(\pi_{x}\) be the canonical policy for \(x\). For convenience, we write \(\ell\coloneqq\ell_{0}(x)>1\). Let \(\pi\) denote the policy resulting from applying the canonical phases w.r.t. \(x\) to \(\pi_{x}\). We need to show that \(\pi\) satisfies the properties from Definition 3 with respect to \(x+1\). Due to the third canonical phase, \(\mathrm{travel}(\ell)\) is active in \(\pi\). Since \(\ell_{1}(x+1)=\ell\), this implies that \(\pi\) satisfies condition \(<\)\(1\)\(>\) w.r.t \(x+1\). We have \(\mathrm{m}_{1}(x+1)\geq\mathrm{m}_{1}(x)\), and the canonical phases solely include switches in the first \(\ell\) levels. Therefore, if \(\ell<\mathrm{m}_{1}(x)\), level \(\mathrm{m}_{1}(x+1)\) and the levels above remain unchanged. If \(\ell\geq\mathrm{m}_{1}(x)\), we know that \(\ell=\mathrm{m}_{1}(x)+1=\mathrm{m}_{1}(x+1)\) and \(x_{\ell+1}=0\). Hence, the only switches in level \(\mathrm{m}_{1}(x+1)\) or above are \(\mathrm{enter}(\ell)\) and \(\mathrm{travel}(\ell)\). In both cases, we obtain that, as \(\pi_{x}\) satisfies property \(<\)\(2\)\(>\) w.r.t. \(x\), policy \(\pi\) satisfies property \(<\)\(2\)\(>\) w.r.t. \(x+1\). Note that the bit configurations of \(x+1\) and \(x\) differ precisely in the first \(\ell>1\) bits, where we have \((x+1)_{\ell}=1\) and \((x+1)_{j}=0\) for all \(j\in[\ell-1]\). 
Due to the third canonical phase, \(\mathrm{enter}(\ell)\) is active in \(\pi\); and due to the fourth and fifth phases, \(\mathrm{enter}(i)\) is inactive for all \(i\in[\ell-1]\). Further, the canonical phases do not contain switches in levels above level \(\ell\). Since \(\pi_{x}\) satisfies \(<\)\(\mathrm{a}\)\(>\) w.r.t. \(x\), we can conclude that policy \(\pi\) satisfies \(<\)\(\mathrm{a}\)\(>\) w.r.t. \(x+1\). If \((x+1)_{2}=1\), then \(x\) being odd yields \(\ell=2\). Thus, the fifth and seventh phases ensure that \(\mathrm{skip}(1)\) and \(\mathrm{leave}(1)\) are active in \(\pi\). This yields that \(\pi\) has property \(<\)\(\mathrm{b}\)\(>\). Now consider some \(i\geq 3\) with \((x+1)_{i}=(x+1)_{i-1}=1\). Since \((x+1)_{j}=0\) for all \(j\in[\ell-1]\), this yields \(i-1\geq\ell\). Figure 2: An example that illustrates how the canonical phases transform one canonical policy into the next one. Active edges are depicted in a bold blue color, while inactive edges are slightly transparent. Note that \(\pi_{7}\) enters the first three levels, which correspond to the set bits in the binary representation of \(7\); analogously, \(\pi_{8}\) only enters the fourth level. If \(i-1=\ell\), then \(x_{\ell+1}=(x+1)_{\ell+1}=(x+1)_{i}=1\), so the first phase ensures that \(\mathrm{leave}(i-1)\) is active in \(\pi\). Otherwise, none of the phases include a switch in level \(i-1\) or above, and we have \(x_{i}=(x+1)_{i}=1\) as well as \(x_{i-1}=(x+1)_{i-1}=1\). Thus, property \(<\)c\(>\) w.r.t. \(x\) yields that \(\mathrm{leave}(i-1)\) is active in \(\pi_{x}\). Therefore, \(\mathrm{leave}(i-1)\) is still active in \(\pi\). We conclude that \(\pi\) satisfies property \(<\)c\(>\). Now consider some \(i\geq 3\) with \((x+1)_{i}=1\) and \((x+1)_{i-1}=0\). To show that \(<\)d\({}_{1}\)\(>\) and \(<\)d\({}_{2}\)\(>\) are satisfied, we consider the cases \(\ell<i\) and \(\ell\geq i\). First, assume that \(\ell<i\). Then, \((x+1)_{\ell}=1\) yields that \(\ell\neq i-1\), so \(\ell\leq i-2\). Therefore, we have \(x_{i}=(x+1)_{i}=1\) and \(x_{i-1}=(x+1)_{i-1}=0\). As \(\pi_{x}\) satisfies property \(<\)d\({}_{1}\)\(>\) w.r.t. \(x\) and as the phases do not contain switches above level \(\ell\), we obtain that \(\mathrm{stay}(i-1)\) and \(\mathrm{skip}(i-1)\) are still active in \(\pi\). Further, the only switches in level \(\ell\) are \(\mathrm{leave}(\ell)\), \(\mathrm{enter}(\ell)\), and \(\mathrm{travel}(\ell)\). Therefore, \(\mathrm{leave}(i-2)\) is also still active in \(\pi\). Hence, in case \(\ell<i\), property \(<\)d\({}_{1}\)\(>\) is satisfied. Due to \((x+1)_{\ell}=1\) and \(\ell\leq i-2\), we have \(L(i,x+1)\geq\ell>1\). As condition \(<\)d\({}_{2}\)\(>\) w.r.t. \(x+1\) becomes trivial otherwise, assume \(L(i,x+1)<i-2\) in the following. Since the bit configurations of \(x\) and \(x+1\) do not differ from each other above bit \(\ell\) and \(L(i,x+1)\geq\ell\), we have \(L(i,x+1)\geq L(i,x)\). policy \(\pi_{x}\) satisfies \(<\)d\({}_{2}\)\(>\) w.r.t. \(x\) and the canonical phases do not apply switches above level \(\ell\). Therefore, under the condition that \(\mathrm{stay}(\ell)\) is active in \(\pi\) if \(L(i,x+1)=\ell\), we obtain that \(\pi\) satisfies \(<\)d\({}_{2}\)\(>\) w.r.t. \(x+1\). To verify this condition, assume \(L(i,x+1)=\ell\). Then, \(L(i,x)\leq\ell<i-2\) and property \(<\)d\({}_{2}\)\(>\) yields that \(\mathrm{stay}(\ell)\) is active in \(\pi_{x}\). 
The switch \(\mathrm{leave}(\ell)\) from the first phase is the only switch which can cause that \(\mathrm{stay}(\ell)\) is not active in \(\pi\) anymore. However, if \(x_{\ell+1}=1\), we also have \((x+1)_{\ell+1}=1\). This yields \(L(i,x+1)\geq\ell+1\), which contradicts \(L(i,x+1)=\ell\). Hence, \(\mathrm{stay}(\ell)\) is still active in \(\pi\), which verifies our condition. We conclude that, in case \(\ell<i\), policy \(\pi\) satisfies property \(<\)d\({}_{2}\)\(>\). Second, we assume \(\ell\geq i\). As \((x+1)_{i}=1\) and \((x+1)_{j}=0\) for all \(j\in[\ell-1]\), we then have \(\ell=i\) and \(L(i,x+1)=1\). Due to the fifth phase, \(\mathrm{skip}(\ell-1)\) is active in \(\pi\). Further, \(x_{\ell-1}=x_{\ell-2}=1\) and properties \(<\)b\(>\) and \(<\)c\(>\) yield that \(\mathrm{leave}(\ell-2)\) is active in \(\pi_{x}\). Since none of the phases includes the switch \(\mathrm{stay}(\ell-2)\), we can conclude that \(\mathrm{leave}(\ell-2)\) is still active in \(\pi\). For property \(<\)d\({}_{1}\)\(>\), it remains to show that \(\mathrm{stay}(\ell-1)\) is active in \(\pi\). We can assume that \(x_{\ell+1}=0\) and \(\ell\leq\mathrm{m}_{1}(x)\) as otherwise the second phase yields that \(\mathrm{stay}(\ell-1)\) is active in \(\pi\). Then, there is some \(j>\ell+1\) with \(L(j,x)=\ell-1<j-2\) and \(x_{j}=1\). Hence, property \(<\)d\({}_{2}\)\(>\) w.r.t. \(x\) yields that \(\mathrm{stay}(\ell-1)\) is active in \(\pi_{x}\). As we have \(\ell=i\geq 3\), none of the phases includes the switch \(\mathrm{leave}(\ell-1)\), so \(\mathrm{stay}(\ell-1)\) is still active in \(\pi\). Hence, policy \(\pi\) satisfies property \(<\)d\({}_{1}\)\(>\). We have \(\ell=i\geq 3\), so \(\mathrm{board}(1)\) is active in \(\pi\) due to the fourth phase. Since it holds that \(L(i,x+1)=1\), we obtain that \(\pi\) satisfies property \(<\)d\({}_{2}\)\(>\) w.r.t. \(x+1\) if \(i=3\). We thus assume \(\ell\geq 4\). Then, the fourth phase yields that \(\mathrm{board}(j)\) is active for all \(j\in[i-2]\), and the sixth phase yields that \(\mathrm{stay}(j)\) is active for all \(j\in[i-3]\). Therefore, policy \(\pi\) satisfies condition \(<\)d\({}_{2}\)\(>\) w.r.t. \(x+1\). This concludes the proof. Finally, we show that Bland actually applies the canonical phases when given the corresponding canonical policy. Let \(x\in[2^{n}-3]\) be odd. Then, \(\textsc{Bland}(\mathcal{B}_{n},\pi_{x},\mathcal{N}_{\mathcal{B}_{n}})\) visits \(\pi_{x+1}\). Proof.: Let \(x\in[2^{n}-3]\) be odd, let \(\pi_{x}\) be the canonical policy for \(x\), and write \(\ell:=\ell_{0}(x)\geq 2\). By Lemma 3, it suffices to show that Bland applies the canonical phases w.r.t. \(x\) to \(\pi_{x}\). By properties \(<\)a\(>\), \(<\)b\(>\), and \(<\)c\(>\) from Definition 3, the edges \(\mathrm{enter}(\ell-1)\), \(\mathrm{enter}(i)\) and \(\mathrm{leave}(i)\) are active in \(\pi_{x}\) for all \(i\in[\ell-2]\). Further, \(\mathrm{travel}(1)\) is active, so there are clearly no improving switches for \(\pi_{x}\) in the first \(\ell-2\) levels. By Lemma 9, there are also no improving switches for \(\pi_{x}\) in \(t\). The first two canonical phases depend on the structure of \(x\), so we consider all possible cases here. First, assume \(x_{\ell+1}=1\). Then, by properties \(<\)a\(>\) and \(<\)d\({}_{1}\)\(>\), enter\((\ell+1)\), stay\((\ell)\), skip\((\ell)\), and leave\((\ell-1)\) are active in \(\pi_{x}\), cf. Figure 2(a). Hence, there is no improving switch in level \(\ell-1\), and leave\((\ell)\) is the only improving switch in level \(\ell\). 
Since the Bland numbering \(\mathcal{N}_{\mathcal{B}_{n}}\) prefers low levels, the algorithm applies leave\((\ell)\) to \(\pi_{x}\). After this switch, the edges enter\((\ell)\) and stay\((\ell-1)\) are the only improving switches in the first \(\ell\) levels. Thus, stay\((\ell-1)\) gets applied next. Second, assume \(\ell>\mathrm{m}_{1}(x)\). This yields \(\ell=\mathrm{m}_{1}(x)+1\) and \(x_{\ell+1}=0\). Due to property \(<\)2\(>\), the edges leave\((\ell-1)\), skip\((i)\), and leave\((i)\) are active for all \(i\geq\ell\), cf. Figure 2(b). Hence, the only improving switch in the first \(\ell-1\) levels is stay\((\ell-1)\), which gets applied next. Lastly, assume \(x_{\ell+1}=0\) and \(\ell\leq\mathrm{m}_{1}(x)\). Then, there is some \(i>\ell+1\) with \(x_{i}=1\) and \(L(i,x)=\ell-1<i-2\). Hence, property \(<\)d\({}_{2}\)\(>\) yields that board\((\ell)\) and stay\((\ell-1)\) are active in \(\pi_{x}\), cf. Figure 2(c). Let \(\pi\) be the policy resulting from the application of the first and second canonical phases to \(\pi_{x}\). From the case distinction above, we can conclude that \(\pi\) includes the path \((t,a_{1},b_{1},a_{2},b_{2},\ldots,a_{\ell-1},b_{\ell-1},b_{\ell})\). Further, we either know that leave\((\ell)\) and skip\((\ell)\) are active or we know that board\((\ell)\) is active. In both cases, there are no improving switches in the first \(\ell-1\) levels and enter\((\ell)\) is improving. Hence, by definition of \(\mathcal{N}_{\mathcal{B}_{n}}\), enter\((\ell)\) gets applied next. Entering level \(\ell\) yields a higher reward than entering all of the first \(\ell-1\) levels, so the edge travel\((\ell)\) is improving now. Since the \(\ell-1\) travel edges with a smaller Bland number are not improving, the algorithm applies travel\((\ell)\) next. Note that, by Lemma 9, travel edges are not improving for \(\pi_{x}\), and since travel edges above level \(\ell\) have certainly not become improving during the application of the first three Figure 3: A collection of figures supporting the proof of Lemma 13, where \(\pi_{x}\) denotes the canonical policy for some odd \(x\in[2^{n}-3]\). Bold edges denote active edges in the respective policy; some of the remaining edges might be active as well. phases, these edges are still not improving for the current policy. Consider Figure 2(d) for the structure of the current policy on the first \(\ell\) levels. The first \(\ell-1\) levels contain the improving switches \(\operatorname{skip}(\ell-1)\), \(\operatorname{leave}(\ell-1)\), and \(\operatorname{board}(j)\) for \(j\in[\ell-1]\) as these edges allow vertices of lower levels to enter level \(\ell\). The Bland numbering \(\mathcal{N}_{\mathcal{B}_{n}}\) yields that Bland applies \(\operatorname{board}(1)\) if \(\ell\geq 3\). This switch does not create new improving switches. Following this argument, we obtain that the algorithm applies \(\operatorname{board}(j)\) for all \(j\in[\ell-2]\) in increasing order. Since \(\operatorname{skip}(\ell-1)\) is still improving after that and since it precedes \(\operatorname{board}(\ell-1)\) and \(\operatorname{leave}(\ell-1)\) in the Bland numbering, it gets applied next. Let \(\pi^{\prime}\) denote the current policy, i.e., the policy resulting from the application of the first five phases to \(\pi_{x}\). The structure of \(\pi^{\prime}\) on the first \(\ell\) levels is given in Figure 2(e). Note that the travel edges still have not become improving during the latest switches. 
If \(\ell=2\), then \(\operatorname{leave}(1)\) is the only improving switch for \(\pi^{\prime}\) in the first level. Hence, the algorithm applies this switch and the canonical phases are completed. They are also completed if \(\ell=3\), so assume \(\ell\geq 4\) in the following. Applying \(\operatorname{skip}(i)\) or \(\operatorname{enter}(i)\) in some level \(i\in[\ell-3]\) is not improving for \(\pi^{\prime}\) since the costs of \(\operatorname{travel}(i+1)\) are higher than the costs of \(\operatorname{travel}(i)\). Similarly, \(\operatorname{stay}(i)\) is not improving for \(i\in[\ell-4]\) as \(\operatorname{travel}(i+2)\) is more expensive than \(\operatorname{travel}(i+1)\). However, by switching to \(\operatorname{stay}(\ell-3)\), the agent avoids to use the edge \(\operatorname{travel}(\ell-2)\) and reaches \(a_{\ell}\) via \(\operatorname{leave}(\ell-2)\) and \(\operatorname{skip}(\ell-1)\) instead. Hence, \(\operatorname{stay}(\ell-3)\) is the only improving switch in the first \(\ell-3\) levels and gets applied next. This switch makes \(\operatorname{enter}(\ell-3)\) and \(\operatorname{stay}(\ell-4)\) improving, so the Bland numbering \(\mathcal{N}_{\mathcal{B}_{n}}\) yields that the algorithm applies \(\operatorname{stay}(\ell-4)\) next. By iterating this argument, we obtain that Bland applies the improving switch \(\operatorname{stay}(j)\) for all \(j\in[\ell-3]\) in decreasing order, which completes the canonical phases entirely. According to Lemma 10, Bland transforms every even canonical policy \(\pi_{x}\) into \(\pi_{x+1}\), and by Lemma 13, the same holds for odd canonical policies. Since the initial policy is canonical for zero, this yields that \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\) visits all canonical policies \(\pi_{i}\) with \(i\in[2^{n}-1]\). Since these are pairwise different, this proves the main result of this section. There is an initial policy such that the policy iteration algorithm with Bland's pivot rule performs \(\Omega(2^{n})\) improving switches when applied to \(\mathcal{B}_{n}\). We close this section with two technical observations that help us later. For every \(i\in[n]\), whenever \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\) applies the improving switch \(\operatorname{skip}(i)\), this edge has higher reduced costs than \(\operatorname{board}(i)\); whenever it applies \(\operatorname{enter}(i)\), this edge has higher reduced costs than \(\operatorname{skip}(i)\) and \(\operatorname{board}(i)\). Proof.: Assume that \(\operatorname{skip}(i)\) gets applied by \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\) and denote the resulting policy by \(\pi\). We know that, at this point, the algorithm completed the fifth canonical phase w.r.t. some odd \(x\in[2^{n}-3]\), and \(i=\ell_{0}(x)-1\eqeqcolon\ell-1\). In the proof of Lemma 13, we argued that Figure 2(e) shows the structure of \(\pi\) on the first \(\ell\) levels. We see that \(\operatorname{travel}(\ell)\) is active in \(\pi\), which immediately yields that the reduced costs of the applied switch \(\operatorname{skip}(\ell-1)\) were higher than the ones of \(\operatorname{board}(\ell-1)\). Now assume that the algorithm applies the improving switch \(\operatorname{enter}(i)\) to the current policy \(\pi\). First, assume that this happens during the third canonical phase w.r.t. some odd \(x\in[2^{n}-3]\). Then \(i=\ell_{0}(x)\eqcolon\ell\). 
Further, we argue in the proof of Lemma 13 that \(\pi\) includes the path \(P=(t,a_{1},b_{1},a_{2},b_{2},\ldots,a_{\ell-1},b_{\ell-1},b_{\ell})\) and that either \(\operatorname{skip}(\ell)\) or \(\operatorname{board}(\ell)\) is active in \(\pi\). If \(\mathrm{skip}(\ell)\) is active, the reduced costs of \(\mathrm{enter}(\ell)\) exceed the ones of \(\mathrm{board}(\ell)\) as entering level \(\ell\) yields a higher reward than following path \(P\). Otherwise, \(\mathrm{board}(\ell)\) is active and condition \(<\)d\(>\) from Definition 6 yields that either \(\mathrm{board}(\ell+1)\) or \(\mathrm{leave}(\ell)\) is still active in \(\pi\) as it was active in \(\pi_{x}\). In the first case, \(\mathrm{skip}(\ell)\) is not improving for \(\pi\) since boarding \(t\) from a higher level is more expensive. In the other case, entering (and leaving) level \(\ell\) is more improving than skipping this level. Now assume that the switch \(\mathrm{enter}(i)\) is not part of the third canonical phase. Then, according to the proof of Lemma 4, it gets applied to the canonical policy \(\pi_{x}\) for some even \(x\in[2^{n}-2]_{0}\) and we have \(i=1\). Obviously, \(\mathrm{skip}(1)\) and \(\mathrm{board}(1)\) are not improving for \(\pi_{0}\), so assume \(x\neq 0\) in the following. In Lemma 4, we argue that \(\mathrm{travel}(2)\), \(\mathrm{skip}(1)\) and \(\mathrm{leave}(1)\) are active in \(\pi_{x}\) if \(\ell_{1}(x)=2\). Then, \(\mathrm{enter}(1)\) is clearly more improving than \(\mathrm{board}(1)\). Finally, if \(\ell_{1}(x)\geq 3\), we argue that \(\mathrm{board}(1)\) is active in \(\pi_{x}\). Further, property \(<\)d\(>\) yields that either \(\mathrm{board}(2)\) or \(\mathrm{leave}(1)\) is active in \(\pi_{x}\). In the first case, \(\mathrm{skip}(1)\) is not improving at all, and in the second case, it is not more beneficial than \(\mathrm{enter}(1)\). This concludes the proof. At any point during the execution of \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\), at most one of the edges \(\mathrm{travel}(i)\) with \(i\in[n]\) is improving. Proof.: If there are one or more improving travel edges at some point, the Bland numbering \(\mathcal{N}_{\mathcal{B}_{n}}\) yields that the algorithm immediately applies one of them (the one traveling to the lowest level). This only occurs during the third canonical phase and after the switch \(\mathrm{enter}(1)\) got applied to an even canonical policy. For the first case, we already argued in the proof of Lemma 4 that the applied travel edge is the only improving travel edge. For the second case, recall that, by Lemma 4, travel edges are not improving for canonical policies. As \(\mathrm{travel}(1)\) is inactive in even canonical policies, the switch \(\mathrm{enter}(1)\) does not affect the reduced costs of any travel edge besides \(\mathrm{travel}(1)\). Thus, this is the only travel edge becoming improving, which completes the proof. 
## 4 A Combined Exponential Bound In this section, we consider a family \((\mathcal{D}_{n}=(V_{n}^{A},V_{n}^{R},E_{n}^{A},E_{n}^{R},r_{\mathcal{D}_{n}},p_{\mathcal{D}_{n}}))_{n\in\mathbb{N}}\) of Markov decision processes such that each process \(\mathcal{D}_{n}\) results from the process \(\mathcal{B}_{n}=(V_{\mathcal{B}_{n}},E_{\mathcal{B}_{n}},r_{\mathcal{B}_{n}})\) of the previous section by replacing every edge, besides the sink-loop, with the construction given in Figure 4; note that the construction uses a probability \(p_{v}\in(0,1]\) for every \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\), which we will choose later. In the following, consider \(\mathcal{D}_{n}\) for some arbitrary but fixed \(n\in\mathbb{N}\). The aim of this section is to show that policy iteration with Bland's rule, with Dantzig's rule, and with the Largest Increase rule performs \(\Omega(2^{n})\) improving switches to a suitable initial policy for \(\mathcal{D}_{n}\). Figure 4: The construction that replaces every edge \((v,w)\in E_{\mathcal{B}_{n}}\setminus\{(s,s)\}\) in \(\mathcal{D}_{n}\). Circular vertices are agent vertices, square ones are randomization vertices. Edge labels denote rewards and probabilities, where \(p_{v}\in(0,1]\). Note that, since \(v\) and \(w\) remain in the process, we have \(V_{\mathcal{B}_{n}}\subseteq V_{n}^{A}\). Before we can analyze the behavior of Bland on \(\mathcal{D}_{n}\), we need to specify the Bland numbering \(\mathcal{N}_{\mathcal{D}_{n}}\colon E_{n}^{A}\to|E_{n}^{A}|\) for \(\mathcal{D}_{n}\). It is constructed as follows: starting from the numbering \(\mathcal{N}_{\mathcal{B}_{n}}\), replace every edge \((v,w)\in E_{\mathcal{B}_{n}}\setminus\{(s,s)\}\) by the edges \((x_{v,w},y_{v,w})\) and \((v,x_{v,w})\). Then, insert all edges of the form \((x_{u,\cdot},u)\) with \(u\in V_{\mathcal{B}_{n}}\setminus\{s\}\) at the beginning of the numbering (the internal order of these edges can be chosen arbitrarily). We do not need to specify the Bland numbers of edges that are the unique outgoing edge of a vertex. Now that we have a Bland numbering, we want to transfer our results from the previous section to the new Markov decision process \(\mathcal{D}_{n}\). The following definition extends policies for \(\mathcal{B}_{n}\) to policies for \(\mathcal{D}_{n}\). Let \(\pi\) and \(\pi^{\prime}\) be policies for \(\mathcal{B}_{n}\) and \(\mathcal{D}_{n}\), respectively, and let \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\). Assume there is a \(w\in\Gamma_{\mathcal{B}_{n}}^{+}(v)\) such that \((v,x_{v,w})\), \((x_{v,w},y_{v,w})\), and \((x_{v,u},v)\) are active in \(\pi^{\prime}\) for all \(u\in\Gamma_{\mathcal{B}_{n}}^{+}(v)\setminus\{w\}\). Then, we say that \(v\) is _(w-)oriented w.r.t. \(\pi^{\prime}\)_. We call \(\pi^{\prime}\) the _twin policy_ of \(\pi\) if every vertex \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\) is \(\pi(v)\)-oriented w.r.t. \(\pi^{\prime}\). Let \(\pi^{\prime}_{0}\) denote the twin policy of the canonical policy \(\pi_{0}\) for \(\mathcal{B}_{n}\). We could start by showing that \(\textsc{Bland}(\mathcal{D}_{n},\pi^{\prime}_{0},\mathcal{N}_{\mathcal{D}_{n}})\) visits the twin policy of every policy that \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\) visits. Note that this would immediately imply the desired exponential number of improving switches. However, we prefer to gather some general results first, which then allows for a more unified treatment of the three pivot rules. 
Starting in a \(w\)-oriented vertex \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\), the agent reaches vertex \(w\) with probability one (due to \(p_{v}>0\)), while collecting a reward of \(r_{\mathcal{B}_{n}}((v,w))\). This immediately yields the following result. Let \(\pi\) be a policy for \(\mathcal{B}_{n}\) with twin policy \(\pi^{\prime}\) for \(\mathcal{D}_{n}\). Then, for every vertex \(v\in V_{\mathcal{B}_{n}}\), we have \(\mathrm{Val}_{\pi,\mathcal{B}_{n}}(v)=\mathrm{Val}_{\pi^{\prime},\mathcal{D}_{n }}(v)\). By the same argument, twin policies of weak unichain policies are weak unichain, and the proof idea of Lemma 3 carries over. The twin policy of every weak unichain policy for \(\mathcal{B}_{n}\) is a weak unichain policy for \(\mathcal{D}_{n}\). The twin policy of the optimal policy for \(\mathcal{B}_{n}\) is optimal for \(\mathcal{D}_{n}\). By Theorem 3, this guarantees the correctness of \(\textsc{PolicyIteration}(\mathcal{D}_{n},\pi^{\prime}_{0})\). Further, Theorem 3 will allow us to carry our results over to the simplex method. Since twin policies are central in our analysis, it comes in handy that only a certain type of edges might be improving for them. Let \(\pi^{\prime}\) be the twin policy of some policy for \(\mathcal{B}_{n}\). Then, all improving switches for \(\pi^{\prime}\) are of the form \((x_{v,w},y_{v,w})\in E_{n}^{A}\) for some \((v,w)\in E_{\mathcal{B}_{n}}\). Proof.: Since every vertex \(u\in V_{\mathcal{B}_{n}}\setminus\{s\}\) is oriented w.r.t \(\pi^{\prime}\), edges of the form \((x_{u,\cdot},u)\) or \((u,x_{u,\cdot})\) are either active or their application creates a zero-reward cycle of length two. Hence, none of these edges is improving for \(\pi^{\prime}\). The following Lemma shows how the probabilities \((p_{v})_{v\in V_{\mathcal{B}_{n}}\setminus\{s\}}\) affect the reduced costs of these potentially improving edges. Further, it yields a connection between the improving switches for a policy for \(\mathcal{B}_{n}\) and those for its twin policy. Let \(\pi\) be a policy for \(\mathcal{B}_{n}\) with twin policy \(\pi^{\prime}\), and let \((v,w)\in E_{\mathcal{B}_{n}}\setminus\{(s,s)\}\). Then, \(z_{\pi^{\prime},\mathcal{D}_{n}}(x_{v,w},y_{v,w})=p_{v}\cdot z_{\pi,\mathcal{B }_{n}}(v,w)\). In particular, \((x_{v,w},y_{v,w})\) is improving for \(\pi^{\prime}\) if and only if \((v,w)\) is improving for \(\pi\). Proof.: For convenience, we write \(x\), \(y\), and \(z\) instead of \(x_{v,w}\), \(y_{v,w}\), and \(z_{v,w}\). If \((v,w)\) is active in \(\pi\), vertex \(v\) is \(w\)-oriented w.r.t. \(\pi^{\prime}\). Thus, \((x,y)\) is active in \(\pi^{\prime}\) as well. Hence, both edges are not improving as they have reduced costs of \(z_{\pi^{\prime}}(x,y)=z_{\pi}(v,w)=0\). Now assume that \((v,w)\) is inactive in \(\pi\), which yields that \((x,v)\) is active in \(\pi^{\prime}\). We obtain \[\begin{split} z_{\pi^{\prime},\mathcal{D}_{n}}(x,y)& =\operatorname{Val}_{\pi^{\prime}}(y)-\operatorname{Val}_{\pi^{ \prime}}(x)=\operatorname{Val}_{\pi^{\prime}}(y)-\operatorname{Val}_{\pi^{ \prime}}(v)\\ &=p_{v}\operatorname{Val}_{\pi^{\prime}}(z)+(1-p_{v}) \operatorname{Val}_{\pi^{\prime}}(v)-\operatorname{Val}_{\pi^{\prime}}(v)\\ &=p_{v}(\operatorname{Val}_{\pi^{\prime}}(w)+r_{\mathcal{B}_{n}} ((v,w))-\operatorname{Val}_{\pi^{\prime}}(v))=p_{v}z_{\pi,\mathcal{B}_{n}}(v,w ),\end{split} \tag{1}\] where we used Observation 3.1 for the last equality. The equivalence holds since \(p_{v}>0\). 
Note that we can transform a given twin policy with three switches into a different one by changing the orientation of an agent vertex \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\). The following Lemma shows that, if applied consecutively, these switches all have the same reduced costs. Let \((v,w)\in E_{\mathcal{B}_{n}}\setminus\{(s,s)\}\) and let the policy \(\pi\) for \(\mathcal{D}_{n}\) be the twin policy of some weak unichain policy for \(\mathcal{B}_{n}\). If the edge \((x_{v,w},y_{v,w})\) is improving for \(\pi\), we have \[z_{\pi}(x_{v,w},y_{v,w})=z_{\pi^{\prime}}(v,x_{v,w})=z_{\pi^{\prime}}(\pi(v),v),\] where \(\pi^{\prime}\) denotes the policy that results from applying \((x_{v,w},y_{v,w})\) to \(\pi\) and \(\pi^{\prime\prime}\) denotes the policy that results from applying \((v,x_{v,w})\) to \(\pi^{\prime}\). Proof.: We write \(x\), \(y\), and \(z\) instead of \(x_{v,w}\), \(y_{v,w}\), and \(z_{v,w}\). Assume that \((x,y)\) is improving for \(\pi\). Then, \((x,v)\) is active and \(z_{\pi}(x,y)=p_{v}(\operatorname{Val}_{\pi}(w)+r_{\mathcal{B}_{n}}((v,w))- \operatorname{Val}_{\pi}(v))\) by equation (1). Since \((x,v)\) is active in \(\pi\), we know that \((v,x)\) is inactive. Hence, the value of \(v\) does not change during the switch to \((x,y)\), i.e., \(\operatorname{Val}_{\pi^{\prime}}(v)=\operatorname{Val}_{\pi}(v)\). As the value of \(w\) does not change either, we obtain \[\begin{split} z_{\pi^{\prime}}(v,x)&=\operatorname{ Val}_{\pi^{\prime}}(x)-\operatorname{Val}_{\pi^{\prime}}(v)=\operatorname{ Val}_{\pi^{\prime}}(y)-\operatorname{Val}_{\pi^{\prime}}(v)\\ &=p_{v}(\operatorname{Val}_{\pi^{\prime}}(w)+r_{\mathcal{B}_{n}} ((v,w))-\operatorname{Val}_{\pi^{\prime}}(v))=z_{\pi}(x,y).\end{split}\] The vertex \(v\) is \(u\)-oriented w.r.t. \(\pi\) for some \(u\in\Gamma^{+}_{\mathcal{B}_{n}}(v)\setminus\{w\}\). Thus, we have \(\pi(v)=x_{v,u}\) and \(\operatorname{Val}_{\pi}(v)=\operatorname{Val}_{\pi}(u)+r_{\mathcal{B}_{n}}((v,u))\). Policy \(\pi\) is the twin policy of a weak unichain policy \(\bar{\pi}\), in which \((v,u)\) is active. Therefore, \(\bar{\pi}\) does not contain a path from \(u\) to \(v\) in \(\mathcal{B}_{n}\), so \(\pi\) also does not reach \(v\) from \(u\) in \(\mathcal{D}_{n}\). This yields \(\operatorname{Val}_{\pi^{\prime\prime}}(u)=\operatorname{Val}_{\pi}(u)\) and thus \[\operatorname{Val}_{\pi}(v)=\operatorname{Val}_{\pi^{\prime\prime}}(u)+r_{ \mathcal{B}_{n}}((v,u)). \tag{2}\] The policy \(\pi\) is weak unichain, so Theorem 3.2 yields that \(\pi^{\prime\prime}\) is weak unichain as well. Since all vertices except \(v\) are oriented in \(\pi^{\prime\prime}\) and since \(\pi^{\prime\prime}\) walks from \(v\) to \(w\) with probability one, this yields that \(\pi^{\prime\prime}\) does not reach \(v\) from \(w\). The same thus holds for \(\pi\), so \[\operatorname{Val}_{\pi^{\prime\prime}}(w)=\operatorname{Val}_{\pi}(w). 
\tag{3}\] With this, we obtain \[\begin{split} z_{\pi^{\prime\prime}}(\pi(v),v)&= \operatorname{Val}_{\pi^{\prime\prime}}(v)-\operatorname{Val}_{\pi^{\prime \prime}}(\pi(v))=\operatorname{Val}_{\pi^{\prime\prime}}(v)-\operatorname{ Val}_{\pi^{\prime\prime}}(y_{v,u})\\ &=\operatorname{Val}_{\pi^{\prime\prime}}(v)-p_{v}(\operatorname{ Val}_{\pi^{\prime\prime}}(u)+r_{\mathcal{B}_{n}}((v,u)))-(1-p_{v})\operatorname{ Val}_{\pi^{\prime\prime}}(v)\\ &\stackrel{{(\ref{eq:def})}}{{=}}p_{v}(\operatorname{ Val}_{\pi^{\prime\prime}}(v)-\operatorname{Val}_{\pi}(v))\\ &=p_{v}(\operatorname{Val}_{\pi^{\prime\prime}}(w)+r_{\mathcal{B}_{n }}((v,w))-\operatorname{Val}_{\pi}(v))\\ &\stackrel{{(\ref{eq:def})}}{{=}}p_{v}(\operatorname{ Val}_{\pi}(w)+r_{\mathcal{B}_{n}}((v,w))-\operatorname{Val}_{\pi}(v))\\ &=z_{\pi}(x,y),\end{split}\] which completes the proof. It is essential for the proofs of Lemma 3 and Lemma 3 that \(\textsc{Bland}(\mathcal{B}_{n},\mathcal{N}_{\mathcal{B}_{n}})\) prefers switches in vertices appearing early in the _vertex numbering_\(\mathcal{N}_{V}\colon V_{\mathcal{B}_{n}}\to|V_{\mathcal{B}_{n}}|\) given by \((t,a_{1},b_{1},a_{2},b_{2},\ldots,a_{n},b_{n},d,s)\), i.e., let \(\mathcal{N}_{V}(t)=1\), \(\mathcal{N}_{V}(a_{1})=2\), and so on. Using the following definition, we can observe a similar behavior of policy iteration with Dantzig's rule, cf. Algorithm 3, on \(\mathcal{D}_{n}\). The edge \(e\in E_{n}^{A}\)_belongs_ to vertex \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\) if \[e\in\mathrm{B}(v)\coloneqq\bigcup_{w\in\Gamma_{\mathcal{B}_{n}}^{+}(v)}\{(x _{v,w},y_{v,w}),(v,x_{v,w}),(x_{v,w},v)\}.\] We obtain the following bounds on the reduced costs. Let \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\) and \(e\in B(v)\) be arbitrary. Let \(\pi\) be a weak unichain policy for \(\mathcal{D}_{n}\) such that all vertex values w.r.t. \(\pi\) are non-negative. If \(e\) is improving for \(\pi\), then its reduced costs are bounded by \(p_{v}\cdot 0.25\leq z_{\pi}(e)\leq p_{v}\cdot 2^{n+2}\). Proof.: Since \(e\in B(v)\), we have \(e\in\{(x_{v,w},y_{v,w}),(v,x_{v,w}),(x_{v,w},v)\}\) for some \(w\in\Gamma_{\mathcal{B}_{n}}^{+}(v)\). For convenience, we write \(x\), \(y\), and \(z\) instead of \(x_{v,w}\), \(y_{v,w}\), and \(z_{v,w}\). Firstly, assume that \(e=(x,y)\). Then, since \(e\) is improving, \((x,v)\) is active in \(\pi\). As in equation (1), we obtain \(z_{\pi}(x,y)=p_{v}(\mathrm{Val}_{\pi}(w)+r_{\mathcal{B}_{n}}((v,w))-\mathrm{ Val}_{\pi}(v))=:p_{v}\cdot\delta(\pi,v,w)\). Secondly, assume that \(e=(v,x)\). Then, \((x,y)\) is active in \(\pi\) as otherwise \(e\) would not be improving due to \(z_{\pi}(v,x)=\mathrm{Val}_{\pi}(x)-\mathrm{Val}_{\pi}(v)=0\). This yields \[z_{\pi}(v,x)=\mathrm{Val}_{\pi}(x)-\mathrm{Val}_{\pi}(v)=\mathrm{Val}_{\pi}(y )-\mathrm{Val}_{\pi}(v)=p_{v}\cdot\delta(\pi,v,w). \tag{4}\] Lastly, assume \(e=(x,v)\). Then, as before, \((x,y)\) is active in \(\pi\). We can thus conclude from (4) that \(z_{\pi}(x,v)=\mathrm{Val}_{\pi}(v)-\mathrm{Val}_{\pi}(x)=-p_{v}\cdot\delta(\pi,v,w)\). By assumption, every vertex has a non-negative value with respect to \(\pi\). Further, all vertex values are bounded from above by the maximum vertex value w.r.t. the optimal policy for \(\mathcal{D}_{n}\). By Lemma 3, Observation 3, and Observation 3, this is \(\mathrm{Val}_{\pi_{\pi}^{\prime}}(t)=2^{n+1}-1.25\). 
Since the absolute value of any edge reward is at most \(2^{n}\), we obtain \[|\delta(\pi,v,w)|\leq\left(2^{n+1}-1.25+2^{n}\right)\leq 2^{n+2}.\] Hence, we have an upper bound of \(z_{\pi}(e)\leq p_{v}\cdot 2^{n+2}\). As all edge rewards are integer multiples of \(0.25\), also \(\mathrm{Val}_{\pi}(u)\) is an integer multiple of \(0.25\) for every \(u\in V_{\mathcal{B}_{n}}\) (starting in \(u\), policy \(\pi\) visits every edge that has a non-zero reward either exactly once or never). This yields that \(|\delta(\pi,v,w)|\) is a multiple of \(0.25\) as well, which concludes the proof. Note that, by Theorem 3 and as all vertex values w.r.t. \(\pi_{0}^{\prime}\) are zero, \(\textsc{Dantzig}(\mathcal{D}_{n},\pi_{0}^{\prime})\) only visits weak unichain policies with non-negative vertex values. Therefore, if we choose the probabilities \((p_{v})_{v\in\mathcal{N}_{\mathsf{S}_{n}}\setminus\{s\}}\) for increasing vertex numbers \(\mathcal{N}_{V}(v)\) fast enough decreasing, then the previous lemma yields that Dantzig prefers improving switches that belong to vertices appearing early in the vertex numbering \(\mathcal{N}_{V}\). We use this in the proof of Theorem 4.2. The following technical result holds independently of the chosen pivot rule. Let \((v,w)\in E_{\mathcal{B}_{n}}\) be an improving switch that gets applied to a policy \(\pi\) during the execution of \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\). Let \(\bar{\pi}^{\prime}\) denote the policy for \(\mathcal{D}_{n}\) that results from applying the switches \((x_{v,w},y_{v,w})\) and \((v,x_{v,w})\) to the twin policy \(\pi^{\prime}\) of \(\pi\). Then, the edge \((x_{v,\pi(v)},v)\) is improving for \(\bar{\pi}^{\prime}\) and it remains improving during the execution of \(\textsc{PolicyIteration}(\mathcal{D}_{n},\bar{\pi}^{\prime})\) until it gets applied or until an improving switch of the form \((x_{u,\cdot},y_{u,\cdot})\) with \(\mathcal{N}_{V}(u)>\mathcal{N}_{V}(v)\) gets applied. Proof.: The policy \(\pi\) is weak unichain due to Theorem 4.2. Further, according to Lemma 4.2, the edge \((x_{v,w},y_{v,w})\) is improving for \(\pi^{\prime}\). Thus, Lemma 4.2 yields that \((x_{v,\pi(v)},v)\) is improving for \(\bar{\pi}^{\prime}\), which concludes the first part of the proof. Note that every vertex \(u\) succeeding \(v\) in \(\mathcal{N}_{V}\) is \(\pi(u)\)-oriented w.r.t. \(\bar{\pi}^{\prime}\). Therefore, as in Observation 4.2, the first improving switch getting applied by \(\textsc{PolicyIteration}(\mathcal{D}_{n},\bar{\pi}^{\prime})\) and belonging to a vertex \(u\) succeeding \(v\) in \(\mathcal{N}_{V}\) must be of the form \((x_{u,\cdot},y_{u,\cdot})\). We claim that the edge \((x_{v,\pi(v)},v)\) remains improving until it gets applied or until \(\textsc{PolicyIteration}(\mathcal{D}_{n},\bar{\pi}^{\prime})\) applies an improving switch belonging to some vertex \(u\) with \(\mathcal{N}_{V}(u)>\mathcal{N}_{V}(v)\), which concludes the proof. To verify this claim, we perform a case analysis on the specific improving switch \((v,w)\in E_{\mathcal{B}_{n}}\) that gets applied to \(\pi\). Assume that \((v,w)=\operatorname{enter}(1)\) is the improving switch that Bland applies, by Lemma 4.2, to the canonical policy \(\pi\) for any even \(x\in[2^{n}-2]_{0}\). If \(x=0\), then \((x_{a_{1},\pi(a_{1})},a_{1})\) is the only improving switch for \(\bar{\pi}^{\prime}\) that belongs to \(a_{1}\) or \(t\). 
Therefore, either \((x_{a_{1},\pi(a_{1})},a_{1})\) or a switch belonging to a vertex succeeding \(a_{1}\) in \(\mathcal{N}_{V}\) gets applied to \(\bar{\pi}^{\prime}\). If \(x\neq 0\), then, by Lemma 4.2 and Lemma 4.2, the edge \(\operatorname{travel}(1)\) is the only improving switch in \(t\) after the application of \(\operatorname{enter}(1)\). Therefore, it remains more beneficial to enter level \(1\) than to skip it or to board \(t\) from \(a_{1}\) as long as no switch in a vertex succeeding \(a_{1}\) in \(\mathcal{N}_{V}\) gets applied. Hence, the claim holds in this first case that \((v,w)=\operatorname{enter}(1)\). Assume now that \((v,w)=\operatorname{travel}(i)\) for some \(i\in[n]\). Then, Observation 4.2 and Lemma 4.2 yield that \((x_{t,\pi(t)},t)\) is the only improving switch belonging to \(t\). Since \(t\) is the first vertex in \(\mathcal{N}_{V}\), this verifies the claim. Recall that, by Lemma 4.2, \(\operatorname{enter}(1)\) and \(\operatorname{travel}(1)\) are the only improving switches that might get applied during the transition from \(\pi_{x}\) to \(\pi_{x+1}\) for even \(x\). Hence, it now suffices to prove the claim for the case that \((v,w)\) is part of the canonical phases w.r.t. some odd \(x\in[2^{n}-3]\) with least unset bit \(\ell\coloneqq\ell_{0}(x)>1\). The following analysis relies on the structural insights from the proof of Lemma 4.2. If \((v,w)=\operatorname{leave}(i)\) is the switch from the first or the last phase, then \(\operatorname{enter}(i+1)\) and \(\operatorname{stay}(i)\) are active in \(\pi\). Provided that level \(i+1\) gets entered, leaving level \(i\) is more beneficial than staying in it, which verifies the claim in this case. If \((v,w)=\operatorname{stay}(\ell-1)\), then \((x_{v,\pi(v)},v)\) is the only improving switch for\(\bar{\pi}^{\prime}\) in the first \(\ell-1\) levels, which again validates the claim. If \((v,w)=\operatorname{enter}(\ell)\), then Observation 4.2 implies that \(\operatorname{skip}(\ell)\) and \(\operatorname{board}(\ell)\) are not improving for the policy \(\pi^{(v,w)}\), i.e., \((x_{a_{\ell},\pi(a_{\ell})},a_{\ell})\) is the only improving switch belonging to \(a_{\ell}\). Since entering level \(\ell\) is more beneficial than entering all of the first \(\ell-1\) levels, this holds true as long as the algorithm only applies switches in levels below \(\ell\). Thus, the claim holds in this case. If \((v,w)=\operatorname{board}(j)\) for some \(j\in[\ell-2]\), then \((x_{v,\pi(v)},v)\) is the only improving switch for\(\bar{\pi}^{\prime}\) in the first \(j\) levels, verifying the claim. If \((v,w)=\operatorname{skip}(\ell-1)\), then \(\operatorname{stay}(\ell-1)\) and \(\operatorname{enter}(\ell)\) are active in \(\pi\). As long as these edges stay active, skipping level \(\ell-1\) is more beneficial than entering it. Further, \(\operatorname{travel}(\ell)\) is active in \(\pi\), so skipping level \(\ell-1\) remains better than boarding \(t\) from \(a_{\ell-1}\) until a switch in some higher level has made a travel edge \(\operatorname{travel}(k)\) with \(k>\ell\) improving. This supports the claim once again. Finally, assume \((v,w)=\operatorname{stay}(j)\) for some \(j\in[\ell-3]\). This switch is improving for \(\pi\) as it allows the policy to reach and enter level \(\ell\) without using \(\operatorname{board}(j+1)\) and \(\operatorname{travel}(\ell)\). An edge \(\operatorname{travel}(k)\) with \(k>\ell\) can only become improving after an improving switch in some higher level. 
Therefore, staying in level \(j\) is better than leaving it as long as no switches in levels above \(j\) get applied. Hence, our claim holds in all cases, which concludes the proof. With this, we can show that a certain class of pivot rules, including Bland's, Dantzig's, and the Largest Increase rule, yield an exponential number of improving switches on \(\mathcal{D}_{n}\). Assume that \(\textsc{PolicyIteration}(\mathcal{D}_{n},\pi_{0}^{\prime})\), where \(\pi_{0}^{\prime}\) denotes the twin policy of \(\pi_{0}\), gets applied with a pivot rule that satisfies the following conditions: 1. [label=()] 2. For every improving switch \((v,w)\in E_{\mathcal{B}_{n}}\) that \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\) applies to some policy \(\pi\), \(\textsc{PolicyIteration}\) applies \((x_{v,w},y_{v,w})\) and \((v,x_{v,w})\) to the twin policy of \(\pi\). 3. While an edge of the form \((x_{v,\cdot},v)\) is improving for some \(v\in V_{\mathcal{B}_{n}}\), \(\textsc{PolicyIteration}\) does not apply an improving switch of the form \((x_{u,\cdot},y_{u,\cdot})\) with \(\mathcal{N}_{V}(u)>\mathcal{N}_{V}(v)\). Then, \(\textsc{PolicyIteration}(\mathcal{D}_{n},\pi_{0}^{\prime})\) performs \(\Omega(2^{n})\) improving switches. Proof.: Let \(\pi\) be a policy for \(\mathcal{B}_{n}\) occurring during the execution of \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\), where we allow \(\pi=\pi_{0}\), and let \((v,w)\in E_{\mathcal{B}_{n}}\) denote the switch that \(\textsc{Bland}\) applies to \(\pi\). Let \(\pi^{\prime}\) be the twin policy of \(\pi\), and let \(\tilde{\pi}=\pi^{(v,w)}\). By condition 1, \(\textsc{PolicyIteration}\) applies \((x_{v,w},y_{v,w})\) and \((v,x_{v,w})\) to \(\pi^{\prime}\). Denote the resulting policy by \(\tilde{\pi}^{\prime}\). According to Lemma 4.2, the edge \((x_{v,\pi(v)},v)\) now stays improving until it gets applied as an improving switch or until an improving switch of the form \((x_{u,\cdot},y_{u,\cdot})\) with \(\mathcal{N}_{V}(u)>\mathcal{N}_{V}(v)\) gets applied. With condition 1, this yields that \((x_{v,\pi(v)},v)\) gets applied by \(\textsc{PolicyIteration}\) at some point, and that it is constantly improving until then. Note that, as long as \((v,x_{v,\pi(v)})\) is inactive, the policy's choice in \(x_{v,\pi(v)}\) only affects the reduced costs of its unique incidental edge \((v,x_{v,\pi(v)})\). This edge is not active in \(\tilde{\pi}^{\prime}\) and is not improving until the application of \((x_{v,\pi(v)},v)\). Therefore, if we were to force the algorithm to apply \((x_{v,\pi(v)},v)\) to \(\tilde{\pi}^{\prime}\), this would not alter the remaining behavior of the algorithm. The policy resulting from this forced switch is the twin policy of \(\tilde{\pi}\). Hence, without changing the total number of applied improving switches (we only rearrange them), we can assume that \(\textsc{PolicyIteration}(\mathcal{D}_{n},\pi_{0}^{\prime})\) visits the twin policy of every policy visited by \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\). By Theorem 4.2, this yields that the algorithm needs to perform an exponential number of improving switches, which concludes the proof. Now it suffices to check the conditions given in Lemma 4.2 for each pivot rule. Let \(\pi_{0}^{\prime}\) denote the twin policy of \(\pi_{0}\). Then, \(\textsc{Bland}(\mathcal{D}_{n},\pi_{0}^{\prime},\mathcal{N}_{\mathcal{D}_{n}})\) performs \(\Omega(2^{n})\) improving switches. 
Proof.: We check the two conditions from Lemma 4.2. For condition 1, let \(\pi\) be a policy for \(\mathcal{B}_{n}\) visited by \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\), including the case \(\pi=\pi_{0}\), and let \(\pi^{\prime}\) be the twin policy of \(\pi\). Assume that \(\textsc{Bland}\) applies the improving switch \((v,w)\in E_{\mathcal{B}_{n}}\) to \(\pi\). By Observation 20, all improving switches for \(\pi^{\prime}\) are of the form \((x_{\cdot},y_{\cdot})\). According to Lemma 4.2, the edge \((x_{v,w},y_{v,w})\) is improving for \(\pi^{\prime}\). As \((v,w)\) is the improving switch for \(\pi\) with the smallest Bland number in \(\mathcal{N}_{\mathcal{B}_{n}}\), we know that, by construction of \(\mathcal{N}_{\mathcal{D}_{n}}\), the algorithm applies the switch \((x_{v,w},y_{v,w})\) to \(\pi^{\prime}\). Further, since \(\pi\) is weak unichain due to Theorem 4, Lemma 22 yields that \((v,x_{v,w})\) is improving after this switch. As it is the successor of \((x_{v,w},y_{v,w})\) in \(\mathcal{N}_{\mathcal{D}_{n}}\) and as no other egde became improving due to the first switch, the algorithm applies \((v,x_{v,w})\) next. That is, Bland's rule satisfies condition (a). Additionally, condition (b) holds since the edge \((x_{v,\pi(v)},v)\) precedes any switch of the form \((x_{u_{\cdot},}y_{u_{\cdot}})\) with \(\mathcal{N}_{V}(u)>\mathcal{N}_{V}(v)\) in the Bland numbering \(\mathcal{N}_{\mathcal{D}_{n}}\). As motivated above, the choice of the probabilities \((p_{v})_{v\in V_{\mathcal{B}_{n}}\setminus\{s\}}\) in the following theorem yields that Dantzig prefers improving switches that belong to vertices appearing early in the vertex numbering \(\mathcal{N}_{V}\). Let \(p_{v}=2^{-\mathcal{N}_{V}(v)(n+5)}\) for all \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\), and let \(\pi^{\prime}_{0}\) denote the twin policy of \(\pi_{0}\). Then, \(\textsc{Dantzig}(\mathcal{D}_{n},\pi^{\prime}_{0})\) performs \(\Omega(2^{n})\) improving switches. Proof.: We check the two conditions from Lemma 26. For condition (b), we compute that the choice of the probabilities \(p_{v}\) yields that Dantzig prefers improving switches belonging to vertices with a small vertex number. Let \(u,v\in V_{\mathcal{B}_{n}}\setminus\{s\}\) with \(\mathcal{N}_{V}(u)>\mathcal{N}_{V}(v)\). Let further \(e_{u}\in B(u)\) and \(e_{v}\in B(v)\) be improving switches for some policy \(\pi\) for \(\mathcal{D}_{n}\), which gets visited by \(\textsc{PolicyIteration}(\mathcal{D}_{n},\pi^{\prime}_{0})\). Then, \(\pi\) is weak unichain and only induces non-negative vertex values, so Lemma 24 yields \[z_{\pi}(e_{v})\geq p_{v}\cdot 0.25=2^{-\mathcal{N}_{V}(v)(n+5)-2}\geq 2^{- \mathcal{N}_{V}(u)(n+5)+(n+5)-2}=p_{u}\cdot 2^{(n+3)}>z_{\pi}(e_{u}).\] Hence, Dantzig's rule prefers switches belonging to \(v\) over those belonging to \(u\), so it satisfies condition (b). For condition (a), let \(\pi\) be a policy for \(\mathcal{B}_{n}\) visited by \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\), including the case \(\pi=\pi_{0}\), and let \((v,w)\in E_{\mathcal{B}_{n}}\) denote the switch that Bland applies to \(\pi\). Let \(\pi^{\prime}\) be the twin policy of \(\pi\). By Observation 20, all improving switches for \(\pi^{\prime}\) are of the form \((x_{\cdot},y_{\cdot})\). By construction of \(\mathcal{N}_{\mathcal{D}_{n}}\), we know that Bland prefers those switches \((x_{\cdot},y_{\cdot})\) that belong to vertices with a small vertex number. 
In the proof of Proposition 27, we see that \(\textsc{Bland}(\mathcal{D}_{n},\pi^{\prime},\mathcal{N}_{\mathcal{D}_{n}})\) applies \((x_{v,w},y_{v,w})\) to \(\pi^{\prime}\). Since Dantzig also prefers switches belonging to vertices with a small vertex number, we conclude that \(\textsc{Dantzig}(\mathcal{D}_{n},\pi^{\prime}_{0},\mathcal{N}_{\mathcal{D}_{n}})\) applies an improving switch to \(\pi^{\prime}\) that belongs to \(v\). However, there might be multiple such switches. Recall that all improving switches for \(\pi^{\prime}\) are of the form \((x_{\cdot},y_{\cdot})\). If \(v=b_{i}\) for some \(i\in[n]\), then only two of these (possibly improving) edges belong to \(v\), one of which is active in \(\pi^{\prime}\). Therefore, in this case, Dantzig applies the improving switch \((x_{v,w},y_{v,w})\). Now assume \(v=a_{i}\) for some \(i\in[n]\). Then, Observation 15 and Lemma 21 yield \[z_{\pi^{\prime}}(x_{a_{i},b_{i}},y_{a_{i},b_{i}})>\max\{z_{\pi^{\prime}}(x_{a_{i},a_{i+1}},y_{a_{i},a_{i+1}}),z_{\pi^{\prime}}(x_{a_{i},t},y_{a_{i},t})\}\] if \((v,w)=\mathrm{enter}(i)\), and \[z_{\pi^{\prime}}(x_{a_{i},a_{i+1}},y_{a_{i},a_{i+1}})>z_{\pi^{\prime}}(x_{a_{i},t},y_{a_{i},t})\] if \((v,w)=\mathrm{skip}(i)\). Therefore, the edge \((x_{v,w},y_{v,w})\) has higher reduced costs than the other edges that belong to \(v\), so Dantzig applies it to \(\pi^{\prime}\). Finally, if \(v=t\), then Observation 16 and Lemma 21 yield that \((x_{v,w},y_{v,w})\) is the only improving switch that belongs to \(t\). Thus, it gets applied by Dantzig. We conclude that, in all cases, Dantzig applies the switch \((x_{v,w},y_{v,w})\) to \(\pi^{\prime}\), which is the unique edge with the highest reduced costs. According to Lemma 22, the edge \((v,x_{v,w})\) now has the same reduced costs as \((x_{v,w},y_{v,w})\) had before its application. Since \((v,x_{v,w})\) is the only edge that became improving during the last switch, Dantzig applies this edge next. Therefore, Dantzig's rule also satisfies condition (a), which concludes the proof.

```
input: Markov decision process \(G\), weak unichain policy \(\pi\) for \(G\)
while \(\pi\) admits an improving switch:
    \(\bar{s}\leftarrow\) an improving switch \(s\) for \(\pi\) maximizing \(\sum_{u\in V^{A}_{n}}\operatorname{Val}_{\pi^{s}}(u)\)
    \(\pi\leftarrow\pi^{\bar{s}}\)
return \(\pi\)
```
**Algorithm 4** \(\textsc{LargestIncrease}(G,\pi)\)

Finally, we turn to policy iteration with the Largest Increase pivot rule, cf. Algorithm 4. In the most general sense, consider an arbitrary improving switch \(s=(v,w)\) for some policy \(\pi\). Assume that no ingoing edges of \(v\) are active in \(\pi\). Then, the reduced costs of \(s\) coincide with the increase of the sum over all vertex values, that is, we obtain the equality \(z_{\pi}(s)=\sum_{u\in V^{A}_{n}}\operatorname{Val}_{\pi^{s}}(u)-\sum_{u\in V^{A}_{n}}\operatorname{Val}_{\pi}(u)\). Further, the induced increase of the sum is always at least as large as the reduced costs. From this, using the structure of \(\mathcal{D}_{n}\), we can conclude that LargestIncrease mirrors the behavior of Dantzig if we choose the probabilities \(p_{v}\) as before. Let \(p_{v}=2^{-\mathcal{N}_{V}(v)(n+5)}\) for all \(v\in V_{\mathcal{B}_{n}}\setminus\{s\}\), and let \(\pi^{\prime}_{0}\) denote the twin policy of \(\pi_{0}\). Then, \(\textsc{LargestIncrease}(\mathcal{D}_{n},\pi^{\prime}_{0})\) performs \(\Omega(2^{n})\) improving switches.

Proof. We check the two conditions from Lemma 26.
For condition (a), let \(\pi\) be a policy for \(\mathcal{B}_{n}\) occurring during the execution of \(\textsc{Bland}(\mathcal{B}_{n},\pi_{0},\mathcal{N}_{\mathcal{B}_{n}})\), where we allow \(\pi=\pi_{0}\), and let \((v,w)\in E_{\mathcal{B}_{n}}\) denote the switch that Bland applies to \(\pi\). Let \(\pi^{\prime}\) be the twin policy of \(\pi\). According to the proof of Theorem 28, Dantzig applies the improving switches \((x_{v,w},y_{v,w})\) and \((v,x_{v,w})\) to \(\pi^{\prime}\). By Observation 20, all improving switches for \(\pi^{\prime}\) are of the form \((x_{\cdot},y_{\cdot})\), where \(\pi^{\prime}\) does not reach \(x_{\cdot}\) when starting in any other vertex. Therefore, since the probabilities \((p_{v})_{v\in V_{\mathcal{B}_{n}}\setminus\{s\}}\) are chosen as in Theorem 28, the reduced costs of each of these improving switches coincide with the induced increase of the sum over all vertex values. Hence, LargestIncrease also applies the improving switch \((x_{v,w},y_{v,w})\) to \(\pi^{\prime}\). This switch only increases the reduced costs of the edge \((v,x_{v,w})\), which, by Lemma 22, coincide with the previous reduced costs. Therefore, the induced increase of the sum over all vertex values for \((v,x_{v,w})\) is now at least as large as it was for \((x_{v,w},y_{v,w})\) before. Hence, LargestIncrease also applies \((v,x_{v,w})\) next. We conclude that the Largest Increase pivot rule satisfies condition (a).

Note that the reduced costs of the edges from condition (b) again coincide with the induced increase of the sum over all vertex values. Further, by the proof of Theorem 28, we know that Dantzig's rule prefers switches belonging to vertices with a small vertex number. Therefore, the Largest Increase rule also satisfies condition (b).

Note that Theorem 1 is now a direct consequence of Theorems 27, 28, and 29. Moreover, we have seen in the proofs of these theorems that Bland's rule, Dantzig's rule, and the Largest Increase rule satisfy the conditions from Lemma 26. Thus, any combination of these rules also satisfies the conditions, which immediately yields Corollary 2. Finally, Corollary 3 follows from Theorem 5 together with Observation 7, Lemma 8, and Observation 19.
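To make the pivot rules above concrete, the following is a minimal, self-contained sketch of policy iteration with the Largest Increase rule (Algorithm 4) on a generic Markov decision process. The array-based encoding, the expected-total-reward evaluation with absorbing sink states, and all names are our own illustrative choices; this is not the \(\mathcal{B}_{n}\) or \(\mathcal{D}_{n}\) construction from the paper.

```python
import numpy as np

def evaluate(P, r, policy, absorbing):
    """Solve Val(v) = r[v, a] + sum_u P[v, a, u] * Val(u) for a = policy[v],
    with Val fixed to 0 in absorbing states (expected total reward)."""
    n = P.shape[0]
    A, b = np.eye(n), np.zeros(n)
    for v in range(n):
        if v not in absorbing:
            A[v] -= P[v, policy[v]]
            b[v] = r[v, policy[v]]
    return np.linalg.solve(A, b)

def largest_increase(P, r, policy, absorbing):
    """Policy iteration that applies, in each round, the improving switch
    maximizing the summed vertex values of the resulting policy.
    P[v, a, u]: transition probabilities; r[v, a]: immediate rewards;
    policy: np.array of action indices, one per state."""
    n, m = r.shape
    while True:
        val = evaluate(P, r, policy, absorbing)
        best, best_sum = None, np.sum(val)
        for v in range(n):
            if v in absorbing:
                continue
            for a in range(m):
                # an improving switch (v, a) has positive reduced costs
                if r[v, a] + P[v, a] @ val - val[v] > 1e-10:
                    trial = policy.copy()
                    trial[v] = a
                    total = np.sum(evaluate(P, r, trial, absorbing))
                    if total > best_sum:
                        best, best_sum = (v, a), total
        if best is None:  # no improving switch left: the policy is optimal
            return policy, val
        policy[best[0]] = best[1]
```

Bland's rule would instead pick the improving switch with the smallest index in a fixed numbering, and Dantzig's rule the one with the largest reduced costs; only the selection line changes.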
2309.00041
EDGE -- Dark matter or astrophysics? Breaking dark matter heating degeneracies with HI rotation in faint dwarf galaxies
Low-mass dwarf galaxies are expected to reside within dark matter haloes that have a pristine, `cuspy' density profile within their stellar half-light radii. This is because they form too few stars to significantly drive dark matter heating through supernova-driven outflows. Here, we study such simulated faint systems ($10^4 \leq M_{\star} \leq 2\times 10^6 \, M_\mathrm{\odot}$) drawn from high-resolution (3 pc) cosmological simulations from the `Engineering Dwarf Galaxies at the Edge of galaxy formation' (EDGE) project. We confirm that these objects have steep and rising inner dark matter density profiles at $z=0$, little affected by galaxy formation effects. But five dwarf galaxies from the suite also showcase a detectable HI reservoir ($M_{\mathrm{HI}}\approx 10^{5}-10^{6} \, M_\mathrm{\odot}$), analogous to the observed population of faint, HI-bearing dwarf galaxies. These reservoirs exhibit episodes of ordered rotation, opening windows for rotation curve analysis. Within actively star-forming dwarfs, stellar feedback easily disrupts the tenuous HI discs ($v_{\phi} \approx 10\, \mathrm{km} \, \mathrm{s}^{-1}$), making rotation short-lived ($\ll 150 \, \mathrm{Myr}$) and more challenging to interpret for dark matter inferences. In contrast, we highlight a long-lived ($\geq 500 \, \mathrm{Myr}$) and easy-to-interpret HI rotation curve extending to $\approx 2\, r_{1/2, \text{3D}}$ in a quiescent dwarf, that has not formed new stars since $z=4$. This stable gas disc is supported by an oblate dark matter halo shape that drives high-angular momentum gas flows. Our results strongly motivate further searches for HI in rotation curves in the observed population of HI-bearing low-mass dwarfs, that provide a key regime to disentangle the respective roles of dark matter microphysics and galaxy formation effects in driving dark matter heating.
Martin P. Rey, Matthew D. A. Orkney, Justin I. Read, Payel Das, Oscar Agertz, Andrew Pontzen, Anastasia A. Ponomareva, Stacy Y. Kim, William McClymont
2023-08-31T18:00:01Z
http://arxiv.org/abs/2309.00041v2
EDGE - Dark matter or astrophysics? Clear prospects to break dark matter heating degeneracies with Hi rotation in faint dwarf galaxies

###### Abstract

Low-mass dwarf galaxies are expected to showcase pristine, 'cuspy' inner dark matter density profiles compared to their stellar sizes, as they form too few stars to significantly drive dark matter heating through supernova-driven outflows. Here, we study such simulated faint systems (\(10^{4}\leq M_{\star}\leq 2\times 10^{6}\,\mathrm{M}_{\sun}\)) drawn from high-resolution (3 pc) cosmological simulations from the 'Engineering Dwarf Galaxies at the Edge of galaxy formation' (EDGE) project. We confirm that these objects have steep and rising inner dark matter density profiles at \(z=0\), little affected by galaxy formation effects. But five dwarf galaxies from the suite also showcase a detectable Hi reservoir (\(M_{\rm HI}\approx 10^{5}-10^{6}\,\mathrm{M}_{\sun}\)), analogous to the observed population of faint, Hi-bearing dwarf galaxies. These reservoirs exhibit episodes of ordered rotation, opening windows for rotation curve analysis. Within actively star-forming dwarfs, stellar feedback easily disrupts the tenuous Hi discs (\(v_{\phi,g}\approx 10\,\mathrm{km}\,\mathrm{s}^{-1}\)), making rotation short-lived (\(\ll 150\,\mathrm{Myr}\)) and more challenging to interpret for dark matter inferences. Contrastingly, we highlight a long-lived (\(\geq 500\,\mathrm{Myr}\)) and easy-to-interpret Hi rotation curve extending to \(\approx 2\,r_{1/2,\mathrm{3D}}\) in a quiescent dwarf that has not formed new stars since \(z=4\). This stable gas disc is supported by an oblate dark matter halo shape that drives high angular momentum gas flows. Our results strongly motivate further searches for Hi rotation curves in the observed population of Hi-bearing low-mass dwarfs, which provide a key regime to disentangle the respective roles of dark matter microphysics and galaxy formation effects in driving dark matter heating.

keywords: methods: numerical - galaxies: structure - galaxies: evolution - dark matter

## 1 Introduction

The existence of a significant amount of dark matter in our Universe is firmly established, with its gravitational influence leaving distinct signatures on the cosmic microwave background (e.g. Planck Collaboration et al., 2020), the large-scale distribution of galaxies (e.g. Alam et al., 2021) or the dynamics of baryonic tracers in galaxies and galaxy clusters (e.g. Zwicky, 1933; Rubin et al., 1980; Clowe et al., 2006). But the microphysical nature of dark matter and its direct detection remains elusive, despite extensive efforts in the last decade (see e.g. Schumann, 2019 for a review). This calls for a wide and thorough scan of parameter space to robustly rule out alternatives, motivating complementary efforts across disciplines (Bertone and Tait, 2018). The latest data from galaxy counts (e.g. Nadler et al., 2021), stellar stream gaps (e.g. Banik et al., 2021), strong lensing (e.g. Gilman et al., 2020; Hsueh et al., 2020), the Lyman-\(\alpha\) forest (e.g. Iršič et al., 2017; Armengaud et al., 2017; Rogers and Peiris, 2021) or their combination (e.g. Nadler et al., 2021; Enzi et al., 2021) all point to dark matter being a cold (i.e. non-relativistic at the time of decoupling), collisionless particle. However, this still leaves plenty of possible options for the physical nature of the constituent sourcing the dark matter gravitational field (e.g.
supersymmetric weakly interacting massive particles, sterile neutrinos, axions; see Bertone et al., 2005; Bertone and Tait, 2018 for reviews).

Galactic rotation curves are one of the first historical probes of dark matter and continue to play a key role in the effort to narrow down the available parameter space of models (e.g. Rubin and Ford, 1970; Rubin et al., 1980; van Albada et al., 1985; de Blok and Bosma, 2002; Oh et al., 2011, 2015; Lelli et al., 2016; Posti et al., 2019; Mancera Pina et al., 2020). In particular, rotation curves and Hi kinematics of small dwarf galaxies are particularly powerful. They can be used to determine the masses of the dark matter haloes hosting small dwarf galaxies, directly constraining the low-mass end of the galaxy-halo connection and dark matter models that suppress small-scale power in the cosmological power spectrum (e.g. warm or wave dark matter; Polisensky and Ricotti, 2011; Anderhalden et al., 2013; Kennedy et al., 2014; Read et al., 2017; Nadler et al., 2021a; Yasin et al., 2023; Sardone et al., 2023). Rotation curves are also sensitive to the structure of the inner gravitational potential and can be used to infer the dark matter distributions and density profiles in dwarf galaxies (e.g. Flores and Primack, 1994; Moore, 1994; de Blok and Bosma, 2002; Oh et al., 2011, 2015; Ott et al., 2012; Iorio et al., 2017). This in turn provides constraints on mechanisms heating dark matter, either dynamically or through microphysical particle interactions (e.g. annihilation or self-interactions).

Both of these features have historically garnered significant interest from the community because, at face value, they are discrepant with predictions assuming pure cold dark matter (CDM; see Pontzen and Governato, 2014; Bullock and Boylan-Kolchin, 2017; Sales et al., 2022 for reviews). CDM-only simulations of structure formation predict many more bound dark matter subhalos than observed satellite galaxies around the Milky Way and other nearby spiral galaxies - the 'missing satellite problem' (e.g. Moore et al., 1999; Klypin et al., 1999). These same simulations predict centrally divergent dark matter density profiles inside dwarf galaxies, 'cusps', whereas observations favour lower density 'cores' - the 'cusp-core problem' (e.g. Flores and Primack, 1994; Moore, 1994; de Blok and Bosma, 2002; Oh et al., 2011, 2015; Iorio et al., 2017). Both problems can be solved by moving beyond the CDM assumption. For example, the cusp-core problem can be mitigated by making dark matter self-interacting (e.g. Burkert, 2000; Spergel and Steinhardt, 2000) or fuzzy (e.g. Schive et al., 2014; Veltmaat et al., 2018; Nori and Baldi, 2021), while satellite numbers can be reduced by suppressing small-scale cosmological power (e.g. Boehm et al., 2014; Vogelsberger et al., 2019).

However, these discrepancies can also be explained by a careful modelling of the physics of galaxy formation. In the case of the missing satellite problem, the solution involves a mix of accounting for observational completeness (e.g. Kim et al., 2018), star formation becoming inefficient in low mass haloes (e.g. Efstathiou, 1992; Somerville, 2002; Read and Erkal, 2019) and the tidal destruction of satellites on plunging orbits (e.g. Read et al., 2006; Garrison-Kimmel et al., 2017). In the case of the cusp-core problem, it can be solved by dark matter being dynamically heated during galaxy formation via repeated gas inflows and outflows (e.g.
Navarro et al., 1996; Read and Gilmore, 2005; Pontzen and Governato, 2012), or via dynamical perturbations induced by massive clumps or companions (e.g. El-Zant et al., 2001; Romano-Diaz et al., 2009; Goerdt et al., 2010; Nipoti and Binney, 2015; Orkney et al., 2021). There is now compelling observational evidence that 'dark matter heating' occurred in nearby dwarfs (e.g. Read et al., 2019; Bouche et al., 2022; De Leo et al., 2023). This makes testing dark matter models with rotation curves more ambiguous, as the effects of alternative dark matter models become degenerate with the physics of galaxy formation, which remains challenging to model from first principles (see Somerville and Dave, 2015; Naab and Ostriker, 2017 for reviews).

This motivates us to find 'clean' regimes to test models - galaxies in which the rotation curve data is straightforward to interpret, and where dark matter and astrophysical models make testable predictions with minimal overlap. The best candidates for this are the smallest dwarf galaxies, where low stellar masses leave little opportunity for star formation and galaxy formation effects to impact the inner dark matter density profile (e.g. Teyssier et al., 2013; Di Cintio et al., 2014; Chan et al., 2015; Tollet et al., 2016; Read et al., 2019; Lazar et al., 2020; Orkney et al., 2021). But these 'ultra-faint' dwarfs are typically devoid of gas (Geha et al., 2012; Putman et al., 2021) and thus unsuitable for rotation curve analysis. Excitingly, a growing number of isolated, gas-rich faint dwarfs have recently been reported, showcasing small but detectable Hi reservoirs (\(M_{\rm HI}\approx 10^{5}-10^{6}\,\rm M_{\sun}\)) that match a faint stellar component (\(10^{4}\leq M_{\star}/\rm{M_{\sun}}\leq 10^{6}\); Irwin et al., 2007; Cole et al., 2014; McQuinn et al., 2015, 2020, 2021; Sand et al., 2015; Adams and Oosterloo, 2018; Brunker et al., 2019; Janesh et al., 2019; Hargis et al., 2020; Bennet et al., 2022; Rhode et al., 2023). Such low stellar masses (and thus low galaxy formation effects) and the presence of Hi (and thus of a dynamical tracer) could precisely provide the rotation curves needed to cleanly separate dark matter models from galaxy formation effects. Further, these objects are typically isolated 'field' dwarfs, removing the need to model environmental effects from more massive hosts.

However, a key puzzle remains before we can leverage this population of dwarf galaxies as a dark matter probe: none of them so far shows evidence for clear, ordered rotation in their Hi gas that can be easily exploited for dynamical modelling (Bernstein-Cooper et al., 2014; Adams and Oosterloo, 2018; McQuinn et al., 2021). It remains unknown whether this is due to unfortunate inclination in the few examples observed (i.e. near face-on orientations), to the observational challenges associated with working with such small galaxies (e.g. Read et al., 2016; Verbeke et al., 2017; Oman et al., 2019; Downing and Oman, 2023 for discussions), or to an intrinsic lack of ordered rotation in the Hi gas altogether at this mass scale.

In this paper, we address this puzzle using a suite of high-resolution cosmological 'zoomed' simulations of faint dwarf galaxies from the EDGE project (introduced in Agertz et al., 2020). Our sample of simulated galaxies matches the observed population of isolated faint Hi-rich dwarfs, in stellar masses, Hi masses and star formation activity (or lack thereof; Rey et al., 2019, 2020, 2022; Section 2).
In Section 3, we extract their gas and Hi kinematics, highlighting multiple examples of rotationally-supported gas kinematics and Hi rotation curves. This includes short-lived Hi discs rapidly dispersed by the energy input from massive stars, but also a long-lived example with near-circular rotation that could easily be modelled by standard mass-modelling tools (Section 4). We discuss the physical drivers of our results and their significance for future observational campaigns in Section 5.

## 2 The key regime of faint and Hi-rich dwarfs

We use the suite of faint (\(10^{4}\leq M_{\star}\leq 2\times 10^{6}\,\rm M_{\sun}\)) simulated dwarf galaxies presented in Rey et al. (2022), specifically focussing on the subset of five Hi-bearing dwarfs (\(10^{5}\leq M_{\rm HI}\leq 10^{6}\,\rm M_{\sun}\)). Next, we briefly summarize how each simulated galaxy is evolved to \(z=0\) using cosmological, zoomed hydrodynamical simulations (see Agertz et al., 2020; Rey et al., 2020 for more in-depth descriptions) and the characteristics of the simulated suite (see also Rey et al., 2019, 2020, 2022; Orkney et al., 2021, 2023).

All galaxies are evolved to \(z=0\) using cosmological, zoomed hydrodynamical simulations with the adaptive mesh refinement ramses code (Teyssier, 2002). The mass resolution inside the galaxy's Lagrangian region is enhanced using the genetIC software (Stopyra et al., 2021) to reach \(m_{\rm DM}=960\,\rm M_{\sun}\), while the hydrodynamical refinement strategy ensures a spatial resolution of 3 pc across the galaxy's interstellar medium (ISM; Agertz et al., 2020). The cosmological streaming of the Lagrangian patch of each galaxy is zeroed to reduce advection errors (Pontzen et al., 2021). We follow the formation of stars and the injection of energy, momentum, mass, and metals from asymptotic giant branch stars (AGB), type-II and type-Ia supernovae (SNeII, SNeIa) according to Agertz et al. (2020). We track the cooling of primordial and metal-enriched gas using equilibrium thermochemistry (Courty and Alimi, 2004), accounting for on-the-fly self-shielding (Aubert & Teyssier, 2010; Rosdahl & Blaizot, 2012) and heating from a spatially uniform, time-dependent UVB (updated from Haardt & Madau, 1996; see Rey et al., 2020 for further details). To derive Hi distributions, we evaluate the code's internal cooling function at every spatial position of the simulation and compute the neutral hydrogen fraction (Rey et al., 2022).

We track dark matter haloes over time using the hop halo finder (Eisenstein & Hut, 1998) and construct merger trees using the pynbody and tangos libraries (Pontzen et al., 2013; Pontzen & Tremmel, 2018). We centre on our galaxies using the shrinking sphere algorithm (Power et al., 2003) on the dark matter and shift the velocity frame to put the central 1 kpc at rest. We interpolate a single stellar population model (Girardi et al., 2010) over a grid of ages and metallicities to obtain the luminosities of individual stellar particles and compute the three-dimensional stellar half-light radius, \(r_{1/2,\rm 3D}\).

The EDGE simulated suite consists of ten low-mass dwarf galaxies (\(M_{\star}\leq 2\times 10^{6}\,\rm M_{\sun}\)) hosted in dark matter haloes with \(10^{9}\leq M_{200}\leq 3\times 10^{9}\,\rm M_{\sun}\) at \(z=0\) (Rey et al., 2022). At this mass-scale, all of our galaxies see their star formation truncated at high redshift (\(z\geq 4\)) following cosmic reionization, as their potential wells are then too shallow to accrete gas from the intergalactic medium (see e.g.
Efstathiou, 1992; Gnedin, 2000; Hoeft et al., 2006; Okamoto et al., 2008; Noh & McQuinn, 2014 for further discussions). Five of our dwarf galaxies assemble little dynamical mass at late times (i.e. after reionization) and have vanishing gas and Hi contents at \(z=0\). Conversely, five others grow enough at late times to start re-accreting gas from the hot intergalactic medium and eventually host a detectable Hi reservoir at \(z=0\) (Rey et al., 2020, 2022; see also Ricotti, 2009; Benitez-Llambay et al., 2015; Fitts et al., 2017; Lejon et al., 2017; Ledinauskas & Zubovas, 2018; Benitez-Llambay & Frenk, 2020; Benitez-Llambay & Fumagalli, 2021; Pereira Wilson et al., 2023 for further discussion on this re-accretion mechanism). The five Hi-bearing objects are the focus of this study. With \(10^{4}\leq M_{\star}\leq 2\times 10^{6}\,\rm M_{\sun}\) and \(10^{5}\leq M_{\rm HI}\leq 10^{6}\,\rm M_{\sun}\), they provide excellent simulated analogues to the observed population of low-mass, Hi-bearing dwarfs (Rey et al., 2022, fig. 2).

Three out of five galaxies re-accreted their gas reservoirs early enough to re-ignite star formation several billion years ago and have been continuously forming stars at an average rate of \(10^{-3}\,\rm M_{\sun}\,yr^{-1}\), similar to the star formation histories inferred for star-forming, low-mass dwarfs (Clementini et al., 2012; Weisz et al., 2012; McQuinn et al., 2015). Another galaxy re-ignited star formation just before \(z=0\) (at \(z=0.03\), 500 Myr ago), after several billion years of quiescent but gas-rich evolution. The last galaxy is yet to reignite star formation despite hosting a significant gas reservoir (see Rey et al., 2020; Benitez-Llambay & Fumagalli, 2021; Pereira Wilson et al., 2023 for the physical mechanisms affecting the timing of star formation reignition). These differences in star-formation activity are directly reflected in our dwarfs' Hi properties. Star-forming dwarfs show strongly time-varying, asymmetric Hi reservoirs that are often spatially offset from their stellar body (Rey et al., 2022). Quiescent dwarfs, by contrast, show more stable, more spherical and more aligned Hi contents over time (Rey et al., 2022). As we will see next, these distinctions are also reflected in the stability and structure of their gas and Hi kinematics.

Furthermore, all of our simulated dwarf galaxies exhibit steep and rising dark matter density profiles around \(r_{1/2,\rm 3D}\). This was first shown in Orkney et al. (2021) using higher-resolution re-simulations (\(m_{\rm DM}=120\,\rm M_{\sun}\)) of a subset of the galaxies studied here. This conclusion also holds at the resolution of this work (\(m_{\rm DM}=960\,\rm M_{\sun}\)), with Figure 1 showing the spherically-averaged dark matter density profile at \(z=0\) using 100 log-spaced bins. All profiles are consistent with an increasing dark matter density towards the centre, with a 'cuspy' logarithmic slope (\(\approx-1\)) until the limited resolution of the simulations starts affecting the dynamics of dark matter particles (grey band; see Orkney et al., 2021, appendix B for further discussions). Figure 1 emphasizes that we have reached a critical regime. At this galactic mass-scale, dynamical effects and star formation-driven outflows can lower central dark matter densities in the very centre of dwarfs, but are inefficient at forming large (i.e. \(\approx r_{1/2,\rm 3D}\)) and flat (i.e. constant-density) dark matter cores (see Orkney et al., 2021 for further discussion).
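For concreteness, the profile computation described above (spherical averaging in 100 log-spaced radial bins) amounts to a few lines of numpy. The sketch below uses raw particle arrays and illustrative bin limits rather than the simulation's actual data structures:

```python
import numpy as np

def dm_density_profile(pos, mass, r_min=0.01, r_max=10.0, n_bins=100):
    """Spherically-averaged density profile from halo-centred particle
    positions (kpc) and masses (Msun), in log-spaced radial bins."""
    r = np.linalg.norm(pos, axis=1)
    edges = np.logspace(np.log10(r_min), np.log10(r_max), n_bins + 1)
    m_shell, _ = np.histogram(r, bins=edges, weights=mass)
    vol_shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    r_mid = np.sqrt(edges[1:] * edges[:-1])   # geometric bin centres
    return r_mid, m_shell / vol_shell         # density in Msun / kpc^3
```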
Any observational evidence for \(r_{1/2,\rm 3D}\)-sized dark matter cores in such faint dwarfs (e.g. Amorisco, 2017; Sanders et al., 2018; Malhan et al., 2022) thus becomes increasingly difficult to explain through purely astrophysical effects and could rather point to new dark matter physics (see e.g. Orkney et al., 2022 for further discussion). The gas reservoirs of faint and Hi-bearing dwarfs provide a unique opportunity to obtain such observational insights, provided that Hi kinematics can be harnessed to infer the structure of host dark matter haloes, which we now quantify.

## 3 Diverse and variable Hi kinematics in faint dwarfs

Figure 2 illustrates the diversity of gas and Hi kinematics found across our sample of simulated gas-rich faint dwarfs. The top panels show an actively star-forming dwarf; the bottom a quiescent dwarf; each at three output times spanning the last billion years of their evolution (selected to be the same output times as in Rey et al., 2022, fig. 1). The maps plot the gas velocity weighted by Hi mass along the same line of sight in all panels. Contours show constant \(10^{18}\), \(10^{19}\), \(10^{20}\) and \(10^{21}\) cm\({}^{-2}\) Hi column densities (black) and \(r_{1/2,\rm 3D}\) (blue circles).

Figure 1: Dark matter density profiles across our suite of simulated, Hi-bearing faint dwarf galaxies. At this galactic mass scale, dynamical effects and supernova-driven outflows naturally arising in \(\Lambda\)CDM cosmologies can reduce central dark matter densities but are inefficient at forming large (\(r_{1/2,\rm 3D}\)-sized; grey box) and flat dark matter cores. Inferring the structures of dark matter haloes hosting these faint dwarfs, for example through Hi rotation (Figure 2), thus holds great promise to distinguish whether galaxy formation effects or new dark matter interactions drive dark matter heating in dwarfs.

Focussing first on the star-forming dwarf (top panels), we recover Hi distributions that are strongly and rapidly varying in time (see Rey et al., 2022 for an in-depth quantification). In these low-mass systems, stellar feedback drives asymmetric and disturbed Hi morphologies (top, middle), that are often offset from the galaxy's stellar distribution (e.g. top, left), and that can become temporarily unobservable following powerful outflow episodes (top right; \(M_{\rm HI}\) marked on each panel). This dynamic and time-varying behaviour is reflected in the gas kinematics. Over one billion years, gas travels in bulk flows of opposite directions within \(r_{1/2,\rm 3D}\) (top, left and right), but also exhibits a snapshot of potential rotation with a disturbed but apparent gradient in line-of-sight velocity around \(r_{1/2,\rm 3D}\) (top, centre).

In contrast, the quiescent system (bottom row) shows a much more stable gas content, slowly accumulating Hi gas over the last billion years (growing \(M_{\rm HI}\) at constant \(M_{\star}\); see also Rey et al., 2020, 2022). Furthermore, this Hi reservoir shows a distinctly flattened morphology, with a positive-to-negative line-of-sight velocity gradient across \(r_{1/2,\rm 3D}\) at all time stamps.

These two examples summarize well the more complete and quantitative investigation presented in the next section. Star-forming low-mass dwarfs host short-lived instances of Hi rotation across their evolution, but the small and tenuous discs are rapidly disrupted by the energy input from newborn massive stars.
Quiescent systems have more stable Hi reservoirs and kinematics, increasing (but not guaranteeing) their chances to host organized and long-lived Hi rotation, ideal for inferences of the structure of their host dark matter halo.

## 4 Rotating Hi discs in faint dwarfs and their physical drivers

We now aim to gain more quantitative insights into the gas rotational support of our galaxies. In particular, we wish to (i) establish whether organized gas rotation can dominate thermal and turbulent motions, and thus be clearly identified observationally; (ii) test whether this rotation is close to circular and in equilibrium, and thus easy to relate to the host gravitational potential; and (iii) gain insights into the prospects of characterizing such rotating gas with radio interferometers.

Gas contents and kinematics in our galaxies can be strongly varying on timescales comparable to the local dynamical times and to the lifetimes of SNeII (both \(\approx 10\) Myr). This is much shorter than the cadence with which we save simulation outputs for each galaxy (\(\approx 100\) Myr), making it challenging to establish causal evolutionary trends from one snapshot to the next (e.g. Figure 2, top panels are difficult to relate to one another). We thus start by statistically flagging times of potential organized gas rotation across each dwarf's history in Section 4.1, and then study the Hi kinematics at those times in more detail in Sections 4.2 and 4.3.

Figure 2: Maps of the gas velocity, Hi-mass weighted along the same line of sight for a star-forming (top) and quiescent (bottom) simulated dwarf. From left to right, we show the galaxies at three different times. Actively star-forming dwarfs (top) show irregular and time-varying Hi distributions (the black contours show \(10^{18}\), \(10^{19}\), \(10^{20}\) and \(10^{21}\) cm\({}^{-2}\) column densities) following their cycle of gas accretion and stellar feedback (Rey et al., 2022). This dynamic behaviour is reflected in their gas kinematics, which are often disturbed and show rotational support in short-lived episodes (e.g. top, centre; Figures 3 and 4). In contrast, galaxies with quieter histories (e.g. bottom is yet to reignite star formation after quenching at \(z=4\)) show stable Hi reservoirs within their half-light radius (blue circles) that can host long-lived, stable Hi discs (Figure 5).

### 4.1 Existence and prevalence of gas rotation

We start by computing, for each simulated snapshot, profiles of the tangential gas velocity \(v_{\phi,g}\), the circular velocity \(v_{\rm circ}\), the isothermal sound speed \(c_{s}\) and the gas turbulent velocity \(\sigma_{\rm turb,\,g}\) to quantify rotational, gravitational, thermal and turbulent support, respectively (see Appendix A for formal definitions). We compute projected \(v_{\phi,g}\), \(c_{s}\) and \(\sigma_{\rm turb,\,g}\) radial profiles viewed face-on (i.e. in the plane of the disc) in 100 bins linearly spaced between 0 and 2 kpc and construct the effective velocity dispersion of the gas \(\sigma_{\rm eff}=\sqrt{c_{s}^{2}+\sigma_{\rm turb,\,g}^{2}}\). The three-dimensional \(v_{\rm circ}\) profile is derived from the full gravitational potential in the same radial range, sourced by the combination of dark matter, gas and stars (but strongly dominated by the dark matter at all radii for these faint objects).
Figure 3: Time evolution of \(v_{\phi,g}/v_{\rm circ}\) (left; i.e. a proxy for gas rotation in equilibrium with the gravitational potential) and \(v_{\phi,g}/\sigma_{\rm eff}\) (right; i.e. comparing rotational-to-pressure support) across our suite of simulated Hi-bearing dwarfs after reionization. Star-forming galaxies (blue) show instances where they host rotationally-supported Hi kinematics close to equilibrium with the gravitational potential (\(v_{\phi,g}\approx v_{\rm circ}\) and \(v_{\phi,g}\geq\sigma_{\rm eff}\); marked with diamonds). These episodes are short-lived, as the Hi is rapidly heated and dispersed by stellar feedback (Figure 4). Quiescent galaxies (brown lines) have more stable kinematics, with one hosting long-lived, stable Hi rotation that provides an ideal target for dark matter inferences (Figure 5).

Figure 3 shows the evolution of \(v_{\phi,g}/v_{\rm circ}\) and \(v_{\phi,g}/\sigma_{\rm eff}\) evaluated at 150 pc, where the highest column density Hi is most often found (Rey et al., 2022). We only show the late-time evolution of these dwarfs (\(z\leq 2\)), i.e. when they host detectable Hi contents (Rey et al., 2020, 2022), and omit their earlier phase where saved simulation outputs are sparser and miscentering due to mergers makes \(v_{\phi,g}\) and \(v_{\rm circ}\) even noisier. Trends in Figure 3 are quantitatively unchanged if measuring velocities at 100, 200 pc or at each galaxy's \(r_{1/2,\rm 3D}\) instead.

Focussing first on \(v_{\phi,g}/v_{\rm circ}\) (left-hand panels), we recover that gas kinematics are strongly variable in time, without clear evolutionary trends for star-forming low-mass dwarfs (blue) after the re-ignition of their star formation (marked by stars in Figure 3). This is expected as stellar feedback efficiently disrupts the ISM in these shallow potential wells (\(v_{\rm circ}\approx 10\,\rm km\,s^{-1}\) at \(r_{1/2,\rm 3D}\)). Nonetheless, some peaks approach \(v_{\phi,g}/v_{\rm circ}\approx 1\) (grey line in Figure 3), indicating short-lived episodes where the rotational velocity \(v_{\phi,g}\) is close to equilibrium with \(v_{\rm circ}\) sourced by the underlying gravitational potential. Furthermore, during these episodes, rotational motions can dominate over turbulent support, with \(v_{\phi,g}/\sigma_{\rm eff}\geq 1\) (right-hand panels).

To quantify this further, we extract all time instances when \(0.75\leq v_{\phi,g}/v_{\rm circ}\leq 1.25\) (i.e. loosely bracketing gas in circular rotation, acknowledging that \(v_{\phi,g}\) is not yet corrected for asymmetric drift pressure terms; see Section 4.2 and Appendix A), \(M_{\rm HI}\geq 10^{4}\,\rm M_{\sun}\) (i.e. a very optimistic detection threshold for radio surveys) and \(v_{\phi,g}/\sigma_{\rm eff}\geq 0.75\) (i.e. rotation loosely dominating over thermal and kinetic turbulence). These cuts should not be interpreted quantitatively, but rather as a helpful way to flag potential galactic discs in the noisy kinematics of our sensitive objects. We mark these times with diamonds in Figure 3, finding at least two examples satisfying these conditions per star-forming dwarf. As we will see in Section 4.2, these snapshots can showcase interpretable but short-lived Hi rotation curves.
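As a reproducible summary of this selection, a minimal sketch of the disc-flagging cuts could read as follows (the thresholds are those quoted above; the function and array names are illustrative):

```python
import numpy as np

def flag_disc_candidates(t, v_phi, v_circ, c_s, sigma_turb, m_hi):
    """Flag snapshot times with plausible, near-equilibrium gas rotation.
    Inputs are arrays over snapshots, with velocities (km/s) measured at a
    fixed radius (150 pc in the text) and m_hi the HI mass in Msun."""
    sigma_eff = np.sqrt(c_s**2 + sigma_turb**2)  # thermal + turbulent support
    near_circular = (v_phi / v_circ >= 0.75) & (v_phi / v_circ <= 1.25)
    rotation_dominated = v_phi / sigma_eff >= 0.75
    detectable = m_hi >= 1e4
    return t[near_circular & rotation_dominated & detectable]
```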
Contrasting again with our star-forming examples, quiescent dwarfs (brown lines in Figure 3) show more stable evolution over time and clearer evolutionary trends. One galaxy (second row) lacks evidence for gas rotation (\(v_{\phi,g}/v_{\rm circ}\approx 0\)) at all times, but the other (fourth row; also bottom panels of Figure 2) is regularly approaching \(v_{\phi,g}/v_{\rm circ}\approx 1\) and \(v_{\phi,g}/\sigma_{\rm eff}\geq 1\), notably over its entire last billion years of evolution. As we will see in Section 4.3, this galaxy hosts a stable, long-lived Hi disc with an easy-to-interpret rotation curve.

Our analysis thus shows that intrinsic ordered gas rotation should be expected in low-mass, Hi-bearing dwarfs. However, stellar feedback in star-forming objects can efficiently disrupt small gas discs (\(v_{\rm circ}\approx 10\,\rm km\,s^{-1}\)), making them short-lived and rare. This provides a natural explanation for the lack of observed rotation in the faintest dwarfs (e.g. Bernstein-Cooper et al., 2014; Adams and Oosterloo, 2018; McQuinn et al., 2021). Quiescent, Hi-bearing dwarfs, that are yet to re-ignite star formation after cosmic reionization, offer a contrastingly calmer and more stable environment. This promotes well-ordered and long-lived gas rotation, with greater prospects for dark matter science using rotation curves, which we quantify now.

### 4.2 Short-lived Hi discs in star-forming low-mass dwarfs

To quantify Hi kinematics in the noisy, star-forming dwarfs, we visually inspect individual rotation curves and Hi column density maps at the times flagged to have higher probabilities of galactic gas discs (Figure 3, diamonds). Appendix B presents the full results of this systematic inspection, showcasing very diverse Hi distributions with complex spatial, kinematic and thermodynamical structures due to stellar feedback. But Figure 4 (and other examples in Appendix B) show that, even if short-lived, organized Hi rotation can occur in these systems.

Figure 4 shows the total gas velocity profiles (left panel), surface density and temperature profiles (middle panels) and Hi column density maps viewed face-on and edge-on (right panels) of 'Halo 600' at \(t=13.07\) Gyr. At this time, the Hi distribution is spatially extended (right panels), reaching \(N_{\rm HI}\geq 3\times 10^{19}\) cm\({}^{-2}\) outside \(r_{1/2,\rm 3D}\) (dotted and dashed lines in left panel). Such surface brightnesses are at the limit of what can be achieved by deep follow-ups with current-generation interferometers in faint dwarfs (e.g. Adams and Oosterloo, 2018). Furthermore, despite showcasing holes and being lopsided at large radii, the Hi distribution is smooth and regular in the inner galaxy. In fact, the \(v_{\phi,g}\) profile (left, red) follows the rise of \(v_{\rm circ}\) (blue) within 200 pc, as expected from equilibrium circular orbits. The Hi gas also exhibits a close-to-exponential radial surface brightness profile (middle, bottom), reminiscent of classical rotation curves of galactic discs. However, rotation only marginally dominates over the primary source of gas velocity dispersion (thermal pressure; \(c_{s}\) in gold), and only at specific radii. Extracting and claiming a rotational signal from moment maps will thus be challenging once observational challenges associated with such faint and small objects are folded in (discussed further in Section 5).
Nonetheless, computing the standard pressure correction to \(v_{\phi,g}\) (also called asymmetric drift; see Appendix A for further details), we obtain the Hi rotational velocity (\(v_{\rm rot,g}\); red, dashed), which accurately recovers \(v_{\rm circ}\) deep into the diffuse Hi regime (\(N_{\rm HI}\geq 3\times 10^{19}\,\rm cm^{-2}\); \(\approx 2r_{1/2,\rm 3D}\)). These results are highly promising and show that, although rare and potentially difficult to identify, Hi rotation curves can be harnessed for dark matter science in star-forming low-mass dwarfs.

Extending this analysis further is complicated by the unusual thermal structure and density profile of the gas compared to higher-mass galaxies. The temperature is steadily rising when moving to the outskirts (top, middle), with \(c_{s}\) following accordingly. Already at \(\approx 2\,r_{1/2,\rm 3D}\), thermal pressure fully dominates rotational signals and Hi has transitioned from colder (\(T\approx 10^{3}\,\rm K\)) to warmer temperatures (\(\approx 10^{4}\,\rm K\)). This transition also materializes as a change of slope in the gas density profile (middle, bottom). This structure is naturally explained by the rising importance of the cosmic ultraviolet background (UVB) at low galactic masses. Following cosmic reionization, the UVB provides a source of ionization and heating that maintains diffuse gas in photo-ionization equilibrium around \(T\approx 10^{4}\,\rm K\). The galaxies considered here have potential wells just deep enough to accrete fresh gas from their diffuse surroundings (e.g. \(c_{s}\) is only slightly below \(v_{\rm circ}\) at large radii). Gas can self-shield and cool below \(10^{4}\,\rm K\) in the centre of the dwarf (Rey et al., 2020), but gas in the outskirts rapidly transitions to \(\approx 10^{4}\,\rm K\) in a balance between gravity, cooling from metal lines and photo-heating from the UVB (e.g. Ricotti, 2009; Rey et al., 2020; Benitez-Llambay and Frenk, 2020). The detection of warm Hi (\(10^{4}\,\rm K\)) in projection thus cannot be unequivocally attributed to photo-heating from stars in these faint objects, particularly when undertaking deep observations probing the diffuse gas (e.g. Adams and Oosterloo, 2018).

Establishing and interpreting Hi rotation is thus likely to be challenging in star-forming faint dwarfs. Although one can recover the gravitational potential with access to all simulated information to compute thermal support, how to achieve this feat from Hi datacubes is less clear. The dominance of pressure terms over rotation might point to the need to introduce new approaches to infer dark matter profiles (e.g. starting from hydrostatic equilibrium rather than axisymmetric rotation; Patra, 2018). Furthermore, this also yields thicker Hi discs compared to more massive disc galaxies. Once viewed inclined, the Hi linewidth from thicker discs receives contributions along the line of sight, leading to a potential mismatch between the rotation velocity measured from the Hi and the intrinsic value. And even if clear rotation can be established, the sensitivity of these galaxies to stellar feedback makes a detailed assessment of rotation curve systematics essential for robust dark matter inferences (see Appendix B for examples of out-of-equilibrium flows, non-circular motions, feedback-driven holes and Read et al., 2016; Oman et al., 2019; Downing and Oman, 2023 for further discussion).
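The pressure (asymmetric drift) correction invoked above is defined in the paper's Appendix A, which is not reproduced here. As a sketch of one commonly used form (e.g. Iorio et al. 2017), \(v_{\rm rot}^{2}=v_{\phi}^{2}-\sigma_{\rm eff}^{2}\,\mathrm{d}\ln(\rho\,\sigma_{\rm eff}^{2})/\mathrm{d}\ln R\), with no claim that this matches the paper's exact definition:

```python
import numpy as np

def pressure_corrected_vrot(r, v_phi, sigma_eff, rho):
    """Asymmetric-drift-corrected rotation velocity, assuming
    v_rot^2 = v_phi^2 - sigma_eff^2 * dln(rho * sigma_eff^2) / dlnR.
    r (kpc), velocities (km/s) and rho (a gas density profile) share a grid."""
    dln_term = np.gradient(np.log(rho * sigma_eff**2), np.log(r))
    v_rot_sq = v_phi**2 - sigma_eff**2 * dln_term
    return np.sqrt(np.clip(v_rot_sq, 0.0, None))  # guard against noisy bins
```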
Performing these quantifications would be best undertaken by generating mock Hi datacubes from our simulated snapshots to assess the robustness of standard rotation-curve fitting methods (e.g. 3dbarolo; Di Teodoro and Fraternali, 2015) and understand whether new approaches would be better suited to recover dark matter information. We are currently developing a package that can easily incorporate different pressure terms and treat the impact of disc thickness on the line-of-sight velocity distribution, and we leave the quantification of these uncertainties to future work. In the next section, we instead focus on the easier-to-interpret and long-lived Hi rotation curves that can be found in quiescent dwarfs.

### 4.3 Long-lived Hi discs in quiescent dwarfs

#### 4.3.1 Circular, equilibrium Hi rotation in a low-mass dwarf

Figure 5 shows the rotation curve at \(z=0\) (left panel) and the gas temperature and surface density radial and vertical profiles (right panels) for the quiescent dwarf hosting a clear, long-lived rotation signal (Figure 3, fourth row). In this example, \(v_{\phi,g}\) accurately tracks \(v_{\rm circ}\) without corrections up to \(\approx 10\,\rm km\,s^{-1}\), indicating near-perfect circular rotation in equilibrium with the gravitational potential. Compared to our star-forming example (Figure 4), the inner gas is cold (right, top panels) and rotation strongly dominates thermal support and turbulence (\(c_{s}\) and \(\sigma_{\rm turb,\,g}\) in gold and brown) in the inner galaxy. Pressure corrections (red, dashed) are subdominant at all radii, only becoming significant when reaching more diffuse Hi (\(\geq r_{1/2,\rm 3D}\); dotted line) brought to warmer temperatures by the UVB. The lack of disturbances from star formation in this object also ensures a regular, symmetric and well-ordered Hi distribution (recall Figure 2), showcasing close-to-exponential Hi radial and vertical profiles (bottom panels). The vertical profile (right panels) shows a thickened Hi disc (aspect ratios approaching 1:2; indicative exponential scale lengths in grey), as expected from the rising importance of pressure support towards larger radii.

To summarize, this quiescent galaxy hosts a classical Hi rotation curve, at column densities achievable by current-generation radio interferometers (\(N_{\rm HI}\geq 3\times 10^{19}\,\rm cm^{-2}\), dashed). This rotation curve is comparatively easy to interpret, holding great promise for extracting unbiased estimates of the inner dark matter density profiles. Even further, we show in Appendix C that a similar rotation structure is present over the last two billion years of evolution of this galaxy (see also diamonds in Figure 3), with the cold and circular Hi rotation curve being in place for the last \(500\,\rm Myr\). The excellent agreement between the Hi rotation and the gravitational potential is thus long-lived and little disrupted.

Our results strongly motivate targeting low-mass dwarfs with quieter evolutions when searching for high-quality Hi rotation curves. Such quiescent candidates have already been reported (e.g. Janesh et al., 2019) and their follow-up with deep and high-resolution Hi interferometers should be given high priority.
Figure 4: Hi kinematics in an example star-forming, Hi-rich low-mass dwarf galaxy ('Halo 600'), singling out a time of ordered Hi rotation (\(t=11.9\,\rm Gyr\)). The face-on 2D tangential velocity profile of the gas (left, red) follows the rise of the 3D rotation curve (left, blue) sourced by the gravitational potential, with the Hi distribution extending to \(\approx 2r_{1/2,\rm 3D}\) and showcasing a cold and close-to-exponential disc structure (middle panels; indicative exponential scale lengths in grey). Stellar feedback drives asymmetric Hi features (right panels) in the outskirts, and photo-heating from the UV background leads to a rising thermal support (left, \(c_{s}\) in gold and top, middle). Despite these features, traditional pressure-support corrections to the gas velocity (dashed red; also called asymmetric drift, see Appendix A) can accurately recover \(v_{\rm circ}\) out to \(N_{\rm HI}\geq 3\times 10^{19}\,\rm cm^{-2}\).

However, our analysis also shows that a lack of star formation activity is insufficient to guarantee well-behaved rotation curves - the other quiescent galaxy in our suite does not exhibit rotation (second row in Figure 3), and star-forming examples lack clear signals during their quiescent periods (notably before the re-ignition of their star formation, marked by a star in Figure 3, ending several billion years of quiescent evolution). We thus turn next to understanding what leads to long-lived Hi discs in this specific object.

#### 4.3.2 The link between Hi rotation and the shape of the host dark matter halo

A defining feature of our quiescent galaxy with a long-lived Hi disc lies in the properties of its host dark matter halo. The specific merger history of this object, particularly a major interaction at \(z\approx 4\), leads to a strongly oblate dark matter halo shape compared to the more triaxial or prolate shapes across the rest of the simulated suite (Orkney et al., 2023). We link these two aspects in Figure 6, visualizing the alignment between the Hi angular momentum and the halo shape. We plot the Hi column density map at \(z=0\), oriented side-on relative to the angular momentum of gas with a neutral hydrogen fraction \(\geq 0.5\), and overlay the 3D halo shape computed exclusively from the dark matter particles as in Orkney et al. (2023) (grey mesh; whiter towards the foreground, blacker towards the background). Note that Orkney et al. (2023) derive halo shapes using higher-resolution re-simulations (\(m_{\rm DM}=120\,\rm M_{\sun}\)) of the galaxies studied in this work - we have checked that (i) halo shapes at \(r\geq 100\,\rm pc\) and (ii) the presence and orientation of the gas disc at \(z=0\) are both consistent between the two resolutions. This also validates that the presence and formation of the Hi disc are physical rather than stochastic or resolution-limited.

The Hi disc and the flattened axis of the oblate dark matter halo are exceptionally well aligned in Figure 6. This is best understood by the naturally axisymmetric geometry of a significantly oblate halo. Such a geometry induces torques that align accreting gas along its revolution axis, a process best studied in the case of axisymmetric torques induced by galactic stellar discs (see e.g. Danovich et al., 2015 for a discussion). Here, these torques are sourced by the dark matter halo itself, as the gas and stars contribute only marginally to the gravitational potential.
Figure 5: Classical Hi disc at \(z=0\) in a quiescent low-mass dwarf ('Halo 624'). The tangential velocity profile (left, red) accurately tracks the rise of the rotation curve (blue) within \(r_{1/2,\rm 3D}\) (dotted). Correcting the tangential velocity for pressure support (dashed) helps model the rising thermal support (gold) and recover \(v_{\rm circ}\) further into the outskirts of the Hi distribution (dashed shows \(N_{\rm HI}\geq 3\times 10^{19}\,\rm cm^{-2}\)), where the disc gets thicker (right panels). This long-lived and easy-to-interpret Hi rotation curve is unique to this object, driven by the defining shape of the host dark matter halo (Figures 6 and 7). Characterizing such a rotation curve would prove invaluable for obtaining robust inferences of inner dark matter density profiles.

Figure 6: Hi column density map oriented edge-on with respect to the gas for the galaxy hosting stable rotation. The host dark matter halo is strongly oblate (white-grey-black mesh showcasing the 3D shape), defining an axisymmetric geometry well aligned with the revolution axis of the Hi disc. This configuration induces torques that align infalling gas into the plane of the Hi disc (Figure 7) and favours its growth.

To visualize this torque in action, Figure 7 shows the orientation of the gas angular momentum in a given radial shell compared to the angular momentum of the gas in the inner 100 pc (which is almost purely Hi; Figure 5). Starting from outside the virial radius (\(\geq 30\) kpc), gas is accreted with significant angular momentum but orthogonal to the inner disc (\(\theta\geq 50^{\circ}\)), before stabilizing around this angle between 8 and 20 kpc. Towards smaller radii, however, the gas gradually gets torqued to align with the inner angular momentum of the galaxy (\(\theta\leq 10^{\circ}\) within \(\approx 3r_{1/2,\rm 3D}\)), at which point it shares the same revolution axis as that of the oblate dark matter halo shape (Figure 6). This gradual realignment of gas throughout the halo, starting at radii well outside the galaxy, firmly establishes the causal link between the halo shape and the presence of the Hi disc.
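The measurement behind Figure 7 reduces to comparing angular-momentum directions between radial shells and the inner disc; a minimal array-based sketch (our own illustrative version, not the analysis pipeline itself) is:

```python
import numpy as np

def misalignment_profile(pos, vel, mass, shell_edges, r_inner=0.1):
    """Angle (degrees) between the gas angular momentum in each radial shell
    and that of the inner disc (r < r_inner, in kpc), from halo-centred
    gas positions (kpc), velocities (km/s) and masses (Msun)."""
    r = np.linalg.norm(pos, axis=1)
    L = np.cross(pos, vel) * mass[:, None]  # angular momentum of each element

    def direction(mask):
        j = L[mask].sum(axis=0)
        return j / np.linalg.norm(j)

    j_inner = direction(r < r_inner)
    theta = []
    for lo, hi in zip(shell_edges[:-1], shell_edges[1:]):
        j_shell = direction((r >= lo) & (r < hi))
        cos_angle = np.clip(j_shell @ j_inner, -1.0, 1.0)
        theta.append(np.degrees(np.arccos(cos_angle)))
    return np.asarray(theta)
```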
## 5 Summary and Discussion

We have analysed the gas and Hi kinematics of simulated low-mass (\(10^{4}\leq M_{\star}\leq 2\times 10^{6}\,\rm M_{\sun}\)) dwarf galaxies, first introduced in Rey et al. (2019, 2020) and evolved to \(z=0\) using high-resolution (\(\approx 3\) pc) zoomed cosmological simulations with the EDGE galaxy formation model (Agertz et al., 2020). We studied five dwarf galaxies that are close analogues to the observed population of faint, but gas-rich and Hi-bearing dwarfs (\(10^{5}\leq M_{\rm HI}\leq 10^{6}\,\rm M_{\sun}\); see Rey et al. 2022 for a more detailed comparison with the observed population; Irwin et al. 2007; Cole et al. 2014; McQuinn et al. 2015, 2020, 2021; Sand et al. 2015; Adams and Oosterloo 2018; Brunker et al. 2019; Janesh et al. 2019; Hargis et al. 2020; Bennet et al. 2022; Rhode et al. 2023).

At this galactic mass-scale, galaxy formation effects within \(\Lambda\)CDM are inefficient at dynamically heating dark matter into flat and large (\(\approx r_{1/2,\rm 3D}\)) dark matter cores, leading to steep dark matter density profiles at \(r_{1/2,\rm 3D}\) in all of our dwarfs (Figure 1). Inferring the structure of dark matter haloes in this regime, for example through Hi rotation curves, thus holds great promise for pinpointing the relative contributions of dark matter microphysics and galaxy formation in driving dark matter heating.

We find that simulated low-mass dwarfs that are actively forming stars undergo strong variability in their Hi distributions, driven by the cycle of gas accretion and efficient stellar feedback (Rey et al., 2022). This variability is reflected in their Hi kinematics, showcasing disturbed and rapidly changing gas flows (Figure 2) as supernovae easily disrupt gas dynamics in these shallow potential wells (\(v_{\rm circ}\) and \(v_{\phi,g}\approx 10\,\rm km\,s^{-1}\) at \(r_{1/2,\rm 3D}\)). We find occasional, short-lived (\(\ll 150\) Myr) episodes of organized Hi rotation in these star-forming objects (Figure 3), for which rotation curves can recover the underlying gravitational potential (Figure 4). But the prevalence of out-of-equilibrium feedback-driven gas flows and the (comparatively) high velocity dispersions due to thermal support (\(\sigma_{\rm eff}\approx 10\,\rm km\,s^{-1}\)) lead to difficult-to-interpret rotation curves (see also Appendix B). Clear and robust Hi rotation that can be harnessed for dark matter science is thus expected to be rare in these active systems, aligning with the lack of observed rotation in the handful of low-mass star-forming dwarfs with detailed Hi observations (e.g. Bernstein-Cooper et al. 2014; Adams and Oosterloo 2018; McQuinn et al. 2021).

Contrastingly, two of our low-mass dwarfs undergo significantly quieter evolution, with several billion years without forming new stars (see also Rey et al. 2020). The lack of star-formation activity since \(z\approx 4\) leads to more stable Hi reservoirs in these systems, with better organized kinematics (Figures 2 and 3). In particular, one of our quiescent Hi-bearing dwarfs showcases a long-lived, close-to-circular Hi rotation curve (Figure 5 and Appendix C) that could be readily and robustly interpreted for dark matter inferences. We tie the existence of this long-lived rotation curve to the specifically oblate shape of its host dark matter halo, which plays a key role in building the final Hi disc by torquing circumgalactic gas to align with its axisymmetric revolution axis (Figures 6 and 7).

Our results point to Hi rotation being generally rare, sensitive and potentially challenging to interpret in faint Hi-bearing dwarfs. But we stress that the mere existence of several examples of ordered and easy-to-interpret Hi rotation curves across a suite of only five simulated galaxies is highly promising and strongly motivates further observational and theoretical investigations. In particular, our findings highlight clear avenues to find 'golden eggs' enabling robust dark matter inferences, i.e. targeting low-mass dwarfs that (i) have been quiescent for an extended period of time and have avoided rapid disruption of their gas flows by stellar feedback from newborn stars; and (ii) are hosted in an oblate dark matter halo whose axisymmetric geometry promotes disc formation.

An extended gap in star formation and a quiescent period can be inferred from a colour-magnitude diagram and a lack of young, blue stars when deep photometric imaging is available. Candidates for such quiescent low-mass dwarf galaxies have in fact already been reported (Janesh et al., 2019, although see also Rhode et al. 2023) but, unfortunately, the shape of their host dark matter halo cannot be known a priori (or at all). Oblate dark matter halo shapes are statistically rarer amongst the population of high-mass dark matter haloes (e.g. Jing and Suto 2002; Maccio et al. 2007; Schneider et al. 2012; Bonamigo et al. 2015).
But their fraction is steadily rising towards lower halo masses (e.g. \(\approx 20\) per cent of haloes with \(M_{200}=10^{12}\,\rm M_{\sun}\) compared to \(\approx 10\) per cent for \(M_{200}=10^{13}\,\rm M_{\sun}\); Vega-Ferrero et al. 2017). A statistical quantification of halo shapes across the low-mass dwarf galaxy population is currently missing, but these estimates are in line with the (very) small number statistics of one-out-of-five oblate haloes in our suite. Our established link between halo shapes and gas rotation in small dark matter haloes makes quantitatively refining these numbers particularly pressing.

Figure 7: Orientation of the angular momentum of the gas in a radial shell compared to that in the inner 100 pc, for the same galaxy as in Figure 6. Gas outside the virial radius (grey line) is accreted tilted compared to the inner angular momentum (\(\theta\geq 50^{\circ}\)), but is gradually and coherently torqued with decreasing radius to align with the inner Hi disc (\(\theta\approx 0\) for \(r\leq 300\) pc). The revolution axis of the Hi disc also coincides with that of the dark matter halo shape (Figure 6).

When detected, observationally characterizing Hi rotation curves in such small and faint systems is likely to pose a difficult but achievable challenge. Inferring the inner slope of the density profile will require exquisite spatial resolution to resolve multiple radial bins within \(r_{1/2,\rm 3D}\) (\(\approx 200\) pc) and sufficient channel resolution to capture the rise in velocity within \(v_{\rm circ}(r_{1/2,\rm 3D})\approx 10\,\rm km\,s^{-1}\) (see McQuinn et al., 2021 for further discussion of observational challenges and new avenues to characterize Hi rotation). Current configurations of the Very Large Array (VLA) are already approaching these technical requirements for nearby systems (\(\approx 100\) pc and \(0.8\,\rm km\,s^{-1}\) of spatial and spectral resolution in Leo P at \(\approx 1.7\) Mpc; Bernstein-Cooper et al., 2014), although concurrently achieving sensitivities comparable to those reported in our simulated suite (\(N_{\rm HI}\geq 5\times 10^{19}\,\rm cm^{-2}\)) is challenging.
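For a rough sense of scale (our own back-of-the-envelope estimate, not a figure from the paper), a physical length \(\ell\) at distance \(D\) subtends an angle \(\theta\approx 206265''\,(\ell/D)\); taking the Leo P distance quoted above as an illustration, \(\ell=200\) pc at \(D\approx 1.7\) Mpc gives \(\theta\approx 24''\), so resolving several radial bins within \(r_{1/2,\rm 3D}\) requires synthesized beams of a few arcseconds.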
Many galaxy formation models, including our own, now converge in predicting that the low stellar masses of faint dwarfs do not provide enough SN energy to fully heat their central dark matter into a large and flat dark matter core (e.g. Penarrubia et al., 2012; Di Cintio et al., 2014; Chan et al., 2015; Onorbe et al., 2015; Tollet et al., 2016; Lazar et al., 2020; Orkney et al., 2021). This study provides the first link between these results and the efficiency of Hi disc formation at this galactic mass-scale. Despite these achievements and the accurate modelling of supernova explosions in our simulations, further quantifications are required to better understand the robustness of our predicted Hi kinematics. In particular, photo-ionization feedback can lead to a more gentle and less explosive regulation of star formation (e.g. Agertz et al., 2020; Smith et al., 2021) and could further promote Hi disc formation. Re-simulating all of our dwarfs accounting for radiative effects and improved tracking of gas flows over time (Cadiou et al., 2019) will be tackled in future work (Taylor et al. in prep), allowing us to pinpoint how gas spirals in and flows out of these sensitive objects. ## Acknowledgements MR would like to thank Betsey Adams, Erwin de Blok, Corentin Cadiou and Filippo Fraternali for insightful discussions during the construction of this work and comments on earlier versions of this manuscript. MR is supported by the Beecroft Fellowship funded by Adrian Beecroft. MO acknowledges the UKRI Science and Technology Facilities Council (STFC) for support (grant ST/R505134/1). OA acknowledges support from the Knut and Alice Wallenberg Foundation, the Swedish Research Council (grant 2019-04659) and the Royal Physiographic Society of Lund. AP is supported by the Royal Society. AAP acknowledges support of the STFC consolidated grants [ST/S000488/1] and [ST/W000903/1]. WM thanks the Science and Technology Facilities Council (STFC) Center for Doctoral Training (CDT) in Data Intensive Science at the University of Cambridge (STFC grant number 2742968) for a PhD studentship. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 818085 GMGalaxies. This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. The authors acknowledge the use of the UCL Grace High Performance Computing Facility, the Surrey Eureka supercomputer facility, and their associated support services. This work was partially supported by the UCL Cosmoparticle Initiative. We thank the developers and maintainers of pynbody (Pontzen et al., 2013), tangos (Pontzen & Tremmel, 2018), NumPy (van der Walt et al., 2011), SciPy (Virtanen et al., 2020), jupyter (Ragan-Kelley et al., 2014), matplotlib (Hunter, 2007), the Astrophysics Data System and the arXiv preprint repository for providing open-source software and services that were used extensively in this work. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author. 
## Author Contributions The main roles of the authors were, using the CRediT (Contributor Roles Taxonomy) system1: Footnote 1: [https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/credit.html](https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/credit.html) MR: Conceptualization; Data curation; Formal analysis; Investigation; Writing - original draft. MO: Data Curation; Formal analysis; Writing - review and editing. JR: Conceptualization; Resources; Writing - review and editing. PD: Conceptualization; Writing - review and editing. OA: Methodology; Software; Writing - review and editing. AP: Writing - review and editing. AAP: Writing - review and editing. SK: Conceptualization. WM: Conceptualization; Writing - review and editing.
2309.10252
OPUS: An Integrated Assessment Model for Satellites and Orbital Debris
An increasingly salient public policy challenge is how to manage the growing number of satellites in orbit, including large constellations. Many policy initiatives have been proposed that attempt to address the problem from different angles, but there is a paucity of analytical tools to help policymakers evaluate the efficacy of these different proposals and any potential counterproductive outcomes. To help address this problem, this paper summarizes work done to develop an experimental integrated assessment model -- Orbital Debris Propagators Unified with Economic Systems (OPUS) -- that combines both astrodynamics of the orbital population and economic behavior of space actors. For a given set of parameters, the model first utilizes a given astrodynamic propagator to assess the state of objects in orbit. It then uses a set of user-defined economic and policy parameters -- e.g. launch prices, disposal regulations -- to model how actors will respond to the economic incentives created by a given scenario. For the purposes of testing, the MIT Orbital Capacity Tool (MOCAT) version 4S was used as the primary astrodynamics propagator to simulate the true expected target collision probability ($p_c$) for a given end-of-life (EOL) disposal plan. To demonstrate propagator-agnosticism, a Gaussian mixture probability hypothesis density (GMPHD) filter was also used to simulate $p_c$. We also explore economic policy instruments to improve both sustainability of and economic welfare from orbit use. In doing so, we demonstrate that this hybrid approach can serve as a useful tool for evaluating policy proposals for managing orbital congestion. We also discuss areas where this work can be made more robust and expanded to include additional policy considerations.
Akhil Rao, Mark Moretto, Marcus Holzinger, Daniel Kaffine, Brian Weeden
2023-09-19T02:18:13Z
http://arxiv.org/abs/2309.10252v1
# OPUS: An Integrated Assessment Model for Satellites and Orbital Debris* ###### Abstract An increasingly salient public policy challenge is how to manage the growing number of satellites in orbit, including large constellations. Many policy initiatives have been proposed that attempt to address the problem from different angles, but there is a paucity of analytical tools to help policymakers evaluate the efficacy of these different proposals and any potential counterproductive outcomes. To help address this problem, this paper summarizes work done to develop an experimental integrated assessment model--Orbital Debris Propagators Unified with Economic Systems (OPUS)--that combines both astrodynamics of the orbital population and economic behavior of space actors. For a given set of parameters, the model first utilizes a given astrodynamic propagator to assess the state of objects in orbit. It then uses a set of user-defined economic and policy parameters--e.g. launch prices, disposal regulations--to model how actors will respond to the economic incentives created by a given scenario. For the purposes of testing, the MIT Orbital Capacity Tool (MOCAT) version 4S was used as the primary astrodynamics propagator to simulate the true expected target collision probability (\(p_{c}\)) for a given end-of-life (EOL) disposal plan. To demonstrate propagator-agnosticism, a Gaussian mixture probability hypothesis density (GMPHD) filter was also used to simulate \(p_{c}\). We also explore economic policy instruments to improve both sustainability of and economic welfare from orbit use. In doing so, we demonstrate that this hybrid approach can serve as a useful tool for evaluating policy proposals for managing orbital congestion. We also discuss areas where this work can be made more robust and expanded to include additional policy considerations. ## 1 Background Over the last ten years, there has been rapid growth in the number of satellites launched into Earth orbit. Today, there are more than 8,700 active satellites in Earth orbit, along with tens of thousands of pieces of debris left over from several previous decades of space activities. Commercial companies and governments have announced plans to place tens to hundreds of thousands of additional satellites in orbit over the next decade, raising increasing concerns about the ability of policymakers to effectively manage traffic in orbit and reduce the probability of catastrophic collisions between satellites and orbital debris. The growing awareness of this situation has sparked debate over a wide variety of policy proposals for mitigating this risk. These proposals include orbital slotting concepts that would better organize where satellites can be placed, tightening the existing international standard of 25 years for post-mission disposal (PMD), mandatory propulsion on all future satellites, actively removing existing large debris objects, and various types of orbital use fees. While many of these proposals have merits on paper, policymakers have few analytical tools to evaluate their efficacy. More importantly, the existing tools tend to focus only on the astrodynamics of how objects might interact in orbit and do not include potential changes to the behaviors of the entities controlling current and planning future space activities, particularly their potential response to economic incentives. To address this situation, we proposed development of an experimental hybrid model that incorporates both physics and economic models. 
This Integrated Assessment Model (IAM) would be able to combine both the astrodynamic behavior of space objects on orbit and the economic behavior of their controlling entities on Earth, thus giving policymakers the ability to more robustly assess various public policy proposals. While still in its early stages, we believe OPUS--Orbital Debris Propagators Unified with Economic Systems--is a useful approach that will provide significant assistance to policymakers in deciding how best to mitigate the space sustainability challenges stemming from the growing number of satellites and large constellations in orbit. Unlike prior work on orbital-use IAMs using atheoretical/purely-empirical economic models of launch behavior (Rao and Letizia, 2021), the IAM developed here uses a theoretically-grounded economic model of launch behavior. Grounding launch behavior in economic theory enables a richer variety of counterfactual and policy analyses. The following sections describe the details of both the physics and economics models that make up the hybrid model we developed, along with some initial results to demonstrate the outputs from the model. The results presented here do not rely on a specific propagator, parameterization of orbital locations, or classification of orbital object types. Where necessary, MOCAT-4S location parameterization ("altitude shells") and object classifications ("slotted, unslotted, debris, and derelicts") are used. "Satellites" is used as a generic term for returns-producing orbiting objects, and "debris" is used as a generic term for all other orbiting objects. Section 2 provides an overview of the OPUS framework, with Sections 2.1 and 2.2 describing the debris environment and economic behavior models in more detail. The economic behavior model is the focus. Section 3 presents results from several scenarios to illustrate OPUS' capabilities, and Section 4 provides some directions for future work. We conclude in Section 5. ## 2 Model architecture OPUS involves two coupled models interacting to determine at each time step (a) the state of the orbital environment, and (b) launch rates to different locations in the orbital environment. This architecture is tailored to achieve three outcomes: 1. to provide insight into economic and technical factors that may exacerbate or yield potential solutions to the debris problem; 2. to remain agnostic to the specific propagator used so as to maximize compatibility with existing analytical tools and workflows for debris environment analysis; 3. to enable sensitivity analysis over key parameters and proposed policy designs. This prototype is written in the MATLAB and R programming languages. Model simulation and propagation are conducted in MATLAB, and analytics and figures of merit are generated in R. OPUS is written for situations where a constellation is operating in a particular orbital volume with other satellite operators (the "competitive fringe"). Operators in the fringe each control relatively few satellites compared to the constellation operator and are assumed to behave according to a system of "open-access conditions". The open-access conditions are the key innovation in OPUS. Economic models of behavior use constrained optimization problems to reflect interactions between purposeful agents attempting to achieve their objectives subject to various limitations. 
When property rights over a desirable resource are absent but one agent's use detracts from another agent's use, the solution to the multi-agent constrained optimization problem can be reduced to an "open-access condition" which reflects the fact that agents will only stop changing their behaviors once there are no further economic profits to doing so (Gordon, 1954; Libecap and Wiggins, 1984). The open-access condition defines feedback rules linking the state of the resource and operating environment to agents' choices. Existing economic literature on the orbital debris problem notes that Article II of the Outer Space Treaty, combined with the possibility of physical congestion, imply that commercial use of orbital space will be governed by a system of open-access conditions (Adilov, Alexander, and Cunningham, 2015; Rao, Burgess, and Kaffine, 2020; Rouillon, 2020; Rao and Rondina, 2023). The constellation operators' launch plans are set exogenously by the user, along with parameters applying to both the constellation operators and the fringe such as EOL disposal requirements and compliance rates. By default OPUS assumes there are two constellations: a larger system near 550 km altitude and a smaller system near 1100 km altitude. The constellations are described in more detail in Section 2.2. Both the constellation and fringe operators' behaviors alter the state of the debris environment and incent behavioral responses from the competitive fringe. The economic model uses the open-access condition to project these behavioral responses from the fringe. Figure 1 illustrates the inputs and interactions in the model in a high-level schematic diagram.

Figure 1: Schematic diagram outlining OPUS.

The user determines updates to key economic, physical, and policy parameters for the desired scenario in CSV files. The model begins from the initial population files used for MOCAT-4S simulations, reflecting the state of the orbital environment circa July 2022. A bash script orchestrates the desired series of scenarios and allows the user to set the desired simulation horizon. By default the script compares two types of launch behavior: "satellite feedback" behavior which maintains the initial population, and "equilibrium" behavior which uses the open-access conditions to determine launch patterns. OPUS operates in one-year time steps. The minimal mode of operation involves the user retaining default parameters except for desired changes listed in a CSV file. Default parameter values and examples of parameter changes are shown in Section 2.4. The OPUS repository provides examples of CSVs for scenarios shown in Section 3. ### Debris environment model Consider a set of \(K\) orbital locations ("orbits"), \(k=1,\ldots,K\), where each location may be an altitude shell ("orbital"), an altitude-inclination bin, or some other parameterization of groups of paths in LEO. At each location, there are \(S_{it}\) satellites of type \(i\in I\) and \(D_{jt}\) debris objects of type \(j\in J\) in period \(t\). Let \(\cdot\) subscripts represent arrays over the subscripted index, e.g. \(S_{\cdot t}=[S_{it}]_{i}\) is a vector of stocks of all satellite types at time \(t\). 
The number of active satellites of type \(i\) in location \(k\) and orbit in period \(t+1\) (\(S_{ikt+1}\)) is the number of launches of type \(i\) in location \(k\) in the previous period (\(X_{itk}\)) plus the number of satellites which survived the previous period (\(\mathcal{S}_{ik}(S_{\cdot t},D_{\cdot t})\), where \(\mathcal{S}_{ik}\) are physical dynamics mapping satellite stocks of type \(i\) at location \(k\) and \(\mathcal{S}=[\mathcal{S}_{ik}]_{i}\) is the associated vector of next-period satellite stocks). The amount of debris of type \(j\) in location \(k\) and orbit in \(t+1\) (\(D_{jkt+1}\)) is the net debris remaining after orbital decay and fragment formation processes (\(\mathcal{D}_{jk}(S_{\cdot t},D_{\cdot t})\)), plus the amount of debris in the shell created by new launches (\(\sum_{i}m_{ijk}X_{itk}\)). The laws of motion for the satellite and debris stocks in each location are: \[S_{ikt+1} =\mathcal{S}_{ik}(S_{\cdot t},D_{\cdot t})+X_{itk} \tag{1}\] \[D_{jkt+1} =\mathcal{D}_{jk}(S_{\cdot t},D_{\cdot t})+\sum_{i}m_{ijk}X_{itk}. \tag{2}\] Equations 1 and 2 define general laws of motion for propagating the state of the orbital environment, and may be implemented by a variety of different propagators. #### 2.1.1 Propagators OPUS currently implements the physical dynamics functions \(\mathcal{S}_{ik}\) and \(\mathcal{D}_{jk}\) using MOCAT-4S or a GMPHD filter as propagators. We describe each below. MOCAT-4S.The primary propagator used in OPUS is MOCAT-4S, a source-sink evolutionary model (SSEM) of low-Earth orbit. MOCAT itself describes a family of SSEMs used to assess orbital capacity under different object configurations. The 4S version, used here, models the evolution of four types of objects--slotted active satellites, unslotted active satellites, derelict intact objects, and small fragments--in a set of ordinary differential equations. Active satellites and intact derelicts are assumed to be physically identical. MOCAT-4S discretizes the region between 200-1600 km above mean sea level into 40 non-overlapping shells of 35 km each, so that the 40 shells are the \(K\) locations used in OPUS. It uses the NASA Standard Breakup Model to simulate catastrophic and non-catastrophic explosions, and propagates the state of the environment at annual timesteps. MOCAT-4S is open-source and computationally tractable, with a single year's propagation taking on the order of seconds. The slotted and unslotted satellite populations correspond to the active satellites shown in equation 1, while the derelicts and fragments correspond to the debris objects shown in equation 2. "Slotting", in the terminology of the MOCAT family of models, refers to whether a satellite's orbital parameters have been configured to minimize collisions with other slotted satellites (Arnas et al., 2021). MOCAT has so far been primarily used to assess orbital capacity to sustainably hold satellites under different degrees of slotting effectiveness and adoption (D'Ambrosio, Lifson, and Linares, 2022; Lifson et al., 2022; Jang et al., 2022). While orbital capacity in this sense is not the focus here, the distinction between slotted and unslotted is convenient for our purposes. Slotted satellites are mapped to constellations operated by a single entity, while unslotted satellites are mapped to the open-access fringe. These distinctions are described in more detail in Section 2.2. 
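To make the bookkeeping in equations 1 and 2 concrete, the following is a minimal Python sketch of a single propagation step. Only the launch and launch-debris terms follow the text exactly; the survival, decay, and fragmentation terms are toy placeholders standing in for \(\mathcal{S}_{ik}\) and \(\mathcal{D}_{jk}\) (which MOCAT-4S or a GMPHD filter would supply), and the prototype itself is written in MATLAB rather than Python.

```
import numpy as np

def step(S, D, X, m, survive=0.95, decay=0.02, frag_rate=1e-4):
    """One period of equations (1)-(2). S: (I, K) satellite stocks,
    D: (J, K) debris stocks, X: (I, K) launches, m: (I, J, K) debris
    deposited in location k per launch of satellite type i. The survive,
    decay, and frag_rate constants are illustrative placeholders for the
    propagator's physics."""
    S_next = survive * S + X                                  # eq. (1)
    frag_source = frag_rate * S.sum(axis=0) * D.sum(axis=0)   # toy collision term
    launch_debris = np.einsum('ijk,ik->jk', m, X)             # sum_i m_{ijk} X_{ik}
    D_next = (1.0 - decay) * D + frag_source + launch_debris  # eq. (2)
    return S_next, D_next
```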
GMPHD filter.To demonstrate OPUS' propagator-agnosticism, a GMPHD filter is used to propagate the state of the LEO environment as well. GMPHD filters are generic statistical tools to propagate high-dimensional states under tracking uncertainty via mixtures of Gaussian distributions. Each Gaussian in the mixture represents a particular hypothesis about the state of the system being modeled, e.g. the location and size of a group of debris objects. These hypotheses include the covariance structure between the states being tracked to capture uncertainties and their correlations. A unique feature of GMPHD filters is that the integral of each Gaussian is the number of objects being tracked by that hypothesis (rather than simply integrating to one). As the system evolves, the filter splits and merges Gaussians to represent the degree of uncertainty in the system of hypotheses, subject to user-chosen computational considerations. GMPHD filters have been used in a range of multi-target tracking problems, including astrodynamics and robotics (Mashiku, Garrison, and Carpenter, 2012; Huang, Xie, and Su, 2022), though to our knowledge they have not been used to implement an SSEM of the LEO environment. We parameterize the GMPHD filter with three types of objects: constellation and open-access fringe satellites (equation 1), and the debris population (equation 2). For simplicity, the satellites are identical and assumed to have no self-collisions with any other satellites, i.e. perfect universal slotting. The debris population is parameterized as a continuous family of objects with sizes ranging from 0-1 m. We parameterize altitude as a continuous range, so that debris objects are described by two continuous parameters: their location and their size. Satellite objects are launched to discrete bins, similar to MOCAT, with uniform density within the bins. The model is propagated at annual timesteps and collisions are sampled at uniform sub-timesteps within each year. We integrate over altitudes and debris sizes to report summary measures in terms of discrete locations and debris size classes. The continuous range of debris over sizes is particularly useful for modeling the evolution of the LEO debris environment, as it enables tracking of lethal non-trackable (LNT) fragments. Though these fragments cannot be directly tracked, their existence can be statistically inferred from the distribution of trackable objects. Figure 2 shows observations of the size distribution of explosion fragments along with a parametric fit predicting fragments below the detection limit (Liou, 2012). As better understanding of the size distribution of fragments develops, the covariance structure and splitting/merging rates of the GMPHD filter can be reparameterized to better constrain the evolution of LNTs. Runtime of the GMPHD filter can vary substantially depending on computational parameters such as the number of Gaussians used in the mixture, the frequency of splitting/merging, and the number of timesteps at which collisions are sampled. We discuss potential future research directions using infinite-dimensional models such as the GMPHD filter in Section 5. #### 2.1.2 EOL disposal We implement EOL disposal as immediate movement of satellites at the end of their productive life to the highest altitude consistent with the user-chosen disposal regulation via Hohmann transfer, followed by a recircularization burn. At the start of every simulation, the user may set a desired maximum disposal time regulation \(T_{D}\). 
This is a time within which an intact satellite must deorbit after its mission is over, i.e. a compliant satellite will be in orbit for mission life plus \(T_{D}\) years. The default disposal time is set to 5 years.

Figure 2: Size distribution of explosion fragments from slide 30/43 of Liou (2012). The prevalence of fragments smaller than the detection limit can be inferred from the parametric fit.

Every disposal time \(T_{D}\) implies a maximum compliant altitude \(A_{D}\). We apply the U.S. Standard Atmosphere and CIRA-72 atmospheric density model, also used in MOCAT-4S with values taken from David (2013), to compute decay rates and corresponding decay times by altitude. We assume a fraction \(\phi\) of satellites are non-compliant with the regulation and are left where they are at the end of their mission. The default setting is \(\phi=0\), i.e. full compliance. If a compliant satellite is at an altitude \(a>A_{D}\), at the end of its mission it is moved to altitude \(A_{D}\) as described above. The transfer is assumed to be instantaneous relative to the scale of model time steps.1 Footnote 1: Incorporating elliptical disposal is challenging given that such disposals can significantly increase collision risks across many locations for brief (relative to model timesteps) durations. This is an important step for future work. ### Economic behavior model Consider two types of satellites, constellation satellites (\(i=1\)) and open-access fringe satellites (\(i=2\)), serving different industries such as telecommunications and imaging.2 Define \(P_{ik}\) as the probability a satellite of type \(i\) collides with another object in location \(k\). A satellite in the fringe expects to face a collision probability (from all sources) of \(P_{2k}(S_{\cdot t},D_{\cdot t})\) across various locations \(k\).3 On average, fringe satellites have active lifetimes of \(\mu^{-1}\) years.4 Footnote 2: Fringe operators may operate multiple satellites, but at a scale much smaller than the constellation operator so that they can be approximated as owning a single satellite. The constellation index may represent multiple constellations with different operators, though we assume each constellation occupies a single location and does not co-locate with other constellations. These assumptions can be relaxed without any loss of generality. Footnote 3: It is convenient to assume \(P_{1k}\) is uniformly weakly smaller than \(P_{2k}\) since constellation satellites are slotted while fringe satellites are not. Footnote 4: If the propagator has separate compartments for active and derelict satellites, the lifetime \(\mu^{-1}\) should be consistent between economic and physical models. Under open access, the marginal fringe operator earns zero economic profits at any location they can access. The net rate of return earned by satellites in the fringe is \(R_{k}(S_{2\cdot t})-r-\tau_{kt}\), where \(r\) is the discount rate representing the opportunity cost of funds, \(R_{k}(S_{2\cdot t})\) is the gross rate of return earned by the satellite given economic competition within the fringe industry (and any spectrum congestion), and \(\tau_{kt}\) is a location- and time-specific tax rate--"orbital-use fees"--reflecting economic policies imposed to improve sustainability (Rao, Burgess, and Kaffine, 2020).5 \(R_{k}(S_{2\cdot t})\) is weakly decreasing in \(S_{2\cdot t}\). 
Under open access to orbit in period \(t-1\), the fringe will launch \(\hat{X}_{2kt-1}\) satellites to the volume until the following system of equations is satisfied across all locations \(k\): \[\forall k,\ \ \hat{X}_{2kt-1}:R_{k}(S_{2\cdot t})-r-\mu-P_{2k}(S_{\cdot t},D_{\cdot t})-\tau_{kt}=0. \tag{3}\] We assume the constellation's launch plans are publicly announced in advance and are exogenous to the fringe's choices. \(P_{ik}\) is calculated from the same model that computes \(\mathcal{S}_{ik}\) and \(\mathcal{D}_{jk}\). The net rate of return function has two components: the expected future revenues or payoffs that the satellite delivers in period \(t\), \(q_{k}(S_{2\cdot t})\), and the annualized unit cost of deploying it, \(c_{k}\): \[R_{k}(S_{2\cdot t})=\frac{q_{k}(S_{2\cdot t})}{c_{k}}. \tag{4}\] To better illustrate how open-access launching operates, we compare it against "satellite feedback" behavior, which simply replenishes location-specific populations assuming no collisions. Revenue function parameterization.When the location index \(k\) indicates altitude bins, we parameterize the revenue function as linear with a common coefficient across all altitudes: \[q_{k}(S_{2\cdot t})=\alpha_{1}^{q}-\alpha_{2}^{q}\sum_{k^{\prime}\in K}S_{2k^{\prime}t} \tag{5}\] The parameters of the revenue function are set to match the following conditions: 1. The maximum willingness-to-pay for service from a fringe satellite is \(\alpha_{1}^{q}=7.5\times 10^{5}\) $/sat. 2. Fringe satellites at all locations are perfect substitutes, and willingness-to-pay for service from a marginal fringe satellite declines at \(\alpha_{2}^{q}=100\) $/sat. Different rates of substitution across orbits due to different output characteristics--e.g. certain locations being ideal for specific types of imagery collections--could be reflected in \(k\)-specific coefficients of \(q_{k}(S_{2kt})\), i.e. \(\alpha_{2}^{q}\rightarrow\alpha_{2k}^{q}\). Collecting the right data and estimating that credibly is a task for future research. Cost function parameterization.The cost function reflects three factors: 1. The lift price, \(c_{lift}\). This is the dollar cost per kilogram of accessing LEO, multiplied by the mass of the satellite payload. We set the lift price to $5,000 per kg following the vehicle-weighted launch price index developed in Corrado, Cropper, and Rao (2023). 2. The cost of the delta-v budget at a given altitude \(k\), \(c_{\Delta v}(k)\). This is the total dollar cost of the delta-v (expressed in units of \(m/s\)) required to maintain the satellite in its target orbit and conduct any necessary maneuvers over its lifetime. Letting \(v_{drag}(k)\) be the annual delta-v required to offset atmospheric drag at altitude \(k\), \(\mu^{-1}\) be the satellite's lifetime in years, \(f_{s}\) be a multiplicative factor reflecting increased drag due to solar flux variations over the satellite's lifetime, and \(f_{m}\) be a safety margin for additional maneuvers in delta-v units, we compute the delta-v budget as: \[f_{s}\mu^{-1}v_{drag}(k)+f_{m}.\] (6) We set \(f_{s}=1.5\) and \(f_{m}=100\). We monetize the delta-v budget at \(p_{\Delta v}=\$1,000\) per \(m/s\). These values are arbitrarily chosen due to lack of data. 
We keep the satellite's operational lifetime fixed at \(\mu^{-1}\) years for simplicity.6 We assume the satellite conducts a Hohmann transfer from its initial circular orbit to a circular orbit at the target disposal altitude, followed by a recircularization burn as described in Section 2.1.2. Given the delta-v requirement for this maneuver and the initial delta-v budget, the lifetime reduction is calculated as the lost share of delta-v required for stationkeeping and routine maneuvers, monetized at the maximum willingness-to-pay for satellite service. This provides an upper bound on the opportunity cost of deorbit maneuvers.7 Footnote 6: This modeling choice is made for tractability; future work will extend the model to feature dynamically-updating opportunity costs based on changes in annual revenues per satellite, accounting for every other operator's launch and location choice. Letting the rate of non-compliance with deorbit regulations be \(\phi\), the complete cost function for altitude \(k\) given a target deorbit location \(k^{\star}\) is therefore: \[c_{k}=c_{lift}+c_{\Delta v}(k)+(1-\phi)c_{\mu}(k,k^{\star}) \tag{7}\] The cost functions across altitudes, given 25- and 5-year deorbit times with full compliance, are shown in Figure 3. The cost functions initially decline over altitude with atmospheric density, reflecting lower stationkeeping costs. Above the maximum 5-year deorbit-compliant altitude, the cost rises steeply, reflecting the cost of the disposal maneuver. The cost flattens once deorbit maneuvers exhaust the satellite's full lifetime delta-v supply.

Figure 3: Comparison of cost functions for different deorbit rules.

### Constellation parameterization We assume the constellations begin at sizes and locations determined by the initial population files. From there, the user can define the parameters governing the constellation's launch patterns by altering values in constellation-parameters.csv (assumed to be in scenarios/parameters/). The parameters and their defaults are listed below: 1. n_constellations: The number of constellations. Default is 2. 2. location_indices: The locations at which constellations are to be built up, expressed as indices under the model parameterization. Defaults are 10 and 29, corresponding to 500 km and 1100 km, i.e. a "large and low" constellation and a "small and high" constellation. 3. target_sizes: The final sizes the constellations seek to reach. The default values are 3000 for the lower constellation and 300 for the higher constellation. 4. max_launch_rates: The maximum launch rates the constellations can achieve, reflecting launch capacity available. Every period the constellation controller will check whether the constellations are at their target sizes and if not, launch up to the maximum launch rate for that constellation to replenish or build up the system. Defaults are 1500 for the lower constellation and 150 for the higher constellation. ### Model parameters Table 1 lists the key economic parameter values in the benchmark scenario, their units, and a brief description. We refer readers to the code files for MOCAT-4S and the GMPHD filter for a full list of physical parameters. 
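To summarize how the pieces above close the model, the sketch below evaluates the cost function of equations 6-7 and the per-location net rate of return of equations 3-5 using the Table 1 benchmarks. It is an illustrative Python rendering rather than the repository's MATLAB implementation, and the payload mass is an assumed value that the text does not fix.

```
def cost_k(v_drag, dv_deorbit, mass_kg=200.0, lifetime=5.0, phi=0.0,
           p_dv=1000.0, c_lift_per_kg=5000.0, f_s=1.5, f_m=100.0,
           alpha1=7.5e5):
    """Sketch of equations (6)-(7). v_drag: annual stationkeeping delta-v at
    altitude k (m/s per year); dv_deorbit: delta-v of the disposal transfer
    (m/s). mass_kg is an assumed payload mass -- the text does not fix one."""
    budget = f_s * lifetime * v_drag + f_m       # eq. (6), m/s
    c_lift = c_lift_per_kg * mass_kg             # lift price x payload mass
    c_dv = p_dv * budget                         # monetized delta-v budget
    # deorbit opportunity cost: lost budget share, monetized at the maximum
    # willingness-to-pay (an upper bound, as described in the text)
    c_mu = alpha1 * min(dv_deorbit / budget, 1.0)
    return c_lift + c_dv + (1.0 - phi) * c_mu    # eq. (7)

def open_access_gap(S2_total, c, P2, tau, r=0.05, mu=0.2,
                    alpha1=7.5e5, alpha2=100.0):
    """Equations (3)-(5): per-location net rate of return. Under open access,
    fringe launches rise until this gap is driven to zero at every
    economically viable location."""
    q = alpha1 - alpha2 * S2_total               # eq. (5)
    return q / c - r - mu - P2 - tau             # eqs. (3)-(4)
```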
### Using the model OPUS is divided into a series of MATLAB functions for simulation and R scripts for reporting, with CSV files used to read in user-defined policy, economic, and engineering parameters.

\begin{table} \begin{tabular}{|l|c|c|l|} \hline **Parameter** & **Value** & **Units** & **Description** \\ \hline \(\mu^{-1}\) & 5 & years & Active satellite lifetime \\ \hline \(T_{D}\) & 5 & years & End-of-life disposal regulation \\ \hline \(\phi\) & 0 & \%/year & Disposal non-compliance rate \\ \hline \(r\) & 5 & \%/year & Discount rate \\ \hline \(\alpha_{1}^{q}\) & \(7.5\times 10^{5}\) & \$/year & Revenue with no competition \\ \hline \(\alpha_{2}^{q}\) & \(1.0\times 10^{2}\) & \$/satellite/year & Linear revenue coefficient \\ \hline \(\tau_{kt}\) & 0 & \%/satellite/year & Shell-specific orbital-use fee rate in period \(t\) \\ \hline \(f_{s}\) & 1.5 & unitless & Stationkeeping delta-v safety factor \\ \hline \(f_{m}\) & 100 & m/s & Additional delta-v for discretionary maneuvers \\ \hline \(p_{\Delta v}\) & 1000 & \$/m/s & Cost of delta-v \\ \hline \(c_{lift}\) & 5000 & \$/kg & Launch price per kilogram \\ \hline \end{tabular} \end{table} Table 1: Benchmark values of key economic parameters.

The primary mode of operation is via a command-line interface using a shell (bash) script. The shell script and solver code are described below. conductor.sh.The shell script conductor.sh calls the various functions and scripts to run simulations. The script is broken into 3 blocks: 1. **Initialization:** Sets up the initial parameters. This is where the user specifies the type of propagator to use, currently either MOCAT or GMPHD. 2. **Scenario Setup:** Sets up array of scenarios to be run. The default value is benchmark. CSV files with scenarios should be provided with paths relative to the location of conductor.sh. The default is scenarios/parsets/<filename>.csv. Simulation horizon and number of workers for parallelization are also set here in the model_horizon and n_workers variables. 3. **Execution and Post-Processing Loop:** The script iteratively executes the MATLAB solver for each scenario, first with equilibrium behavior (launching until zero profits each period) and then with sat_feedback behavior (launching to maintain populations, ignoring collisions). At the start of each run a unique human-readable name is generated based on the scenario parameters for better traceability and management of the results. analytics.R is run after each propagator-behavior combination to generate figures and CSVs for individual scenarios, and then compare-two-scenarios.R is run after analytics.R to generate images and CSVs comparing pairs of scenarios. While the main solver can be used independently, conductor.sh facilitates setting up, executing, and analyzing multiple scenarios in batches without having to individually manage each run. iam_solver.m.The main solver file is iam_solver.m. It takes the following input strings in order to produce CSV files that describe the time evolution of the orbital environment across the MOCAT orbital shell discretization (40 shells of 35 km each between 200 - 1600 km): 1. stem: A unique filename for the model run. A folder with this name is created in the scenarios folder, and all model outputs have this prefix. The scenarioNumer.m file generates unique human-readable names based on the model parameters. 2. model_type: Choice of propagator. The current implementation allows either MOCAT or GMPHD to be supplied as inputs. 
Scenario functionality is most thoroughly tested for MOCAT. 3. launch_pattern_type: Choice of launch behavior. The current implementation allows either equilibrium (for open-access equilibrium) or sat_feedback. By default conductor.sh will run both types of behavior when called. Currently only equilibrium has been tested with both MOCAT and GMPHD. 4. parameter_file: Path to CSV file containing any changes to the default parameter values for scenario analysis. The file is expected to be in the scenarios/parsets/ folder. 5. model_horizon_str: Number of years to propagate the orbital environment. Can be provided by the user as an int or string in conductor.sh. 6. n_workers: Number of workers for solver parallelization when equilibrium launch behavior is selected. Solver code is parallelized across locations, so something close to the number of locations is usually a good choice. Algorithm 1 provides a high-level description of the inputs, order of operations, and outputs involved in running OPUS.

**Algorithm 1** Overview of iam_solver.m

```
Data: stem, model_type, launch_pattern_type, parameter_file, model_horizon_str, n_workers
Result: Outputs CSV files describing orbital environment evolution

Step 1: Initialization and Parameter Preparation
    Call MOCAT4S_Var_Cons() and GMPHD_VAR_Cons() to initialize MOCAT and GMPHD
    constants objects VAR and GMPHD_params
    Call set_econ_parameters() with VAR to initialize econ_params
Step 2: Modify Parameters for Scenarios
    if parameter_file is provided then
        Call modifyParameters() for specified scenarios, update VAR and econ_params
    end if
Step 3: Select and Initialize Propagation Method
    if model_type is MOCAT then
        Load initial orbital state and set up MOCAT model
    else if model_type is GMPHD then
        Load initial orbital state and set up GMPHD model
        Perform one-step propagation to calculate debris distribution over discrete
        categories: lethal non-trackables, small trackables, large trackables in
        MOCAT-consistent altitude bins
    end if
Step 4: Build Cost Function
    Call buildCostFunction() to get cost_fn_params incorporating disposal
    regulations (in econ_params) and compliance rate (in VAR)
    Update econ_params and VAR with cost_fn_params
Step 5: Start Constellation Buildup
    Launch constellation satellites
Step 6: Select Launch Rate Controller and Propagate Initial Period
    if launch_pattern_type is equilibrium then
        Apply open-access controller
    else if launch_pattern_type is sat_feedback then
        Apply feedback rule to maintain initial populations
    end if
    Propagate initial period
Step 7: State Propagation Loop
    for each year in model_horizon do
        Apply selected fringe launch rate controller and launch constellation satellites
        Propagate the orbital environment
        if model_type is GMPHD then
            Calculate debris distribution over discrete categories as in Step 3
        end if
    end for
Step 8: Save Results
    Write final outputs to CSV files
```

## 3 Results We first validate the model against historical data, then describe the benchmark case, and conduct three exercises to illustrate the types of questions which can be studied using the OPUS framework. These exercises all use the MOCAT-4S propagator. We conclude this section by showing the benchmark case under the GMPHD filter propagator to demonstrate the flexibility of this framework. We start all model projections from the initial population of orbital objects around July 2022. While the physical parameters of MOCAT-4S have been used to estimate orbital capacity, many of the economic parameters used in the model have yet to be measured. Where possible we have attempted to ensure economic and behavioral parameters are plausible and/or consistent with historical magnitudes under full compliance. We stress that the results presented below are only intended to demonstrate model capabilities, not to provide specific guidance. ### Model validation To validate OPUS we construct measures of launch patterns by constellations and an open-access fringe. We use historical data on orbital-use patterns compiled in Rao, Berner, and Lewis (2023). The dataset is a satellite-year panel covering the period 1957-2022, recording the active lifetime of each satellite launched along with orbital parameters, ownership, sector (i.e. commercial, military, civil, or some combination), country of operator, launch site, launch vehicle, and several other fields. Satellites are removed from the panel once they are estimated to no longer be active, so derelict objects are not included. We define the constellation population as satellites belonging to the Starlink and OneWeb constellations. We define the open-access fringe population as all other satellites labeled as commercial, and construct a residual "Other" category for military and civil government satellites. Note that while the economic theory developed here does not predict the launch patterns of commercial constellations or non-commercial satellites, it does predict that orbital-use patterns of both will affect launch behavior by the commercial fringe via collision risk and any impacts on revenues or costs.8 Footnote 8: See Guyot, Rao, and Rouillon (2023) for an economic theory of commercial satellite constellations' orbital-use patterns. 
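As a concrete illustration of this grouping, the snippet below tags each satellite-year record and tabulates populations by year. The column names and file path are hypothetical placeholders, since the panel's exact schema is not reproduced here.

```
import pandas as pd

def classify(row):
    # Group definitions from the text; 'constellation' and 'sector' are
    # assumed column names for the panel's fields.
    if row["constellation"] in ("Starlink", "OneWeb"):
        return "constellation"
    if row["sector"] == "commercial":
        return "fringe"
    return "other"  # military, civil, or mixed

panel = pd.read_csv("satellite_year_panel.csv")  # hypothetical file name
panel["group"] = panel.apply(classify, axis=1)
populations = panel.groupby(["year", "group"]).size().unstack(fill_value=0)
```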
Figure 4 plots the populations of each type of satellite over the 2005-2022 period. We focus on this period since the 25-year deorbit guideline ("25-year rule") was implemented in 2005, allowing us to compare OPUS predictions under 25-year disposal for the fringe against the historical pattern. Notice that both the constellation and the fringe begin to accumulate appreciable populations only near the end of the sample period, with activity in prior years being dominated by non-commercial actors. The latter half of the sample sees increased activity by non-commercial actors as well. Both fringe and non-commercial satellites appear to avoid altitudes utilized by constellations, most notably in the vicinity of Starlink.

Figure 4: Historical orbital-use patterns over 2005-2022. Panel A shows active constellation satellites, panel B shows active non-constellation commercial satellites, and panel C shows active non-commercial satellites.

Next, we compare the model's predictions against historical orbital-use patterns over an appropriate sample. Figure 5 plots historical fringe population levels over 2017-2022--when fringe launch activity started picking up--and OPUS predictions over the first 6 years from the initial population.

Figure 5: Comparison of historical orbital-use patterns over 2017-2022 against model predictions for first 6 years of a simulation with 25-year disposal and full compliance. Panel A shows active constellation satellites, panel B shows active non-constellation commercial satellites, and panel C shows active non-commercial satellites.

However, there are some caveats in making and interpreting these comparisons. First, the 2017-2022 period saw substantial declines in launch prices. The vehicle-weighted price index constructed in Corrado, Cropper, and Rao (2023) shows prices falling from around 10,000 $/kg to around 5,000 $/kg. The increased launch rate may have also contributed to broader sectoral cost reductions, as satellite supply chains benefited from economies of scale and learning-by-doing. These patterns are absent in OPUS predictions, as OPUS currently includes only static values of economic parameters. Future work will extend OPUS to allow for time-varying economic parameters. Second, key economic parameters--such as the cost of delta-v (\(c_{\Delta v}\)) and the maximum WTP for fringe service (\(\alpha_{1}^{q}\))--are not calibrated due to lack of data. These parameters are critical in determining the relative appeal of different orbital locations. The form of the revenue function may be particularly important here. The current linear form, while simple and transparent, means that the cost function is the only purely economic source of variation in the relative attractiveness of different locations. This likely also distorts quantitative comparisons. Finally, the model begins from the population of orbital objects in July 2022 rather than January 2017. While the initial population can be adjusted, this functionality has not yet been incorporated into OPUS. Additionally, the simulation assumes full compliance with the 25-year rule, though in practice some operators may not comply for various reasons (e.g. component failure, operating from jurisdictions where local regulators do not mandate 25-year rule compliance). Despite these caveats, the model appears to capture some important elements of open-access fringe orbit use: 1. avoidance of altitudes where the lower and larger constellation operates; 2. preference for altitudes that are naturally compliant with the 25-year rule (i.e. just below the 585-620 km shell); 3. among the naturally-compliant altitudes, greater preference for altitudes that are below the nearby constellation rather than above; 4. 
avoidance of the lowest shells, where stationkeeping requirements due to drag are highest.9 Footnote 9: Although there is a trend in the model predictions to favor lower altitudes over time, we will see in subsequent sections that there appears to be a "lowest economically-viable altitude". ### Benchmark case Next, to illustrate the effect of incorporating economic behavior into propagator projections, we compare the evolution of populations over time--35 years forward from July 2022--under a simple "satellite feedback" rule against economic behavior implied by the system of equation 3 ("open-access behavior"). Satellite feedback behavior involves launching just enough satellites to maintain the previous period's population levels at all altitudes, ignoring collisions. Figure 6 plots all four populations along with the launch rates for the constellation and fringe under satellite feedback behavior. Figure 7 plots the same outcomes under open-access behavior. This case assumes 5-year disposal with full compliance.

Figure 6: Location distribution of launches and orbital objects over time under satellite feedback behavior.

Figure 7: Location distribution of launches and orbital objects over time under open-access behavior.

There are several notable differences between feedback and open-access behavior. We briefly comment on three particularly striking differences: the periodicity over time and space under equilibrium behavior, the emergence of a lowest economically-viable altitude, and the abandonment of higher altitudes. Periodicity.Some form of repetitive behavior often features in debris environment analyses. These are typically either due to solar cycles or the use of autonomous traffic patterns (e.g. Liou et al. (2004); Lewis et al. (2008))--both exogenously-imposed assumptions. The projections from OPUS instead reflect two economic forces that endogenously respond to the state of the orbital environment: 1. As derelicts decay from higher altitudes, it becomes worthwhile to launch to locations with fewer derelicts and lower anticipated future collision risk. That is, open-access behavior implements a type of feedback controller that _to some degree_ accounts for the sustainability of particular orbital locations and avoids those that are expected to become less-sustainable in the near future (specifically, the next year). Such behaviors have been documented in both theoretical non-spatial and atheoretical spatial economic models of orbit use (Rao and Rondina, 2023; Rao and Letizia, 2021). 2. Some orbits that are naturally compliant with disposal rules are costlier to operate in than others, due to a combination of the delta-v required to stationkeep and the risk of colliding with existing objects there. As derelicts fill more-valuable orbits, either due to recent launch activity to those orbits or decay from higher orbits, they reduce the overall profitability of using LEO. In addition to spatial variation in where satellites are launched to, this generates variation in the _total_ number of satellites launched in each period. This variation also affects the revenues of operating a satellite through market competition, with periods that have fewer satellites leading to greater revenues, spurring further launches. This endogenous spatiotemporal periodicity is entirely absent in satellite feedback behavior and related exogenous launch models. 
In the absence of any exogenous shocks, the system approaches a steady state, indicating the periodicity eventually fades as the derelicts reach a stationary distribution. Minimum viable altitude.As higher orbits that are naturally compliant with the disposal rule fill with constellation satellites or derelict objects, fringe operators move to lower orbits, where debris decays faster. There is a limit to this process of moving lower to avoid debris risk. At a certain altitude the increase in stationkeeping costs outweighs the gain in collision risk reduction, creating a "floor" on how low operators choose to go. Indeed, higher naturally-compliant orbits are preferable when they are clean enough to support greater satellite activity. However, since they also retain debris longer, short stretches of higher-altitude usage are matched by longer stretches of lower-altitude usage. The altitude that is most-used for the longest stretch is close to the lowest economically viable altitude. The minimum altitude appears only in satellite feedback behavior to the extent that the initial population already reflects it, and does not adapt to reflect changes in the state of the orbital environment. Abandonment of higher altitudes.Figure 3 shows that the costs of operating increase steeply above the highest naturally-compliant altitude due to the opportunity cost of delta-v expended on disposal (for operators who comply with the rule). Note that the increase in activity at lower altitudes is significant enough to make derelict satellites at higher altitudes difficult to see in the plots, though they will remain there until fragmentation or their eventual decay. To the extent that they comply with disposal rules, fringe operators therefore cease launching to higher altitudes. Satellite feedback behavior actively contradicts this economic response. Finally, before moving on to specific policy exercises, we illustrate the impact of economic behavior on aggregate summary metrics of the orbital environment. Specifically, we compute the Space Sustainability Rating (SSR) index for all objects of a given type across all altitudes (Rathnasabapathy et al., 2020). For this application we calculate the SSR index as the product of the sum of total \(p_{c}\) across all objects and altitudes with the sum of the Environmental Consequence of Breakup (ECOB) index--as the name suggests, a measure of the consequence of a fragmentation on orbit (Letizia et al., 2017; Letizia, Lemmens, and Krag, 2018)--across all objects and altitudes. While the SSR index and its components are typically applied to individual missions or objects, we compute the "aggregate" index to illustrate the impact of incorporating economic behavior on space sustainability projections. In all cases we normalize the SSR index to one in the initial period. Larger index values indicate less sustainable orbital-use patterns. Figure 8 plots the normalized aggregate SSR index by large object types--constellation satellites, fringe satellites, and derelict objects--for the first 20 years of the model simulation. Open-access behavior exhibits different sustainability metric dynamics than satellite feedback behavior. Though the ranking of metrics across object types is generally the same, all object types have lower index values under open-access behavior. 
Mechanically, the difference lies in the types of feedback controllers the behaviors implement: whereas satellite feedback behavior is a backward-looking controller (launch rates to location \(k\) in period \(t\) are proportional to the stock in \(k\) at \(t-1\)), the open-access system in equation 3 is a forward-looking controller (launch rates to \(k\) in \(t\) depend on the anticipated stock in \(k\) at \(t+1\)). Intuitively, firms anticipate the returns their asset will generate at each location and choose where to deploy their satellite accordingly. Both types of behavior feature a sharp increase in the index value for derelict objects as the constellations build up to their target values. For open-access behavior, the initial peak is followed by damped oscillations as fringe operators adjust from the initial condition toward a stationary distribution.10 By contrast, satellite feedback behavior reaches a stationary distribution much more quickly--the only source of significant changes from the initial conditions are the constellations, particularly the lower and larger one.

Figure 8: Normalized aggregate SSR index value for all object types under satellite feedback and open-access behavior. Smaller numbers indicate greater sustainability.

OPUS also enables calculation of economic metrics, like the annual expected maximum economic welfare from orbit-use. The annual expected maximum economic welfare, \(\lambda_{t}\), is a dollar metric of the annual social value of orbit use under a given policy regime. Like the SSR, it has a "probability \(\times\) consequence" form with the probability being \(1-p_{c}\), i.e. the survival (rather than collision) probability. Unlike the SSR, the "consequence" is the net present value of all fringe satellites. Formally: \[\lambda_{t}=\sum_{k\in K}(1-P_{2k}(S_{\cdot t},D_{\cdot t}))\times S_{2kt}\left( \sum_{\Delta t=0}^{\mu^{-1}}\frac{q_{k}(S_{2\cdot t})}{(1+r)^{\Delta t}}-c_{k} \right). \tag{8}\] Economically, \(\lambda_{t}\) can be interpreted as the expectation of the upper envelope of lifetime social welfare (i.e. social value generated net of real resource costs) generated by the open-access fringe in year \(t\), assuming all satellites were just launched and expected to survive their full design lifetimes if they survive year \(t\).11 This calculation is simpler than the full expected net present value of social welfare from the satellite fleet used in other economic analyses, e.g. Rao, Burgess, and Kaffine (2020); since our goal is only to demonstrate the type of metrics that can be obtained using OPUS, we do not implement the full net present value calculation.12 In contrast to the SSR index, larger expected maximum values indicate more socially-valuable orbit use.13 Note that open-access behavior will not maximize economic welfare without natural capital pricing policies like optimal Pigouvian taxes to address the externalities between satellite operators. Footnote 11: Note the order of terms: "expected maximum" rather than "maximum expected". The expectation is taken over maximal values assuming no collisions after year \(t\), rather than maximizing an expected value that incorporates collision risk in every operational year. Footnote 12: While any taxes applied would reduce the firm's profits, profits are only a component of the social welfare generated from orbit use. The tax revenue collected still contributes to economic welfare, and may even be used to fund public goods such as debris remediation. 
Subtracting taxes from \(\lambda_{t}\) would convert the metric from an upper bound on social welfare to an upper bound on profits. Footnote 13: The expected maximum welfare is non-monotone in the number of satellites in orbit. At low levels, increasing the number of satellites will increase the expected value even as collisions become more frequent. At high levels, the gain in expected value from additional satellites is offset by both the reduction in their survival probability (due to collisions) and the reduction in their individual value (due to competition).

Figure 9: Annual expected maximum economic welfare under satellite feedback and open-access behavior. Larger numbers indicate greater social value from orbit use.

Figure 10: Total object accumulations under satellite feedback and open-access behavior, grouped by object type.

### 25- vs 5-year disposal rules Having validated the model and established some properties of open-access launching behavior, we turn to our first policy evaluation exercise: comparing the effects of 5-year and 25-year disposal rules. We assume for this exercise that binding international disposal rules are implemented with full compliance. To assess the policies we evaluate the patterns of fringe satellite and derelict accumulation, the normalized aggregate SSR index, and the annual expected maximum welfare from orbit-use accrued under each policy. We emphasize once again that these exercises are only demonstrative of the framework's capabilities; while relative comparisons and qualitative patterns can provide some insight into the mechanics of open access, detailed quantitative insights cannot yet be drawn from model results. Figures 11 and 12 show the patterns of fringe satellite and derelict object accumulation over 35-year horizons from the initial condition. Under both disposal rules fringe satellites avoid the location where the lower and larger constellation is, but under the 25-year rule fringe satellites spread out over more altitudes and use the lower altitudes less intensively. Under the 25-year rule two derelict "hotspots" form, one associated with the lower and larger constellation and one associated with the (comparatively fewer) fringe satellites that locate above the constellation. The higher derelict hotspot due to the fringe satellites fades over time as fringe satellites largely abandon the higher reaches of the naturally-compliant region after initial periods of low usage. Under the 5-year rule satellites cluster more tightly below the lower and larger constellation, with no accumulation above the constellation. Consequently the only derelict hotspot is due to the constellation. The peak fringe satellite accumulation under the 5-year rule is lower than under the 25-year rule, as the concentration of satellites in a smaller region increases the congestion there. Figure 13 shows the normalized aggregate SSR index for the disposal rules over a 20-year horizon following the initial condition. As expected, 5-year disposal results in a substantially lower normalized aggregate SSR index compared to 25-year disposal. Most of the improvement is driven by reduction in the stock of derelicts on orbit, with smaller improvements due to reduced risks facing fringe and constellation satellites. Figure 14 shows the expected maximum economic welfare under both disposal rules. Unlike the SSR index value, the comparison is less clear. 
The 25-year rule allows for greater initial economic welfare, since more fringe satellites can consistently be maintained by spreading out over more altitudes. In contrast, the 5-year disposal rule leads to an initial decline in economic welfare as the cost of compliance forces operators to cluster at lower altitudes. This induces some operators to refrain from launching (relative to the 25-year rule counterfactual), reducing economic welfare. By the end of the simulation horizon, the improvements in the orbital environment lead to slightly higher economic welfare under the 5-year rule.
Figure 11: Accumulation of fringe satellites and derelict objects under 25-year disposal rule with full compliance.
Figure 12: Accumulation of fringe satellites and derelict objects under 5-year disposal rule with full compliance.
Figure 13: Normalized aggregate SSR index values for all object types under open-access behavior given full compliance with 25- or 5-year disposal rules.
Figure 14: Expected maximum economic welfare from fringe satellites under open-access behavior given full compliance with 25- or 5-year disposal rules.
Figure 15 plots the total numbers of constellation, fringe, and derelict satellites to explore the drivers of the outcomes in Figures 13 and 14. The difference in aggregate SSR for derelict objects appears to be due in large part to differing numbers of derelict objects. As expected, the 25-year disposal rule allows for a substantially larger buildup of derelicts than the 5-year rule. The total number of constellation satellites decreases over time as the initial condition includes constellation (i.e. "slotted") satellites in locations other than the two being simulated, while the two being simulated are larger than the extant population in the initial condition (generating concentrated buildups of derelicts). Perhaps unexpectedly, the 5-year disposal rule also leads to fewer fringe satellites than the 25-year disposal rule, with more oscillatory behavior. This is consistent with the patterns observed in Figures 11 and 12. The 5-year rule forces operators to cluster in a tighter range of altitudes, leaving fewer locations that are economically valuable and increasing their sensitivity to fluctuations in the distributions of derelict and other fringe satellites.
Figure 15: Total numbers of constellation, fringe, and derelict satellites under open-access behavior given full compliance with 25- or 5-year disposal rules.
### Orbital-use fees proportional to anticipated \(p_{c}\)
A unique feature of IAMs like OPUS is the ability to compare non-economic policies like binding disposal rules and economic policies like orbital-use fees (OUFs) in an internally-consistent manner. While dynamically-optimal OUFs as in Rao, Burgess, and Kaffine (2020) are computationally expensive to calculate in even the single-location case, simpler implementations may still be worth exploring.14 We therefore consider an exercise using OUFs that are proportional to the anticipated next-period aggregate \(p_{c}\), i.e. setting \(\tau_{kt}=\varepsilon P_{2k}(S_{.t},D_{.t})\) for a constant value of \(\varepsilon\) in equation 3. We arbitrarily set \(\varepsilon=0.5\) for this exercise. This value implies operators at location \(k\) are charged an annual fee equal to 50% of the expected replacement cost of a satellite deployed to \(k\).15 Footnote 14: Dynamically-optimal OUFs can be calculated in future iterations of OPUS. Footnote 15: To see this, note that equation 3 can be converted from rate units to dollar units by multiplying through by \(c_{k}\), making the tax level equal to \(\tau_{kt}c_{k}=0.5P_{2k}(S_{.t},D_{.t})c_{k}\). See Rao and Rondina (2023) for more on this point.
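This fee schedule is simple enough to state in a few lines; a minimal sketch (with illustrative names, using the rate-to-dollar conversion from footnote 15) is:

```python
import numpy as np

def ouf_schedule(p_c_next, cost, epsilon=0.5):
    """Orbital-use fee proportional to anticipated next-period collision risk.

    p_c_next : anticipated aggregate P_2k(S_t, D_t), one entry per location k
    cost     : satellite replacement cost c_k per location
    epsilon  : proportionality constant (0.5 in this exercise)
    Returns the fee in rate units (tau_kt) and in dollars per satellite-year.
    """
    tau = epsilon * np.asarray(p_c_next)   # tau_kt = epsilon * P_2k(S_t, D_t)
    return tau, tau * np.asarray(cost)     # dollar fee: tau_kt * c_k
```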
This type of simple OUF only strengthens the already-extant incentive for fringe operators to control future collision risk growth. It does not directly introduce incentives to account for additional consequences such as runaway debris growth or consequences to other operators in other shells, except insofar as those consequences are correlated with \(p_{c}\) in the location the operator is considering launching to next period. As before, we evaluate the policies by the patterns of fringe satellite and derelict object accumulation, the normalized aggregate SSR index, and the expected maximum economic welfare under each policy.
Figure 16: Accumulation of fringe satellites and derelict objects under 25-year disposal rule with full compliance and a 50%-anticipated-\(p_{c}\) OUF.
Figure 16 shows the patterns of fringe satellite and derelict object accumulation. The 50%-anticipated-\(p_{c}\) OUF has a noticeable effect on accumulation patterns. The second hotspot above the constellation visible in Figure 11 no longer emerges, and the fringe satellites cluster below the constellation even more strongly than under the 5-year disposal rule (seen in Figure 12). Indeed, with the OUF in place fringe operators strongly cluster near the minimum economically viable orbit, further reducing derelict accumulation relative to even the 5-year disposal rule. Figure 17 compares the normalized aggregate SSR index for 5-year disposal without the OUF against 25-year disposal with the OUF. Perhaps surprisingly--but consistent with the accumulation patterns in Figure 16--25-year disposal with the OUF results in lower SSR index values for the derelict population than the 5-year disposal rule. That this is largely attributable to the pattern of satellite and derelict accumulation rather than their total numbers can be seen from Figure 19, which shows the difference in totals is smaller than the difference in SSR index values. Figure 18 compares the expected maximum economic welfare under both policies. The expected maximum welfare is similar in both cases, though the OUF starts at a higher level, ends at a higher level, and features stronger oscillations. These features are intuitive from the nature and structure of the OUF. Initially, when higher altitudes are clearer, operators can use them, allowing for more satellites and lower economic costs overall. As those orbits fill and anticipated \(p_{c}\) from continuing to launch there grows, operators choose to cluster at lower altitudes to avoid fee liability. Since the OUF moves pro-cyclically with object accumulation and anticipated \(p_{c}\), the oscillations due to debris dynamics are relatively amplified compared to the 5-year rule (though still damped). The 5-year rule does not confront operators with this kind of dynamic incentive to keep the orbit usable and also does not allow them to use higher orbits while they are clear, reducing the economic welfare generated. Finally, Figure 19 compares total object accumulations under both policies. The overall numbers of derelicts are fairly similar, though the OUF results in slightly smaller numbers. The initial trough in fringe satellite accumulation is deeper, though as time progresses both feature similar oscillations and approach similar levels.
Again, we emphasize that these results are demonstrative of model capabilities and the quantitative magnitudes should not be interpreted as specific predictions. However, the qualitative patterns point to an important observation in environmental and public economics: binding policies that change the behavior of rational actors function as taxes, whether implemented as such or not (Baumol and Oates, 1988; Fullerton, 2001; Bovenberg and Goulder, 2002). Unlike the implicit taxes created by command-and-control policies such as deorbit mandates, explicit taxes allow operators to identify productive margins of substitution and can be used to directly target a desired policy outcome or metric (Tietenberg, 2013). They can also raise revenue, which can be used toward other policy goals (e.g. financing public goods such as debris remediation, or cutting other taxes) (Goulder, 1995). Yet the incidence of the policies, and therefore their political economies, can be quite different (Fullerton, 2011). Further research can help elucidate these differences in the orbital context and inform policy design.
Figure 17: Normalized aggregate SSR index values for all object types under open-access behavior given full compliance with 25- or 5-year disposal rules.
Figure 18: Expected maximum economic welfare for fringe satellites under open-access behavior given full compliance with 5-year disposal rule or 25-year disposal rule with a location-specific 50%-anticipated-\(p_{c}\) OUF.
Figure 19: Total numbers of constellation, fringe, and derelict satellites under open-access behavior given full compliance with 25- or 5-year disposal rules.
### The GMPHD propagator
Having demonstrated the potential of the OPUS framework to evaluate a diverse array of policies on equal footing, we now illustrate the framework's agnosticism to the specific propagator used. The GMPHD filter is still in early stages of development and requires further testing and validation before it can be used at the level of MOCAT-4S. However, we believe it shows promise, particularly for evaluating a larger state space of debris objects. To illustrate this potential we apply the GMPHD filter to evaluate the evolution of lethal non-trackable objects under equilibrium behavior. Since the GMPHD filter parameters have not been thoroughly tested, we run the simulation for only 10 years. Figure 20 shows these results. While the debris propagation appears to function effectively, the interaction with the economic solver produces unexpected results. After an initial period of plausible launch rates, the open-access condition appears to either have no interior solutions or become sufficiently irregular that the optimizer cannot find a solution, resulting in spatially-uniform oscillations. Regardless, the model is able to calculate implied counts of non-trackable debris in addition to larger trackable objects from propagating the underlying size distributions. These outcomes point to the need for further investigation of the GMPHD filter as an additional propagator to incorporate into the OPUS framework.
Figure 20: Total numbers of large trackable, small trackable, lethal non-trackable debris and fringe satellites from GMPHD filter propagation.
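For readers unfamiliar with this propagator family, the sketch below shows one textbook Gaussian-mixture PHD prediction step of the kind such a filter iterates each period. It is purely illustrative: the linear dynamics, process noise, and survival probability are placeholders, and this is not the GMPHD implementation used in OPUS.

```python
import numpy as np

def gmphd_predict(weights, means, covs, F, Q, p_survive=0.95):
    """One textbook GM-PHD prediction step: thin component weights by the
    survival probability and push each Gaussian component of the debris
    intensity function through the linear dynamics.

    weights : (N,) component weights of the debris intensity function
    means   : (N, d) component means (e.g. altitude/size coordinates)
    covs    : (N, d, d) component covariances
    F, Q    : (d, d) linear dynamics (e.g. drag-driven decay) and process noise
    """
    w_pred = p_survive * np.asarray(weights)   # expected surviving intensity
    m_pred = np.asarray(means) @ F.T           # x' = F x for every component
    P_pred = F @ np.asarray(covs) @ F.T + Q    # P' = F P F^T + Q (broadcast over N)
    return w_pred, m_pred, P_pred
```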
## 4 Future research and development directions
The results presented above are a sample of what is possible in an IAM framework like OPUS. Using systems of equations that reflect payoffs from orbit use under different states of the world to predict launch patterns presents opportunities to study a wide array of policies, from disposal rules (e.g. 25- vs 5-year) to natural capital pricing (e.g. OUFs and performance bonds) to directed technology support (e.g. targeted debris removal subsidies). The framework also enables studying the effects of economic changes in the space sector, such as increased availability of smallsat launchers or new heavy-lift rockets. Below we outline a few directions for research and development to improve the utility of this IAM and others which may be developed in the future.
**Improved economic parameter calibration.** A key limitation of current physico-economic modeling of orbit use is the lack of detailed economic data on the costs and revenues different operators incur/receive from using different locations. Such data is necessary to provide meaningful quantitative policy analyses. Projects like the Bureau of Economic Analysis' Space Economy satellite accounts are a useful step forward in this direction (Highfill and MacDonald, 2022), though they do not yet provide the level of detail on orbit use necessary for reliable quantitative estimates. Such data collection can also enable development of IAMs which incorporate linkages between orbit use and terrestrial economies, e.g. Nozawa et al. (2023).
**Models of launch capacity and prices.** OPUS currently assumes open-access launchers are not constrained by limited rocket availability. This may be true at some points in time and not at others, though disentangling the causes of limited capacity--whether due to insufficient willingness-to-pay or fundamental constraints--is a separate question. Similarly, the price of launch is assumed to be constant throughout the simulation, though for policy analysis it may be useful to consider specific price trajectories, e.g. as in Adilov et al. (2022). Constructing and incorporating models of the launch market would enable more realistic estimates of behavioral responses to policy proposals. Such models would also enable analysis of policies aimed at launch capacity, e.g. targeted support for particular types of launch vehicles.
**More detailed models of operator behavior.** The economic behavior model applied here follows from the equilibrium conditions for launchers derived in Rouillon (2020); Rao, Burgess, and Kaffine (2020); Rao and Rondina (2023). It is analytically convenient in studying launch intensity and location choices, as well as the overall impact of natural capital pricing. However, the present formulation only allows for a single per-satellite tax ("orbital-use fee"). It does not allow for easy differentiation between different implementations of natural capital pricing policies such as explicit satellite taxes (Rao, Burgess, and Kaffine, 2020), satellite deorbit performance bonds (Adilov, Alexander, and Cunningham, 2023), or combinations of instruments intended to alter satellite design choices (Grzelka and Wagner, 2019; Guyot and Rouillon, 2023). While these instruments can be described as _types_ of orbital-use fee, their implementations differ and can matter for overall policy effectiveness as well as implications for debris mitigation/remediation (Macauley, 2015; Rao, 2018; Guyot and Rouillon, 2023).
**Models of constellation behavior.** The model of launch behavior presented here focuses solely on operators who individually account for small shares of orbit use.
Large constellations face different economic considerations and require different models. Recently there has been some progress in building economic models of constellations (Bernhard, Deschamps, and Zaccour, 2023; Guyot, Rao, and Rouillon, 2023), though these models do not incorporate open-access launchers. Incorporating models of constellation behavior into IAMs will enable more realistic and diverse policy analyses.
**Models of non-commercial demand for satellite services.** Though commercial orbit use is growing, Figure 4 shows that non-commercial orbit use remains a significant share of satellites in orbit. Many economically and socially important uses of orbital space are provided by government actors without commercial motives, e.g. GPS, and governments may function as an important source of demand for commercial services like telecommunications or remote sensing. The model of launch behavior presented here does not account for such patterns. However, the overall framework of OPUS can be extended to incorporate non-commercial orbit use. Such models may be easier to estimate from empirical data rather than derived from first principles, e.g. as in Rao and Letizia (2021).
## 5 Conclusion
In this paper we have introduced and validated an Integrated Assessment Model (IAM) capable of analyzing both the physical behavior of objects in orbit and the economic behavior of entities controlling these objects. By combining orbital propagators with economic modeling, OPUS provides a robust tool that can help policymakers better understand and mitigate the challenges associated with the growing congestion of Earth's orbital space. Many types of analyses are possible within the OPUS framework. These range from studying the environmental effects of reductions in the cost of launching or operating a satellite to exploring the implications of increases in commercial demand for satellite services. OPUS can also be applied to investigate the effects of anti-satellite (ASAT) tests, among other policy-relevant scenarios. While OPUS' capabilities have been demonstrated, it is worth emphasizing that the results presented here are intended to illustrate the model's versatility and potential applications rather than provide specific policy guidance. There are several limitations in the current implementation of OPUS. A major limitation is the lack of detailed economic data on the costs and revenues for different satellite operators. Furthermore, OPUS currently only endogenizes open access to orbit by commercial launchers, with large constellation behavior treated as exogenous and non-commercial use not included. Future research could extend OPUS to include models focusing on large constellations and non-commercial demands for satellite services. Despite its limitations, OPUS is a useful step toward a holistic understanding of space sustainability issues. Policymakers can use this tool to weigh the costs and benefits of various proposals, from orbital slotting concepts and deorbit timelines to targeted debris removal subsidies and orbital-use fees.
2305.20044
Probabilistic Uncertainty Quantification of Prediction Models with Application to Visual Localization
The uncertainty quantification of prediction models (e.g., neural networks) is crucial for their adoption in many robotics applications. This is arguably as important as making accurate predictions, especially for safety-critical applications such as self-driving cars. This paper proposes our approach to uncertainty quantification in the context of visual localization for autonomous driving, where we predict locations from images. Our proposed framework estimates probabilistic uncertainty by creating a sensor error model that maps an internal output of the prediction model to the uncertainty. The sensor error model is created using multiple image databases of visual localization, each with ground-truth location. We demonstrate the accuracy of our uncertainty prediction framework using the Ithaca365 dataset, which includes variations in lighting, weather (sunny, snowy, night), and alignment errors between databases. We analyze both the predicted uncertainty and its incorporation into a Kalman-based localization filter. Our results show that prediction error variations increase with poor weather and lighting conditions, leading to greater uncertainty and outliers, which can be predicted by our proposed uncertainty model. Additionally, our probabilistic error model enables the filter to remove ad hoc sensor gating, as the uncertainty automatically adjusts the model to the input data.
Junan Chen, Josephine Monica, Wei-Lun Chao, Mark Campbell
2023-05-31T17:14:25Z
http://arxiv.org/abs/2305.20044v2
# Probabilistic Uncertainty Quantification of Prediction Models with Application to Visual Localization
###### Abstract
The uncertainty quantification of prediction models (_e.g._, neural networks) is crucial for their adoption in many robotics applications. This is arguably as important as making accurate predictions, especially for safety-critical applications such as self-driving cars. This paper proposes our approach to uncertainty quantification in the context of visual localization for autonomous driving, where we predict locations from images. Our proposed framework estimates _probabilistic_ uncertainty by creating a _sensor error model_ that maps an internal output of the prediction model to the uncertainty. The sensor error model is created using _multiple_ image databases of visual localization, each with ground-truth location. We demonstrate the accuracy of our uncertainty prediction framework using the Ithaca365 dataset, which includes variations in lighting, weather (sunny, snowy, night), and alignment errors between databases. We analyze both the predicted uncertainty and its incorporation into a Kalman-based localization filter. Our results show that prediction error variations increase with poor weather and lighting conditions, leading to greater uncertainty and outliers, which can be predicted by our proposed uncertainty model. Additionally, our probabilistic error model enables the filter to remove ad hoc sensor gating, as the uncertainty automatically adjusts the model to the input data.
## I Introduction
The evolution of modern prediction models (_e.g._, neural networks) has revolutionized the performance of applications ranging from medical diagnostics and business analysis to robotics. However, much of the research in this field has focused primarily on enhancing performance (_e.g._, average prediction accuracy) through better data collection and architectures. Despite these advancements, one significant weakness of many models is their inability to provide a sense of confidence in individual predictions. Predictive accuracies of these models can vary based on factors such as the amount and diversity of training data, the model architecture details, and the complexity of the test environment [1, 2]. In certain applications, such as medical imaging or self-driving, _probabilistic uncertainty quantification_ of prediction outputs is crucial. Realizing uncertainty models for these networks will not only facilitate their integration into formal probabilistic perception and planning frameworks but also enable better reasoning over the outputs. For example, in medical diagnosis, doctors should intervene when the neural network lacks confidence in its prediction [3]. While some modern neural networks attempt to output probabilistic uncertainty, the reliability of the uncertainty prediction is still insufficient for safety-critical decision-making [4]. Most modern neural networks are deterministic or produce only _non-probabilistic_ confidence, such as the softmax function. Current uncertainty modeling methods can generally be divided into three categories: Bayesian neural networks, ensemble, and post-processing methods. Bayesian neural networks [5, 6] construct an inherent uncertainty estimation framework by formalizing a probability distribution over the model parameters [7]. However, they are difficult to train and often output poorly calibrated confidence scores [8].
Ensemble methods [9] typically train multiple neural networks with different training data or architectures, and the variance of the networks' outputs can indicate the uncertainty level. However, these methods require larger networks and additional training and inference steps. Post-processing methods, such as neural network calibration, are general enough to be used with different networks. However, they require uncalibrated uncertainty as an input and cannot predict uncertainty directly. Examples include histogram binning [10] and isotonic regression [11]. Some post-processing methods, such as Platt scaling [12], can predict uncertainty directly but require additional layers to be trained. The output of these methods is typically a simple confidence score, which is calibrated to be an approximate probability of correctness. This paper presents a _general_ uncertainty prediction framework that does not require additional training of the network or changes in network architecture. The framework is _probabilistically_ formulated to provide both probability/confidence and an uncertainty distribution across the outputs. To achieve this, we leverage the concept of _sensor models_ in estimation frameworks (_e.g._, the Kalman filter). For traditional sensors, manufacturers typically provide error model specifications that indicate the accuracy of the sensor under different conditions, _e.g._ the accuracy of LiDAR as a function of range or the covariance of pseudo-ranges for GPS in various weather conditions. We propose creating an error uncertainty model for the network predictions using the internal network outputs and analysis across datasets. We demonstrate the effectiveness of our uncertainty prediction approach using the problem of visual localization [13]. We focus on this problem for two reasons: first, the neural network outputs a 2D position from an image, making it easy to analyze, and second, the network's performance is known to degrade in poor weather and lighting conditions [14, 15]. We build upon a typical visual localization model [16] which predicts the pose of a query image by searching for the most similar image in a database of images with known poses using keypoint matching [17]. Firstly, we analyze a baseline neural network to understand how its performance varies across different databases (weather and lighting). We then create a statistical error model using the internal outputs of the network (number of keypoint matches between the query and retrieved images) as the _cue_ to predict visual localization error uncertainty. Importantly, the matched keypoints of each model/database can be calibrated and binned based on both a probability and 2D error. During inference, given the number of keypoint matches from an image, the sensor error model can directly return an uncertainty estimate in the form of a 2D error covariance (analogous to a traditional sensor) and a formal confidence. We can also incorporate the error model output in a Kalman-based localization filter, which provides a range of formal evaluation tools such as filter integrity and sensor hypothesis testing. We evaluate our approach using Ithaca365 [18], a large-scale real-world self-driving dataset that includes _multiple_ traversals along repeated routes, varying weather and lighting conditions, and high-precision GPS.
Our main contributions are threefold: First, we analyze a state-of-the-art neural network for visual localization across a comprehensive dataset that includes multiple routes, lighting, and weather conditions to understand how errors vary across these key conditions. Second, we propose an approach to predict well-calibrated uncertainty without modifying the base neural network or requiring additional training. Third, we validate our method in the visual localization problem on a large real-world dataset under various settings and demonstrate that it consistently produces well-calibrated uncertainty estimates and high integrity filters without ad hoc fixes.
## II Related Works
### _Uncertainty Modeling._
Modern prediction models are known for their high performance in various tasks, but they often lack the ability to tell the uncertainty in their predictions. While some models, such as classification neural networks, can produce a confidence score, it is not probabilistic and therefore may not be entirely reliable. Ensembles [19, 9, 20] offer a solution by training multiple networks and combining their predictions to calculate variance and represent uncertainty. However, ensembles require costlier training, since multiple networks must be trained, as well as more inference time. Bayesian neural networks (BNNs) [5, 6] offer another potential solution by treating neural network weights as random variables instead of deterministic values, with predictions in the form of an expectation over the posterior distribution of the model weights. Two prominent methods in BNN are Bayes by Backprop [21] and Monte Carlo (MC) Dropout [22]. Bayes by Backprop regularises the weights by optimising a variational lower bound on the marginal likelihood. MC Dropout interprets dropout as approximately integrating over the model's weights. However, BNN requires specifying a meaningful prior for the parameters, which can be challenging. Additionally, the uncertainty is often poorly calibrated, necessitating post-processing methods [23, 8, 24] to map poorly calibrated uncertainty to well-calibrated uncertainty. For instance, temperature scaling is a widely used post-processing method due to its simplicity and effectiveness [23]. [8] extends the technique from classification tasks to regression tasks. However, such post-processing methods either require inputs of uncalibrated uncertainty or re-training some layers. In contrast, our method differs from these methods in that we do not alter the prediction model's structure, hence preserving its performance. Furthermore, our method can output accurate uncertainty with no additional training and can be applied to any prediction model.
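As a point of reference for the MC Dropout baseline discussed above, a minimal PyTorch-style sketch is shown below. The model, sample count, and the blanket use of `train()` to keep dropout active are illustrative; a careful implementation would enable only the dropout layers.

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Monte Carlo dropout sketch: repeat stochastic forward passes with
    dropout left active and treat the spread of the outputs as an
    (uncalibrated) uncertainty estimate."""
    model.train()  # keeps nn.Dropout layers stochastic (also affects batchnorm)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    model.eval()
    return samples.mean(dim=0), samples.var(dim=0)
```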
### _Visual Localization_
Visual localization aims to predict the pose of a query image using environmental information such as images and point clouds. Two main branches of visual localization are image-based localization and 3D-structure-based localization. Image-based localization [25, 26, 27] can be understood as an image retrieval problem, _i.e._ retrieving the most similar image from an image database/library with known poses and taking the pose of the retrieved image as the predicted pose. Several approaches [28, 29] have been proposed to extract image features for this purpose. In contrast, 3D-structure-based localization [30, 31, 32, 33, 16, 34, 35] predicts the location by finding the pose that best matches the detected 2D keypoints in the query image with the 3D keypoints in a pre-constructed 3D model. However, to the best of our knowledge, few works have considered the uncertainty associated with the predicted location. While some works [36, 17] output confidence scores on detected keypoints and their matching, they do not provide any information about the uncertainty of the predicted location.
## III Method
In this section, we discuss our method for uncertainty quantification of prediction models, using visual localization as the application task. We start by defining a baseline visual localization framework, then present our approach to modeling the errors and calibrating the uncertainties of the predictive network, and finally, we define a full visual localization pipeline, with a filter and sensor gating, to be used in the validation steps.
### _Location Prediction from Image Retrieval_
Let \(X=\{k_{i}\}_{i=1}^{N}\) be a set of database images with known GPS locations \(r(k_{i})\). Given a query image \(q\), our goal is to estimate the location where the image was taken. As images taken from close-by poses should preserve some content similarity, we find the _closest_ image \(f_{\text{closest}}(q;X)\) from database \(X\) and use its corresponding location as the predicted location \(\hat{r}(q)=r(f_{\text{closest}}(q;X))\). We define the _closest_ image as the image with the largest number of keypoint matches \(n_{\text{kpm}}\) to the query image. However, performing keypoint matching of the query image to all \(N\) database images is computationally expensive. Therefore, more efficient global feature matching (NetVLAD [29]) is performed first, followed by neural keypoint matching (SuperPoint [36] + SuperGlue [17]) on the top \(n\ll N\) candidate images. This pipeline is shown in Figure 1 (top, green). A standard location prediction from image retrieval pipeline typically uses a database from just one traversal (passing the route once). We propose to use _multiple databases_ from multiple traversals, motivated by several key observations. First, a query image has a non-zero distance to even its closest image from a database (see Figure 2). Using multiple traversals increases the database image options and thus lowers the average error. Second, as the query image for localization can originate from different weather and lighting conditions, it is important to diversify the database images to reduce potential errors (those from the traversal and from keypoint mismatches). Finally and most importantly, data from multiple traversals can be used to provide a localization uncertainty prediction, as will be shown in III-B. One naive approach is to simply treat the additional data from multiple traversals \(X^{1},X^{2},\ldots,X^{L}\) as one combined (large) database, and apply the same pipeline. However, this is not effective, as the candidate images retrieved by global feature matching often are biased to come from a single database whose color or even foreground object appearance is most similar to the query image. This motivates our new approach, which encourages retrieval of candidate images from _each_ traversal as shown in Figure 1.
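A schematic of the two-stage retrieval just described is sketched below. Here `match_fn` stands in for the SuperPoint+SuperGlue matcher and is an assumed, user-supplied callable; in the multi-traversal pipeline the first stage would be run once per traversal rather than over a combined database.

```python
import numpy as np

def retrieve_closest(query_desc, db_descs, db_images, query_img, match_fn, top_n=10):
    """Two-stage retrieval sketch: rank the database by global-descriptor
    similarity (standing in for NetVLAD), then re-rank the top-n candidates
    by keypoint-match count (standing in for SuperPoint+SuperGlue).
    match_fn(img_a, img_b) -> int returns the number of keypoint matches."""
    # Stage 1: cosine similarity of L2-normalized global descriptors.
    q = query_desc / np.linalg.norm(query_desc)
    d = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    candidates = np.argsort(d @ q)[::-1][:top_n]
    # Stage 2: exhaustive keypoint matching on the short list only.
    n_kpm = [match_fn(query_img, db_images[i]) for i in candidates]
    best = candidates[int(np.argmax(n_kpm))]
    return best, max(n_kpm)
```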
### _Uncertainty Prediction and Quantification_
#### III-B1 Problem Definition
We formally define the uncertainty quantification problem as predicting the error bound \(\sigma_{c}(q)\in\mathbb{R}^{+}\) of image \(q\) and confidence level \(c\in[0,1]\) such that the error between the predicted location \(\hat{r}=r(f(q;X))\) and the ground-truth \(r_{\text{gt}}\) is below \(\sigma_{c}\) with probability \(c\): \[p\left(\|\hat{r}-r_{\text{gt}}\|<\sigma_{c}(q)\right)=c \tag{1}\]
#### III-B2 Sensor Error Model
We propose to create a _sensor error model_ to determine the confidence of the prediction (_e.g._ neural network output). A sensor error model maps key attributes of the prediction to error bound \(\sigma_{c}\) and confidence \(c\) estimates; for example, the error of a stereo depth sensor grows quadratically with range [37]. We first analyze the performance of visual localization prediction as a function of the number of keypoint matches \(n_{\text{kpm}}\) by performing cross-validation using different databases. As an example, Figure 3 shows scatter plots of the location error between images from two databases (sunny and night) and their closest images from _another_ database (sunny) as a function of the number of keypoint matches. From this analysis, we learn two things. First, the number of keypoint matches \(n_{\text{kpm}}\) can serve as a good indicator for uncertainty quantification. Second, the relationship between number of keypoint matches and error can be different for different databases (traversals); the scatter plots have different distributions. Thus, we propose to build the sensor error model as a function of number of keypoint matches, and build one model for each different traversal. We can utilize multiple traversals to learn this mapping as follows.
Fig. 1: Pipeline for location prediction from image retrieval using multiple traversals.
Fig. 2: GPS locations of several traversals (zoomed in for illustration; full trajectory is not shown). Using multiple traversals increases the chances that a database image is closer to the query image location (_i.e._, smaller theoretical error).
Fig. 3: Relationship between number of keypoint matches and location error for two different database traversals.
#### III-B3 Creating Sensor Error Model
Key to our approach is creating a sensor error model for _each_ database/traversal. For database \(l\), we apply the image retrieval pipeline using traversal \(X^{l}\) as the query and another traversal \(X^{m\neq l}\) as the database. For every image \(k_{i}^{l}\in X^{l}\), we find the closest image \(f(k_{i}^{l};X^{m})\) from database \(m\) and compute the location error \(\|r(f(k_{i}^{l};X^{m}))-r(k_{i}^{l})\|\). Thus, for each image, we can compute the number of keypoint matches (to its closest image) and location error. This process is repeated using all \(L-1\) different traversals (other than \(X^{l}\)). We divide the data (number of keypoints vs error) into bins according to the number of keypoint matches (_e.g._, bin 1 contains data points with keypoint matches ranging from 0-200, bin 2 from 200-400, and so on). For each bin, we empirically determine the error bound \(\sigma_{c}\) for confidence \(c\) such that \(c\) fraction of data in that bin has smaller error than \(\sigma_{c}\). We repeat for each traversal/database, as shown in Figure 4 (top).
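The binning-and-calibration step just described can be sketched compactly; the bin width, confidence level, and function names below are illustrative rather than the paper's exact calibration settings.

```python
import numpy as np

def build_error_model(n_kpm, err_xy, bin_width=200, conf=0.95):
    """Build one per-traversal sensor error model from cross-validation pairs:
    bin samples by keypoint-match count, then store an empirical error
    quantile and a 2D error covariance per bin.

    n_kpm  : (M,) keypoint-match counts from cross-traversal retrieval
    err_xy : (M, 2) signed 2D location errors in ego/sensor coordinates
    Returns {bin_index: (sigma_c, R)} with R a 2x2 covariance matrix.
    """
    model = {}
    bins = np.asarray(n_kpm) // bin_width
    err_xy = np.asarray(err_xy)
    for b in np.unique(bins):
        e = err_xy[bins == b]
        sigma_c = np.quantile(np.linalg.norm(e, axis=1), conf)  # error bound
        R = np.cov(e.T)                                         # 2x2 covariance
        model[int(b)] = (sigma_c, R)
    return model
```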
#### III-B4 Model Prediction with Uncertainty and Confidence
The inference process is shown in Figure 4 (bottom). Given a query image of unknown location, we retrieve the closest image (as detailed in III-A); the location of the closest image becomes the predicted location. To find the confidence of the prediction, we use the database of the closest image (say \(l\), or \(X^{l}\)). The corresponding (\(l\)) sensor error model is then used; the bin associated with the number of keypoint matches (between query and closest image) gives the corresponding error bound \(\sigma_{c}\) at confidence level \(c\). Finally, we form a quantified uncertainty (in the form of a 2D estimation error covariance in this case). Specifically, we compute the measurement covariance \(R\in\mathbb{R}^{2\times 2}\) from the cross-validation data, per database, per number of keypoint matches range (bin). The covariance matrices are formed and expressed in the ego car (sensor) coordinates. This covariance matrix will be used as the measurement covariance in subsection III-C.
### _Full Visual Localization Pipeline_
We build a full visual localization pipeline using the location prediction (III-A) as the uncertain measurement, and the uncertainty prediction (III-B) as the error covariance, within a formal estimation framework using the Sigma Point (Unscented) filter (SPF) [38, 39]. Our goal is to estimate the posterior \(p(s_{t}|m_{1:t})\) of the state vector \(s_{t}\) at time \(t\) given observed measurements \(m_{1:t}\). We define the state vector as follows: \[s=\begin{bmatrix}x&y&\theta&v&\dot{\theta}\end{bmatrix}^{T} \tag{2}\] where \(x,y,\theta\) are the inertial, planar position and heading angle, and \(v,\dot{\theta}\) are the linear and angular velocity of the car. In the prediction step of the SPF, we assume constant linear and angular velocity (\(v\) and \(\dot{\theta}\)) with a small process noise. In the measurement update, given an image input, we process the image through the location and uncertainty prediction pipeline (III-A and III-B) to give the (\(x,y\)) location measurement and error covariance; the covariance is transformed to the inertial coordinates for the filter. Most modern estimation frameworks also typically employ _sensor measurement gating_ to decide whether to accept a measurement (_i.e._, use it in the filter update) or reject the measurement (_i.e._, it is an outlier, outside the nominal error mode). Given a measurement vector \(m\), we compute the Mahalanobis distance \(d_{M}\) defined as follows: \[d_{\text{M}}^{2}=(m-\hat{m})^{T}(H\hat{C}H^{T}+R)^{-1}(m-\hat{m}) \tag{3}\] where \(R\) is the measurement covariance _transformed_ to the world coordinates, \(\hat{C}\) is the estimated state covariance from the SPF, \(H\) is the measurement matrix that maps the state vector to the measurement, and \(\hat{m}=H\hat{s}\) is the expected measurement. The measurement is rejected if it lies outside of the validation gate, \[\text{if }d_{\text{M}}^{2}>\chi_{k,\alpha}^{2}\rightarrow\text{ reject}, \tag{4}\] where \(\chi_{k,\alpha}^{2}\) is a threshold from the inverse chi-squared cumulative distribution at a level \(\alpha\) with \(k\) degrees of freedom. The level \(\alpha\) controls the validation gate, _i.e._ it rejects \((1-\alpha)\times 100\%\) of the measurements at the tail; typical values are \(0.99\), \(0.975\), and \(0.95\).
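Equations (3) and (4) translate directly into a short gating test; a minimal sketch, assuming numpy/scipy and the matrices defined above, is:

```python
import numpy as np
from scipy.stats import chi2

def gate_measurement(m, m_hat, H, C_hat, R, alpha=0.99):
    """Chi-squared measurement gate from equations (3)-(4): accept the
    measurement if its squared Mahalanobis distance to the filter's
    predicted measurement falls inside the validation gate."""
    S = H @ C_hat @ H.T + R                  # innovation covariance
    nu = m - m_hat                           # innovation
    d2 = nu @ np.linalg.solve(S, nu)         # squared Mahalanobis distance
    return d2 <= chi2.ppf(alpha, df=len(m))  # False means reject as outlier
```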
## IV Experiments
### _Dataset_
We use the Ithaca365 dataset [18], containing data collected over multiple traversals along a 15 km route under various conditions: snowy, rainy, sunny, and nighttime. We utilize two types of sensor data, images and GPS locations, for our experiments. For our database, we randomly select nine traversals, with three traversals each from the sunny (\(X^{1}\), \(X^{2}\), \(X^{3}\)), nighttime (\(X^{4}\), \(X^{5}\), \(X^{6}\)), and snowy (\(X^{7}\), \(X^{8}\), \(X^{9}\)) conditions. We use three additional traversals (\(Q^{\text{sunny}}\), \(Q^{\text{snowy}}\) and \(Q^{\text{night}}\)), one from each condition, as queries for testing and evaluation. To avoid double counting and ensure a uniform spatial distribution across the scenes in evaluation, we sample query images at an interval of \(\approx\)1 m, except for highways where the spacing is larger. This results in an average of \(\approx\)10,000 images for each query traversal, \(Q^{(\cdot)}\).
Fig. 4: Pipeline for uncertainty prediction. Top: creating the sensor error model. Bottom: using the sensor error model in inference.
### _Evaluation_
#### IV-B1 Sensor Error Model
First, we evaluate the correctness of our uncertainty prediction on _location prediction using image retrieval_. Following [8, 23], we use a _reliability diagram_ to compare the expected confidence level with the observed confidence level. For a given expected confidence level \(c\), the observed confidence is obtained by computing the empirical frequency \(\hat{p}_{c}\) that the location error \(\|\hat{x}(q)-x_{\text{gt}}(q)\|\) is below the predicted uncertainty \(\sigma_{c}(q)\): \[\hat{p}_{c}=\frac{|\{q\in Q\ \ \text{s.t.}\ \|\hat{x}(q)-x_{\text{gt}}(q)\|\leq \sigma_{c}(q)\}|}{|Q|}. \tag{5}\] If the uncertainty quantification is accurate, the diagram should plot the identity function (a straight line with a gradient of one). The reliability diagram in Figure 5 shows that our method produces accurate probabilistic confidence, as evidenced by the small gaps between observed and expected confidence at all levels and across all three conditions.
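Equation (5) amounts to a one-line empirical frequency; a sketch of how each point of the reliability diagram could be computed is shown below (array names are ours).

```python
import numpy as np

def observed_confidence(errors, sigma_c):
    """Equation (5): fraction of query images whose location error falls
    below the predicted error bound sigma_c(q) at a given confidence level.
    errors and sigma_c are per-query arrays; evaluating this at several
    expected confidence levels c traces out the reliability diagram."""
    return float(np.mean(np.asarray(errors) <= np.asarray(sigma_c)))
```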
#### IV-B2 Visual Localization: Filter + Prediction/Error Model
Next, we evaluate the _full visual localization pipeline_, which uses the previous location predictions as measurements in the SPF (subsection III-C). We evaluate both the localization error and the uncertainty of the estimates. The localization error \(d_{\text{err}}\) is the average distance error between estimated and ground-truth locations, whereas covariance credibility measures the frequency that the 2D localization error lies within an \(n\)-sigma covariance ellipse; we use 1-, 2- and 3-sigma, corresponding to \(68\%\), \(95\%\) and \(99.7\%\) probability respectively in a 2D Gaussian distribution. We present three sets of experiments in Table I. The first set of experiments (rows 1-9) uses the original image inputs. The second and the third sets simulate high sensor error/failure by corrupting _several_ images along the _red paths_ of Figure 6. Specifically, the second set (rows 10-15) applies average blurring, and the third set (rows 16-21) applies salt and pepper noise, as shown in Figure 7. Within each set, three levels of measurement gating are evaluated, with 0, 1.0%, and 2.5% probability gates. We compare our method to a constant covariance baseline commonly used in Kalman filters, where the constant covariance value is obtained by tuning on the validation data, separately for each weather condition. Our method and the constant covariance baseline receive the _same measurement vectors_ but use _different measurement covariances_. Additionally, in the first experiment set, we provide a comparison to the Monte Carlo (MC) Dropout method. Specifically, we apply a dropout layer after the final keypoint feature projection layer with a 0.3 dropout probability and repeat the dropout process multiple times until the SPF localization results stabilize. We report the converged results. Analysis of Table I yields several observations. Firstly, our method outperforms the MC Dropout and constant covariance baselines in terms of localization accuracy (\(d_{\text{err}}\)) in nearly all cases, suggesting that a good uncertainty model can improve localization accuracy, even with similar measurement quality. Our method also produces more accurate uncertainty estimates (indicated by covariance credibility) than the two baselines in nearly all cases. This is crucial for making informed decisions in the future. Second, we observe that formal sensor gating through hypothesis testing with prediction networks is too sensitive and does not work well. However, our adaptive covariance method removes the need for sensor gating and standard outlier rejection in filters. In the typical estimation framework, sensor gating is used to reject bad measurements that could adversely affect the performance. While outlier rejection may improve performance, it is highly susceptible to threshold parameter selection (\(\chi^{2}_{k,\alpha}\)). We observe that there is hardly a single appropriate threshold value that works for different query and measurement conditions. The analysis of the chi-square test using errors and covariances indicates that the errors produced by the DL algorithm do not conform\({}^{1}\) to a Gaussian error model, which is essential to the chi-square test. This finding suggests potential future work in developing non-Gaussian uncertainty models and associated gating techniques that can better match the DL errors. Footnote 1: with exception in cases of sunny weather with many keypoints, where the errors do fit the Gaussian error model.
Fig. 5: Reliability diagrams for \(Q^{\text{sunny}}\), \(Q^{\text{night}}\), and \(Q^{\text{snowy}}\).
Fig. 6: Data collection path (black), with corrupted images (red).
Fig. 7: Examples of corrupted images. Top: original images. Mid: blurred images (average blur with kernel size 80). Bottom: images corrupted with salt and pepper noise (noise amount is 0.5).
Fortunately, a key novelty of our approach is that it does not require a formal outlier rejection method. Our approach _automatically_ adjusts the error covariance based on the number of keypoints, which addresses the uncertainty of the measurement, even if it is an outlier. We argue that this is a key contribution for two reasons. First, it is clear that outlier prediction is highly sensitive. Second, even noisy, uncertain measurements can still contain useful information. Our uncertainty modeling approach allows the filter to incorporate all prediction outputs, resulting in better performance and more robust applications.
#### IV-B3 Latency and Data Size
On a 1080Ti GPU, extracting global features of an image using NetVLAD takes about 8 ms, while performing keypoint matching for a single pair of images using SuperPoint and SuperGlue takes approximately 112 ms. Although keypoint matching is done between a query image and ten candidate images, the GPU can simultaneously process them in a batch without affecting the speed. The database comprises 127,225 images with a total size of 417.7 GB. Instead of storing the original images, we only need to store the extracted global features (2.24 GB) and the keypoint features (161.9 GB).
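For completeness, the covariance-credibility metric reported in Table I can be computed as sketched below; testing Mahalanobis distance \(\leq n\) is one way to define the \(n\)-sigma ellipse check, and the array names are ours.

```python
import numpy as np

def covariance_credibility(err_xy, covs, n_sigma=(1, 2, 3)):
    """Sketch of the covariance-credibility metric: the frequency with which
    the 2D localization error lies within the filter's n-sigma covariance
    ellipse, here tested as Mahalanobis distance <= n.

    err_xy : (M, 2) estimation errors; covs : (M, 2, 2) estimated covariances
    """
    d = np.array([np.sqrt(e @ np.linalg.solve(P, e))
                  for e, P in zip(np.asarray(err_xy), np.asarray(covs))])
    return {n: float(np.mean(d <= n)) for n in n_sigma}
```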
## V Conclusion We present a general and formal probabilistic approach for modeling prediction (_e.g._, neural network) uncertainties, which we validate in the context of visual localization problem. Our approach involves creating a sensor error model that maps the output of the internal prediction model (number of keypoint matches) to probabilistic uncertainty for each database. During inference, we use the sensor error model to map the number of keypoint matches to confidence probability and 2D covariance. We evaluate our approach using a large-scale real-world self-driving dataset with varying weather, lighting, and sensor corruption conditions, demonstrating accurate uncertainty predictions across all conditions. Notably, our approach of creating a different error covariance tailored to each measurement eliminates the need for sensor gating, which is overly sensitive due to their non-Gaussian nature. Our approach results in more robust and better-performing perception pipelines. ## Acknowledgement This research is supported by grants from the National Science Foundation NSF (CNS-2211599 and IIS-2107077) and the ONR MURI (N00014-17-1-2699). \begin{table} \begin{tabular}{c l l l l l l l l l l l} \hline \hline Row & Method & Gating \((1-\alpha)\) & \(d_{\text{err}}\)(m) \(\downarrow\) & Cov-credibility(\%) & \(n_{r}(\%)\) & \(d_{\text{err}}\)(m) \(\downarrow\) & Cov-credibility(\%) & \(n_{r}(\%)\) & \(d_{\text{err}}\)(m) \(\downarrow\) & Cov-credibility(\%) & \(n_{r}(\%)\) \\ \hline \multicolumn{10}{c}{Normal measurement — Average measurement error: 0.83m / 11.67m / 1.76m for MC dropout – 0.87m / 8.68m / 1.42m for baseline and ours} \\ \hline 1 & MC d.o & 0 & 0.792 & 32.8 / 67.4 / 84.7 & 0 & 7.442 & 27.6 / 56.2 / 71.0 & 0 & 1.501 & 20.4 / 50.4 / 69.8 & 0 \\ 2 & Baseline & 0 & 0.766 & 46.1 / 81.2 / 92.3 & 0 & 8.425 & 38.5 / 67.0 / 78.9 & 0 & 1.302 & 30.5 / 61.8 / 79.6 & 0 \\ 3 & Ours & 0 & **0.569** & **62.2 / 91.4 / 97.4** & **3.075** & **75.9 / 94.6** & **98.6** & **0** & **0.811** & **55.4 / 86.4 / **95.9** & 0 \\ \hline 4 & MC d.o & 1.0\% & 0.612 & 33.1 / 68.2 / 85.6 & 1.0 & 165.282 & 23.4 / 46.0 / 57.7 & 33.0 & **1.442** & 21.2 / 51.6 / **71.6** & 4.8 \\ 5 & Baseline & 1.0\% & 0.610 & 46.6 / 82.1 / 93.3 & 1.0 & 54.382 & 33.9 / 57.2 / 67.0 & 30.3 & 333.538 & 17.5 / 38.3 / 51.7 & 37.7 \\ 6 & Ours & 1.0\% & **0.570** & **62.3 / 91.4 / 97.5** & 0.2 & **2.926** & **76.2 / 94.6 / 98.7** & 0.3 & 222.608 & **34.5 / 56.4** / 63.9 & 33.2 \\ \hline 7 & MC d.o & 2.5\% & **0.613** & 33.1 / **68.2 / 85.9** & 1.5 & 7607.294 & 3.6 / 8.5 / 11.5 & 86.7 & 8144.861 & 3.7 / 9.0 / 12.3 & 82.5 \\ 8 & Baseline & 2.5\% & 790.307 & 32.8 / 54.5 / 61.6 & 35.2 & 54.205 & 34.1 / 57.9 / 69.1 & 31.3 & 8169.233 & 5.1 / 11.2 / 14.8 & 82.3 \\ 9 & Ours & 2.5\% & 82.828 & **47.7 / 67.9 / 72.5** & 26.2 & **2.942** & **76.3 / 94.6 / 98.7** & 0.4 & **8121.261** & **7.6 / 14.1 / 16.7** & 82.0 \\ \hline \multicolumn{10}{c}{Study case where 5.0\% / 4.7\% / 4.6\% of data are corrupted with average blurring — Average measurement error: 108.21m / 121.78m / 108.66m} \\ \hline 10 & Baseline & 0 & 107.163 & 41.9 / 74.6 / 85.4 & 0 & 121.370 & 36.3 / 63.5 / 74.4 & 0 & 108.080 & 29.2 / 58.9 / 75.3 & 0 \\ 11 & Ours & 0 & **3.371** & **61.0 / 90.3 / 96.9** & 0 & **7.702** & **72.9 / 92.7 / 97.2** & 0 & **3.351** & **56.4 / 87.0 / 95.7** & 0 \\ \hline 12 & Baseline & 1.0\% & 1726.025 & 3.6 / 7.1 / 8.3 & 91.1 & 8776.021 & 1.8 / 2.6 / 3.4 & 97.2 & 726.828 & 14.9 / 28.9 / 37.0 & 58.7 \\ 13 & Ours & 1.0\% & **3.529** & **61.3 / 90.6 / 97.0** 
& 1.3 & **66.813** & **65.8 / 82.3 / 86.7** & 12.4 & **264.000** & **35.8 / **57.4 / 64.2** & 33.3 \\ \hline 14 & Baseline & 2.5\% & 1756.856 & 6.0 / 8.0 / 8.9 & 91.4 & 1510.11 & 6.0 / 11.4 / 15.6 & 84.1 & 8169.514 & 5.5 / 11.6 / 15.0 & 83.3 \\ 15 & Ours & 2.5\% & **110.299** & **51.3 / 73.2 / 78.2** & 20.6 & **70.659** & **65.9 / 82.0 / 86.5** & 13.2 & **820.152** & **16.0 / 29.0 / 34.8** & 62.1 \\ \hline \multicolumn{10}{c}{Study case where 5.0\% / 4.7\% / 4.6\% of data are corrupted with salt and pepper noise — Average measurement error: 66.09m / 103.63m / 85.26m} \\ \hline 16 & Baseline & 0 & 65.365 & 42.0 / 74.9 / 85.6 & 0 & 102.887 & 36.8 / 63.9 / 75.0 & 0 & 84.738 & 29.1 / 58.7 / 75.1 & 0 \\ 17 & Ours & 0 & **1.971** & **61.9 / 91.1 / 97.4** & 0 & **7.083** & **73.1 / 92.2 / 97.1** & 0 & **2.468** & **57.0 / 87.4 / 96.1** & 0 \\ \hline 18 & Baseline & 1.0\% & **0.985** & 45.8 / 80.4 / 92.1 & 5.5 & 3563.210 & 24.6 / 36.2 / 40.2 & 60.4 & **724.023** & **15.1 / **29.3** / **38.0** & 56.7 \\ 19 & Ours & 1.0\% & 1.719 & **62.3 / 91.3 / 97.5** & 0.6 & **437.251** & **28.7 / **40.6 / 48.0** & 39.5 & 1329.145 & 13.3 / 17.7 / 18.8 & 76.7 \\ \hline 20 & Baseline & 2.5\% & 420.962 & 35.2 / 58.2 / 66
2309.06430
Into the Mystic: ALMA ACA observations of the Mystic Mountains in Carina
We present new observations of the Mystic Mountains cloud complex in the Carina Nebula using the ALMA Atacama Compact Array (ACA) to quantify the impact of strong UV radiation on the structure and kinematics of the gas. Our Band 6 observations target CO, $^{13}$CO, and C$^{18}$O; we also detect DCN J=3-2 and $^{13}$CS J=5-4. A dendrogram analysis reveals that the Mystic Mountains are a coherent structure, with continuous emission over $-10.5$ km s$^{-1} < v < -2$ km s$^{-1}$. We perform multiple analyses to isolate non-thermal motions in the Mystic Mountains including computing the turbulent driving parameter, $b$, which indicates whether compressive or solenoidal modes dominate. Each analysis yields values similar to other pillars in Carina that have been observed in a similar way but are subject to an order of magnitude less intense ionizing radiation. We find no clear correlation between the velocity or turbulent structure of the gas and the incident radiation, in contrast to other studies targeting different regions of Carina. This may reflect differences in the initial densities of regions that go on to collapse into pillars and those that still look like clouds or walls in the present day. Pre-existing over-densities that enable pillar formation may also explain why star formation in the pillars appears more evolved (from the presence of jets) than in other heavily-irradiated but non-pillar-like regions. High resolution observations of regions subject to an array of incident radiation are required to test this hypothesis.
Megan Reiter, P. D. Klaassen, L. Moser-Fischer, A. F. McLeod, D. Itrich
2023-09-12T17:52:12Z
http://arxiv.org/abs/2309.06430v1
# Into the Mystic: ALMA ACA observations of the Mystic Mountains in Carina ###### Abstract We present new observations of the Mystic Mountains cloud complex in the Carina Nebula using the ALMA Atacama Compact Array (ACA) to quantify the impact of strong UV radiation on the structure and kinematics of the gas. Our Band 6 observations target CO, \({}^{13}\)CO, and C\({}^{18}\)O; we also detect DCN J=3-2 and \({}^{13}\)CS J=5-4. A dendrogram analysis reveals that the Mystic Mountains are a coherent structure, with continuous emission over \(-10.5\) km s\({}^{-1}<v<-2\) km s\({}^{-1}\). We perform multiple analyses to isolate non-thermal motions in the Mystic Mountains including computing the turbulent driving parameter, \(b\), which indicates whether compressive or solenoidal modes dominate. Each analysis yields values similar to other pillars in Carina that have been observed in a similar way but are subject to an order of magnitude less intense ionizing radiation. We find no clear correlation between the velocity or turbulent structure of the gas and the incident radiation, in contrast to other studies targeting different regions of Carina. This may reflect differences in the initial densities of regions that go on to collapse into pillars and those that still look like clouds or walls in the present day. Pre-existing over-densities that enable pillar formation may also explain why star formation in the pillars appears more evolved (from the presence of jets) than in other heavily-irradiated but non-pillar-like regions. High resolution observations of regions subject to an array of incident radiation are required to test this hypothesis. keywords: stars: formation - HII regions - ISM: kinematics and dynamics - ISM: jets and outflows - Herbig-Haro objects ## 1 Introduction Feedback is the principal process by which molecular clouds are destroyed (Matzner, 2002). High-mass stars inject energy and momentum into their surroundings via winds and radiation before their eventual deaths as supernovae. Growing evidence suggests that pre-supernova feedback plays a dominant role reshaping the interstellar medium (ISM) and regulating star formation (e.g., Kruijssen et al., 2019; McLeod et al., 2021; Chevance et al., 2022). Feedback has also been invoked as the stimulus for the formation of new stars, either by stimulating the collapse of existing cores or collecting material into new ones (e.g., Bertoldi, 1989; Elmegreen & Lada, 1977). Many theoretical models have been developed to address the question of how stellar feedback shapes the surrounding gas. Models can produce dust pillars like those seen in many H ii regions (e.g., Hester et al., 1996) either through growing instabilities or by revealing pre-existing substructure (e.g., filaments and cores) as the surrounding, lower-density material is more easily swept away (Gritschneder et al., 2010; Dale et al., 2012; Tremblin et al., 2012; Walch et al., 2013; Menon et al., 2020). Despite their morphological similarities, models predict differences in the following: density contrasts between the pillars and the surrounding medium, progression speeds of the ionization front, star formation efficiencies, and timescales, particularly whether cores are already present or formed by the ionization-driven shock. This results in measurable differences in the gas kinematics within (and around) the photodissociation region (PDR) interface. 
Despite the diagnostic potential of cold gas kinematics, only a few studies exist with high enough spatial resolution to measure the variation of cold gas kinematics within dust pillars in star-forming regions. Far-IR observations from SOFIA and _Herschel_ provide kinematic evidence for dust pillars forming via collapse on the edges of H ii regions and small globules produced by turbulent fragmentation (Schneider et al., 2012; Tremblin et al., 2013). However, it is only with millimeter interferometry that dust pillars can be spatially and spectrally resolved. One of the first such studies was Klaassen et al. (2014), who mapped a pillar in Vulpecula. The observed pillar properties are most consistent with models that have low velocity dispersions, but no one model matches all of the observed gas kinematics (e.g., Gritschneder et al., 2009, 2010; Dale et al., 2012). To probe a broader range of conditions, Klaassen et al. (2020) presented a survey of 13 dust pillars in the Carina Nebula that sample a range of morphologies and environments, including the incident ionizing flux. Many of these reside in the actively star-forming South Pillars where star formation may have been triggered by the hundreds of O- and B-type stars in the region (Smith et al., 2010; Berlanas et al., 2023). Cold gas kinematics are broadly consistent with pillars forming from turbulent media as they are sculpted by ionizing radiation. However, this sample does not include the most intense ionizing radiation that affects the gas nearest the central clusters of Carina. To probe the most intense feedback in the Carina Nebula, we target a cloud complex that is heavily irradiated by the young, massive cluster Trumpler 14 (Tr14). The so-called Mystic Mountains\({}^{1}\) (Area 29 in Hartigan et al., 2015) lie \(\sim\) 10 pc to the north of Tr14. Copious ionizing photons (\(Q_{H}\sim 10^{50}\) s\({}^{-1}\); Smith 2006a) illuminate and sculpt multiple pillars in the Mystic Mountains. Three famous Herbig-Haro (HH) jets emerge from the tips of these pillars - HH 901, HH 902, and HH 1066 (see Figure 1 and Smith et al., 2010; Reiter and Smith, 2013, 2014). However, the pillars themselves are largely opaque and only a few protostars are detected in the infrared (IR; e.g., Povich et al., 2011; Ohlendorf et al., 2012). With modest angular resolution (\(\sim 2\arcsec\)), only the HH 1066 jet-driving source was directly detected in the infrared (Ohlendorf et al., 2012; Reiter et al., 2016, 2017). The HH 1066 driving source is also one of only two sources in Carina with a marginally resolved circumstellar disk (Mesa-Delgado et al., 2016). Reiter et al. (2017) argued that more intense feedback closer to Tr14 may have compressed the gas, leading to high densities that obscure the HH 901 and HH 902 jet-driving sources in the IR. Indeed, the first detections of the HH 901 and HH 902 driving sources were only recently reported by Cortes-Rangel et al. (2020). Footnote 1: as the region was dubbed when imaged to commemorate the 20\({}^{\rm th}\) anniversary of the _Hubble Space Telescope_. In this paper, we present ALMA observations using the Atacama Compact Array (ACA; also known as the Morita Array) of the entire Mystic Mountains complex. The ionizing photon flux incident on the Mystic Mountains is an order of magnitude higher than for the pillars in Klaassen et al. (2020). The complex is large (\(\sim 1\arcmin\ \times 2\arcmin\)), and thus samples a range of incident ionizing flux within a single pillar complex.
By studying gas kinematics in the Mystic Mountains, we will constrain the role of ionizing radiation in stimulating or starving future star formation, a key test of the role of feedback in regulating star formation. ## 2 Observations ALMA ACA Band 6 observations of the Mystic Mountains were obtained on 19 January 2019. The observations are a mosaic of 21 pointings of the 7m-array (12 antennas), with mosaic centre R.A.=\(10^{h}44^{m}02\aas@@fstack{s}010\), decl.=\(-59^{\circ}30^{\prime}01\aas@@fstack{\prime\prime}0\) (ICRS). The maximum recoverable scale (MRS) is \(28.6\arcsec\). We also obtained Total Power (TP) data to ensure our observations capture emission from the largest scales. These were obtained on 28-29 November 2018 with a third epoch on 04 December 2018. Our observational setup targeted rotational transitions J=2-1 of the CO isotopologues \({}^{12}\)CO, \({}^{13}\)CO and C\({}^{18}\)O, as well as SiO J=5-4 and \({}^{13}\)CS J=5-4. All observations were imaged to a velocity resolution of 0.17 km s\({}^{-1}\). We also observed a continuum spectral window with resolution \(\sim 40\) km s\({}^{-1}\). When combined with line-free channels in other bands, continuum emission covers approximately 2.5 GHz in Band 6. Bandpass, flux, and gain calibration were done with external calibrators using the _Common Astronomy and Software Applications_ (CASA, McMullin et al., 2007) v5.4.0-70. The bandpass and flux calibrators for the Band 6 observations performed in January 2019 were J0940\(-\)6107 and J1047\(-\)6217. Continuum subtraction and cleaning via the imaging pipeline yielded insufficient results for our needs, so continuum ranges were identified by eye and a deeper clean performed using tclean in CASA v6.4.4 to produce images. We applied auto-masking and Briggs weighting with a robust parameter of 0.5. We deconvolved the image in multiscale mode with scales of 0, 5, and 15 times the pixel size of 1.0 arcsec. The absolute flux scaling uncertainty is estimated to be about 15%. The synthesized beam sizes of the reduced data range typically between 5\(\aas@@fstack{\prime\prime}\)4-7\(\aas@@fstack{\prime\prime}\)2, corresponding to a spatial resolution of 12,420-16,560 AU at the distance of Carina (2.3 kpc; Smith, 2006b; Goppl and Preibisch, 2022). Finally, the ACA and TP data were combined using the CASA task feather. This task regrids the lower resolution data to match the higher resolution data, scales them by the ratio of the clean beams, then combines the two datasets in Fourier space before transforming back to the image plane. This provides much better recovery of extended emission than the ACA alone, allowing us to capture structures up to \(\sim\)27\(\arcsec\) at 230 GHz (see Klaassen et al., 2020, for an example). ### Complementary data from the _Hubble Space Telescope_ (_HST_) We compare the ALMA observations to a narrowband H\(\alpha\) image from _HST_. A four-point mosaic of the field containing HH 901, HH 902, and HH 1066, dubbed "The Mystic Mountains," was taken February 1-2, 2010 to commemorate the 20th anniversary of _HST_ (PID 12050, P.I. M. Livio). Images were obtained with the UVIS channel of the Wide Field Camera 3 (WFC3). The total integration time in the F657N filter was 1980 s. The observations and their analysis are presented in more detail in Reiter & Smith (2013). Figure 1: CO J=2-1 contours on an H\(\alpha\) image from _HST_. The famous jets, HH 901, HH 902, and HH 1066, are labeled. The CO emission is integrated over the velocity range \(-10.5<v<-2\) km s\({}^{-1}\); contours are 20, 40, 60, 80, and 100% of the peak intensity. 
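For readers who wish to reproduce the imaging steps, the following is a minimal sketch of the cube deconvolution and ACA+TP combination using the standard CASA tasks tclean and feather. The deconvolution settings (multiscale with scales of 0, 5, and 15 pixels, Briggs robust = 0.5, auto-masking) follow the text; the file names, spectral window, image size, and clean threshold are hypothetical placeholders rather than the pipeline values actually used.

```python
# Sketch of the ACA cube imaging and total-power combination (CASA >= 6.4).
from casatasks import tclean, feather

tclean(
    vis='mystic_7m.ms',            # placeholder measurement-set name
    imagename='co21_aca',
    spw='0',                       # placeholder: CO J=2-1 spectral window
    specmode='cube',
    restfreq='230.538GHz',
    width='0.159km/s',             # channel width from Table 1
    deconvolver='multiscale',
    scales=[0, 5, 15],             # in pixels of 1.0 arcsec, as in the text
    cell='1.0arcsec',
    imsize=[512, 512],             # placeholder map size
    weighting='briggs',
    robust=0.5,
    usemask='auto-multithresh',    # auto-masking
    niter=100000,
    threshold='0.3Jy',             # placeholder clean threshold
    pbcor=True,
)

# Feather the interferometric cube with the Total Power cube: the TP image is
# regridded and rescaled, then the two are combined in Fourier space.
feather(
    imagename='co21_feathered.image',
    highres='co21_aca.image',
    lowres='co21_tp.image',        # placeholder TP cube name
)
```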
Figure 2: Integrated intensity (moment 0) maps of the lines detected in this study – CO, \({}^{13}\)CO, C\({}^{18}\)O, \({}^{13}\)CS, and DCN. All CO isotopologues are detected with \(>5\sigma\) significance; DCN and \({}^{13}\)CS are detected with \(>3\sigma\) significance. ## 3 Results and Analysis Figure 2 shows the integrated intensity (moment 0) maps of all emission lines detected in this study. Extended CO emission traces the kinematics and cloud structure throughout the Mystic Mountains complex with knots of bright emission in \({}^{13}\)CO and C\({}^{18}\)O tracing \(\sim 1-2\) clumps of emission in each pillar. The most complex emission is in the pillar with the HH 902 jet. Two emission peaks in the CO and isotopologues hint at multiple star-forming clumps. Visual inspection of the CO datacube reveals that emission associated with the Mystic Mountains is contained within the velocity range \(-15<v<5\) km s\({}^{-1}\). This range includes the systemic velocities of the HH 901 (\(-5.0\) km s\({}^{-1}\)) and HH 902 (\(-8.5\) km s\({}^{-1}\)) pillars identified by Cortes-Rangel et al. (2020). Pillar-like emission to the east and north of these two pillars contains HH 1066 (for which we estimate a systemic velocity of \(-6.5\) km s\({}^{-1}\)). Additional CO emission not associated with the Mystic Mountains is detected at other velocities (\(\pm 20\) km s\({}^{-1}\) from the \(v_{\rm LSR}\) of Carina, \(-20\) km s\({}^{-1}\)), but we do not discuss this further. To determine the precise velocity range and extent of the gas associated with the Mystic Mountains complex, we use a dendrogram analysis to identify coherent structures in position-position-velocity space. As described in Rosolowsky et al. (2008) and Goodman et al. (2009), dendrograms provide a hierarchical representation of data, aiding the analysis of physical conditions on multiple scales from a single dataset. Large coherent features that are not part of another larger, parent structure are 'trunks'; these contain substructures called 'branches' that are composed of individual local maxima, or 'leaves,' that cannot be subdivided further. We use the Python packages astrodendro (Robitaille et al., 2019) and SCIMES (Colombo et al., 2015) to decompose the structures in the CO datacube. We use the following parameters for the decomposition: a minimum value (min_value) that defines the noise threshold - we adopt \(6\sigma\); a minimum intensity (min_delta) that defines the threshold the peak flux must exceed to be identified as a separate structure - we adopt \(2\sigma\); and a minimum number of pixels for a leaf to be an independent entity - we use the number equivalent to three beams (124 pixels in our case). With these parameters, the dendrogram analysis identifies the Mystic Mountains as a trunk, indicating that it is a coherent cloud complex with contiguous emission over the velocity range \(-10.5<v<-2\) km s\({}^{-1}\). Figure 3 shows the Mystic Mountains as a single tree in red with other structures shown in blue. Within the Mystic Mountains complex, the dendrogram analysis identifies three separate pillars as branches. Contours defining the outlines of these branches are shown in Figure 4. We refer to the pillars by the name of the prominent jets that they host - HH 901, HH 902, and HH 1066. 
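The dendrogram decomposition described above can be sketched as follows with astrodendro; the cube file name and noise level are placeholders, while the thresholds (min_value = 6σ, min_delta = 2σ, min_npix = three beams, i.e. 124 pixels) are those adopted in the text.

```python
# Minimal dendrogram decomposition of a PPV cube with astrodendro
# (http://www.dendrograms.org/).
from astropy.io import fits
from astrodendro import Dendrogram

cube = fits.getdata('co21_feathered.fits')  # placeholder PPV cube
sigma = 1.0                                  # placeholder per-channel RMS (Jy/beam)

d = Dendrogram.compute(
    cube,
    min_value=6 * sigma,   # only use well-detected emission
    min_delta=2 * sigma,   # contrast required for an independent peak
    min_npix=124,          # three synthesized beams, as in the text
)

# Trunks are top-level structures; a single trunk spanning the full complex
# indicates one coherent cloud. Leaves are the local maxima (clumps).
print(len(d.trunk), 'trunks;', len(d.leaves), 'leaves')
```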
In the following sections, we use this velocity range to derive the spatially-resolved physical parameters of the cold molecular gas in the Mystic Mountains. We provide maps of these spatially-resolved quantities in Appendix A and B. To provide representative values in Table 2, we compute the values within the dendrogram branches that define each of the pillars. ### Optical depth We compute the optical depth at each position and velocity where emission is detected with a significance \(\geq 5\sigma\) using the following expression (equation 1 from Choi et al., 1993): \[\frac{T_{\rm main,v}}{T_{\rm iso,v}}=\frac{1-e^{-\tau_{\rm main,v}}}{1-e^{-\tau_{\rm iso,v}}}=\frac{1-e^{-\tau_{\rm main,v}}}{1-e^{-\tau_{\rm main,v}/R}} \tag{1}\] where "main" is the more abundant species and "iso" is the optically thin (isotopologue) transition used to correct it. \(R\) is the scale factor for the relative abundance of the two species. We use [\({}^{12}\)CO/\({}^{13}\)CO]= 60 (Rebolledo et al., 2016, see also Jacob et al., 2020) and [\({}^{12}\)CO/C\({}^{18}\)O]= 560 (Wilson & Rood, 1994). We assume the same excitation temperature for both molecules. We find that CO emission is optically thick over a large portion of the Mystic Mountains whereas \({}^{13}\)CO is only optically thick near the brightest emission in the HH 902 pillar. Maps of the spatially-resolved optical depth at the source velocity are shown in Appendix A and maximum values are reported in Table 2. ### Molecular column density We compute the column density of each observed transition using the following equation (see e.g., Mangum & Shirley, 2015): \[N_{tot}=\frac{8\pi k\nu^{2}Q(T_{ex})e^{E_{u}/kT_{ex}}J_{\nu}(T_{ex})}{hc^{3}g_{u}A_{ul}[J_{\nu}(T_{ex})-J_{\nu}(T_{\rm cmb})]}\int T_{\rm mb}\frac{\tau_{v}}{1-e^{-\tau_{v}}}dv\ \,{\rm cm}^{-2} \tag{2}\] \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Name & Frequency & Bandwidth & Resolution & \(\theta_{\rm min}\) & \(\theta_{\rm max}\) & P.A. & RMS & Comment \\ & [GHz] & [MHz] & [km/s] & [\({}^{\prime\prime}\)] & [\({}^{\prime\prime}\)] & [\({}^{\circ}\)] & [mJy bm\({}^{-1}\)] & \\ \hline \hline \multicolumn{9}{c}{Molecular lines} \\ \hline SiO J=5-4 & 217.1049800 & 468.75 & 0.337 & 5.39 & 7.18 & 87.6 & 100.3 & not detected \\ DCN J=3-2 & 217.23855 & 468.75 & 0.337 & 5.39 & 7.18 & 87.6 & 100.3 & in SiO spectral window \\ C\({}^{18}\)O J=2-1 & 219.5603568 & 117.19 & 0.167 & 5.35 & 7.09 & 85.9 & 141.8 & \\ \({}^{13}\)CO J=2-1 & 220.3986765 & 117.19 & 0.167 & 5.30 & 7.04 & 86.9 & 245.4 & \\ \({}^{12}\)CO J=2-1 & 230.538 & 117.19 & 0.159 & 5.07 & 6.77 & 87.0 & 993.0 & \\ \({}^{13}\)CS J=5-4 & 231.220686 & 117.19 & 0.158 & 5.02 & 6.77 & 87.5 & 242.3 & \\ \hline \multicolumn{9}{c}{Continuum} \\ \hline B6 & 225.8799 & 2530.0 & 40.038 & 4.95 & 6.75 & 86.9 & 1.700\({}^{*}\) & \\ \hline \multicolumn{9}{l}{\({}^{*}\) RMS of the aggregated bandwidth image.} \\ \end{tabular} \end{table} Table 1: Spectral and imaging characteristics of the data. where \(Q(T_{ex})\) is the rotational partition function for a given excitation temperature, \(g_{u}\) is the rotational degeneracy of the upper level with energy \(E_{u}\), \(A_{ul}\) is the Einstein A coefficient for the transition, \(k\) is the Boltzmann constant, \(h\) is the Planck constant, \(J_{\nu}(T)=(h\nu/k)/[\exp(h\nu/kT)-1]\) is the Planck function in temperature units (K), and \(\tau_{\nu}/(1-e^{-\tau_{\nu}})\) is a correction factor for non-zero optical depth (see, e.g., Goldsmith & Langer, 1999). 
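Returning to Equation (1): it has no closed-form solution for the optical depth, so it must be solved numerically at each position and velocity. A minimal sketch of such a solver is below; the brightness temperatures in the example call are invented, and the bracket assumes the observed ratio lies between 1 and R.

```python
# Numerically invert Eq. (1) for the optical depth of the main species.
import numpy as np
from scipy.optimize import brentq

def tau_main(T_main, T_iso, R):
    """Optical depth of the main line from the main/isotopologue ratio,
    assuming tau_iso = tau_main / R and a common excitation temperature."""
    ratio = T_main / T_iso
    f = lambda tau: (1 - np.exp(-tau)) / (1 - np.exp(-tau / R)) - ratio
    # As tau -> 0 the ratio -> R; as tau -> inf it -> 1, so a root is bracketed.
    return brentq(f, 1e-6, 1e3)

# Example: a 12CO/13CO brightness ratio of 5 with R = 60 implies tau ~ 13,
# i.e. very optically thick CO.
print(round(tau_main(T_main=50.0, T_iso=10.0, R=60), 1))
```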
This assumes that all transitions have the same \(T_{\rm ex}\). Physical parameters for the relevant molecules and transitions (frequency, rotational partition function, Einstein A coefficients, etc.) were obtained from the JPL Spectral Line Catalog (Pickett et al., 1998) and the Leiden Atomic and Molecular Database (LAMDA; Schoier et al., 2005). \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Line & \(I_{\rm peak}\) & \(I_{\rm median}\) & log(N\({}_{\rm thin}\))\({}_{\rm max}\) & log(N\({}_{\rm thin}\))\({}_{\rm median}\) & \(\tau_{\rm max}\) & log(N\({}_{\rm thick}\))\({}_{\rm peak}\) & log(N\({}_{\rm thick}\))\({}_{\rm median}\) \\ & [K km s\({}^{-1}\)] & [K km s\({}^{-1}\)] & [cm\({}^{-2}\)] & [cm\({}^{-2}\)] & & [cm\({}^{-2}\)] & [cm\({}^{-2}\)] \\ \hline \multicolumn{8}{c}{Mystic Mountains; \(v_{\rm LSR}\approx-6.7\) km s\({}^{-1}\)\({}^{\ddagger}\)} \\ \hline \({}^{12}\)CO J=2-1 & 185.0 & 46.5 & 17.0 & 16.2 & 44.6 & 18.3 & 17.2 \\ \({}^{13}\)CO J=2-1 & 57.5 & 8.8 & 16.5 & 15.4 & 4.2 & 16.6 & 15.5 \\ C\({}^{18}\)O J=2-1 & 6.9 & 1.14 & 15.6 & 14.8 & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline \multicolumn{8}{c}{HH 901 pillar; \(v_{\rm LSR}\approx-5.0\) km s\({}^{-1}\)\({}^{\dagger}\)} \\ \hline \({}^{12}\)CO J=2-1 & 114.2 & 38.8 & 16.8 & 16.4 & 33.0 & 18.0 & 17.5 \\ \({}^{13}\)CO J=2-1 & 27.0 & 9.0 & 16.1 & 15.7 & 0.68 & 16.1 & 15.7 \\ C\({}^{18}\)O J=2-1 & 1.97 & 0.61 & 15.0 & 14.5 & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline \multicolumn{8}{c}{HH 902 pillar; \(v_{\rm LSR}\approx-8.5\) km s\({}^{-1}\)\({}^{\dagger}\)} \\ \hline \({}^{12}\)CO J=2-1 & 174.1 & 65.6 & 17.0 & 16.6 & 44.6 & 18.3 & 17.9 \\ \({}^{13}\)CO J=2-1 & 56.8 & 18.3 & 16.5 & 16.0 & 1.9 & 16.6 & 16.0 \\ C\({}^{18}\)O J=2-1 & 6.90 & 1.84 & 15.6 & 15.0 & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline \multicolumn{8}{c}{HH 1066 pillar; \(v_{\rm LSR}\approx-6.5\) km s\({}^{-1}\)} \\ \hline \({}^{12}\)CO J=2-1 & 138.9 & 42.0 & 16.9 & 16.6 & 35.0 & 18.2 & 17.6 \\ \({}^{13}\)CO J=2-1 & 42.1 & 7.65 & 16.4 & 15.7 & 3.1 & 16.4 & 15.8 \\ C\({}^{18}\)O J=2-1 & 4.03 & 0.81 & 15.3 & 14.7 & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline \multicolumn{8}{c}{\({}^{\dagger}\)\(v_{\rm LSR}\) from Cortes-Rangel et al. (2020)} \\ \multicolumn{8}{c}{\({}^{\ddagger}\)\(v_{\rm LSR}\) from the intensity-weighted average velocity of the Mystic Mountains} \\ \end{tabular} \end{table} Table 2: Summary of molecular line derived physical properties. Columns are the species/transition, peak and median intensities, maximum and median column density if optically thin, maximum optical depth, and peak and median column density if optically thick, respectively. Spatially resolved maps of these quantities are shown in Appendices A–B. Figure 3: Applying a dendrogram analysis to the CO velocity cube (\(-15<v<5\) km s\({}^{-1}\)) shows the Mystic Mountains are a single, coherent structure. **Left:** The CO integrated intensity with colored contours indicating dendrogram trunks. The Mystic Mountains are shown in red; all other features are shown in blue. **Right:** The dendrogram tree, using the same color scheme as the left panel. The Mystic Mountains, shown in red, are a separate feature with all substructures stemming from the same trunk. To compute the column density, we assume an excitation temperature \(T_{\rm ex}=30\) K. We assume that the gas temperature is the same as the dust temperature derived from the far-IR spectral energy distribution (SED) that Roccatagliata et al. (2013) used to compute a temperature map of the entire Carina region. 
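As a concrete illustration, Equation (2) can be evaluated with astropy as in the sketch below. The 13CO J=2-1 line constants (E_u/k ≈ 15.9 K, A_ul ≈ 6.0e-7 s⁻¹, g_u = 5, rotational constant B ≈ 55.1 GHz) are approximate values of the kind tabulated in the JPL and LAMDA catalogues, and the rigid-rotor partition function Q ≈ kT/(hB) + 1/3 is a standard approximation; both should be treated as assumptions of this sketch rather than the exact values used in the paper.

```python
# Evaluate Eq. (2) directly as written, for 13CO J=2-1 at Tex = 30 K.
import numpy as np
from astropy import units as u
from astropy import constants as c

def J_nu(T, nu):
    """Planck function in temperature units (K)."""
    x = (c.h * nu / (c.k_B * T)).decompose()
    return (c.h * nu / c.k_B) / (np.exp(x) - 1)

def N_tot(int_Tmb_dv, tau, nu, Eu_k, A_ul, g_u,
          Tex=30 * u.K, B_rot=55.101 * u.GHz):
    """Total column density from an integrated intensity (K km/s)."""
    Q = (c.k_B * Tex / (c.h * B_rot)).decompose() + 1 / 3  # linear-rotor Q(Tex)
    tau_corr = tau / (1 - np.exp(-tau))                    # optical-depth factor
    pref = 8 * np.pi * c.k_B * nu**2 * Q * np.exp(Eu_k / Tex) \
        * J_nu(Tex, nu) / (c.h * c.c**3 * g_u * A_ul
                           * (J_nu(Tex, nu) - J_nu(2.73 * u.K, nu)))
    return (pref * tau_corr * int_Tmb_dv).to(u.cm**-2)

# Using the median 13CO intensity from Table 2 (~9 K km/s, tau ~ 0.7) this
# returns a column of order 10^15.7-10^15.9 cm^-2, close to the tabulated value.
print(N_tot(9.0 * u.K * u.km / u.s, tau=0.7, nu=220.3986765 * u.GHz,
            Eu_k=15.9 * u.K, A_ul=6.0e-7 / u.s, g_u=5))
```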
As discussed in Mangum & Shirley (2015), assuming a single temperature is often a poor assumption and we expect that to be true for the Mystic Mountains. However, adopting higher excitation temperatures (\(T_{ex}\approx 40-80\) K) or a variable excitation temperature within this range changes the estimated column density by a factor of \(\lesssim 2\). In the absence of a better temperature measurement, we adopt a single number in this study. Maps of the spatially-resolved column density calculation for each of the CO isotopologues are shown in Appendix B and median column densities in each pillar are reported in Table 2. ### Molecular gas mass of the Mystic Mountains We estimate the mass of the Mystic Mountains complex from the optical-depth corrected spatially-resolved 2D CO column density map using the equation: \[M_{gas}=\mu_{\rm g}m({\rm H}_{2})A\left[\frac{{\rm H}_{2}}{{\rm CO}}\right]\Sigma N({\rm CO}) \tag{3}\] where \([{\rm H}_{2}/{\rm CO}]=1.1\times 10^{4}\) (Pineda et al., 2010) is the abundance of \({\rm H}_{2}\) compared to CO, \(\mu_{\rm g}=1.41\) is the mean molecular weight (Kauffmann et al., 2008), \(m({\rm H}_{2})\) is the mass of molecular hydrogen, and \(A\) is the area of each pixel in the map. We compute a mass of \(\sim\)36 \({\rm M}_{\sun}\) for the entire Mystic Mountains complex. More recent measurements find \([{\rm H}_{2}/{\rm CO}]=6000\) (Lacy et al., 2017), which would reduce the estimated mass by a factor of 2. ### Clumps, cores, and YSOs #### 3.4.1 C\({}^{18}\)O clumps We repeat the dendrogram analysis on the C\({}^{18}\)O data to identify clumps using this higher density tracer. As for the CO analysis, we use the following thresholds: an intensity minimum value of \(6\sigma\) to ensure that we are using only well-detected emission, a minimum intensity of \(2\sigma\) to define a separate peak, and a minimum number of pixels equivalent to three beams. The leaves (local maxima) detected with this analysis are shown in Figure 5. We extract the emission of the CO isotopologues within each C\({}^{18}\)O leaf (see Table 3) and plot the summed line profiles in Figure 6. In general, the most optically thick line, CO (shown with a solid line), has the broadest line profile. \({}^{13}\)CO (dashed line) and C\({}^{18}\)O (dotted line) are each narrower, with some C\({}^{18}\)O profiles showing evidence of multiple velocity peaks. The three leaves associated with jet-driving sources are L1 (HH 1066), L7 (HH 902), and L10 (HH 901). L1 shows clear evidence of two distinct velocity components within the beam separated by \(\sim 2.65\) km s\({}^{-1}\). None of the three show evidence of red- and blue-shifted emission in the linewings that would indicate an associated molecular outflow. These are likely washed out in the larger ACA beam as molecular outflows were detected in both HH 901 and HH 902 by Cortes-Rangel et al. (2020). We use the median C\({}^{18}\)O line profile of each leaf to estimate its virial mass. We compute the virial mass using the following equation: \[M_{vir}=\frac{3(5-2n)}{8(3-n)\ln(2)}\frac{\Delta v^{2}R}{G} \tag{4}\] where \(n\) is the exponent of the density profile (\(\rho\propto r^{-n}\); we assume \(n=2\)), \(\Delta v\) is the C\({}^{18}\)O linewidth (FWHM), \(R\) is the mean leaf radius, and \(G\) is the gravitational constant. Further corrections to account for the non-spherical morphologies will change these values by \(<10\%\) (Bertoldi & McKee, 1992). 
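A minimal sketch of the virial-mass estimate of Equation (4), assuming n = 2 as in the text; plugging in the linewidth and radius of leaf L1 recovers the value reported in Table 3.

```python
# Virial mass from the C18O FWHM linewidth and mean leaf radius, Eq. (4).
import numpy as np
from astropy import units as u
from astropy import constants as c

def m_vir(dv_fwhm, radius, n=2.0):
    """Virial mass for a rho ~ r^-n density profile."""
    coeff = 3 * (5 - 2 * n) / (8 * (3 - n) * np.log(2))
    return (coeff * dv_fwhm**2 * radius / c.G).to(u.Msun)

# Leaf L1 (HH 1066 MM): dv = 1.3 km/s, <R> = 0.02 pc -> ~4 Msun (Table 3: 4.1).
print(m_vir(1.3 * u.km / u.s, 0.02 * u.pc))
```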
Virial masses, along with the linewidths and 1D velocity dispersions for each leaf, are reported in Table 3. We compare this to the molecular mass of each clump, computed as in Section 3.3 using \([{\rm H}_{2}/{\rm C}^{18}{\rm O}]=[{\rm H}_{2}/{\rm CO}]\times[{}^{12}{\rm CO}/{\rm C}^{18}{\rm O}]=6.16\times 10^{6}\) using data from Wilson & Rood (1994) and Pineda et al. (2010). The molecular mass of all clumps is higher than their virial mass, suggesting that they are unstable to collapse, as noted in Table 3. #### 3.4.2 Continuum sources Figure 7 shows continuum emission detected with significance \(\gtrsim 5\sigma\). This reveals 9 point sources. We detect continuum from clumps near the origin of all three of the famous HH jets in the Mystic Mountains - HH 901 MM, HH 902 MM, and HH 1066 MM. Five other continuum peaks fall within the Mystic Mountains (MM1-MM5); three of these reside in the HH 902 pillar (MM2-4; see Figure 7). The final point source, MM6, lies outside the Mystic Mountains complex so we do not discuss it further. All mm continuum sources are associated with a C\({}^{18}\)O leaf, although not all C\({}^{18}\)O leaves have a continuum detection. Continuum and C\({}^{18}\)O emission are well-aligned spatially in all leaves but L7 at the head of the HH 902 pillar. Using a 2D Gaussian fit, we determine an offset of \(2.7^{\prime\prime}\pm 0.1^{\prime\prime}\) (0.03 pc) between the C\({}^{18}\)O and continuum peaks. The continuum peak is offset toward the western side of the pillar, in the same direction as the HH 902 YSO seen at higher resolution by Cortes-Rangel et al. (2020). A second continuum source detected to the northeast of the HH 902 YSO, HH 902 B, is not resolved with our larger beam. Three additional continuum sources reside further north in the HH 902 pillar. Figure 4: Column density map with a single white contour that outlines the branches identified by the CO dendrogram analysis. Peak and median values reported in Table 2 are computed within these pillar boundaries. A distinct peak in the continuum and C\({}^{18}\)O emission traces MM4 immediately to the north of the HH 902 MM. A bridge of continuum emission connects the two sources. Two additional continuum sources lie further north in the wishbone-shaped HH 902 pillar. MM2 and MM3 coincide with C\({}^{18}\)O peaks at the east and west tips of the pillar. To the east of the HH 902 pillar, there are three continuum sources in the HH 1066 pillar. At the top (northernmost point) of the HH 1066 pillar is HH 1066 MM. Due south lies MM1. Like its neighbor, continuum and C\({}^{18}\)O emission from MM1 appear to peak behind a cloud edge traced by bright H\(\alpha\) emission. Further south, MM5 coincides with a C\({}^{18}\)O peak at the head of the HH 1066 pillar, adjacent to the continuum emission from the HH 902 MM and MM4. Finally, at the tip of the Mystic Mountains complex, the HH 901 pillar has one continuum source detected at the head of the pillar. The continuum source overlaps with a local peak in the C\({}^{18}\)O emission. However, the brightest C\({}^{18}\)O emission in the HH 901 pillar is seen at its center where there is no continuum detection. We compute masses of the continuum sources as: \[M_{d}=S_{\nu}d^{2}/[B_{\nu}(T)\kappa_{\nu}]. \tag{5}\] We use a dust absorption coefficient \(\kappa_{\nu}=0.8\) cm\({}^{2}\)g\({}^{-1}\) (Ossenkopf & Henning, 1994) appropriate for 1.3 mm observations, and a temperature of T=30 K. 
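Equation (5) can be sketched as follows, adopting the stated assumptions (κ_ν = 0.8 cm² g⁻¹, T = 30 K, d = 2.3 kpc) and including the gas-to-dust ratio of 100 applied in the next paragraph; the input flux density is a made-up example value.

```python
# Continuum (dust) mass from the 1.3 mm flux density, Eq. (5).
import numpy as np
from astropy import units as u
from astropy import constants as c

def planck(nu, T):
    """Planck function B_nu(T), per steradian implicit."""
    x = (c.h * nu / (c.k_B * T)).decompose()
    return 2 * c.h * nu**3 / c.c**2 / (np.exp(x) - 1)

def continuum_mass(S_nu, d=2.3 * u.kpc, T=30 * u.K,
                   kappa=0.8 * u.cm**2 / u.g, nu=225.88 * u.GHz):
    """Gas mass (dust mass x gas-to-dust ratio of 100)."""
    M_dust = (S_nu * d**2 / (planck(nu, T) * kappa)).to(u.Msun)
    return 100 * M_dust

print(continuum_mass(10 * u.mJy))  # a hypothetical 10 mJy source -> ~0.8 Msun
```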
We multiply the dust mass by a gas-to-dust ratio of 100 and report the total mass in Table 4. We have assumed a single temperature for all sources regardless of differences in their evolutionary stages (i.e., the presence of jets). Assuming a higher or lower temperature (15 K or 45 K) changes the estimated mass by a factor of \(\sim 2\). We refrain from a more detailed analysis given the low resolution of our data and evidence that multiple sources are unresolved in the beam (note that Cortes-Rangel et al., 2020, detect at least two point sources in the same region as our HH 902 MM). The flux and mass estimates of all continuum point sources are reported in Table 4. Figure 5: **Left:** Local maxima (leaves) identified by the dendrogram analysis shown in red contours on a moment 0 map of C\({}^{18}\)O. **Right:** Dendrogram leaves (red contours) compared to continuum peaks shown in the colorscale. A single gray contour shows the C\({}^{18}\)O emission at 20% of the peak emission. Both panels show YSOs detected in other surveys: black diamonds are YSOs from Povich et al. (2011); the cyan square is the point source from Ohlendorf et al. (2012); and the red stars are the YSOs identified by Cortes-Rangel et al. (2020). \begin{table} \begin{tabular}{l l c c c c c c c c c c} \hline \hline Leaf & MM & R.A. & decl. & v\({}_{\rm lsr}\) & \(<R>\) & \(\Delta v\) & \(\sigma_{\rm turb,1D}\) & M\({}_{H_{2}}\) & M\({}_{vir}\) & stable? & comment \\ & & (J2000) & (J2000) & [km s\({}^{-1}\)] & [pc] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [M\({}_{\odot}\)] & [M\({}_{\odot}\)] & & \\ \hline L1 & HH 1066 MM & 10:44:05.4 & \(-\)59:29:40 & -8.4 & 0.02 & 1.3 & 0.53 & 4.9 & 4.1 & N & \\ L2 & MM1 & 10:44:05.4 & \(-\)59:29:50 & -7.1 & 0.03 & 1.5 & 0.64 & 48.7 & 9.0 & N & \({}^{13}\)CS \\ L3 & & 10:44:05.2 & \(-\)59:29:56 & -7.0 & 0.02 & 1.3 & 0.52 & 5.6 & 4.5 & N & \\ L4 & MM2 & 10:44:02.9 & \(-\)59:30:02 & -7.5 & 0.02 & 0.93 & 0.37 & 5.9 & 1.9 & N & \\ L5 & MM3 & 10:43:59.7 & \(-\)59:30:09 & -7.0 & 0.02 & 1.1 & 0.43 & 4.5 & 2.6 & N & DCN, \({}^{13}\)CS \\ L6 & MM4 & 10:44:01.1 & \(-\)59:30:19 & -8.0 & 0.02 & 1.1 & 0.45 & 15.7 & 2.6 & N & DCN, \({}^{13}\)CS \\ L7 & HH 902 MM & 10:44:01.7 & \(-\)59:30:27 & -7.8 & 0.02 & 1.2 & 0.50 & 17.8 & 3.3 & N & DCN, \(\sim\)2.7\({}^{\prime\prime}\) offset \\ L8 & MM5 & 10:44:05.3 & \(-\)59:30:20 & -5.1 & 0.01 & 1.1 & 0.46 & 3.5 & 3.0 & N & \\ L9 & & 10:44:03.3 & \(-\)59:30:45 & -5.1 & 0.03 & 1.1 & 0.47 & 19.7 & 4.4 & N & \\ L10 & HH 901 MM & 10:44:03.3 & \(-\)59:30:60 & -5.2 & 0.02 & 0.58 & 0.21 & 0.95 & 0.66 & N & \\ \hline \end{tabular} \end{table} Table 3: Physical properties of dendrogram leaves shown in Figure 5. #### 3.4.3 Candidate YSOs detected in other surveys Previous surveys at wavelengths \(<\)1.3 mm have also reported candidate YSOs in and around the Mystic Mountains. Three candidate YSOs from the Pan-Carina YSO Catalog (PCYC; Povich et al., 2011) fall within the area of our ALMA map; these are shown on Figure 5 as black diamonds. All three candidate YSOs have an ambiguous evolutionary classification. PCYC 429 was identified as the HH 1066 driving source by Reiter et al. (2016) and coincides with the continuum source HH 1066 MM and the C\({}^{18}\)O emission of L1. Immediately below HH 1066 MM, a second candidate YSO, PCYC 427, lies near the northwest boundary of L2, outside the continuum emission of MM1. PCYC 427 coincides with a point source visible in H\(\alpha\) images suggesting that this source lies in front of the cloud. 
Further west, a third candidate YSO, PCYC 399, is also visible in H\(\alpha\) images. This source falls well outside the lowest contour of CO emission from the HH 902 pillar and has no associated continuum emission, suggesting that it also lies outside the cloud. outflow - that is, redshifted DCN (N\({}_{2}\)D\({}^{+}\)) is seen on the same side of the YSO as the blueshifted limb of the outflow. A second, fainter DCN peak coincides with MM4. The emission peaks of all three molecules - DCN, \({}^{13}\)CS, and C\({}^{18}\)O - coincide with continuum emission from MM4. Weak DCN emission extends from the position of MM2 along the edge of the pillar (see Figure 2). ### Velocity structure Intensity-weighted velocity (moment 1) maps of the CO isotopologues are shown in Figure 10. The most prominent feature is the velocity difference between the pillars of the Mystic Mountains. Some velocity differences are also apparent along the north/south axis of the HH 1066 pillar and the wishbone-like extensions of the HH 902 pillar. Figure 8: Moment 0 map of the C\({}^{18}\)O (grayscale) with contours of the \({}^{13}\)CS J=5-4 in steps of 1 \(\sigma\) from 1–3 \(\sigma\) (_left_) and DCN J=3-2 emission in steps of 1 \(\sigma\) from 3–5 \(\sigma\) (_right_). Figure 7: **Left:** Colorscale of the aggregate continuum with C\({}^{18}\)O moment 0 contours from 10-100\(\sigma\) in steps of 10\(\sigma\). **Right:** Contours of the rarer molecules in our sample plotted on a colorscale of the continuum intensity. White contours show the DCN J=3-2 from 3-5 \(\sigma\) and the magenta contours show \({}^{13}\)CS J=5-4 from 1–3 \(\sigma\). C\({}^{18}\)O emission suggests that there are multiple clumps in the HH 1066 pillar with slightly different velocities. Higher resolution observations are required to determine if the inter-clump gas is at a markedly different velocity, as seen in Pillar 6 of Klaassen et al. (2020). ### Non-thermal motions To test whether photoionization contributes to the non-thermal motions in the pillars, we compare the average velocity dispersion to the incident ionizing photon flux. To probe a larger range of fluxes, we compare the Mystic Mountains to the dust pillars in Carina from Klaassen et al. (2020) because those observations use the same angular resolution and spectral setup as this work. As in Klaassen et al. (2020), we compute the average velocity dispersion from the moment 2 map. We consider the Mystic Mountains as a whole and each of the three pillars within it separately. To calculate the incident ionizing photon flux, we assume that the three main star clusters, Tr14, Tr15, and Tr16, dominate the external irradiation for all pillars in Carina. This provides a lower bound as we neglect the large number of O- and B-type stars located outside these clusters (see, e.g., Berlanas et al., 2023) and the effects of extinction. We use the ionizing photon luminosities from Smith (2006a) and compute the local flux using the projected distance between the pillars and the clusters2. The mean pillar velocity dispersion as a function of incident ionizing photon flux is shown in Figure 11 (color-coded by whether the pillars contain a jet seen at visual wavelengths; see discussion in Section 4.1). Footnote 2: The nearest cluster in projection does not always dominate the ionization. For example, Pillar 20 lies alongside Tr15 but points toward Tr16, suggesting that it has had a stronger influence. 
Within the (large) uncertainties, the mean velocity dispersions are similar across nearly two orders of magnitude in incident ionizing radiation. This is counter to expectation if ionizing radiation drives turbulence in the gas. However, moment 2 maps do not isolate non-thermal motions from other factors that contribute to the linewidth. Bulk motions from large-scale processes like infall and rotation as well as the influence of outflows from protostars embedded in the pillars will all contribute to this single value. In addition, higher temperatures will increase the thermal contribution to the linewidth. However, to first order we would expect pillars subject to higher ionizing photon fluxes to also have warmer temperatures (see, e.g., the dust temperature maps from Roccatagliata et al., 2013; Rebolledo et al., 2016). For the Mystic Mountains we separate thermal and turbulent motions as follows. We use the linewidth measured as the full width at half maximum (FWHM; \(\Delta v\)) to calculate the 1D velocity dispersion, \(\sigma=\Delta v/(2\sqrt{2\ln 2})\). To compute the thermal contribution, we use \(\sigma_{\rm therm,1D}=\sqrt{2k_{B}T/m_{\rm iso}}\) where \(m_{\rm iso}\) is the mass of the CO isotopologue and T=30 K is the temperature of the cold molecular gas. Subtracting the thermal component from the total velocity dispersion yields the 1D turbulent velocity dispersion, \(\sigma_{\rm turb,1D}=\sqrt{\sigma_{\rm tot}^{2}-\sigma_{\rm therm}^{2}}\). Finally, we convert the 1D estimate to 3D as \(\sigma_{\rm turb}=\sqrt{3}\sigma_{1D}\). The 3D turbulent velocity dispersion computed for each CO isotopologue is shown in Figure 12. Velocity dispersions are higher in CO than in \({}^{13}\)CO and C\({}^{18}\)O. The highest values for the turbulent velocity dispersion are observed where the pillars overlap, almost certainly reflecting the influence of multiple velocity components along the line of sight. Other regions with high values of the velocity dispersion fall behind ionization fronts traced by H\(\alpha\) (see Figure 13). The HH 902 pillar is well resolved with our \(\sim\)6'' beam and overlaps only minimally with the other pillars in the Mystic Mountains. Qualitatively, the velocity dispersion is higher where the H\(\alpha\) emission is higher. Velocity dispersions are larger along the western edge of the pillar and modest peaks appear behind the two H\(\alpha\) ridges at the head of the pillar. Peaks in the velocity dispersion do not coincide with the location of the continuum sources. Figure 9: **Top:** Moment 1 map of the DCN J=3-2 emission at the tip of the HH 902 pillar (see Figure 8) that shows the velocity gradient across the marginally resolved source (the beam is shown in the lower left). **Bottom:** Spectrum of the summed DCN J=3-2 emission compared to C\({}^{18}\)O spectrum from the same region. \begin{table} \begin{tabular}{c c c c c} \hline \hline source & \(I_{\rm peak}\) & \(I_{\rm median}\) & log(N)\({}_{\rm peak}\) & log(N)\({}_{\rm median}\) \\ & [K km s\({}^{-1}\)] & [K km s\({}^{-1}\)] & & \\ \hline & & DCN J=3-2 & & \\ \hline HH 902 MM & 0.812 & 0.310 & 12.5 & 12.1 \\ MM 4 & 0.298 & 0.116 & 12.1 & 11.7 \\ \hline & & \({}^{13}\)CS J=5-4 & & \\ \hline MM 4 & 0.355 & 0.136 & 12.3 & 11.9 \\ MM 1 & 0.121 & 0.076 & 11.8 & 11.6 \\ \hline \end{tabular} \end{table} Table 5: Summary of the rarer molecules. Columns are the source, peak and median intensities, and peak and median optically-thin column densities. 
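A minimal sketch of this thermal/turbulent decomposition, assuming T = 30 K as in the text and a 13CO molecular mass of ≈29 atomic mass units; the example linewidth is illustrative.

```python
# Subtract the thermal dispersion from the total 1D dispersion, then scale
# to 3D assuming isotropy, following the decomposition described above.
import numpy as np
from astropy import units as u
from astropy import constants as c

def sigma_turb_3d(dv_fwhm, m_iso, T=30 * u.K):
    sigma_tot = dv_fwhm / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> sigma
    sigma_therm = np.sqrt(2 * c.k_B * T / m_iso)        # as defined in the text
    sigma_1d = np.sqrt(sigma_tot**2 - sigma_therm**2)
    return (np.sqrt(3) * sigma_1d).to(u.km / u.s)

m_13co = 29 * u.u  # 13CO molecular mass (~29 amu)
print(sigma_turb_3d(1.5 * u.km / u.s, m_13co))  # ~1.1 km/s for a 1.5 km/s FWHM
```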
When estimating \(\sigma_{\rm therm,3D}\), we assumed a single temperature of T=30 K for the entire Mystic Mountains. Gas temperatures are likely higher in the most heavily irradiated parts of the Mystic Mountains. Assuming a higher temperature, T=50 K, yields a larger thermal contribution \(\sigma_{\rm therm,1D}=0.59\) km s\({}^{-1}\) which is roughly equivalent to the average velocity dispersion of the Mystic Mountains (as estimated from the moment 2 map). In addition to temperature uncertainties, the lack of correlation between velocity dispersion and incident ionizing photon flux may result from one of the following possibilities: (1) turbulence is enhanced only near irradiated cloud surfaces in layers too thin to be resolved with our \(\sim\)6\({}^{\prime\prime}\) beam; (2) different pillars have a different level of embedded star formation that contributes unequally to the observed linewidth (i.e. via outflows) but is unresolved in our beam (and star formation activity itself does not vary as a function of incident ionizing flux); or (3) ionizing radiation does not drive turbulence. However, the velocity dispersion may not be the best discriminant of the impact of ionizing radiation on the gas. External radiation may drive non-thermal motions through compressive shocks but shock energy will quickly dissipate and the velocity signature may not be long-lived. Compressive motions will also reshape the density of the gas, leaving behind local regions of higher density. We consider this signature of compressive turbulence in the next section. ### Isolating turbulent motions Menon et al. (2021) used the data of Klaassen et al. (2020) to measure the turbulence in a few well-resolved pillars. They reconstruct the dominant turbulence driving mode from the column density and intensity-weighted velocity maps (Federrath et al., 2010, 2016). In this section, we repeat this analysis for the Mystic Mountains complex to provide a highly irradiated comparison. **2D density structure:** To compute the density probability distribution function (PDF), we use the column density of the optically thin \({}^{13}\)CO as a proxy for the H\({}_{2}\) along the line of sight, \(N\), substituting the C\({}^{18}\)O in places where the \({}^{13}\)CO is optically thick. We compute \(\sigma_{\eta}\), the dispersion of the natural logarithm of the column density scaled by its mean (\(\eta=\ln(N/N_{0})\)), by fitting a Hopkins (2013) intermittency density PDF model to the volume-weighted PDF of \(\eta\). This has the form \[P_{\rm HK}(\eta)d\eta=I_{1}\left(2\sqrt{\lambda\omega(\eta)}\right)\exp\left[-(\lambda+\omega(\eta))\right]\sqrt{\frac{\lambda}{\theta^{2}\omega(\eta)}}\,d\eta \tag{6}\] where \[\lambda=\frac{\sigma_{\eta}^{2}}{2\theta^{2}} \tag{7}\] and \[\omega(\eta)=\frac{\lambda}{(1+\theta)}-\frac{\eta}{\theta}\ \ (\omega\geq 0) \tag{8}\] and \(\theta\) is the intermittency parameter. The values of \(\sigma_{\eta}\) and \(\theta\) derived from the fit are used to compute \(\sigma_{N/N_{0}}\), the linear dispersion, as \[\sigma_{N/N_{0}}=\sqrt{\exp\left(\frac{\sigma_{\eta}^{2}}{(1+3\theta+2\theta^{2})}\right)-1} \tag{9}\] using an expression from Hopkins (2013). Figure 11: Mean CO velocity dispersion compared to the estimated incident ionizing photon luminosity for the three pillars in the Mystic Mountains (black squares) and the average of the Mystic Mountains (red square). These are compared to other pillars in Carina observed with the same resolution and spectral setup from Klaassen et al. 
(2020) with (blue) and without (cyan) one or more prominent jets seen at visual wavelengths. All three pillars in the Mystic Mountains complex have at least one jet. Figure 10: Intensity-weighted velocity (moment 1) maps of the CO isotopologues. **Conversion from 2D to 3D density structure:** We estimate the 3D density dispersion from the 2D column density dispersion using the method of Brunt et al. (2010). The 3D density power spectrum, P\({}_{3D}(k)\), of the variable \(\rho/\rho_{0}-1\) is reconstructed from the 2D column density power spectrum, P\({}_{2D}(k)\), of the variable \(N/N_{0}-1\); \(k\) is the wavenumber. This is converted to the 3D power spectrum as P\({}_{3D}(k)=2k\)P\({}_{2D}(k)\). The ratio \(\mathcal{R}^{1/2}\) of the 2D and 3D dispersions is defined as \[\mathcal{R}^{1/2}=\frac{\sigma_{N/N_{0}}}{\sigma_{\rho/\rho_{0}}}=\left(\frac{\sum_{k}P_{2D}(k)}{\sum_{k}P_{3D}(k)}\right)^{1/2} \tag{10}\] where we have mirrored the column density to provide a periodic dataset (Ossenkopf et al., 2008). **Isolating turbulent motions:** We fit a plane to the intensity-weighted velocity map (the first moment map) to remove bulk motions and isolate turbulent motions in the pillars. For purely turbulent motion, we expect the line of sight motions to trace a Gaussian PDF. We fit a Gaussian to the line of sight velocity PDF to derive the 1D velocity dispersion, \(\sigma_{v,1D}\). We convert this to the 3D velocity dispersion as \(\sigma_{v,3D}=3^{1/2}\sigma_{v,1D}\), implicitly assuming isotropy. From this, we compute the Mach number, \(\mathcal{M}=\sigma_{v,3D}/c_{s}\), which is the ratio of the 3D velocity dispersion and the sound speed, \(c_{s}\sim 0.3\) km s\({}^{-1}\) for our assumed temperature of 30 K. Finally, we compute \(b\), the turbulence driving parameter, as \(b=\sigma_{\rho/\rho_{0}}/\mathcal{M}\). The value of \(b\) is used to determine the type of turbulence: \(b\sim 0.33\) is purely solenoidal; \(b\sim 1.0\) is purely compressive; and \(b\sim 0.4\) is a combination of both. Values of each of the derived parameters and their \(1\sigma\) uncertainties are listed in Table 6. PDFs of the column density and velocities for the Mystic Mountains are shown in Figure 14. We focus our comparison on the Mystic Mountains as a whole because the individual pillars are significantly smaller, with only a few beams covering their major and minor axes. Results of this analysis for each individual pillar are shown in Appendix C but we do not discuss them further here as the individual pillars are inadequately resolved for this analysis (similar to Pillar 44 in Menon et al., 2021; see also Sharda et al., 2018). **Comparing the Mystic Mountains to other pillars in Carina:** Within the Mystic Mountains, it is clear that there are multiple velocity components; these roughly correspond to the systemic velocity of each pillar (see Figure 14 and Table 2). Subtracting a linear function (a plane) reduces the peakiness of the velocity distribution, but the velocity dispersion remains large even after gradient subtraction, nearly a factor of two higher than the values found by Menon et al. (2021) for other pillars in Carina. As a result, the Mach number (\(\mathcal{M}=\sigma_{v,3D}/c_{s}\)) is also a factor of 2 higher. The resulting turbulence driving parameter \(b\) is within the range of compressively-dominated turbulence (\(0.4-1.0\)) but somewhat lower than the values (\(0.8-1.7\)) found by Menon et al. (2021). 
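The chain of estimates above can be summarized in a short sketch: Equation (9) converts the Hopkins fit parameters to σ_{N/N0}, the Brunt et al. (2010) ratio of Equation (10) is formed from azimuthally summed power spectra of the mirrored column-density map, and b follows from σ_{ρ/ρ0} and the Mach number. The brunt_ratio function below is schematic and is not applied to data here; the final numbers simply plug in the values from Table 6.

```python
import numpy as np

def sigma_N_over_N0(sigma_eta, theta):
    """Eq. (9): linear column-density dispersion from the Hopkins fit."""
    return np.sqrt(np.exp(sigma_eta**2 / (1 + 3 * theta + 2 * theta**2)) - 1)

def brunt_ratio(column_density):
    """R^{1/2} of Eq. (10) from the 2D power spectrum of N/N0 - 1."""
    field = column_density / column_density.mean() - 1
    field = np.block([[field, field[:, ::-1]],
                      [field[::-1, :], field[::-1, ::-1]]])  # mirror -> periodic
    p2d = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    ny, nx = p2d.shape
    yy, xx = np.indices(p2d.shape)
    k = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
    P2 = np.bincount(k.ravel(), weights=p2d.ravel())         # sum over k-shells
    P3 = 2 * np.arange(P2.size) * P2                         # P_3D(k) = 2k P_2D(k)
    return np.sqrt(P2[1:].sum() / P3[1:].sum())

# Table 6 values: sigma_N/N0 = 0.56, R^{1/2} = 0.14, sigma_v,3D = 2.19 km/s,
# c_s ~ 0.3 km/s.
sigma_rho = 0.56 / 0.14    # 3D density dispersion ~ 4.0
mach = 2.19 / 0.3          # ~ 7.3
print('b =', round(sigma_rho / mach, 2))  # ~ 0.55, compressive-dominated
```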
## 4 Discussion Recent numerical work suggests that ionizing radiation may drive turbulence in cold molecular gas via the rocket effect (e.g., Gritschneder et al., 2009; Boneberg et al., 2015; Dale, 2017; Menon et al., 2020). Photoevaporation drives a flow of ionized gas off cloud surfaces. Momentum is conserved, so this flow also drives a shock into the cloud, compressing the gas and perhaps injecting turbulence. In this picture, highly irradiated pillars like the Mystic Mountains should show a high level of turbulent velocity dispersion in the gas. The Mystic Mountains cloud complex lies in the heart of the Carina Nebula where copious ionizing photons from Tr14 (\(Q_{H}\sim 10^{50}\) s\({}^{-1}\), see Smith, 2006a) illuminate and sculpt its mountainous morphology. The incident ionizing photon flux illuminating the Mystic Mountains is an order of magnitude higher than that illuminating other pillars in Carina with similar observations (see Figure 11 and Klaassen et al., 2020). \begin{table} \begin{tabular}{c c} \hline \hline Pillar & MM \\ \hline A [pc\({}^{2}\)] & 0.32\(\pm\)0.03 \\ \(N_{0}\) [\(10^{21}\) cm\({}^{-2}\)] & 3.01\({}^{+5.3}_{-5.7}\) \\ \(M\) [M\({}_{\odot}\)] & 33.73\(\pm\)5.7 \\ n [\(10^{5}\) cm\({}^{-3}\)] & 2.33\(\pm\)1.3 \\ \(\sigma_{v,3D}^{M}\) [km s\({}^{-1}\)] & 0.34\(\pm\)0.30 \\ \(\alpha_{vir}\) & 26.7\(\pm\)15 \\ \(\mathcal{M}_{\rm turb}\) & 2.22\(\pm\)0.62 \\ \(\sigma_{\eta}\) & 1.01\(\pm\)0.02 \\ \(\sigma_{N/N_{0}}\) & 0.56\(\pm\)0.02 \\ \(\mathcal{R}^{1/2}\) & 0.14\(\pm\)0.01 \\ \(\sigma_{\rho/\rho_{0}}\) & 4.02\(\pm\)0.37 \\ \(\sigma_{v,1D}^{\rm total}\) [km s\({}^{-1}\)] & 1.76\(\pm\)0.33 \\ \(\sigma_{v,1D}^{\rm turb}\) [km s\({}^{-1}\)] & 1.26\(\pm\)0.07 \\ \(\sigma_{v,3D}\) [km s\({}^{-1}\)] & 2.19\(\pm\)0.13 \\ \(\mathcal{M}\) & 7.29\(\pm\)0.43 \\ \(b\) & 0.55\(\pm\)0.05 \\ \hline \hline \end{tabular} \end{table} Table 6: Turbulence parameters for the Mystic Mountains as a whole, as in Menon et al. (2021). Figure 12: The 3D turbulent velocity dispersion computed assuming a single gas temperature of \(T=30\) K. A clear correlation between the incident ionizing radiation and electron density in ionization fronts has been observed in dust pillars in Carina (McLeod et al., 2016). We do not find a similarly straightforward relationship between the cold molecular gas and incident radiation. Despite the higher ionizing flux illuminating the Mystic Mountains, the average velocity dispersion (derived from the moment 2 map) is comparable to other pillars in Carina (see Figure 11). At \(b=0.55\), the turbulence driving parameter is consistent with compressive turbulence but is below the range found by Menon et al. (2021). For all of the pillars in Carina that have been observed with the ACA alone, it is unclear what role unresolved star formation activity plays in the observed kinematics. In the Mystic Mountains, we detect 8 continuum sources (see Section 3.4.2) and 10 C\({}^{18}\)O clumps (see Section 3.4). The pillars in Klaassen et al. (2020) also display a wide range of star-formation activity, from the relatively quiescent Pillar 22, which has only two C\({}^{18}\)O cores, to the actively star-forming Pillar 6 with evidence for \(>\)4 separate YSOs. Protostars and their (unresolved) outflows can contribute to internal velocity dispersion (e.g., Larson et al., 2015) and may provide a local source of turbulence. 
Quantifying this is difficult, however, as we do not detect the known outflows in the Mystic Mountains or any other molecular outflows, likely due to the modest resolution of our data (at 2.3 kpc, our \(\sim 6^{\prime\prime}\) beam corresponds to \(\sim\)0.07 pc). The physical properties of the famous jets in the Mystic Mountains (see Figure 1) are well measured (Reiter and Smith, 2013, 2014; Cortes-Rangel et al., 2020) but these largely propagate outside the cloud. Figure 13: **Top Left:** Map of the CO turbulent velocity dispersion with H\(\alpha\) contours overplotted in black. **Top Right:** Contours from the CO moment 2 map plotted on a grayscale of the H\(\alpha\) image from _HST_. Many peaks in the velocity dispersion correspond to regions where the pillars appear to overlap. The one exception is HH 902 which we zoom in on in the bottom two panels. **Bottom:** A zoomed-in view of the HH 902 pillar showing the _HST_ H\(\alpha\) image alongside the CO moment 2 map. Both the H\(\alpha\) intensity and the CO velocity dispersion are higher along the western side of the HH 902 pillar. Figure 14: **Top:** Left panel shows the H\({}_{2}\) column density derived from the optically thin \({}^{13}\)CO and C\({}^{18}\)O (see Section 3.7). Right panel shows the fit to the column density PDF, as in Menon et al. (2021). **Middle:** Image of the intensity-weighted velocity map prior to gradient subtraction (left) and histogram of its distribution (right). **Bottom:** Velocity map (left) and histogram (right) after gradient subtraction. Higher angular resolution observations are required to detect and remove the influence of embedded outflows as well as analyze the individual pillars of the Mystic Mountains independently. Existing data covers a small region at the head of the HH 901 and HH 902 pillars (Cortes-Rangel et al., 2020); extending this to a larger portion of the Mystic Mountains would be interesting to do in the future. For now, we compare the results of the Mystic Mountains and the Klaassen et al. (2020) pillars to higher resolution observations of other portions of the Carina region. ### Comparison to other regions in Carina A few other studies in the Carina region have quantified the impact of ionizing feedback on star-forming gas. Rebolledo et al. (2020) compared two clouds in the Carina region that are subject to very different ionizing radiation fields: the 'North Cloud', which is located near the center of Carina where it is heavily irradiated by Tr14, and the 'South Pillars' region, which is located in the outskirts of Carina where it is subject to much less intense radiation. The two regions are separated from the central clusters, Tr14 and Tr16, by \(\sim\)2.5 pc and \(\sim\)30 pc, respectively, and thus experience an order of magnitude difference in their radiative environment. With Band 3 observations targeting the dense gas tracers HCN and HCO+ with a resolution of 2.8'' \(\times\) 1.8'', Rebolledo et al. (2020) find evidence for more turbulence in the heavily irradiated North Cloud. The North Cloud also has fewer cores than the South Pillars region. However, the cores that have formed in the North Cloud are higher in mass than the more numerous cores found in the South Pillars cloud, consistent with turbulent fragmentation. Hartigan et al. (2022) recently published observations of the 'Western Wall', a region in the center of Carina that overlaps with the North Cloud from Rebolledo et al. (2020). 
These higher resolution observations (synthesized beamsize \(\sim\) 1'') in Band 6 target CO and its isotopologues. Hartigan et al. (2022) conclude that the influence of feedback is modest with no signs of triggered star formation and no prominent dust pillars. Gas densities appear higher immediately behind the ionization front but cores appear starless, and there is no evidence for grain growth. A follow-up analysis of the Hartigan et al. (2022) observations from Downes et al. (2023) determined that turbulence is driven at large scales, but not necessarily by irradiation from nearby high-mass stars. This conclusion stands in contrast to Menon et al. (2021) who argue that predominantly compressive modes of turbulence may have triggered star formation in the pillars. However, Downes et al. (2023) argue that their result is not in tension with Menon et al. (2021). Unlike the Western Wall, the pillars have been sculpted by radiation. Pillars may self-shadow, allowing compressive motions to lead to their development, altering their internal kinematics. In the simulations of Dale et al. (2012), prominent pillars form in clouds with more diffuse gas and those with smoother density fields. Above a certain density (\(\gtrsim\)100 cm\({}^{-3}\)), ionization is dynamically ineffective, especially in more turbulent regions. On the smaller scales of the pointed observations presented in this paper and those in Klaassen et al. (2020), Rebolledo et al. (2020), and Hartigan et al. (2022), differences in the local initial conditions may be responsible for the morphological differences observed at present. In this case, ionizing radiation carved more diffuse gas into dust pillars, perhaps triggering star formation (Brooks et al., 2002; Rathborne et al., 2004; Smith et al., 2010; Ohlendorf et al., 2012). Meanwhile, the higher density Western Wall / North Cloud began as and remained a higher density region (see, e.g., Figure 1 in Rebolledo et al., 2020), less vulnerable to compression. Local density variations may also help explain why there is a dust pillar \(\sim\)1' (\(\sim\)0.7 pc) to the northeast of the Western Wall (see Smith et al., 2010). This picture is in line with the simulations of Tremblin et al. (2012) who find that pillar formation depends strongly on shock curvature. Pre-existing overdensities help curve the shock driven by the ionization front. Pillars form when the curved shock collapses in on itself. Pillars formed in this way will naturally have a high-density core at their tips in much the same way that desert buttes have high-density caprock at their apices. While differences in the initial density may help explain the absence of pillars in the Western Wall, the low level of star formation activity remains a challenge. High initial densities in the Western Wall suggest that star formation would happen sooner than in the feedback-carved pillars. But this is not what is observed. Hartigan et al. (2022) find that the cores in the Western Wall are starless. Rebolledo et al. (2020) argue that the higher level of turbulence may have made the Western Wall more resilient to fragmentation as there are fewer but higher mass cores compared to their more quiescent South Pillars region. In contrast, several pillars show evidence for star formation in the form of their prominent jets. Turbulence rapidly decays and must be constantly resupplied to maintain observed levels. Rebolledo et al. 
(2020) attribute the higher turbulence in the North Cloud / Western Wall to the impact of external irradiation. Downes et al. (2023) also find evidence that turbulence is driven on large scales, but they do not attribute this to ionization. Resolving this tension requires a more homogeneous dataset that covers the large (pc) scales where turbulence is driven while resolving the 0.02 pc - 0.03 pc scales that Downes et al. (2023) find are also dynamically important. ## 5 Conclusions In this paper, we present maps of the CO, \({}^{13}\)CO, and C\({}^{18}\)O emission from the Mystic Mountains, a large cloud complex with multiple dust pillars located in the heart of the Carina Nebula. A dendrogram analysis reveals a coherent, connected structure with three individual pillars. We detect eight continuum cores within the Mystic Mountains. Most continuum cores are associated with a C\({}^{18}\)O clump, but not all C\({}^{18}\)O clumps have a continuum counterpart. The rarer species DCN J=3-2 and \({}^{13}\)CS J=5-4 are detected in two clumps located in the region with the highest column density. The Mystic Mountains region experiences an order of magnitude higher ionizing flux from the nearby star clusters than the flux incident on other dust pillars in Carina observed with similar tracers and angular resolution. However, bulk pillar properties like the average velocity dispersion derived from moment 2 maps are similar for all pillars, regardless of their irradiation. A more detailed analysis to isolate turbulent motions reveals a turbulence driving parameter, \(b=0.55\), consistent with compressive turbulence dominating in the Mystic Mountains. The derived \(b\) is within the compressive range, though somewhat below the values found by Menon et al. (2021) for the pillars from Klaassen et al. (2020). The derived Mach number for the Mystic Mountains is a factor of 2 higher than that found for other pillars in the Carina region; this either reflects a stronger shock from the more intense UV field or, more likely, is artificially inflated by the broad velocity distribution of the Mystic Mountains. The similarity of pillar properties across a range of incident ionizing fluxes contrasts with studies of other irradiated clouds in the Carina Nebula. Rebolledo et al. (2020) compared two regions with an order of magnitude difference in the incident radiation and found evidence for more turbulence and fewer (but higher mass) cores in the more heavily irradiated cloud. From a different analysis and dataset of the same region presented by Rebolledo et al. (2020), Downes et al. (2023) argue that differences between the pillar results and the cloud results are not inconsistent because pillar kinematics may reflect dynamical compression. We argue that this may be true if the different morphologies observed today result from different initial densities that aided or prevented UV irradiation from compressing the local cloud into a pillar. Pre-existing overdensities may also explain the observed difference in star-forming activity. Cores in the Western Wall / North Cloud appear starless whereas a fraction of the cores in the Mystic Mountains and other pillars drive prominent jets, signifying a more advanced evolutionary stage (i.e. HH 901 and HH 902 in the Mystic Mountains; see Cortes-Rangel et al., 2020). Future work covering a broader range of environments with the angular resolution to probe both the large scales of turbulence driving and the small scales where its consequences are most evident will help resolve this tension. 
## Acknowledgements We would like to thank the referee, Neal Evans, for a prompt and thoughtful report that improved the manuscript. We would like to thank Shyam Menon for a careful reading of the manuscript and thoughtful feedback. MR was partially supported by an ESO fellowship. DI was funded by the European Research Council (ERC) via the ERC Synergy Grant ECOGAL (grant 855130). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2018.1.01001.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work uses observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. The HST observations are associated with GO-12050. This research made use of Astropy3, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018). This research made use of APLpy, an open-source plotting package for Python (Robitaille and Bressert, 2012). This research made use of the following software packages: astrodendro, a Python package to compute dendrograms of astronomical data ([http://www.dendrograms.org/](http://www.dendrograms.org/)); SCIMES, a Python package to find relevant structures within dendrograms of molecular gas emission using a spectral clustering approach (Colombo et al., 2015); and TurbuStat, a Python package to compute 14 turbulence-based statistics described in the astronomical literature (Koch et al., 2017, 2019). Footnote 3: [http://www.astropy.org](http://www.astropy.org) ## Data Availability The ALMA data used in this study are publicly available from the ALMA archive4 under the program ID number ADS/JAO.ALMA#2018.1.01001.S. Data from _HST_ are publicly available via the MAST archive5. Footnote 4: [https://almascience.nrao.edu/aq/?result_view=observations](https://almascience.nrao.edu/aq/?result_view=observations) Footnote 5: [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)
2309.00055
Precision Enhancement of Distribution System State Estimation via Tri-Objective Micro Phasor Measurement Unit Deployment
A tri-objective optimal Micro Phasor Measurement Unit ({\mu}-PMU) placement method is presented, with a focus on minimizing the following three parameters: (i) the total number of {\mu}-PMU channels, (ii) the maximum state estimation uncertainty, and (iii) the sensitivity of state estimation to line parameter tolerances. The suggested formulation takes single-line and {\mu}-PMU failures into consideration while guaranteeing the complete observability of the system in the presence and absence of contingencies. It also takes into account the impact of zero injection nodes and the number of {\mu}-PMU channels deployed at every node. The suggested placement problem is addressed using a customized version of the nondominated sorting genetic algorithm II (NSGA-II). According to the results achieved utilizing three test systems of varying sizes, {\mu}-PMU channels beyond predetermined thresholds only result in higher costs and negligible further decreases in state estimation uncertainty and sensitivity to line parameter tolerances. Additionally, between 30 and 40% of buses may be left without instrumentation if {\mu}-PMUs with only two three-phase channels are utilized, with only a modest negative effect on state estimation performance even in the event of contingencies.
Arya Abdolahi, Navid Taghizadegan Kalantari
2023-08-31T18:00:59Z
http://arxiv.org/abs/2309.00055v1
Precision Enhancement of Distribution System State Estimation via Tri-Objective Micro Phasor Measurement Unit Deployment ###### Abstract A tri-objective optimal Micro Phasor Measurement Unit (\(\mu\)-PMU) placement method is presented, with a focus on minimizing the following three parameters: (i) the total number of \(\mu\)-PMU channels, (ii) the maximum state estimation uncertainty, and (iii) the sensitivity of state estimation to line parameter tolerances. The suggested formulation takes single-line and \(\mu\)-PMU failures into consideration while guaranteeing the complete observability of the system in the presence and absence of contingencies. It also takes into account the impact of zero injection nodes and the number of \(\mu\)-PMU channels deployed at every node. The suggested placement problem is addressed using a customized version of the nondominated sorting genetic algorithm II (NSGA-II). According to the results achieved utilizing three test systems of varying sizes, \(\mu\)-PMU channels beyond predetermined thresholds only result in higher costs and negligible further decreases in state estimation uncertainty and sensitivity to line parameter tolerances. Additionally, between 30 and 40% of buses may be left without instrumentation if \(\mu\)-PMUs with only two three-phase channels are utilized, with only a modest negative effect on state estimation performance even in the event of contingencies. Distribution system state estimation, multi-objective optimization, optimal PMU placement, Micro phasor measurement unit (\(\mu\)PMU). ## 1 Introduction \(\mu\)-PMUs are smart devices used to provide estimates of system phasors. Since the 1990s, \(\mu\)-PMUs have usually been used in power transmission systems and have been crucial in protecting systems by quickly identifying faults or other imminent critical operational situations. Unfortunately, the \(\mu\)-PMU placement problem is a delicate problem, complicated by the requirement to ensure the system's full observability. This difficulty stems from the high device cost and the massive amount of information gathered by the \(\mu\)-PMU [1]. A single objective function is used in the majority of optimal \(\mu\)-PMU placement problem formulations, as was mentioned in the literature. Initial attempts to solve the optimal \(\mu\)-PMU placement problem used standard integer (typically binary) linear programming optimization techniques, as both the total cost of \(\mu\)-PMU deployment and topological observability are typically linearly proportional to the number of \(\mu\)-PMUs [2]. The authors of [3] formulate the optimal PMU placement problem as an integer linear programming model. In [4], a different strategy was put forth to reduce the number of \(\mu\)-PMUs required to achieve full observability while enhancing system state estimation by considering a measurement redundancy index. For the purpose of identifying inaccurate data in state estimation, [5] proposed a similar distinction between critical and redundant measurements. Due to the incorporation of more complex and varied operational circumstances and restrictions, the fundamental optimal \(\mu\)-PMU placement problem has become more complex over time. The findings and the influence of \(\mu\)-PMU placement in the presence of various contingencies may be significantly impacted by considering the number of \(\mu\)-PMU channels [6]. With these arrangements, the observability constraint must now take into account a greater number of situations [7]. 
As a result, adding such conditions to the problem formulation causes the number of rows of the connectivity matrix, which is used to enforce the observability requirement, to grow quickly. In distribution networks, which typically include a large number of buses, this condition may be particularly challenging to handle. Due to the restricted number of channels, the lines that cannot be directly observed by a \(\mu\)-PMU are, in this study, chosen a priori. The majority of the factors mentioned above are included here in the optimal \(\mu\)-PMU placement formulation within a single framework. Furthermore, the objective function, typically given by a linear or quadratic combination of numerous components, has progressively become more complicated. However, this function frequently ignores the influence of \(\mu\)-PMU performance entirely and only considers economic and observability factors. To provide a few instances, an integer quadratic programming optimal \(\mu\)-PMU placement model is presented in [8]; it consists of two parts, namely the level of measurement redundancy and the investment cost of the \(\mu\)-PMUs. The cost function used in [9] seeks to reduce the number of \(\mu\)-PMUs installed in the system while increasing network observability and reducing sensitivity to grid parameters. The objective function described in [10] incorporates, in addition to the overall deployment costs, extra terms that account for the costs of network unobservability and redundancies, to improve system observability in both normal operating settings and contingencies. The fundamental drawback of single-objective optimization is the need to combine many values into a scalar cost function in order to find a single solution. The typical method for doing so is to weight the objectives of the problem against one another. The results of this method, however, heavily depend on the weights that are selected, which may ultimately result in a lack of diversity among the solutions [11]. One of the major areas for future study in this field is now thought to be "the generalization of the optimal \(\mu\)-PMU placement problem considering various goals including not just installation cost, but also redundancy, performance, and other design restrictions" [12]. Currently, only a small percentage of optimal \(\mu\)-PMU placement studies are based on multi-objective formulations, and the majority of those consider only two objectives. The reduction of the number of \(\mu\)-PMUs and the maximization of measurement redundancy--two objectives that are in fact in opposition to one another--determine where the \(\mu\)-PMUs are placed in [13]. A thorough overview of the methods available for solving the OPP problem is provided in [14]. A customized edition of the NSGA-II algorithm is implemented in this paper, despite the fact that a number of heuristic algorithms [15] can be used to find satisfactory sub-optimal solutions in a reasonable time horizon. The technique used in earlier works [37], [39] is consistent with the employment of a genetic algorithm for multi-objective OPP. According to [16] and [17], NSGA-II is a suitable, highly accurate optimization algorithm for the \(\mu\)-PMU placement problem. The ease with which a decent starting population may be generated and the application of penalties to exclude impractical solutions are two further crucial aspects that make the NSGA-II technique especially appropriate in the situation at hand. 
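To make the Pareto-dominance machinery at the core of NSGA-II concrete, the following minimal sketch (illustrative only, not the implementation used in this paper; the toy objective triples stand in for the (C, U, S) values defined later) extracts the non-dominated front of a small candidate population under minimization:

```python
import numpy as np

def dominates(f, g):
    """True if objective vector f Pareto-dominates g (all objectives minimized)."""
    return bool(np.all(f <= g) and np.any(f < g))

def nondominated_front(F):
    """Indices of the non-dominated rows of F (shape: n_candidates x n_objectives)."""
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]

# Toy population: each row collects hypothetical (C, U, S) values of a placement.
F = np.array([[10, 0.8, 0.5],
              [12, 0.5, 0.4],
              [10, 0.9, 0.6],   # dominated by the first row
              [15, 0.3, 0.3]])
print(nondominated_front(F))    # -> [0, 1, 3]
```

NSGA-II repeatedly applies this kind of sorting, combined with crowding-distance selection, to push the population toward the Pareto frontier.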
Most studies use a single objective function for optimal \(\mu\)-PMU placement in transmission networks; this paper instead makes the following contributions: * Define a multi-objective optimization model with competing objectives * Include constraints for complete observability in the presence of contingencies and for the kind and number of measurements at every node * Concentrate on distribution networks, which are an emerging field of application for \(\mu\)-PMUs. The objective functions taken into account in this study are the total number of \(\mu\)-PMU channels, the maximum state estimation uncertainty, and the maximum sensitivity of the state estimation uncertainty to line parameter tolerance limits. The proposed problem is investigated under various contingency conditions, such as a line outage or a \(\mu\)-PMU failure. The \(\mu\)-PMU sites are found using a customized version of the Nondominated Sorting Genetic Algorithm II (NSGA-II), which approximates the Pareto frontier of the set of candidate solutions, given the tri-objective nonlinear formulation of the problem [18], [19]. The selected \(\mu\)-PMU placement technique is suitable for distribution systems because the number of buses and the amount of equipment installed in the system are higher than in transmission networks. The remainder of the paper is organized as follows. The tri-objective optimal PMU placement problem is formalized in Section 3, where, following a quick summary, each objective function is presented and justified. The results of the proposed placement strategy are reported in Section 4, taking various constraints into account. Conclusions are presented in Section 5. ## 3 Problem formulation In the case of a grid with \(N\) buses and \(L\) lines, let \(x\in X=\{0,1\}^{N}\) be a binary vector whose \(i\)th element \(x_{i}\) is set to 1 if a \(\mu\)-PMU is installed at bus \(i\) and to 0 otherwise. C(x) denotes the number of \(\mu\)-PMU channels available to monitor the network status, U(x) the maximum state estimation uncertainty, and S(x) the maximum sensitivity to the tolerances of the network parameters. With these definitions, the suggested tri-objective optimal \(\mu\)-PMU placement problem is formulated as (1). \[\underset{x\in X}{\text{min}}(C(x),U(x),S(x)) \tag{1}\] The following constraints are imposed on the proposed objective functions. 1) Ensure complete observability based on the measurement devices and the zero injection nodes (ZINs), without considering the effect of contingency conditions: \[A(x+u)\geq 1 \tag{2}\] In inequality constraint (2), \(\mathbf{A}\) is the \(N\times N\) binary connectivity matrix of the undirected graph modeling the proposed network, and \(u\) is the binary vector marking the ZINs. If a \(\mu\)-PMU failure or a branch outage happens, the stricter observability constraint (3) is required. \[\begin{bmatrix}\tilde{A}^{1}\\ \vdots\\ \tilde{A}^{N}\end{bmatrix}(x+u)\geq 1\ \ \text{and}\ \ \begin{bmatrix}\tilde{A}^{1}\\ \vdots\\ \tilde{A}^{L}\end{bmatrix}(x+u)\geq 1 \tag{3}\] where matrix \(\tilde{A}^{b}\) is obtained by substituting the \(b\)th column of \(\mathbf{A}\) with an all-zero vector. 2) The measurements that may be carried out at node \(i\) are limited both by the maximum number \(n_{ci}\) of available \(\mu\)-PMU channels and by the kind of measurement. It has to be emphasized that, in many optimal \(\mu\)-PMU placement articles, the number of \(\mu\)-PMU channels is ignored. In actuality, not every current phasor is really tracked and utilized for state estimation. 
This work, on the other hand, takes into account this fundamental technology-related restriction. The restriction on the number of permitted measurements may be modeled as (4). \[\left[\mathrm{I}_{N}\quad\Gamma\right]\begin{bmatrix}\mathrm{x}\\ \mathrm{u}\end{bmatrix}\leq\mathrm{n}_{\mathrm{c}} \tag{4}\] where \(\mathrm{n}_{\mathrm{c}}\) is the vector of the numbers of channels of the measurement devices, \(\mathrm{I}_{N}\) is the \(N\times N\) identity matrix, and \(\Gamma\) is a binary matrix describing the measurements installed at each node. ### _A. First objective function: Minimum number of \(\mu\)-PMU channels_ The total number of \(\mu\)-PMU channels is adopted as the first objective in order to make the placement strategy as generic as feasible. In fact, this quantity not only increases with the number of \(\mu\)-PMUs, but it also significantly affects equipment cost. \[\mathrm{C(x)=c^{T}\,x} \tag{5}\] where \(\mathrm{c\leq n}_{\mathrm{c}}\) is the column vector containing the number of \(\mu\)-PMU channels available for both voltage and current measurements at all buses. ### _B. Second objective function: minimize the state estimation uncertainty_ The second objective function models the uncertainty of the system state estimation; it depends on the chosen parameters and on the desired state estimator. The weighted least squares estimator considered here relies on two hypotheses: 1) only the data collected from the \(\mu\)-PMUs and the zero injection nodes are utilized for state estimation; 2) as given in [20], the state variables and the measurements are converted from polar to rectangular coordinates. The observability of the system is thus determined only by \(\mu\)-PMUs and zero injection nodes, a standard assumption in research papers on optimal PMU placement procedures. The choice of rectangular variables simplifies the linear system equations used for state estimation and brings advantages from a computational viewpoint [21]. Suppose x is a binary vector representing the nodes where \(\mu\)-PMUs are located. The distribution system state estimation depends on current phasor measurements (\(M_{\mathit{I}}\)), voltage phasor measurements (\(M_{\mathit{V}}\)), and zero injection node current measurements (\(M_{\mathit{Z}}\)). The measurements (\(M=M_{\mathit{V}}+M_{\mathit{I}}+M_{\mathit{Z}}\geq N\)) are divided into real and imaginary sections, denoted by \(R\) and \(I\), and collected into a single 2\(M\)-long vector ( \(\mathrm{z}=\left[\begin{array}{cccccc}z_{V_{R}}^{T},&z_{I_{R}}^{T},&0^{T},&z_{V_{I}}^{T},&z_{I_{I}}^{T},&0^{T}\end{array}\right]^{T}\) ), where the zero blocks are the ZIN current pseudo-measurements. The measurement data can be related to the state variables by Eq. (6), if the system state variables are described in rectangular coordinates and converted to a 2\(N\)-long vector ( \(\mathrm{v}=\left[\begin{array}{cc}v_{R}^{T},&v_{I}^{T}\end{array}\right]^{T}\) ) [20]. 
\[\mathrm{z}=\mathrm{H(x)\,v}+\varepsilon \tag{6}\] where \(\mathrm{H(x)}\) is the \(2M\times 2N\) measurement matrix associated with the placement vector x and \(\varepsilon\) is the vector of measurement errors, with covariance matrix R. The weighted least squares problem \[\underset{\mathrm{v}}{\text{min}}\ (\mathrm{z-H(x)v})^{T}R^{-1}(\mathrm{z-H(x)v}) \tag{7}\] yields the state estimate \[\hat{\mathrm{v}}=\left[\mathrm{H}^{T}(\mathrm{x})R^{-1}\mathrm{H}(\mathrm{x})\right]^{-1}\mathrm{H}^{T}(\mathrm{x})R^{-1}\mathrm{z} \tag{8}\] whose error covariance matrix, in the absence of parameter tolerances, is \(\Phi_{\mathrm{v}}=[\mathrm{H}^{T}(\mathrm{x})R^{-1}\mathrm{H}(\mathrm{x})]^{-1}\). The second objective function is then defined as \[\mathrm{U(x)=max}\left\{\sqrt{\mathrm{Eig}(\Phi_{\mathrm{v}})}\right\} \tag{9}\] where the eigenvalues of the argument matrix are returned by the function \(\mathrm{Eig}(.)\) [22]. ### _C. Third objective function: Minimize sensitivity to line parameter tolerances_ It is impossible to describe the sensitivity of the system state estimate to the uncertainty affecting the grid parameters with a single formulation. For a given measurement setup, the sensitivity function S(x) in this study is defined as the greatest increase of the entries of the covariance matrix of the state estimation errors caused by unknown tolerances of the line parameters. The perturbation of the measurement matrix produced by such tolerances can be written as \[\delta\mathrm{H(x)=}\left(\begin{array}{ccc}0&0\\ 0&0\\ \delta\mathrm{G(x)}&-\delta\mathrm{B(x)}\\ \delta\mathrm{B(x)}&\delta\mathrm{G(x)}\\ \delta\mathrm{G_{z}}&-\delta\mathrm{B_{z}}\\ \delta\mathrm{B_{z}}&\delta\mathrm{G_{z}}\end{array}\right) \tag{10}\] Assuming that the estimation errors of the real and imaginary sections of the state variables provided by (8) are rearranged into the 2\(N\)x1 vector \(\mathrm{e_{v}=}[(\hat{\mathrm{v}}_{\mathrm{R}}-\mathrm{v}_{\mathrm{R}})^{T},(\hat{\mathrm{v}}_{\mathrm{I}}-\mathrm{v}_{\mathrm{I}})^{T}]^{T}\), the covariance matrix of \(\mathrm{e_{v}}\) can be represented as [23] \[\tilde{\Phi}_{\mathrm{v}}=\mathrm{E(e_{v}e_{v}^{T})}=\tilde{F}(x)\mathrm{E}\{\varepsilon\varepsilon^{T}\}\tilde{F}^{T}\left(x\right)=\tilde{F}(x)R\tilde{F}^{T}\left(x\right) \tag{11}\] where \(\tilde{F}(x)=\left[\tilde{H}^{T}\left(x\right)R^{-1}\tilde{H}(x)\right]^{-1}\tilde{H}^{T}\left(x\right)R^{-1}\) and \(\tilde{H}(x)=\mathrm{H(x)}+\delta\mathrm{H(x)}\) is the perturbed measurement matrix. It is necessary to separate the effects of tolerances and measurement uncertainty in (11) in order to assess how line parameter tolerances affect \(\tilde{\Phi}_{\mathrm{v}}\) independently of the accuracy of the available measurements. If \(\sigma_{\mathrm{r}}\) denotes the relative standard uncertainty common to all measurements, then the matrix R can be written as \(R=\sigma_{\mathrm{r}}^{2}\tilde{R}\). 
Therefore, recalling that \(\tilde{R}\) is symmetric, so that \(\tilde{R}^{-1}=(\tilde{R}^{-1})^{T}\), after a few steps (11) can be written as \[\tilde{\Phi}_{\mathrm{v}}=\tilde{S}(x)\sigma_{\mathrm{r}}^{2}=[\tilde{H}^{T}\left(x\right)\tilde{R}^{-1}\tilde{H}(x)]^{-1}\sigma_{\mathrm{r}}^{2} \tag{12}\] where \(\tilde{S}(x)\) can be thought of as the sensitivity matrix, since its components show the rate of change of the entries of the state estimation error covariance matrix caused simply by the tolerance values. If the components of (10) are supposed to be equally distributed around the respective nominal values within a specified relative proportion \(\pm\Delta\) of \(H_{ij}\), for \(j=\)1,\(\ldots,2M\) and \(i=\)1,\(\ldots,2N\), Eq. (13) gives the maximum sensitivity to line parameter tolerances, \[\mathrm{S(x)}=\max_{i,j}\left\{\max_{|\delta H_{kl}|\leq\Delta|H_{kl}|}\left|\tilde{S}_{ij}(x)-S_{ij}(x)\right|\right\} \tag{13}\] where \(S_{ij}(x)\) denotes the corresponding entry obtained with the nominal measurement matrix H(x). Two case studies, _Case A_ and _Case B_, differing in the maximum number of measurement channels allowed per \(\mu\)-PMU, are analyzed both with and without considering the contingency conditions. Simulations were implemented on a Windows 10 laptop configured with an Intel(R) Core(TM) i7-8565U CPU at 2.8 GHz and 16 GB of RAM. The 3-D Pareto frontier solutions determined by the NSGA-II are presented in Figs. 2-4 for the three suggested distribution networks, supposing that no contingency limitations have been taken into account. Each axis of these figures shows one of the three objective functions. The uncertainty values are expressed as a percentage of the nominal slack bus voltage to enhance readability. The Pareto frontiers obtained by including the contingency limitations are not displayed for brevity, since they are nearly contained in those displayed in Figs. 2-4, as will be shown shortly. The results supply a qualitative rather than a quantitative outline of the optimal \(\mu\)-PMU placement results. It is noteworthy that * In _Case A_, the restriction on the maximum number of channels and the permitted measurement types clearly limits the number of \(\mu\)-PMU channels. However, as compared to _Case B_, the state estimation uncertainty and sensitivity do not seem to significantly increase. * Additional findings, obtained with various values of \(\Delta\), demonstrate that the maximum state uncertainty values on the Pareto frontier scale as anticipated. 
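As a rough, self-contained illustration of how the second and third objectives could be evaluated numerically, the sketch below uses a random matrix as a stand-in for the placement-dependent measurement matrix H(x) of Eqs. (6)-(12), and replaces the worst-case maximization of Eq. (13) with Monte Carlo sampling of perturbed matrices; both simplifications are assumptions of this sketch, not the exact procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M2, N2 = 40, 20                       # toy sizes for 2M measurements, 2N states
H = rng.normal(size=(M2, N2))         # stand-in for the real H(x)
R_tilde = np.eye(M2)                  # normalized covariance, R = sigma_r^2 * R_tilde

def uncertainty_U(H, R_tilde):
    """Eq. (9): largest sqrt-eigenvalue of the normalized error covariance (Eq. (12))."""
    S_tilde = np.linalg.inv(H.T @ np.linalg.inv(R_tilde) @ H)
    return float(np.sqrt(np.linalg.eigvalsh(S_tilde).max()))

def sensitivity_S(H, R_tilde, delta=0.01, n_samples=200):
    """Monte Carlo stand-in for Eq. (13): worst observed growth of the covariance
    entries when every entry of H is perturbed within +/- delta * |H_ij|."""
    S0 = np.linalg.inv(H.T @ np.linalg.inv(R_tilde) @ H)
    worst = 0.0
    for _ in range(n_samples):
        Hp = H * (1.0 + delta * rng.uniform(-1, 1, size=H.shape))
        Sp = np.linalg.inv(Hp.T @ np.linalg.inv(R_tilde) @ Hp)
        worst = max(worst, float(np.abs(Sp - S0).max()))
    return worst

print(uncertainty_U(H, R_tilde), sensitivity_S(H, R_tilde))
```

Inside the NSGA-II loop, evaluations of this kind would be repeated for every candidate placement vector x.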
Figure 1: Hypervolume curves related to the optimal \(\mu\)-PMU placement. Figure 2: Three-dimensional tri-objective optimal \(\mu\)-PMU placement Pareto fronts of the 37-bus distribution system. Figure 3: Three-dimensional tri-objective optimal \(\mu\)-PMU placement Pareto fronts of the 85-bus distribution system. The trade-offs between the optimal \(\mu\)-PMU placement solutions achieved by the suggested tri-objective method can be inferred from the curves displayed in Figs. 4 and 5. The minimum envelopes of the projections of the 3-D Pareto frontiers onto orthogonal planes are drawn in Figs. 4(a)-(b) and 5(a)-(b), respectively. The results with contingencies are shown with dashed lines and the results without contingencies with solid lines. Figures 4 and 5 provide some interesting information about the optimal \(\mu\)-PMU locations, summarized as follows. If no limitations for contingencies are considered, once the number of \(\mu\)-PMUs is adequate to guarantee full system observability, the maximum state estimation uncertainty and the maximum sensitivity decrease very dramatically as the number of \(\mu\)-PMU channels increases. Small-scale distribution networks produce slightly steeper curves. This tendency is due to the fact that state estimation in small systems is more significantly impacted by new \(\mu\)-PMUs than it is in big systems. It is crucial to emphasize that the number of \(\mu\)-PMU channels should not be confused with the number of \(\mu\)-PMU locations and, accordingly, with the number of measurement devices that need to be placed. As predicted, the curves in Figs. 4(a)-(b) and 5(a)-(b) reveal that _Case B_ always has more \(\mu\)-PMU channels than _Case A_. However, in _Case B_, the minimum number of grid buses observed by a \(\mu\)-PMU is typically a little lower than in _Case A_. Particularly, if no limitations for contingencies are included in the problem, system observability can be accomplished by instrumenting 39% \(\pm\) 5% of the network buses in _Case A_ and _Case B_. In addition to the network structure, these values also depend on the quantity and arrangement of ZINs within the system. Figure 4: Pareto frontiers of _Case A_ for optimal PMU placement for the three distribution networks. Figure 5: Pareto frontiers of _Case B_ for optimal PMU placement for the three distribution networks. The optimal \(\mu\)-PMU placement results for the three test systems are presented in Tables 1-3, both with and without limits on the number of \(\mu\)-PMU channels. As can be seen from these tables, the number of \(\mu\)-PMU channels has a significant effect on the number of buses equipped with measurement devices. Finally, measurement devices with more \(\mu\)-PMU channels reduce the total number of measurement devices required. \begin{table} \begin{tabular}{c c c c} \hline & Type of bus & No. of \(\mu\)-PMU & Bus Numbers \\ \hline \multirow{2}{*}{_Case A_} & Without \(\mu\)-PMUs & 12 & 22, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37 \\ & With 2 \(\mu\)-PMU channels & 25 & 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 26 \\ \hline \multirow{4}{*}{_Case B_} & Without \(\mu\)-PMUs & 17 & 15, 17, 18, 20, 22, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37 \\ & With 3 \(\mu\)-PMU channels & 10 & 1, 3, 6, 8, 9, 12, 16, 19, 23, 25 \\ & With 4 \(\mu\)-PMU channels & 9 & 2, 4, 5, 10, 11, 13, 14, 21, 24 \\ & With 5 \(\mu\)-PMU channels & 1 & 7 \\ \hline \end{tabular} \end{table} Table 1: Optimal \(\mu\)-PMU placement problem results for the 37-bus distribution network under the two case studies. \begin{table} \begin{tabular}{c c c c} \hline & Type of bus & No. of \(\mu\)-PMU & Bus Numbers \\ \hline \multirow{3}{*}{_Case A_} & Without \(\mu\)-PMUs & 24 & 2, 3, 5, 7, 8, 9, 10, 12, 13, 27, 29, 32, 34, 35, 41, 48, 49, 52, 58, 64, 65, 67, 70, 81 \\ & With 1 \(\mu\)-PMU channel & 2 & 68, 73 \\ & With 2 \(\mu\)-PMU channels & 59 & All the others \\ \hline \multirow{5}{*}{_Case B_} & Without \(\mu\)-PMUs & 38 & All the others \\ & With 3 \(\mu\)-PMU channels & 21 & 1, 15, 16, 17, 36, 38, 47, 51, 54, 55, 56, 59, 62, 66, 72, 74, 75, 78, 82, 84, 85 \\ & With 4 \(\mu\)-PMU channels & 14 & 4, 6, 11, 21, 31, 44, 46, 50, 53, 57, 61, 63, 80, 82 \\ & With 5 \(\mu\)-PMU channels & 11 & 2, 3, 5, 8, 19, 26, 29, 32, 41, 48, 70 \\ & With 6 \(\mu\)-PMU channels & 1 & 67 \\ \hline \end{tabular} \end{table} Table 2: Optimal \(\mu\)-PMU placement problem results for the 85-bus distribution network under the two case studies. Table 3: Optimal \(\mu\)-PMU placement problem results for the 141-bus distribution network under the two case studies. ## 5 Conclusion In this article, the problem of installing phasor measurement units (PMUs) is approached from a novel angle, i.e., by solving a tri-objective optimization problem with the following objective functions: the total number of \(\mu\)-PMU channels, the maximum state estimation uncertainty based only on high-rate \(\mu\)-PMU measurements, and the maximum state estimation sensitivity to line parameter tolerances. The problem formulation also takes into account limits on the number of \(\mu\)-PMU channels, the kind of allowable \(\mu\)-PMU measurements at each bus, and the system observability (both with and without taking contingency constraints into account). A customized version of the genetic algorithm NSGA-II is used to solve the tri-objective optimization problem. Perhaps other heuristic optimization techniques might do even better in terms of computing. 
The suggested NSGA-II method, however, clearly converges to the Pareto frontiers of interest in each of the distribution systems under test, therefore no significant changes are anticipated in either the findings or the conclusions. Since extensive \(\mu\)-PMU deployments to enable smart grid operation are becoming more popular, our study's emphasis is on distribution systems. Outcomes from three test distribution networks with 37, 85, and 141 buses show that, as the number of buses equipped with \(\mu\)-PMUs grows beyond certain thresholds, there is little to no further reduction in the maximum state estimation uncertainty and in the maximum sensitivity to line parameter tolerances, while there is a significant increase in the cost of the instrumentation. According to the findings from the report's three distribution networks, even with \(\mu\)-PMUs equipped with just two measurement channels, between 30 and 40 percent of all buses can potentially go unmonitored. If there are no restrictions on how many \(\mu\)-PMU channels are available, this proportion can increase further. The latter option does not, however, generally provide significant state estimation improvements and is not typically economically viable. It is rather intriguing to note that an optimal \(\mu\)-PMU placement configuration capable of ensuring low state estimation uncertainty and sensitivity is likely to be resilient to contingencies as well.
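For readers who wish to reproduce convergence diagnostics such as the hypervolume curves of Fig. 1, a simple Monte Carlo estimator of the dominated hypervolume is sketched below; the reference point and the sampling scheme are assumptions of this sketch, not the metric implementation used by the authors:

```python
import numpy as np

def hypervolume_mc(front, ref, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the volume dominated by `front`
    (n_points x n_obj, minimization) up to the reference point `ref`."""
    rng = np.random.default_rng(seed)
    front = np.asarray(front, dtype=float)
    ref = np.asarray(ref, dtype=float)
    lo = front.min(axis=0)
    pts = rng.uniform(lo, ref, size=(n_samples, front.shape[1]))
    # a sample counts as dominated if some front member is <= it in every objective
    dominated = (front[None, :, :] <= pts[:, None, :]).all(axis=2).any(axis=1)
    return dominated.mean() * np.prod(ref - lo)

front = [[10, 0.8, 0.5], [12, 0.5, 0.4], [15, 0.3, 0.3]]   # toy (C, U, S) front
print(hypervolume_mc(front, ref=[20, 1.0, 1.0]))
```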
2309.10246
Impact of strain on the SOT-driven dynamics of thin film Mn$_3$Sn
Mn$_3$Sn, a metallic antiferromagnet with an anti-chiral 120$^\circ$ spin structure, generates intriguing magneto-transport signatures such as a large anomalous Hall effect, spin-polarized current with novel symmetries, anomalous Nernst effect, and magneto-optic Kerr effect. When grown epitaxially as MgO(110)[001]$\parallel$ Mn$_3$Sn($0\bar{1}\bar{1}0$)[0001], Mn$_3$Sn experiences a uniaxial tensile strain, which changes the bulk six-fold anisotropy landscape to a perpendicular magnetic anisotropy with two stable states. In this work, we investigate the field-assisted spin orbit-torque (SOT)-driven response of the order parameter in single-domain Mn$_3$Sn with uniaxial tensile strain. We find that for a non-zero external magnetic field, the order parameter can be switched between the two stable states if the magnitude of the input current is between two field-dependent critical currents. Below the lower critical current, the order parameter exhibits a stationary state in the vicinity of the initial stable state. On the other hand, above the higher critical current, the order parameter shows oscillatory dynamics which could be tuned from the 100's of megahertz to the gigahertz range. We obtain approximate expressions of the two critical currents and find them to agree very well with the numerical simulations for experimentally relevant magnetic fields. We also obtain unified functional form of the switching time versus the input current for different magnetic fields. Finally, we show that for lower values of Gilbert damping ($\alpha \leq 2\times 10^{-3}$), the critical currents and the final steady states depend significantly on the damping constant. The numerical and analytic results presented in our work can be used by both theorists and experimentalists to understand the SOT-driven order dynamics in PMA Mn$_3$Sn and design future experiments and devices.
Ankit Shukla, Siyuan Qian, Shaloo Rakheja
2023-09-19T01:54:20Z
http://arxiv.org/abs/2309.10246v2
# Impact of strain on the SOT-driven dynamics of thin film Mn\({}_{3}\)Sn ###### Abstract Mn\({}_{3}\)Sn, a metallic antiferromagnet with an anti-chiral 120\({}^{\circ}\) spin structure, generates intriguing magneto-transport signatures such as a large anomalous Hall effect, spin-polarized current with novel symmetries, anomalous Nernst effect, and magneto-optic Kerr effect. When grown epitaxially as MgO(110)[001]\(|\)Mn\({}_{3}\)Sn(01\(\bar{1}\)0)[0001], Mn\({}_{3}\)Sn experiences a uniaxial tensile strain, which changes the bulk six-fold anisotropy landscape to a perpendicular magnetic anisotropy with two stable states. In this work, we investigate the field-assisted spin orbit-torque (SOT)-driven response of the order parameter in single-domain Mn\({}_{3}\)Sn with uniaxial tensile strain. We find that for a non-zero external magnetic field, the order parameter can be switched between the two stable states if the magnitude of the input current is between two field-dependent critical currents. Below the lower critical current, the order parameter exhibits a stationary state in the vicinity of the initial stable state. On the other hand, above the higher critical current, the order parameter shows oscillatory dynamics which could be tuned from the 100's of megahertz to the gigahertz range. We obtain approximate expressions of the two critical currents and find them to agree very well with the numerical simulations for experimentally relevant magnetic fields. We also obtain unified functional form of the switching time versus the input current for different magnetic fields. Finally, we show that for lower values of Gilbert damping (\(\alpha\leq 2\times 10^{-3}\)), the critical currents and the final steady states depend significantly on the damping constant. The numerical and analytic results presented in our work can be used by both theorists and experimentalists to understand the SOT-driven order dynamics in PMA Mn\({}_{3}\)Sn and design future experiments and devices. ## I Introduction Antiferromagnets (AFMs) are a class of magnetic materials that produce negligible stray fields, are robust to external magnetic field perturbations, and exhibit resonant frequency in the terahertz (THz) regime. These distinctive properties are a consequence of a strong exchange interaction between the uniquely arranged spins of the neighboring atoms, and a negligible net macroscopic magnetization [1; 2; 3; 4]. AFMs are, therefore, considered as promising candidates for building next generation magnonic devices, high-density memory devices, and ultrafast signal generators. [5] Among the various possible AFMs, noncollinear but coplanar metallic AFMs of the form Mn\({}_{3}\)X, with a triangular spin structure, have recently been explored extensively, owing to their intriguing magneto-transport characteristics such as a large spin Hall effect (SHE) [6], anomalous Nernst effect (ANE), anomalous Hall effect (AHE) [7; 8; 9; 10], and magneto-optical Kerr effect (MOKE) [11], ferromagnet-like spin-polarized currents [12; 13], and a finite tunneling magnetoresistance (TMR) [14; 15]. These noncollinear AFMs are chiral in nature and could be further classified as positive (X = Ir, Pt, Rh) or negative (X = Sn, Ge, Ga) chirality materials based on the type of spin interaction [16]. Here, we focus on the negative chirality material Mn\({}_{3}\)Sn owing to various factors. Mn\({}_{3}\)Sn has a high Neel temperature of approximately \(420-430\) K [11; 17]. 
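The field-assisted, SOT-driven regimes summarized in the abstract (stationary states, switching between the two PMA minima, and tunable oscillations) can be previewed with a deliberately oversimplified toy model: a single-macrospin LLG integrator with a damping-like SOT term. The dimensionless parameters, sign conventions, and the reduction of the Mn\({}_{3}\)Sn order parameter to one unit vector are all illustrative assumptions of this sketch, not the coupled-sublattice dynamics analyzed in the paper:

```python
import numpy as np

def llg_sot_rhs(m, h_ext, k_u, alpha, tau_dl, p):
    """Dimensionless Landau-Lifshitz form of LLG with a damping-like SOT term.
    Effective field = applied field + uniaxial PMA field along z."""
    h_eff = h_ext + k_u * m[2] * np.array([0.0, 0.0, 1.0])
    prec = -np.cross(m, h_eff)                        # precession
    damp = -alpha * np.cross(m, np.cross(m, h_eff))   # Gilbert damping
    sot  = -tau_dl * np.cross(m, np.cross(m, p))      # damping-like SOT
    return (prec + damp + sot) / (1.0 + alpha**2)

def integrate(m0, t_max=200.0, dt=0.01, **pars):
    m, traj = np.asarray(m0, dtype=float), []
    for _ in range(int(t_max / dt)):
        k1 = llg_sot_rhs(m, **pars)                   # midpoint (RK2) step
        k2 = llg_sot_rhs(m + 0.5 * dt * k1, **pars)
        m = m + dt * k2
        m /= np.linalg.norm(m)                        # keep |m| = 1
        traj.append(m.copy())
    return np.array(traj)

traj = integrate([0.02, 0.0, 1.0],                    # start near the +z minimum
                 h_ext=np.array([0.05, 0.0, 0.0]),    # in-plane assist field
                 k_u=1.0, alpha=0.005,
                 tau_dl=0.08, p=np.array([0.0, 1.0, 0.0]))
print("final m_z:", traj[-1, 2])
```

Sweeping the toy current strength tau_dl and the assist field h_ext in such a sketch is a convenient way to explore the qualitative distinction between stationary, switched, and oscillatory final states described in the abstract.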
Recent experiments have demonstrated that the magnetization in Mn\({}_{3}\)Sn can be switched between stable states using spin-orbit torque (SOT) in a bilayer setup of heavy metal (HM) and Mn\({}_{3}\)Sn [18; 19; 20; 21; 22; 23]. The charge current density required in these switching experiments was found to be approximately \(10^{6}-10^{7}\) A/cm\({}^{2}\)[19; 20; 21], which is smaller than or comparable to that in the case of most SOT-driven ferromagnets (\(\sim 10^{7}-10^{8}\) A/cm\({}^{2}\)) [24]. SOT-generated oscillatory dynamics of the order parameter in Mn\({}_{3}\)Sn have also been investigated experimentally and theoretically [19; 25; 26], where it was shown that the oscillation frequency could be tuned from 100's of MHz to GHz range for the charge current density of the order of \(10^{6}-10^{7}\) A/cm\({}^{2}\). Contrary to collinear AFMs, the magnitude of anomalous Hall conductivity in Mn\({}_{3}\)Sn has been reported to be quite large, roughly \(30-40\)\(\Omega^{-1}\cdot\mathrm{cm}^{-1}\) at 300 K [22; 27]. Furthermore, application of small in-plane uniaxial strain of \(\sim 0.1\%\) was recently shown to control the magnitude and sign of the anomalous Hall signal in Mn\({}_{3}\)Sn, owing to its piezomagnetic properties [28]. More recently, thin films of Mn\({}_{3}\)Sn, grown epitaxially on MgO(110)[001] substrate, were found to exhibit perpendicular magnetic anisotropy (PMA) due to the in-plane tensile strain arising from the lattice mismatch between Mn\({}_{3}\)Sn and MgO [22]. Finally, a TMR of approximately \(2\%\) was recently reported at room temperature in an all-antiferromagnetic tunnel junction comprising thin films of Mn\({}_{3}\)Sn/MgO/Mn\({}_{3}\)Sn [15]. These experimental developments in both the manipulation and detection of magnetization state in thin films of Mn\({}_{3}\)Sn make it a prospective candidate for future spintronic devices such as high-density antiferromagnetic memory and ultrafast nano-oscillator. In this work, we theoretically investigate the magnetic field-assisted SOT-driven dynamics in thin films of mon
2309.11158
Generalized Kohn-Sham Approach for the Electronic Band Structure of Spin-Orbit Coupled Materials
Spin-current density functional theory (SCDFT) is a formally exact framework designed to handle the treatment of interacting many-electron systems including spin-orbit coupling at the level of the Pauli equation. In practice, robust and accurate calculations of the electronic structure of these systems call for functional approximations that depend not only on the densities, but also on spin-orbitals. Here we show that the call can be answered by resorting to an extension of the Kohn-Sham formalism, which admits the use of non-local effective potentials, yet it is firmly rooted in SCDFT. The power of the extended formalism is demonstrated by calculating the spin-orbit-induced band-splittings of inversion-asymmetric MoSe$_2$ monolayer and inversion-symmetric bulk $\alpha$-MoTe$_2$. We show that quantitative agreement with experimental data is obtainable via global hybrid approximations by setting the fraction of Fock exchange at the same level which yields accurate values of the band gap. Key to these results is the ability of the method to self-consistently account for the spin currents induced by the spin-orbit interaction. The widely used method of refining spin-density functional theory by a second-variational treatment of spin-orbit coupling is unable to match our SCDFT results.
Jacques K. Desmarais, Giacomo Ambrogio, Giovanni Vignale, Alessandro Erba, Stefano Pittalis
2023-09-20T09:04:29Z
http://arxiv.org/abs/2309.11158v2
# Generalized Kohn-Sham Approach for the Electronic Band Structure of Spin-Orbit Coupled Materials ###### Abstract Spin-current density functional theory (SCDFT) is a formally exact framework designed to handle the treatment of interacting many-electron systems including spin-orbit coupling at the level of the Pauli equation. In practice, robust and accurate calculations of the electronic structure of these systems call for functional approximations that depend not only on the densities, but also on spin-orbitals. Here we show that the call can be answered by resorting to an extension of the Kohn-Sham formalism, which admits the use of non-local effective potentials, yet it is firmly rooted in SCDFT. The power of the extended formalism is demonstrated by calculating the spin-orbit-induced band-splittings of inversion-asymmetric MoSe\({}_{2}\) monolayer and inversion-symmetric bulk \(\alpha\)-MoTe\({}_{2}\). We show that quantitative agreement with experimental data is obtainable via global hybrid approximations by setting the fraction of Fock exchange at the same level which yields accurate values of the band gap. Key to these results is the ability of the method to self-consistently account for the spin currents induced by the spin-orbit interaction. The widely used method of refining spin-density functional theory by a second-variational treatment of spin-orbit coupling is unable to match our SCDFT results. pacs: 71.15.Mb, 71.15Rf, 31.15.E- ## I Introduction Since the early days of quantum mechanics, spin-orbit interactions have played a central role in our understanding of the electronic properties of atoms, molecules, and solids. The Dirac equation and its simplified version,[1] the two-component Pauli equation, were pinnacle achievements of that era, leading to a unified description of fine structure and Zeeman splittings in practically all systems known at the time, including those which would later turn out to be crucial to the semiconductor revolution (e.g., Ge, Si, and GaAs).[2] In this century, the discovery of nontrivial topological properties of the band structure of periodic solids, such as topological insulators and Weyl semimetals, whose extraordinary properties include quantized transport coefficients, magnetoelectric response, chiral anomalies, non-reciprocity, etc. has led to an explosion of interest in spin-orbit interactions. Indeed, by making the electronic wave functions complex even in the absence of a magnetic field, the spin-orbit interaction sets the scenario for the nontrivial response properties, band inversions and topological quantum numbers that underlie the above mentioned effects.[3; 4] In a parallel development, the emergence of spintronics has raised the interest in non-collinear spin textures both in real and in momentum space.[5] For instance, the phenomenon of spin-momentum locking - the emergence of a spin texture in momentum space - is responsible for remarkable magneto-transport effects, such as the unidirectional magnetoresistance.[6] In this context, it has become more pressing than ever to develop the tools of computational electronic structure so that they can be trusted to quantitatively predict the impact of spin-orbit interactions on properties such as spin-orbit splitting of bands, closing and re-openings of gaps at topological phase transitions, the positions of conical intersection (Dirac and Weyl points) in the Brillouin zone, the dispersion of Fermi arcs and the shape of non-collinear spin textures. 
Given the dominance of density functional theory (DFT) on the landscape of computational electronic structure, it seems natural to seek to include spin-orbit interaction effects through an extension of the DFT framework. What we mean by this is much more than simply including the spin-orbit interaction as an additional one-body term in the Kohn-Sham equation of DFT - an option that is already incorporated and widely available in existing electronic structure packages. Rather, in order to achieve quantitative accuracy and predictive power, we believe it is essential to include the effect of the spin-orbit interaction in the _many-body potentials_ that appear in a (suitably generalized) Kohn-Sham theory. The formal framework for implementing this program has been known for a long time: it is the U(1) \(\times\) SU(2)-invariant Spin Current DFT (SCDFT), see Refs. [7; 8]. This theory includes 16 external fields coupling to 16 densities, i.e., the scalar potential coupling to the particle density, the Zeeman magnetic field (three components) coupling only to the spin density, the charge vector potential (3 components) coupling only to the orbital current density, and, lastly, the SU(2) vector potential (a \(3\times 3\) tensor) coupling to the spin current densities. Depending on which functional form is invoked in the calculations, there are several convenient ways of organizing this extended set of densities and potentials; see for example Refs. [9; 10; 11]. Because it deals in a unified fashion with the magnetic interactions and the spin-orbit coupling, SCDFT appears to be the ideal framework to simulate a multitude of materials useful for magnetism, spintronics, orbitronics, valleytronics, and topologically non-trivial states as described above. In spite of its great promise, SCDFT has so far lagged behind other DFT and non-DFT methods in its application to real materials. The reason for this delay can be traced to the lack of good and transferable approximations for the exchange-correlation (xc) energy functional in terms of spin-current densities. In the last two decades, it has become increasingly evident that accurate calculations of the electronic structure, including in particular band gaps and band splittings, require functionals that depend on the densities not only explicitly (as in the traditional formulation of SCDFT) but also implicitly, through single-particle spin orbitals.[9; 10; 12; 13; 14; 15; 16; 17; 18] The emergence of orbital-dependent functionals began with the widespread practice of including exact exchange, or a fraction thereof, in the energy functional (the so-called "hybrid" functionals), and gained momentum with the development of "meta-GGA" functionals, in which the traditional set of densities is augmented by the inclusion of the (spin-)kinetic-energy density. Most importantly for SCDFT, it was realized that spin-orbital-dependent functionals are explicitly required in any non-trivial gauge-invariant formulation.[19] The problem with orbital functionals is that, because they are regarded as implicit nonlocal functionals of the density, they must be differentiated with respect to the densities in order to yield the Kohn-Sham potentials. This differentiation is difficult, as it involves the functional derivative of the orbitals with respect to the densities. 
The procedure is usually referred to as the "Optimized Potential Method" (OPM), and, while the resulting "Optimized Effective Potential" (OEP) is a legitimate local Kohn-Sham potential, the benefits of locality are wiped out by the complexity and costliness of the numerical treatment.[20; 21; 22; 23; 24] Experience in regular (Spin-)DFT has demonstrated that the cost of implementing spin-orbital functionals can be lowered, and the numerical treatment simplified, by switching to a _generalized_ Kohn-Sham (GKS) framework,[25; 26; 27] which admits the use of _non-local_ effective potentials, as naturally appear in hybrid functional forms. In fact, the key ideas of this approach are also used in calculations involving meta-GGA functionals.[22; 28; 29] In this method, as in the OPM, the xc potential is expressed as the sum of two parts: a functional of the spin-orbitals - typically, but not necessarily, a fraction of the exact exchange - plus a regular explicit functional of local densities. This simple shift in perspective has far-reaching consequences. The functional derivative of the explicitly orbital-dependent part of the functional yields a nonlocal, but simple potential - the _Fock potential_ in the case of exact exchange - while the functional derivative of the regular part yields a local potential as in the standard Kohn-Sham formalism. The resulting GKS equation combines the accuracy of exact nonlocal exchange with the flexibility of semilocal density functional approximations for the correlation energy. Crucially, band gaps calculated in this manner become more closely related to the fundamental gaps,[27; 30] since part of the derivative discontinuity of the exact functional is captured by the discontinuous dependence of the orbitals on band occupation. The rigorous theoretical foundation of the GKS approach is presented in Ref. [26]. Encouraged by these successes, in this paper we cast the SCDFT in the GKS framework and unambiguously demonstrate its usefulness by calculating properties such as band gaps and SOC-induced or -enhanced band splittings in materials of interest in spintronics and valleytronics (i.e., exhibiting Rashba-I and Rashba-II effects - more below). The calculations are implemented in the Crystal code,[31] which has recently enabled unprecedented real-world applications of SCDFT [32; 33; 34; 35]. We demonstrate that at the heart of the method's success is its ability to include the dependence of the effective many-body potentials on spin currents. Even when this dependence is only implicit (as discussed below), its inclusion is essential to obtain agreement with experimental results -- whereas conventional second-variational treatments of spin-orbit coupling fail. The paper is organized as follows: We start with an introductory section on the (regular) Kohn-Sham approach to SCDFT. We then describe the generalized Kohn-Sham approach and proceed to its application. We discuss first the prominent case of exact exchange, followed by an in-depth discussion of global hybrid forms in which we highlight the features which are peculiar to SCDFT. We demonstrate the effectiveness of the approach by reporting the results of calculations of valence-band splittings induced/enhanced by SOC in inversion-asymmetric single-layer, 2D, MoSe\({}_{2}\) with spin-splitting (Rashba-I effect) and inversion-symmetric bulk \(\alpha\)-MoTe\({}_{2}\) with spin-valley locking (Rashba-II effect). 
We then outline near-future developments, for a more sophisticated treatment of correlation effects beyond global hybrid functionals and regular semi-local density functional approximations. ## II Formal aspects ### Spin-Current Density Functional Theory In order to appreciate the key difference between SCDFT and Spin-DFT (SDFT, the most popular flavor of DFT), it is useful to start with the SDFT Hamiltonian: \[\hat{H}_{\rm SDFT}=\frac{1}{2}\int d^{3}r\ \hat{\Psi}^{\dagger}({\bf r})\left(-i\nabla\right)^{2}\hat{\Psi}({\bf r})+\int d^{3}r\ [\hat{n}({\bf r})v({\bf r})+\hat{m}^{a}({\bf r})B^{a}({\bf r})]+\hat{W}\;, \tag{1}\] and obtain the SCDFT Hamiltonian via the minimal substitution \(-i\nabla\rightarrow-i\nabla+\frac{1}{c}{\bf A}({\bf r})+\frac{1}{c}\sigma^{a}{\bf A}^{a}({\bf r})\): \[\hat{H}_{\rm SCDFT}=\frac{1}{2}\int d^{3}r\ \hat{\Psi}^{\dagger}({\bf r})\left[-i\nabla+\frac{1}{c}{\bf A}({\bf r})+\frac{1}{c}\sigma^{a}{\bf A}^{a}({\bf r})\right]^{2}\hat{\Psi}({\bf r})+\int d^{3}r\ [\hat{n}({\bf r})v({\bf r})+\hat{m}^{a}({\bf r})B^{a}({\bf r})]+\hat{W}\;. \tag{2}\] Eq. (2) accounts not only for a scalar multiplicative potential \(v({\bf r})\) and a multiplicative magnetic field \(B^{a}({\bf r})\) but it also includes a (charge-) vector potential \({\bf A}({\bf r})\) and a spin-vector potential \({\bf A}^{a}({\bf r})\). While \({\bf A}({\bf r})\) is useful to represent an external magnetic field, \({\bf A}^{a}({\bf r})\) is useful to represent the (one-body) spin-orbit couplings in the system.[8] These vector potentials may be viewed as "induction" fields, in the sense that they can induce spin currents in the systems on which they act. In Eqs. (1-2) and below, \(a=x,y,z\) is a Cartesian index over which summation is implied when repeated. Towards a density functionalization, we can expand the first term in Eq. (2) so as to group the kinetic term \[\hat{T}=\int d^{3}r\ \hat{\Psi}^{\dagger}({\bf r})\left(-\frac{\nabla^{2}}{2}\right)\hat{\Psi}({\bf r})\;, \tag{3}\] where \(\hat{\Psi}^{\dagger}({\bf r})=(\hat{\Psi}^{\dagger}_{\uparrow}({\bf r}),\hat{\Psi}^{\dagger}_{\downarrow}({\bf r}))\) denotes a two-component creation field operator (\(\uparrow\) and \(\downarrow\) refer to spin "up" and "down"), along with the interaction \[\hat{W}=\int d^{3}r\int d^{3}r^{\prime}\frac{\hat{\Psi}^{\dagger}({\bf r})\hat{\Psi}^{\dagger}({\bf r}^{\prime})\hat{\Psi}({\bf r}^{\prime})\hat{\Psi}({\bf r})}{2|{\bf r}-{\bf r}^{\prime}|}\;, \tag{4}\] yielding: \[\hat{H}_{\rm SCDFT}=\hat{T}+\hat{W}+\int d^{3}r\ \hat{n}({\bf r})\tilde{v}({\bf r})+\int d^{3}r\ \hat{m}^{a}({\bf r})\tilde{B}^{a}({\bf r})+\frac{1}{c}\int d^{3}r\ \hat{\bf j}({\bf r})\cdot{\bf A}({\bf r})+\frac{1}{c}\int d^{3}r\ \hat{\bf J}^{a}({\bf r})\cdot{\bf A}^{a}({\bf r})\;. \tag{5}\] Note that in Eq. (5) \[\tilde{v}=v+\frac{1}{2c^{2}}\left[{\bf A}\cdot{\bf A}+{\bf A}^{a}\cdot{\bf A}^{a}\right]\;, \tag{6}\] and \[\tilde{B}^{a}=B^{a}+\frac{1}{2c^{2}}{\bf A}\cdot{\bf A}^{a}\;. \tag{7}\] To simplify the notation, \(B^{a}\) includes \(\mu_{B}\) and \({\bf A}^{a}\) includes \(\frac{\mu_{B}}{2}\); in addition, we work in units such that \(e=1,\hbar=1,m=1\). Eq. 
(5) stresses that the coupling of the external potentials with the many-electron system is mediated by the particle-density operator \(\hat{n}=\hat{\Psi}^{\dagger}\hat{\Psi}\), the spin-density operator \(\hat{\vec{m}}=\hat{\Psi}^{\dagger}\vec{\sigma}\hat{\Psi}\) (\(\vec{\sigma}\) being the vector of Pauli matrices \(\sigma^{x},\sigma^{y},\sigma^{z}\)), the (paramagnetic) particle-current operator \(\hat{\bf j}=\frac{1}{2i}\left[\hat{\Psi}^{\dagger}\nabla\hat{\Psi}-\left(\nabla\hat{\Psi}^{\dagger}\right)\hat{\Psi}\right]\), and the (paramagnetic) spin-current operator \(\hat{\vec{\bf J}}=\frac{1}{2i}\left[\hat{\Psi}^{\dagger}\vec{\sigma}\nabla\hat{\Psi}-\left(\nabla\hat{\Psi}^{\dagger}\right)\vec{\sigma}\hat{\Psi}\right]\), respectively. Therefore, while SDFT only accounts for the particle and spin density self-consistently, SCDFT accounts for the particle and spin density as well as the particle and spin current self-consistently (we discuss more about this below). Given the external fields \(v\), \(B^{a}\), \({\bf A}\), and \({\bf A}^{a}\), the ground-state energy may then be determined by means of a constrained-search minimization:[36; 37] \[E=\min_{(n,\ \vec{m},\ {\bf j},\ \vec{\bf J})}\left\{F[n,\vec{m},{\bf j},\vec{\bf J}]+\int d^{3}r\ n({\bf r})\tilde{v}({\bf r})+\int d^{3}r\ m^{a}({\bf r})\tilde{B}^{a}({\bf r})+\frac{1}{c}\int d^{3}r\ {\bf j}({\bf r})\cdot{\bf A}({\bf r})+\frac{1}{c}\int d^{3}r\ {\bf J}^{a}({\bf r})\cdot{\bf A}^{a}({\bf r})\right\}\;, \tag{8}\] with \[F[n,\vec{m},{\bf j},\vec{\bf J}]=\min_{\Psi\rightarrow(n,\ \vec{m},\ {\bf j},\ \vec{\bf J})}\langle\Psi|\hat{T}+\hat{W}|\Psi\rangle\;. \tag{9}\] In Eq. (8), the inner minimization stated in Eq. (9) is carried out over all the antisymmetric many-electron wave functions yielding the prescribed set of densities, and the outer minimization is carried out with respect to all \(N\)-representable densities (see note at Ref. [38] for more details). Eq. (9) defines a universal functional of the densities and currents. The term "universal" (as usual) highlights the fact that its definition does not involve external potentials. The Kohn-Sham scheme in SCDFT invokes the non-interacting universal functional: \[T_{\rm KS}[n,\vec{m},{\bf j},\vec{\bf J}]=\min_{\Phi\rightarrow(n,\vec{m},{\bf j},\vec{\bf J})}\langle\Phi|\hat{T}|\Phi\rangle\;, \tag{10}\] which is obtained from Eq. (9) by setting \(\hat{W}=0\). Here and in the following, \(\Phi\) denotes a Slater determinant of \(N\) single-particle orbitals, as opposed to more general \(N\)-particle antisymmetric wave functions \(\Psi\). Crucially, assuming that the same set of densities is both interacting and non-interacting \(v\)-representable, one may further decompose \(F\) as follows: \[F[n,\vec{m},{\bf j},\vec{\bf J}]=T_{\rm KS}[n,\vec{m},{\bf j},\vec{\bf J}]+E_{\rm H}[n]+E_{\rm xc}[n,\vec{m},{\bf j},\vec{\bf J}]\;, \tag{11}\] in terms of the KS kinetic energy \(T_{\rm KS}[n,\vec{m},{\bf j},\vec{\bf J}]\), the Hartree energy \(E_{\rm H}[n]=\frac{1}{2}\iint d^{3}r\,d^{3}r^{\prime}\ \frac{n({\bf r})n({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}\), and a remainder, \(E_{\rm xc}[n,\vec{m},{\bf j},\vec{\bf J}]\) -- the xc-energy functional in SCDFT. Given \(E_{\rm xc}\), or an approximation thereof in practice, the problem of determining the ground-state energies of an interacting system is therefore translated into finding the ground state of a non-interacting system. 
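As a concrete illustration of the four families of densities introduced above, the following toy sketch (assuming a 1D, equally spaced real-space grid and a handful of occupied two-component spinors; it is not taken from the paper or from the Crystal code) assembles \(n\), \(\vec{m}\), \({\bf j}\), and \(\vec{\bf J}\) in the same way as the KS expressions (15)-(18) given below:

```python
import numpy as np

# Pauli matrices sigma_x, sigma_y, sigma_z
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def densities(phi, dx):
    """phi: (N_occ, 2, N_grid) occupied two-component spinors on a 1D grid.
    Returns the particle density n, spin density m (3 x grid), paramagnetic
    particle current j, and paramagnetic spin current J (3 x grid)."""
    dphi = np.gradient(phi, dx, axis=2)
    n = np.einsum('ksx,ksx->x', phi.conj(), phi).real
    m = np.array([np.einsum('ksx,st,ktx->x', phi.conj(), s, phi).real
                  for s in sigma])
    # (1/2i)(phi^dag grad phi - c.c.) = Im(phi^dag grad phi)
    j = np.einsum('ksx,ksx->x', phi.conj(), dphi).imag
    J = np.array([np.einsum('ksx,st,ktx->x', phi.conj(), s, dphi).imag
                  for s in sigma])
    return n, m, j, J

# Toy check: a single spin-up plane wave carries uniform n, m_z, and current j.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
phi = np.zeros((1, 2, x.size), complex)
phi[0, 0] = np.exp(1j * x) / np.sqrt(2 * np.pi)
n, m, j, J = densities(phi, x[1] - x[0])
```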
The KS equations in SCDFT have the form of single-particle Pauli equations including scalar, vector, and magnetic fields: [7; 8] \[\left[\frac{1}{2}\left(-i\nabla+\frac{1}{c}\mathbf{\mathcal{A}}_{\rm KS}\right)^{2}+\mathcal{V}_{\rm KS}\right]\Phi_{k}=\varepsilon_{k}\Phi_{k}\;, \tag{12}\] where \[\mathbf{\mathcal{A}}_{\rm KS} = \left(\mathbf{A}+\sigma^{a}\mathbf{\rm A}^{a}\right)+\left(\mathbf{\rm A}_{\rm xc}+\mathbf{\rm A}_{\rm xc}^{a}\right) \tag{13}\] \[= \mathbf{\mathcal{A}}+\mathbf{\mathcal{A}}_{\rm xc}\;,\] \[\mathcal{V}_{\rm KS} = v_{\rm H}+\left(v+v_{\rm xc}\right)+\sigma^{a}\left(B^{a}+B_{\rm xc}^{a}\right) \tag{14}\] \[+ \frac{1}{2c^{2}}\left[\mathbf{\mathcal{A}}^{2}-\mathbf{\mathcal{A}}_{\rm KS}^{2}\right]\;,\] in which \(\frac{1}{c}\mathbf{\rm A}_{\rm xc}({\bf r})=\frac{\delta E_{\rm xc}}{\delta{\bf j}({\bf r})}\) is an Abelian xc-vector potential, \(\frac{1}{c}\mathbf{\rm A}_{\rm xc}^{a}({\bf r})=\frac{\delta E_{\rm xc}}{\delta\mathbf{\rm J}^{a}({\bf r})}\) is the \(a\)-th component of a non-Abelian xc-vector potential, \(B_{\rm xc}^{a}({\bf r})=\frac{\delta E_{\rm xc}}{\delta m^{a}({\bf r})}\) is the \(a\)-th component of a xc-magnetic potential, \(v_{\rm xc}({\bf r})=\frac{\delta E_{\rm xc}}{\delta n({\bf r})}\) is a xc-scalar potential, and \(v_{H}({\bf r})=\int d^{3}r^{\prime}\,\frac{n({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}\) is the usual Hartree potential. The KS densities are obtained from the (occupied) KS spinors as follows: \[n_{\rm KS}({\bf r})=\sum_{k=1}^{N}\Phi_{k}^{\dagger}({\bf r})\Phi_{k}({\bf r})\;, \tag{15}\] already used in the expression of \(v_{\rm H}\), \[\vec{m}_{\rm KS}({\bf r})=\sum_{k=1}^{N}\Phi_{k}^{\dagger}({\bf r})\;\vec{\sigma}\;\Phi_{k}({\bf r})\;, \tag{16}\] \[{\bf j}_{\rm KS}({\bf r})=\frac{1}{2i}\sum_{k=1}^{N}\Phi_{k}^{\dagger}({\bf r})\left[\nabla\Phi_{k}({\bf r})\right]-\left[\nabla\Phi_{k}^{\dagger}({\bf r})\right]\Phi_{k}({\bf r})\;, \tag{17}\] and \[\vec{\bf J}_{\rm KS}({\bf r})=\frac{1}{2i}\sum_{k=1}^{N}\Phi_{k}^{\dagger}({\bf r})\vec{\sigma}\left[\nabla\Phi_{k}({\bf r})\right]-\left[\nabla\Phi_{k}^{\dagger}({\bf r})\right]\vec{\sigma}\Phi_{k}({\bf r})\;. \tag{18}\] By virtue of the non-interacting \(v\)-representability assumption, the _exact_\(E_{\rm xc}\) yields the exact interacting densities, which coincide with the KS densities: \(n_{\rm KS}\equiv n\), \(\vec{m}_{\rm KS}\equiv\vec{m}\), \({\bf j}_{\rm KS}\equiv{\bf j}\), and \(\vec{\bf J}_{\rm KS}\equiv\vec{\bf J}\). As argued in the Introduction, spin-orbital functionals can enable sufficiently general SCDFT applications. For determining the effective _local_ potentials from spin-orbital-dependent functionals, however, an extra set of integro-differential equations needs to be solved. Such a numerical task is subtle, [20; 21; 22; 23; 24] and it usually exceeds the cost of more straightforward generalized-gradient approximations (GGA). Fortunately, the cost involved in the application of spin-orbital functionals can be lowered, and the corresponding numerical implementations can also be simplified, by invoking an appropriate _exact_ generalization of the KS approach. This is usually handled by admitting _partially interacting_ KS systems, which exhibit _non-local_ effective potentials. [39; 26] Below, we spell out and analyze the case for SCDFT. ### From Regular to Generalized-KS Systems in SCDFT GKS systems can be introduced in SCDFT in a way that is similar to (S)DFT by noting that the minimization in Eq. 
(8) can _equivalently_ be performed by invoking different splittings of \(F[n,\vec{m},{\bf j},\vec{\bf J}]\) and a different minimization procedure. In detail, let us consider: \[F[n,\vec{m},{\bf j},\vec{\bf J}]=F_{\rm GKS}[n,\vec{m},{\bf j},\vec{\bf J}]+E_{\rm Hxc}^{\rm GKS}[n,\vec{m},{\bf j},\vec{\bf J}]\;, \tag{19}\] where \[F_{\rm GKS}[n,\vec{m},{\bf j},\vec{\bf J}]=\min_{\Phi\rightarrow(n,\vec{m},{\bf j},\vec{\bf J})}\langle\Phi|\hat{O}_{\rm GKS}|\Phi\rangle \tag{20}\] is the analogue of Eq. (10), but here \(\hat{O}_{\rm GKS}\) may differ from \(\hat{T}\) by including some interaction (more below). Next, note that \[E = \min_{(n,\vec{m},{\bf j},\vec{\bf J})}\ \Big\{\min_{\Phi\to(n,\vec{m},{\bf j},\vec{\bf J})}\langle\Phi|\hat{O}_{\rm GKS}|\Phi\rangle+E^{\rm GKS}_{\rm Hxc}\Big[n,\vec{m},{\bf j},\vec{\bf J}\Big] \tag{21}\] \[+ \int d^{3}r\ n({\bf r})\tilde{v}({\bf r})+\int d^{3}r\ m^{a}({\bf r})\tilde{B}^{a}({\bf r})+\frac{1}{c}\int d^{3}r\ {\bf j}({\bf r})\cdot{\bf A}({\bf r})+\frac{1}{c}\int d^{3}r\ {\bf J}^{a}({\bf r})\cdot{\bf A}^{a}({\bf r})\Big\}\] \[= \min_{\Phi}\ \Big\{\langle\Phi|\hat{O}_{\rm GKS}|\Phi\rangle+E^{\rm GKS}_{\rm Hxc}\Big[n[\Phi],\vec{m}[\Phi],{\bf j}[\Phi],\vec{\bf J}[\Phi]\Big]\] \[+ \int d^{3}r\ n[\Phi]({\bf r})\tilde{v}({\bf r})+\int d^{3}r\ m^{a}[\Phi]({\bf r})\tilde{B}^{a}({\bf r})+\frac{1}{c}\int d^{3}r\ {\bf j}[\Phi]({\bf r})\cdot{\bf A}({\bf r})+\frac{1}{c}\int d^{3}r\ {\bf J}^{a}[\Phi]({\bf r})\cdot{\bf A}^{a}({\bf r})\Big\}\] may be admissible, provided interacting and non-interacting \(N\)- and \(v\)-representability hold true. In practice, the form of GKS schemes depends upon the details of \(\hat{O}_{\rm GKS}\) and \(E^{\rm GKS}_{\rm Hxc}\). A prominent example is \(\hat{O}_{\rm GKS}\equiv\hat{T}+\alpha\hat{W}\), where \(\alpha\in(0,1]\) _turns on_ the interaction in the GKS reference system -- yet the minimization is restricted to single Slater determinants \(\Phi\) only -- and \(E^{\rm GKS}_{\rm Hxc}\equiv(1-\alpha)E^{\rm DFA}_{\rm Hx}+E^{\rm DFA}_{\rm c}\); i.e., we mix Fock exchange with (standard) LDAs or GGAs, in the form of typical global hybrid approximations. Approximations of this kind can fix, at least partially, the self-interaction error of DFAs. They have been justified by the necessity of mimicking an exact (almost) semi-local xc-hole by the combination of a non-local exact-exchange hole with an approximate (semi-)local correlation hole. Global hybrids, and refinements thereof, have been guided by the (so-called) adiabatic-connection integration.[25; 39; 40; 41; 42] But there is no prescription for choosing an optimal value of the hybridization parameter \(\alpha\) that works for all systems. One needs to devise ways to find optimal values, driven by first-principles calculations,[43; 44; 45] or to consider more sophisticated forms and procedures of hybridization.[46; 47; 48; 30] Furthermore, the optimization task always targets specific observables; most commonly, energy gaps. These aspects have received considerable attention and application in (S)DFT.
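To make the role of the mixing parameter concrete, the following toy sketch (our illustration; all energy values are invented placeholders) assembles a global-hybrid xc energy of the form \(E^{\rm GKS}_{\rm xc}(\alpha)=\alpha E^{\rm Fock}_{\rm x}+(1-\alpha)E^{\rm DFA}_{\rm x}+E^{\rm DFA}_{\rm c}\), recovering a pure DFA at \(\alpha=0\) and Hartree-Fock-like exchange at \(\alpha=1\):

```python
def hybrid_xc_energy(e_x_fock, e_x_dfa, e_c_dfa, alpha):
    """Global-hybrid xc energy: alpha * Fock exchange
    + (1 - alpha) * DFA exchange + full DFA correlation."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * e_x_fock + (1.0 - alpha) * e_x_dfa + e_c_dfa

# Placeholder energies; real values would come from a converged calculation.
e_x_fock, e_x_dfa, e_c_dfa = -1.05, -0.98, -0.12
for alpha in (0.0, 0.15, 1.0):   # 0.15 is the fraction used later for MoSe2
    e_xc = hybrid_xc_energy(e_x_fock, e_x_dfa, e_c_dfa, alpha)
    print(f"alpha = {alpha:.2f}  E_xc = {e_xc:.4f}")
```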
_In this work we show that switching from Spin-DFT to SCDFT is crucial for determining an optimal mixing that can work both for band gaps and band splittings of spin-orbit coupled materials._ Hence, let us start with the functional form \[E[\Phi] = T_{\rm GKS}[\Phi]+\alpha E^{\rm Fock}_{\rm x}[\Phi]+E^{\rm DFA}_{\rm Hxc}\Big[n[\Phi],\vec{m}[\Phi],{\bf j}[\Phi],\vec{\bf J}[\Phi]\Big]+\int d^{3}r\ n[\Phi]({\bf r})\tilde{v}({\bf r})+\int d^{3}r\ m^{a}[\Phi]({\bf r})\tilde{B}^{a}({\bf r})\] \[+ \frac{1}{c}\int d^{3}r\ {\bf j}[\Phi]({\bf r})\cdot{\bf A}({\bf r})+\frac{1}{c}\int d^{3}r\ {\bf J}^{a}[\Phi]({\bf r})\cdot{\bf A}^{a}({\bf r})\;,\] where \[E^{\rm Fock}_{\rm x}[\Phi]\equiv-\frac{1}{2}\int d^{3}r\int d^{3}r^{\prime}\ \frac{{\rm Tr}\,\{\Gamma({\bf r},{\bf r}^{\prime})\Gamma({\bf r}^{\prime},{\bf r})\}}{|{\bf r}-{\bf r}^{\prime}|}\;, \tag{22}\] is the Fock exchange, here evaluated with GKS spinors, and \[\Gamma({\bf r},{\bf r}^{\prime})\equiv\sum_{k=1}^{N}\Phi_{k}({\bf r})\Phi_{k}^{\dagger}({\bf r}^{\prime}) \tag{23}\] is the one-electron reduced density matrix (1RDM). In Eq. (22), Tr denotes the trace over spin. As announced, we shall consider "typical" global-hybrid forms. By "typical", we mean forms that mix a fraction of Fock exchange, \(E^{\rm Fock}_{\rm x}[\Phi]\), with GGAs (or lower-rung approximations). Note that GGAs in SCDFT (as in Spin-DFT) may depend on all the basic variables and their gradients, but not on other quantities (e.g. kinetic-energy densities). The corresponding (generalized) KS equation then reads as follows: \[\hat{H}_{\rm GKS} = \frac{1}{2}\left(-i\nabla+\frac{1}{c}\mathbf{\mathcal{A}}_{\rm GKS}\right)^{2}+\alpha\hat{\mathcal{V}}^{\rm NL}_{\rm x}+\mathcal{V}_{\rm GKS}\;, \tag{24}\] where \[\mathbf{\mathcal{A}}_{\rm GKS}=\mathbf{\mathcal{A}}+(1-\alpha)\mathbf{\mathcal{A}}_{\rm x}^{\rm DFA}+\mathbf{\mathcal{A}}_{\rm c}^{\rm DFA} \tag{25}\] with \[\mathbf{\mathcal{A}}_{\rm x/c}^{\rm DFA}={\bf A}_{\rm x/c}^{\rm DFA}+\sigma^{a}{\bf A}_{\rm x/c}^{\rm DFA,a}\;, \tag{26}\] and \[\hat{\mathcal{V}}^{\rm NL}_{\rm x}\Phi_{k}=\frac{\delta E^{\rm Fock}_{\rm x}[\Phi]}{\delta\Phi_{k}^{\dagger}}=-\int d^{3}r^{\prime}\ \frac{\Gamma({\bf r},{\bf r}^{\prime})\Phi_{k}({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}\;; \tag{27}\]
the spin-orbitals would have generated additional terms from the explicit dependence on the (spin-)kinetic-energy density -- yielding terms like the ones already accounted for in (non-collinear) SDFT.[49] It is expedient to contrast the GKS equations including exact exchange, Eqs. (24)-(30), against the exact-exchange approximation of the regular KS approach. In the present GKS scheme, the non-local Fock potential is directly given in terms of \(\Gamma({\bf r},{\bf r}^{\prime})\) [see Eq. (27)]. On the other hand, in the regular KS approach to SCDFT, exact exchange leads to the 16 integro-differential OEP equations that produce 16 local exchange potentials in response to variations in 16 basic density components.[9; 10] At the present stage of development, the determination of local exact-exchange potentials is both numerically more involved and more costly than the evaluation of the non-local Fock potential. ### Dependence of Fock Exchange on the SCDFT Variables The framework of GKS in SCDFT makes it apparent that \(E_{\rm x}^{\rm Fock}[\Phi]\) and, thus, \(\hat{\mathcal{V}}_{\rm x}^{\rm NL}\) are valid (spinor-)orbital functionals of SCDFT. It may seem unsatisfactory, however, that their dependence on the basic variables of SCDFT cannot be read off explicitly. To this end, let us look more closely at the short-range behaviour of the Fock energy density. Coming back to Eq. (22), we employ the shorthand notation: \[Q_{\rm x}({\bf r},{\bf r}^{\prime})={\rm Tr}\left\{\Gamma({\bf r},{\bf r}^{\prime})\Gamma({\bf r}^{\prime},{\bf r})\right\}\;, \tag{31}\] and change integration variables by introducing the interparticle separation \({\bf u}\): \[E_{\rm x}^{\rm Fock}[\Phi] = -\frac{1}{2}\int d^{3}r\int d^{3}u\ \frac{Q_{\rm x}({\bf r}+{\bf u}/2,{\bf r}-{\bf u}/2)}{u}\;.\] We recall that the quantity inside the trace in Eq. (31) is a \(2\times 2\) matrix in spin space, and may thus be decomposed in the basis \(I\), \(\sigma^{x},\sigma^{y},\sigma^{z}\). Next, a Taylor expansion of the spherical average around \({\bf r}\), \(\langle Q_{\rm x}\rangle\), to second order in \(u\) gives: \[\langle Q_{\rm x}({\bf r},u)\rangle \sim \frac{\left[n^{2}+\vec{m}\circ\vec{m}\right]}{2}+\frac{u^{2}}{6}\Bigg[\left(2n\tau-{\bf j}\cdot{\bf j}-\frac{n\nabla^{2}n}{4}\right)+\left(2\vec{m}\circ\vec{\tau}-\vec{\bf J}\odot\vec{\bf J}-\frac{\vec{m}\circ\nabla^{2}\vec{m}}{4}\right)\Bigg]+\mathcal{O}\left(u^{4}\right)\;. \tag{32}\] Here, "\(\circ\)" denotes a contraction w.r.t. spin indices, and "\(\odot\)" denotes a double contraction w.r.t. spin and real-space indices, \[\tau({\bf r})=\frac{1}{2}\sum_{k=1}^{N}\left(\nabla\Phi_{k}^{\dagger}({\bf r})\right)\cdot\left(\nabla\Phi_{k}({\bf r})\right)\;, \tag{33}\] is the kinetic-energy density, and \[\vec{\tau}({\bf r})=\frac{1}{2}\sum_{k=1}^{N}\left(\nabla\Phi_{k}^{\dagger}({\bf r})\right)\cdot\left[\vec{\sigma}\Big(\nabla\Phi_{k}({\bf r})\Big)\right] \tag{34}\] is the _spin_-kinetic-energy density of the occupied GKS spinors. Eq. (32) shows that all basic variables of SCDFT contribute -- via the spinors -- to the Fock exchange energy density at short range and, thus, they also contribute to the corresponding non-local potential. Last but not least, notice that Eq.
(32) contains the natural combinations of the local densities which satisfy the U(1)\(\times\)SU(2) gauge invariance required by the SCDFT framework.[7; 8; 19] ## III Further Analyses and Results In what follows we shall consider only systems with _time-reversal symmetric_ ground states; i.e., states with vanishing magnetization (\(\vec{m}=0\)) and vanishing particle current (\({\bf j}=0\)). Yet, the spin currents (under the action of SOC) may not vanish (\(\vec{\bf J}\neq 0\)). We also restrict ourselves to global hybrids that mix a fraction of exchange with DFAs of the form: \[E_{\rm xc}^{\rm GKS}[\Phi]=\alpha E_{\rm x}^{\rm Fock}[\Phi]+(1-\alpha)E_{\rm x}^{\rm GGA}[n[\Phi]]+E_{\rm c}^{\rm GGA}[n[\Phi]]\;. \tag{35}\] Eq. (35) satisfies U(1)\(\times\)SU(2) gauge invariance and -- as clarified in Sec. II.3 -- does contain the spin currents, implicitly, via Fock exchange. Importantly, we show below that using Eq. (35) via SCDFT allows us to _significantly_ improve upon the results of the widespread SDFT+SOC@SV approach. We recall that in this popular approach, SOC is accommodated via first-order perturbation theory: once an SDFT calculation (without SOC) is fully converged (in a "first-variational" step), SOC is then added to the SDFT KS Hamiltonian in a second step, i.e. in the "second-variational" step (SV); the SDFT+SOC@SV Hamiltonian is thus diagonalized in the basis of SDFT orbitals.[50] Notice that in this approach the spin currents are not fed back into (any of) the effective potentials. We perform calculations with the Crystal23 package.[31] The calculations employ global-hybrid functionals, with the PBE generalized-gradient approximation for \(E_{\rm x/c}^{\rm DFA}\).[51] Computational details are reported in the supplementary material[52] (see also Refs. 53-65 therein). ### Fundamental Band Gaps First, we apply our GKS-SCDFT framework to the estimation of band gaps and SOC-induced/enhanced band splittings near the top of the valence band of layered molybdenum dichalcogenides: the inversion-asymmetric hexagonal single-layer MoSe\({}_{2}\) and the inversion-symmetric hexagonal bulk \(\alpha\)-MoTe\({}_{2}\) (space group \(P6_{3}/mmc\)). We note that, while in principle GKS eigenvalues \(\varepsilon_{k}({\bf k})\) are not representative of true electronic energy levels, an exception can be made for the top of the valence band. Indeed, this band can be exactly related to the experimentally observable ionization energy of the material (for both spin-up and spin-down electrons) by invoking the SDFT equivalent of Koopmans' theorem.[27; 67] The generalization to SCDFT is straightforward. We first discuss the electronic band gaps \(E_{g}\) of the layered molybdenum dichalcogenides. Table 1 reports \(E_{g}\) values (in eV) for the two systems. Precise experimental values are available for the \(\alpha\)-MoTe\({}_{2}\) 3D crystal, while a larger range of values has been reported for 2D single-layer MoSe\({}_{2}\). For each system, computed values are reported from the calculation without SOC (SDFT) and from calculations including SOC with both an SV and an SCDFT treatment. Computed values are reported for a fraction of Fock exchange \(\alpha=0.15\) for MoSe\({}_{2}\) and \(\alpha=0.1\) for \(\alpha\)-MoTe\({}_{2}\). Below, we show that these values of \(\alpha\) also give agreement with the experiments on band splittings via SCDFT calculations. The same level of agreement is not matched via SDFT+SOC@SV calculations. Figure 1: SOC-induced/enhanced band splitting near the top of the valence band of A) the MoSe\({}_{2}\) single layer, B) the \(\alpha\)-MoTe\({}_{2}\) crystal. Bands obtained from SDFT calculations (without SOC) are in black; those from SDFT+SOC@SV, obtained by correcting the SDFT results by including SOC in a second-variational step, are in yellow; and those obtained from the present work, which accounts for SOC self-consistently, are in blue. The mixing parameter between Fock exchange and the PBE xc functional was set at \(\alpha=0.15\) for MoSe\({}_{2}\) and \(\alpha=0.1\) for \(\alpha\)-MoTe\({}_{2}\) in these calculations. Band-structure images are produced with the CRYSTALpytools Python interface to Crystal.[66] \begin{table} \begin{tabular}{c c c} \hline \hline & MoSe\({}_{2}\) & MoTe\({}_{2}\) \\ SDFT & 2.11 & 1.07 \\ SDFT+SOC@SV & 2.02 & 1.06 \\ SCDFT & 1.99 & 1.05 \\ Exp. & 1.6-2.3 & 1.03 \\ \hline \hline \end{tabular} \end{table} Table 1: Fundamental band gap \(E_{g}\) (in eV) of the two systems here considered. Experimental data are from Refs. 68-72. Computed values are reported for \(\alpha=0.15\) for MoSe\({}_{2}\) and \(\alpha=0.1\) for \(\alpha\)-MoTe\({}_{2}\) from SDFT, SDFT+SOC@SV, and SCDFT. ### Band Splittings #### iii.2.1 Band splittings: Preliminaries Let us discuss formal aspects of valence-band splittings in the presence of SOC and different symmetry constraints. The systems here considered preserve time-reversal symmetry (TRS): \[\varepsilon_{k}^{\uparrow}({\bf k})=\varepsilon_{k}^{\downarrow}(-{\bf k})\;, \tag{36}\] where \(\varepsilon_{k}\) are the energy values of band \(k\) at different points of the first Brillouin zone (FBZ). Let us recall that space-inversion symmetry (SIS) results in the following constraint on the band structure: \[\varepsilon_{k}^{\sigma}({\bf k})=\varepsilon_{k}^{\sigma}(-{\bf k})\;. \tag{37}\] TRS and SIS together imply bands which are doubly degenerate in spin. The inclusion of SOC makes the Hamiltonian spin-dependent; correspondingly, spin-up and spin-down states feel different potentials and split, if allowed by symmetry. _The single-layer MoSe\({}_{2}\) system_ preserves TRS but breaks SIS, which, in the presence of SOC, leads to possible spin-splittings of bands that would otherwise be doubly degenerate at the SDFT level: \[\varepsilon_{k}^{\uparrow}({\bf k})\neq\varepsilon_{k}^{\downarrow}({\bf k})\;. \tag{38}\] In uniaxial (or low-dimensional) systems, such as 2D hexagonal MoSe\({}_{2}\), the spin-splittings are embodied by the Rashba Hamiltonian (Rashba-I effect).[73] Figure 1 A) shows such a spin-splitting at the high-symmetry point K of the FBZ and along the K-\(\Gamma\) and K-M paths. At the SDFT level (black line), the top valence band is doubly degenerate. The spin degeneracy is lifted by SOC according to Eq. (38); for instance, see the SCDFT description (blue lines). _The \(\alpha\)-MoTe\({}_{2}\) hexagonal crystal_ is characterized by MoTe\({}_{2}\) layers stacked along the \({\bf c}\) crystallographic axis, separated by van der Waals gaps. As both TRS and SIS are preserved, the combination of Eqs. (36) and (37) leads to: \[\varepsilon_{k}^{\uparrow}({\bf k})=\varepsilon_{k}^{\downarrow}({\bf k})\;, \tag{39}\] so that all bands are necessarily spin degenerate. Therefore, in the case of \(\alpha\)-MoTe\({}_{2}\), SOC-enhanced band splittings are related to the dipole field of the locally asymmetric Mo crystallographic sites.
This so-called Rashba-II effect results in the appearance of spatially localized "hidden spin valleys" associated with the band splittings.[74; 75] Figure 1 B) shows such an enhanced band splitting around K at the top of the valence band. At the SDFT level (black lines), the two top valence bands are spin doubly degenerate and are already split. With SOC, the bands are still doubly degenerate according to Eq. (39) but get further split (see the SCDFT blue lines). #### iii.2.2 Band splittings: Results All SOC-induced/enhanced band splittings discussed above are experimentally measured and allow for the quantitative assessment of different computational treatments of SOC. We start by discussing the Rashba-I type SOC-induced spin-splitting in 2D single-layer MoSe\({}_{2}\). A graphical representation (side and top views) of the atomic structure of this system is given in Figure 2. The splitting occurs at the K point of the FBZ of the system and has been measured by angle-resolved photoelectron spectroscopy (ARPES) experiments (0.180-0.185 eV).[76; 77; 78] Figure 2 reports the computed spin-splittings as a function of \(\alpha\) (i.e. the fraction of Fock exchange). Figure 2: Rashba-I type SOC-induced spin-splitting at the K point of the FBZ of 2D single-layer MoSe\({}_{2}\). At the SDFT level (black line), SOC is not included. Yellow and blue lines describe computed spin-splittings from different treatments of SOC as a function of \(\alpha\) (i.e. fraction of Fock exchange): SDFT+SOC second-variational (yellow line) and SCDFT (blue line), respectively. Experimental data (red lines) are taken from Refs. [76; 77; 78]. The atomic structure of the system is also shown. The following is observed: (i) The black line reports the SDFT results: _no_ spin-splitting is observed, as expected; (ii) The yellow line describes the results from the one-shot second-variational treatment of SOC (SDFT+SOC@SV). A value of 0.14 eV is obtained, which significantly underestimates the experimental values, (almost) independently of \(\alpha\); (iii) The blue line shows the results from SCDFT calculations. The experimental band splittings are reproduced at values of \(\alpha\) in the range 14-17%. Correspondingly, the self-consistent treatment of the spin currents is found to contribute 22% of the total SOC-induced splitting. Next, we discuss the Rashba-II type SOC-enhanced band splitting in bulk \(\alpha\)-MoTe\({}_{2}\). A graphical representation of the atomic structure of this system is given in Figure 3. Here, two spin doubly-degenerate bands near the top of the valence band are already split at the SDFT level (black lines) at the K point of the FBZ. This splitting further widens upon inclusion of SOC, by an extent that depends on how the spin currents are treated. The splitting has been measured by optical experiments (0.30-0.34 eV).[79; 75; 80] Figure 3 compares the experimental values with computed band splittings from different treatments of SOC as a function of \(\alpha\) (i.e. the fraction of Fock exchange). We note that, for this system, the experimental values are more significantly spread, which makes a quantitative assessment of the different theoretical approaches more difficult. However, the following is observed: (i) SDFT values visibly underestimate the experimental results; (ii) The SDFT+SOC@SV results are better than the SDFT results, as expected; (iii) The slope of the SDFT+SOC@SV results, however, is significantly different from the slope of the SCDFT results.
As a consequence, agreement for the band splittings is obtained at an \(\alpha\) which does _not_ yield a band gap in agreement with experiment. Indeed, the SV calculation at a fraction \(\alpha=0.2\) provides a splitting of 0.30 eV, but the band gap is then much too large, at 1.47 eV; (iv) The experimental band splitting is reproduced via SCDFT calculations at values of \(\alpha\) in the range 6-15%. The spin-current contribution amounts to about 20% of the total band splitting at a fraction \(\alpha=10\%\). Thus, only SCDFT calculations allow for simultaneous agreement with experiment on both the fundamental band gaps and the band splittings. Before concluding, we should stress that the difference in the slopes between the yellow and blue lines in Fig. 2 and Fig. 3 can unambiguously be attributed to the different treatment of the spin currents. Such a dependence is implicitly encoded in the Fock exchange, and it can be exploited by taking as input spinors derived under the action of SOC. SDFT, however, neglects SOC from the outset, so spin currents vanish in the corresponding solutions. SDFT+SOC@SV accounts for SOC but evaluates Fock exchange at the level of SDFT spinors; thus, after the second-variational step, the spin currents do not vanish but are not fed back into the calculation. SCDFT, by construction, evaluates Fock exchange under the action of SOC; thus spin currents can drive the convergence toward more accurate _self-consistent_ results. ### Near-future road-map for GKS-SCDFT The results illustrated above show that GKS-SCDFT is readily useful. Two questions can be posed, however: (i) Will it be possible to get rid of the empiricism involved in the determination of \(\alpha\), the "optimal" fraction of exchange; or, we may ask, can the fraction be determined self-consistently in SCDFT without having to resort to other (computationally more demanding) methodologies? (ii) Will it be possible to derive functional approximations with an explicit dependence on the spin current? We foresee that the answer to both questions is likely to be positive. Question (i) may be resolved by upgrading a very recent development for optimally-tuned range-separated hybrids,[30] which has been shown to work for both molecular and periodic materials. Question (ii) may be answered by invoking meta-GGAs, which can satisfy U(1)\(\times\)SU(2) gauge invariance.[19] ## IV Conclusions We have put forward a generalization of the Kohn-Sham formalism (GKS), which admits the use of non-local effective potentials firmly rooted in SCDFT. This formulation is the analogue of the popular GKS formulation of (Spin-)DFT.[26] Figure 3: As Figure 2 but for the Rashba-II type SOC-enhanced band splitting at the K point of the FBZ of bulk \(\alpha\)-MoTe\({}_{2}\). Experimental data are taken from Refs. [75; 79; 80]. The atomic structure of the system is also shown. Here, we have spelled out and analyzed the novel and subtle aspects that are uniquely brought forth by the SCDFT framework. We have demonstrated via applications that GKS-SCDFT readily allows us to obtain results beyond the state of the art in electronic structure calculations for spin-orbit coupled materials. By considering time-reversal symmetric spin-orbit coupled states, we have demonstrated that the dependence of the energy functional on spin currents is important even when it is only implicit, as in the prominent case of Fock exchange.
Global hybrid approximations can yield significantly more accurate results when used in GKS-SCDFT calculations rather than in perturbative SDFT+SOC calculations. In particular, we have applied GKS-SCDFT to the evaluation of band gaps _and_ SOC-induced band splittings in materials of great interest in spintronics and valleytronics, exhibiting the Rashba-I and Rashba-II effects. At the level of the global hybrid approximations, we have shown that by applying the self-consistent SCDFT treatment of spin-orbit interactions one can find an optimal fraction of Fock exchange which works well for both the fundamental band gaps _and_ the SOC-induced or -enhanced band splittings. We have shown that the widely used method of refining Spin-DFT results via a second-variational treatment of SOC can fail to reproduce the experimental results; superior agreement can be achieved by switching from SDFT to full-fledged SCDFT calculations. Efforts, in the near future, will be devoted to reducing the empiricism in finding the "optimal" fraction of Fock exchange. We believe that optimally-tuned range-separated hybrids offer, presently, a valid and very promising option.[30] Furthermore, currents may be included in the density functional approximations explicitly via the first-principles procedures reported in Ref. [19]. Most importantly, already at this stage of development, the GKS approach to Spin-Current DFT can offer significant improvements in the calculation of the electronic structure of challenging spin-orbit coupled materials. ###### Acknowledgements. This research has received funding from the Project CH4.0 under the MUR program "Dipartimenti di Eccellenza 2023-2027" (CUP: D13C22003520001). GV was supported by the Ministry of Education, Singapore, under its Research Centre of Excellence award to the Institute for Functional Intelligent Materials (I-FIM, project No. EDUNC-33-18-279-V12). We are grateful to Stephen Dale for a reading of the manuscript.
2309.06424
Unveiling the potential of large language models in generating semantic and cross-language clones
Semantic and Cross-language code clone generation may be useful for code reuse, code comprehension, refactoring and benchmarking. OpenAI's GPT model has potential in such clone generation as GPT is used for text generation. When developers copy/paste codes from Stack Overflow (SO) or within a system, there might be inconsistent changes leading to unexpected behaviours. Similarly, if someone possesses a code snippet in a particular programming language but seeks equivalent functionality in a different language, a semantic cross-language code clone generation approach could provide valuable assistance. In this study, using SemanticCloneBench as a vehicle, we evaluated how well the GPT-3 model could help generate semantic and cross-language clone variants for a given fragment.We have comprised a diverse set of code fragments and assessed GPT-3s performance in generating code variants.Through extensive experimentation and analysis, where 9 judges spent 158 hours to validate, we investigate the model's ability to produce accurate and semantically correct variants. Our findings shed light on GPT-3's strengths in code generation, offering insights into the potential applications and challenges of using advanced language models in software development. Our quantitative analysis yields compelling results. In the realm of semantic clones, GPT-3 attains an impressive accuracy of 62.14% and 0.55 BLEU score, achieved through few-shot prompt engineering. Furthermore, the model shines in transcending linguistic confines, boasting an exceptional 91.25% accuracy in generating cross-language clones
Palash R. Roy, Ajmain I. Alam, Farouq Al-omari, Banani Roy, Chanchal K. Roy, Kevin A. Schneider
2023-09-12T17:40:49Z
http://arxiv.org/abs/2309.06424v1
# Unveiling the potential of large language models in generating semantic and cross-language clones ###### Abstract Semantic and Cross-language code clone generation may be useful for code reuse, code comprehension, refactoring and benchmarking. OpenAI's GPT model has potential in such clone generation as GPT is used for text generation. When developers copy/paste codes from Stack Overflow (SO) or within a system, there might be inconsistent changes leading to unexpected behaviours. Similarly, if someone possesses a code snippet in a particular programming language but seeks equivalent functionality in a different language, a semantic cross-language code clone generation approach could provide valuable assistance. In this study, using SemanticCloneBench as a vehicle, we evaluated how well the GPT-3 model could help generate semantic and cross-language clone variants for a given fragment. We have comprised a diverse set of code fragments and assessed GPT-3's performance in generating code variants. Through extensive experimentation and analysis, where 9 judges spent 158 hours to validate, we investigate the model's ability to produce accurate and semantically correct variants. Our findings shed light on GPT-3's strengths in code generation, offering insights into the potential applications and challenges of using advanced language models in software development. Our quantitative analysis yields compelling results. In the realm of semantic clones, GPT-3 attains an impressive accuracy of 62.14% and 0.55 BLEU score, achieved through few-shot prompt engineering. Furthermore, the model shines in transcending linguistic confines, boasting an exceptional 91.25% accuracy in generating cross-language clones. Language Models, Software Clone, Semantic Clone, Cross-language Clone, GPT, Semantic-CloneBench, Software Engineering ## I Introduction Clones are almost identical code copies. One of the most prominent causes of clones is developers copying and pasting code between software projects. Research indicates that 7-23% of software systems are recycled from previous projects [1][2]. Semantic clones promote code reuse, consistency, and productivity throughout the software development lifecycle, giving developers a strategic advantage. Using replicas allows developers to focus on innovation and complex issues, leading to faster development cycles and better software solutions [3][4][5]. Developers can improve software quality, development costs, risks, bug prevention, and detection by monitoring and restructuring clones [6]. The dynamic world of software development requires constant innovation and efficiency. Code variant creation, a complicated process that reuses and duplicates code segments to overcome initial development limits, is vital to achieving these goals. As software systems develop in breadth and complexity, efficient code use and adaptation become more important. Research indicates that programmers repeat type-3/type-4 or semantic clones in commits of each project at a rate of 6.32% to 8.38% [7]. Additionally, developers commonly copy/paste and reuse throughout the software system, which may cause issues/introduce bugs [8]-[10]. Semantic clones or variants are essential to prevent inconsistencies within software systems. Wu et al. [11] searched for Java files using "stackoverflow" and manually inspected the results. The researchers found that in 31.5% of their samples, developers had to modify SO source code for compatibility with their projects. 
Additionally, 35.5% of software engineers used SO posts for reference rather than copying code samples. It is evident that software developers often copy/paste and reuse code fragments during development, or attempt to reuse code fragments from crowdsourced forums such as SO for certain functionality. However, the code fragment at hand may not be their first choice for various reasons, ranging from the complexity of its structure to the potential of having bugs in it [8][9]. Furthermore, directly copying code fragments and then adapting them is associated with introducing inconsistent changes in systems, resulting in severe unexpected behaviour of the software system [8], [9], and developers may therefore be looking for an alternative, semantically similar fragment. Sometimes, developers may simply look for an alternative implementation (e.g., a semantically similar one) of a code fragment they currently have in their system and improve their system through refactoring. Similarly, one might have a code fragment in a certain programming language but be looking for similar functionality in a different language [12]. Of course, given that clone detection is an active research area, semantic and cross-language clone generation may also help build benchmarks for evaluating and comparing such tools and techniques. Because of GPT's good performance in code and text generation, we utilized GPT-3 to generate semantic and cross-language clones. In this research, we explore the efficacy of the GPT-3 model in generating semantic and cross-language clones. We followed a methodology similar to that of the GPTCloneBench study [13] in generating semantic and cross-language clones using GPT-3 and SemanticCloneBench. In particular, we randomly chose 15,000 semantic clone pairs and 12,000 cross-language clone pairs that were generated as part of GPTCloneBench's prompt engineering/clone generation step. These are a subset of the intermediate data of the GPTCloneBench study, before it went into further validation towards building the clone benchmark. After that, to remove the syntactic clones from this data, we followed an approach similar to the GPTCloneBench paper's methodology by utilizing NiCad. After NiCad filtration, we undertook a different and more in-depth manual validation process that confirmed consistent output for identical inputs. Post manual validation, GPT-3 exhibited a 62.14% accuracy with a 0.55 BLEU [14] score in generating semantic clones and an impressive 91.25% accuracy for cross-language clones. We employed BLEU to assess the degree of divergence between generated code fragments and human-written code. To reinforce our findings, we also utilized Oreo [15] for semantic clone detection. The distinction between the GPTCloneBench paper and the current study lies in their respective focuses and manual validation. In the GPTCloneBench paper, our emphasis was on introducing a comprehensive benchmark of semantic and cross-language clones. In the present study, however, our objective shifts towards a more in-depth investigation of GPT's efficiency in formulating semantic and cross-language clones. This involves conducting extensive manual evaluations, where human experts meticulously review and compare the generated clones against the original code snippets. In GPTCloneBench, we did not add any snippets to the benchmark that had been tagged false by any judge or that had any conflict among the judges. The present research, however, takes a more nuanced approach.
Here, we rigorously investigated the conflicted code snippets, with the objective of ascertaining whether these disputed snippets truly qualify as semantic clones. Furthermore, we employed Cohen's kappa [16] agreement metric to quantify the level of agreement among judges regarding the code snippets. This metric provides an empirical measure of the consistency of judgment among the evaluators. Through this thorough manual process, we seek to uncover the true extent of GPT's aptitude for generating semantic and cross-language clones for a given code fragment. This study adds a layer of empirical validation to our findings in GPTCloneBench, enabling us to draw more robust conclusions about GPT's capabilities in these specific domains. As we navigate through this unfamiliar territory, we confront inquiries that resonate with fundamental aspects of software development research: **(RQ1)**_To what extent does GPT-3 demonstrate the capability to generate high-quality semantic clones?_ **(RQ2)**_What is the efficacy of GPT-3 in accurately converting code snippets from one programming language to another?_ The results of our study not only illuminate these research questions but also offer guidance for effectively utilising sophisticated language models in the field of software development, considering both their potential applications and associated limitations. The remaining sections are organized as follows: Section II discusses the background of our study. The architecture for generating clones from GPT-3 is described in Section III. Section IV describes the manual validation. Section V presents the findings on GPT-3's accuracy, tests a clone detection tool, and analyzes the results. Section VI discusses the threats to the validity of our research. Related work is described in Section VII, and Section VIII concludes the paper. ## II Background Identical or similar code segments within a codebase are termed code clones, with the first being a clone of the second; together they constitute a clone pair [17]-[19]. Diverse terms are used for defining clones, including relative [20], redundant [21][22], dependent [23], functional [24], functionally similar [25][26][27], and Type-4 [6], [17] clones. While researchers agree on semantic clones sharing functionality but differing in syntax, no uniformity exists about what precisely constitutes semantic similarity. Semantic clone definitions vary, from narrow interpretations focusing on specific similarities to broader, less precise ones. Nevertheless, the consensus remains that semantic clones involve identical functionality with differing syntax [28][19]. We have used SemanticCloneBench [29] to facilitate our research. SemanticCloneBench [29] is a dataset of semantically equivalent code snippets intended to help researchers develop and evaluate techniques for detecting semantic clones. ## III Architecting Semantic Clones In this section, we focus on the processing of semantic and cross-language clones. We have utilized the results obtained after the prompt engineering step of the GPTCloneBench paper [13]. The clone generation process in the GPTCloneBench paper starts with the selection of the initial clone fragment from a clone pair of SemanticCloneBench. To assist with this, an automated script was developed for GPTCloneBench to identify functions, which were later given as input to GPT-3 (a sketch of this extraction step is given below).
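The original extraction script is not published here, so the following is only a minimal illustrative sketch, assuming Java-style sources and a naive brace-matching heuristic; the regex and helper names are our own, hypothetical choices rather than the actual GPTCloneBench tooling:

```python
import re

# Hypothetical, simplified extractor: finds method-like headers in Java-ish
# code and slices out their bodies by counting braces. A production script
# would use a real parser (e.g. a language grammar) instead of a regex.
METHOD_HEADER = re.compile(
    r'(?:public|private|protected|static|\s)+[\w<>\[\]]+\s+(\w+)\s*\([^)]*\)\s*\{'
)

def extract_functions(source: str) -> list[str]:
    functions = []
    for match in METHOD_HEADER.finditer(source):
        depth, end = 0, None
        # match.end() - 1 points at the opening brace of the method body.
        for i in range(match.end() - 1, len(source)):
            if source[i] == '{':
                depth += 1
            elif source[i] == '}':
                depth -= 1
                if depth == 0:
                    end = i + 1
                    break
        if end is not None:
            functions.append(source[match.start():end])
    return functions
```

Each extracted function can then serve as the code fragment embedded in the few-shot prompt described next.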
For prompt engineering, the few-shot prompting technique was employed. For the prompt, the GPT-3 model was provided with textual instructions and a representative input to indicate the type of output anticipated. As discussed in the GPTCloneBench paper, two prompts were used to generate the clones. To generate cross-language clones, the emphasis was placed on two programming languages, Java and C#, which were used as input for GPT-3 in the GPTCloneBench paper. As a result, GPT-3 created 80,664 semantic clone pairs and 22,364 cross-language clone pairs. After GPT generated the clones from the given input, we randomly selected 15,000 semantic clones and 12,000 cross-language clones; we used this data to conduct this research. This data (15,000 semantic and 12,000 cross-language clones) represents the data of the GPTCloneBench paper before it went into any validation (including NiCad and manual validation). After that, we first employed the textual similarity measurement tool NiCad [30] to exclude syntactic clones. Second, a rigorous manual validation process (Section IV) was undertaken for all prospective Type-3, Type-4, and cross-language clones. We utilized the established framework of BigCloneBench [18] for code clone classification, with allowances for slight variations within a defined grey area. Moderately Type-3 (MT3) clones [18] exhibit 50%-70% similarity, supplemented by a 5% grey area. Weak Type-3/Type-4 (WT3/4) clones [18] align with Type-4 clones, marked by 0%-50% similarity. This framework extends to cross-language clones, treating them as Type-4 due to shared logic despite the diverse programming languages. Notably, while not all semantic clones are cross-language clones, all cross-language clones fall under the semantic category. To remove syntactic clones, NiCad was utilized, configured with a 99% dissimilarity threshold to identify Type-1 and Type-2 clones, as NiCad cannot detect Type-4 clones. For semantic clone detection, we analyze the similarity percentages in the metadata file, facilitated by the 3-line minimum size and blind renaming in NiCad. Pairs exceeding 75% similarity are discarded; those under or equal to 75% are saved for further manual validation (a sketch of this filtering step appears below). For this research, NiCad filtered out a total of 4,379 syntactic clones. In cross-language clone detection, NiCad is inapplicable due to the differing programming languages. Nonetheless, the generated cross-language clones undergo manual validation. Furthermore, another validation process (input-output testing) was adopted to ensure that the code clones follow the same functionality. ## IV Human-Centric Analysis After filtering out undesired clones as described earlier, we engaged in a rigorous manual validation process. This involved thoroughly examining all code fragments to determine whether the filtered data was accurate and whether the clone pairs produced the same output for the same input. To facilitate accurate assessment, BigCloneBench's GUI-based Clone Validator1 was utilized, which provided syntax highlighting for the candidate code and displayed the exemplar functions and specifications alongside the candidate for reference. Footnote 1: [https://github.com/jeffsvajlenko/ValidateClones](https://github.com/jeffsvajlenko/ValidateClones) During the validation procedure, a cohort of nine judges took part, consisting of six undergraduate research students and three post-doctoral researchers.
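Returning briefly to the NiCad filtering step referenced above, here is a minimal sketch of the 75% similarity-threshold filter; only the rule itself is stated in the text, so the metadata format and field names below are hypothetical assumptions:

```python
import csv

SIMILARITY_CUTOFF = 75  # pairs above this are treated as syntactic and dropped

def filter_clone_pairs(metadata_csv: str):
    """Keep only pairs whose NiCad-reported similarity is <= 75%.

    Assumes a CSV with (hypothetical) columns: fragment_a, fragment_b, similarity.
    """
    kept, dropped = [], []
    with open(metadata_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            pair = (row["fragment_a"], row["fragment_b"])
            if float(row["similarity"]) > SIMILARITY_CUTOFF:
                dropped.append(pair)   # too syntactically similar
            else:
                kept.append(pair)      # candidate semantic clone -> manual validation
    return kept, dropped
```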
The undergraduate students were partitioned into three cohorts, with each cohort comprising a pair of students. The dataset was subsequently divided into three distinct portions. In this research, after NiCad filtration, we had 10,621 semantic clones and 12,000 cross-language clones. In GPTCloneBench, if the judges' decisions conflicted in any way, the pair was discarded; here, in this study, we also validated the clones that faced conflicts or were tagged as false positives by any member of a group. Our manual validation consists of three rounds. **In round one**, we divided the semantic clone pairs into three groups consisting of 3,540, 3,540, and 3,541 pairs, respectively. Each group contained two members. Each person in every group conducted an individual assessment of their designated portion, categorising the clone pairs as true positive, false positive, or undecided based on their understanding. Different groups were given different code fragments, but both members of a group received the same fragments. For a clone pair to be considered a true semantic pair, both members of the group had to tag it as true. Conflicting results within a group led to excluding that pair from the true-pairs list; such pairs went on for further validation. The first six judges followed this procedure. In round one, group_1 tagged 2,947 pairs as true semantic, 80 as false semantic, and 513 as undecided or conflicted. Group_2 tagged 2,953 as true semantic, 87 as false semantic, and 500 as undecided or conflicted. Group_3 tagged 2,979 as true semantic, 98 as false semantic, and 464 as undecided or conflicted. Cohen's kappa is 0.70 for group_1, 0.73 for group_2, and 0.77 for group_3, which means all groups are in substantial agreement. **In round two**, we shuffled the undecided or conflicted pairs among the three groups. The first two groups were given 492 pairs each, and the last group was given 493 pairs. In the second round, Cohen's kappa for the three groups is 0.52, 0.58, and 0.54, respectively, which means moderate agreement. This outcome can be attributed to the intricate nature of the code under consideration. Given the participants' status as undergraduates, reaching definitive decisions becomes challenging due to the complexity of the code snippets. These snippets, which were shuffled and carried uncertainties from the initial round, further contribute to the difficulty of decision-making, leading to the observed moderate Cohen's kappa agreement level. The overall Cohen's kappa results for this analysis can be found in Table I. **In round three**, there was only one group, consisting of three post-doctoral fellows. They resolved the remaining undecided or conflicted clone pairs from round two. Finally, we obtained 9,321 true semantic clone pairs through their discussion. For the cross-language pairs, we followed the same procedure described for semantic clones. In round one, every group received 4,000 different pairs. Group_1 tagged 3,460 as true, 98 as false, and 442 as undecided or conflicted. Group_2 tagged 3,489 as true, 67 as false, and 444 as undecided or conflicted. Group_3 tagged 3,471 as true, 58 as false, and 471 as undecided or conflicted. Cohen's kappa is 0.69 for group_1, 0.59 for group_2, and 0.73 for group_3. In round two for cross-language clones, Cohen's kappa is 0.48, 0.56, and 0.60.
The overall Cohen's kappa results for this analysis can be found in Table II. Finally, the remaining 1,153 undecided pairs were collectively assessed and labelled by the three post-doctoral fellows through discussion. Approximately 212 hours were spent by the nine judges to validate the clone pairs. We want to mention that the undergraduate research students were trained and given instructions on why and how we defined the semantic clones. ## V Unveiling the Findings: Results and Analysis After the thorough screening process and manual validation, we identified 9,321 true semantic clone pairs across four different languages (Java, C, C#, and Python) out of 15,000 semantic clone pairs, and 10,950 true cross-language clone pairs out of 12,000 cross-language clone pairs. We used an accuracy metric to assess how good GPT-3 is at generating semantic and cross-language clones. With our first prompt, we generated four outputs for each given input, and with the second prompt, we obtained ten outputs for each given input. So, our accuracy is based on the data obtained using this procedure. The accuracy of GPT-3 in generating semantic clones is 62.14%, and for cross-language clones the accuracy of GPT-3 is 91.25%. \[\small\mathit{Accuracy}=\frac{\mathit{Number\,of\,Validated\,True\,Clones}}{\mathit{Total\,Number\,of\,Randomly\,Selected\,Generated\,Clones}} \tag{1}\] In our research, we also sought to assess the similarity between code fragments generated by GPT and human-written code. To quantify this similarity, we calculated the BLEU [14] score, obtaining a result of 0.55, which can be interpreted as very high-quality, adequate output. This score provides valuable insight into how closely the generated code fragments resemble human-written code. It is important to highlight that the objective of our study was to investigate the proximity between the two, and the BLEU score serves as a quantitative measure to accomplish this. Code fragments, by their nature, can exhibit complexity, and subtle variations can have substantial implications for functionality. Furthermore, coding-style disparities across different developers and projects introduce additional nuances that the BLEU metric cannot fully capture. It is crucial to emphasize that our primary focus was on achieving code fragments that met functional requirements. Recognizing that BLEU was initially designed for natural language tasks, we acknowledge its limitations in capturing all code-specific attributes. Hence, our evaluation should be interpreted in the context of understanding how closely generated code fragments resemble their human-written counterparts. We evaluated a semantic clone detection tool with the newly formed data (9,321 clones). As a testing metric, we used Recall to confirm that the newly created data does not contain syntactic clones. \[\small Recall=\frac{True\,Positive}{True\,Positive+False\,Negative} \tag{2}\] ### **Oreo** To check whether the validated data consists of actual semantic clones, we ran Oreo on our dataset. The results of our evaluation are presented in Table III. Oreo performs with a recall of 0.46 on the clones. We were not expecting a very high recall (more than 0.5) for Oreo on our data, because our data represents the region where most detection tools struggle to perform.
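For reproducibility, the evaluation metrics above are straightforward to compute. A minimal sketch, assuming `nltk` and `scikit-learn` are available; the judge labels and token sequences below are invented placeholders, not our actual data:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from sklearn.metrics import cohen_kappa_score

def accuracy(validated_true: int, total_generated: int) -> float:
    # Eq. (1): validated true clones over all randomly selected generated clones.
    return validated_true / total_generated

def recall(true_pos: int, false_neg: int) -> float:
    # Eq. (2): detected true clones over all true clones.
    return true_pos / (true_pos + false_neg)

print(f"semantic accuracy       = {accuracy(9321, 15000):.4f}")   # ~0.6214
print(f"cross-language accuracy = {accuracy(10950, 12000):.4f}")  # ~0.9125

# Inter-rater agreement on placeholder judge labels (1 = true clone, 0 = false).
judge_a = [1, 1, 0, 1, 0, 1]
judge_b = [1, 0, 0, 1, 0, 1]
print(f"Cohen's kappa = {cohen_kappa_score(judge_a, judge_b):.2f}")

# BLEU between tokenized generated code and human-written reference code.
references = [[["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]]]
candidates = [["def", "add", "(", "x", ",", "y", ")", ":", "return", "x", "+", "y"]]
smooth = SmoothingFunction().method1
print(f"BLEU = {corpus_bleu(references, candidates, smoothing_function=smooth):.2f}")
```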
_(RQ1) To what extent does GPT-3 demonstrate the capability to generate high-quality semantic clones?_ GPT-3 showcases a notable degree of capability in generating high-quality semantic clones, as evidenced by an achieved accuracy of 62.14% with a 0.55 BLEU score. The accuracy reflects GPT-3's proficiency in paraphrasing and producing semantically correct variations of original code fragments. The model's ability to attain such a substantial accuracy rate highlights its potential as a tool for generating semantic clones that closely emulate the intentions of the source code. However, it is important to acknowledge that this accuracy will only be achieved if a proper prompt engineering technique is followed. In addition, the high BLEU score can be interpreted as indicating that the GPT-generated code is of good quality. _(RQ2) What is the efficacy of GPT-3 in accurately converting code snippets from one programming language to another?_ The efficacy of GPT-3 in accurately converting code snippets from one programming language to another is marked by a notable success rate. The model demonstrates a substantial ability to comprehend the structural and syntactical intricacies inherent to different programming languages, enabling it to produce conversions with a high level of accuracy. Notably, GPT-3 achieves an impressive accuracy of 91.25% in cross-language clone generation, which underscores its proficiency in seamlessly transposing code logic between disparate linguistic frameworks. While this achievement showcases GPT-3's prowess, it is essential to acknowledge that the accuracy may vary based on factors such as code complexity, domain specificity, and the nuances of each programming language. Nevertheless, GPT-3's capacity to effectively bridge the gap between programming languages signifies its potential to expedite cross-platform development and streamline code migration processes within the realm of software engineering. ## VI Threats to Validity The first major concern that can be raised about our research is that the clones are generated by a machine-learning model and hence may not be real-world clones. Clones can be real-world or artificial [31]. For a clone pair to be called a real-world clone, it needs to be written by a human. To mitigate this issue, we used SemanticCloneBench [29] code fragments to generate the results. SemanticCloneBench [29] is created from knowledge provided by developers who participate in SO. As we utilized SemanticCloneBench data as input (via the few-shot prompt engineering technique), the GPT-generated code is similar to real-world clones. That is why we claim that our generated code is not fully real-world, but rather lies in between real-world and artificial clones. Another important consideration arises regarding the overall applicability and adaptability of GPT-3's performance. To address this potential limitation, we systematically evaluated the model's performance using four prominent and widely used programming languages. This strategic approach aims to shed light on GPT-3's robustness and effectiveness across a diverse set of programming languages, thereby contributing to a more comprehensive understanding of its generalizability. By conducting this thorough examination across multiple programming languages, we have gained valuable insight into the extent to which GPT-3's performance transcends language boundaries and remains reliable across different coding paradigms.
Furthermore, there can be another concern regarding whether it is possible to get more efficient results from GPT-3. To answer this question, we note that the effectiveness of prompt engineering techniques could impact the results. That is why, in our research, we followed a formal prompt engineering method [32], [33]. So, if other researchers replicate this study with different prompts, they will obtain similar results as long as they follow proper prompt engineering techniques; we do not expect massive differences. In addition, there can be another concern regarding manual evaluation bias. Manual evaluation of clone quality involves subjectivity, and the judgement of human annotators may introduce inconsistency. To mitigate this problem, we provided the necessary knowledge regarding code clones to the undergraduate students. To further safeguard the evaluation, three post-doctoral fellows discussed the remaining undecided and false-tagged code snippets and resolved them to keep the decisions unbiased. Still, we agree that manual evaluation can introduce some errors, and we will try to analyze the code fragments more rigorously by introducing more judges. Finally, we should note that, despite the fact that the generated clones are mostly artificial clones, there are a number of important applications of such clones in the cloning area and in software development in general. If large language models are efficient in generating semantic and cross-language clones, they could be used with confidence to build clone detection training datasets such as GPTCloneBench and beyond. These could then also be used for comparing and evaluating semantic and cross-language clone detection tools, and may even extend to comparing detectors that detect clones across Microsoft .NET programming languages [29]. Such a clone generation approach and its resulting benchmarks could help evaluate whether source-code-transformation-based clone detection tools such as CloneWorks [34] or SimCad [35] could in fact detect semantic clones by applying flexible source transformations and normalizations. This could then further help build IDE-based flexible clone detection and management [6], [36] tools, or could potentially even be used in building similar benchmarks in other contexts [37]. It is thus our understanding that such a study of large language models could help the cloning area, despite the fact that the clones are generated clones. Fig. 1: Semantic code clone generation sample ## VII Related Work Generating code involves using programming languages to create scripts, applications, or software. There are many ways to generate code: manual coding, integrated development environments, code generators, templates and frameworks, AI-powered text generators, domain-specific languages, data-driven code generation, code refactoring tools, and scripting languages are some of the techniques [38]-[44]. Code generation models backed by artificial intelligence have exhibited impressive abilities in aiding developers, automating repetitive coding processes, and even suggesting innovative solutions. According to Victor [45], in a recent survey conducted by GitHub in partnership with Wakefield Research, 92% of developers are already using AI-powered coding tools in their work.
So, we tried one of the latest models of OpenAI's GPT family to generate semantic clones. There are many code recommendation systems. GitHub Copilot [46], a tool developed collaboratively by OpenAI and GitHub, is a revolutionary code generation AI model that integrates directly into software development environments. This tool facilitates the inclusion of code snippets and automated code completion, thereby enhancing the coding experience for users. In our approach, we incorporated few-shot prompting alongside the natural language description. This strategic choice aims to enhance GPT's performance in generating semantic clones, focusing on this aspect rather than completing the code outright. Additionally, our evaluation encompasses GPT-3's ability to generate cross-language code clones, distinguishing it from Copilot's functionality in this regard. Building on the achievements of Copilot, Codex [47] further pushes the boundaries of AI-assisted code generation. Codex, also developed by OpenAI, is an advanced language model that can generate entire functions, classes, and methods based on natural language prompts. The Codex framework was proposed by Chen et al. [47], who conducted an evaluation of its performance using a dataset of 163 coding tasks. In their work, the authors focused on the task of generating standalone Python functions from docstrings and on evaluating the correctness of code samples automatically through unit tests. For our work, we utilized OpenAI's text-davinci-003 model, which has 175 billion parameters compared to Codex's 14.8 billion parameters. In addition, OpenAI Codex is most capable in Python, whereas we wanted to use a more generalized model. In another study, Li et al. introduced AlphaCode [48], a system designed for code generation. The model was trained utilising data from GitHub and CodeContests. According to the authors, AlphaCode demonstrated an average ranking of 54.3% in competitive programming competitions on the Codeforces platform. The authors conducted a comparative analysis between their proposed solution and the solutions developed by other participants in the contests. The evaluation was based on contest parameters, including the remaining time portion and the penalties incurred for wrong submissions. Overall, while there are many great tools and techniques available for code generation, our aim in this work has been to examine whether the recently proposed GPT-3 model could help the cloning community. In particular, we aim to examine to what extent the GPT-3 model could be used in generating semantic and cross-language clones. ## VIII Conclusion Our research introduces a transformative paradigm for code reuse, refactoring, migration, and renovation. Our goal was to explore the efficacy of GPT-3 in generating semantic and cross-language clones. We utilized SemanticCloneBench to generate close-to-real-world code clones through GPT-3. After a thorough validation process, we obtained 9,321 true semantic clone pairs and 10,950 cross-language clone pairs after handling all the limitations of GPT-3. With a noteworthy accuracy rate of 62.14% and a 0.55 BLEU score, GPT-3 showcases its potential in accurately replicating semantic structures within a given programming language. Additionally, our investigation into cross-language clones further underscores GPT-3's prowess, boasting an impressive accuracy of 91.25%.
These findings highlight the substantial progress of GPT-3 in code generation, opening up new possibilities for creative uses in software development and other areas. As GPT-3 continues to showcase remarkable performance, it holds the promise of contributing to the advancement of code-related tasks across diverse linguistic and domain contexts.

## Acknowledgment

This work was supported by NSERC Discovery grants, NSERC USRAs, CFI-JELF, and NSERC CREATE graduate program on Software Analytics Research (SOAR) grants.
2310.00136
Optimizing performance in Basketball: A Game-Theoretic Approach to Shot Percentage Distribution in a team
In this paper, we propose a shot percentage distribution strategy among the players of a basketball team to maximize the score that can be achieved by them. The approach is based on the concepts of game theory related to network flow.
Aditya Singh
2023-09-29T20:50:30Z
http://arxiv.org/abs/2310.00136v1
**Optimizing performance in Basketball: A Game-Theoretic Approach to Shot Percentage Distribution in a team**

###### Abstract

In this paper, we propose a shot percentage distribution strategy among the players of a basketball team to maximize the score they can achieve. The approach is based on concepts of game theory related to network flow. The paper starts by drawing a similarity between the network flow problem and passing sequences in basketball. The concept of price of anarchy is then applied to basketball. Different strategies that can be used by teams are evaluated and compared with the proposed strategy, which considers the players' shooting behavior as the game progresses. The work also looks at the interaction of the participating players and how their collective behavior can be used to achieve optimum performance. The paper explains that giving the ball to the best player of the team to take a shot might not be the best strategy to maximize the team's overall score.

**Keywords:** basketball, game theory, price of anarchy, network flow

## 1 Introduction

Basketball is a popular sport played by two teams with five players each. The team roster can have a maximum of twelve players, with unlimited substitutions allowed. The end goal is to score more points than the other team by throwing the ball through a hoop mounted on a ten-foot-high backboard at each end of the court. At any given moment the ball is with one team, which is therefore on offense, while the other team is on defense. The ball is moved between players of the attacking team; a single instance of ball movement is called a pass. At the end of this complex movement of the ball, a player attempts to shoot a basket. We can view this movement of the ball as a network flow problem, as we will see in the next section. Network traffic flow is a well-researched domain, and game theory is one of the tools that can be used to optimize the network. In subsequent sections we will draw similarities between the network flow problem and basketball, go through the key game theory concepts, and understand the mathematics behind the optimization process. The paper concludes with results, limitations, and the future scope of this work.

## 2 Literature Review

### Network flow problem

There are many interesting optimization problems in transportation, fluids, and several other domains that can be viewed as network flow problems. The general theory of networks tries to solve these problems in many diverse contexts using mathematical tools such as, but not limited to, combinatorics and linear algebra [2]. Every network requires two entities: _nodes_ and _edges_ (also called arcs or links). The edges of a network can be unidirectional or bidirectional. Sometimes a value is associated with an edge, called its "capacity". A network is often shown pictorially as in Figure 1.

### Game theory on networks

Game theory is a branch of mathematics that helps in understanding how individuals, groups, organizations, or countries interact with each other in a system where the decision taken by one entity can impact the choices available to the other entities in that system. Generally, the participating entities are called _players_ in game theory; we will use the two terms interchangeably. Every option an entity of the network chooses results in an outcome or reward called a _payoff_.
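To make the network representation above concrete, a passing structure can be held in a plain adjacency map; the sketch below is illustrative only, with made-up node names and capacities, and is not code from the paper.

```python
# A minimal directed network for a passing sequence: nodes are players plus a
# "basket" sink, and each edge carries an illustrative capacity (e.g., how
# often that pass is available). Names and numbers are placeholders.
passing_network = {
    "A": {"B": 3, "C": 2},          # player A can pass to B or C
    "B": {"C": 1, "basket": 1},     # B can pass to C or attempt a shot
    "C": {"basket": 2},             # C can attempt a shot (edge into the sink)
}

def edges(network):
    """Yield (source, target, capacity) triples of the network."""
    for node, targets in network.items():
        for target, capacity in targets.items():
            yield node, target, capacity

for u, v, c in edges(passing_network):
    print(f"{u} -> {v} (capacity {c})")
```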
Game theory can be applied to understand a networked system where multiple entities are competing or cooperating to optimize the flow of information or resources through the network. "Price of anarchy" is a concept in game theory that quantifies the degradation of a network due to the selfish behavior of players who seek to optimize their own payoff without taking into consideration the global collective optimum. Game theory provides a very powerful and effective framework to understand the dependencies between the entities in the network, identify the equilibrium, and design solutions to incentivize cooperative behavior between players of the system.

Figure 1: Generic network with edges and nodes

#### Price of Anarchy in a traffic flow network - Braess Paradox

In traffic network flow, the price of anarchy [6] can be clearly visualized in what is often termed the "Braess paradox". The following example is taken from the work of Skinner, B. (2009) [1]. Consider a simple network with two nodes A and B (representing cities). There are two edges between them: edge 1 is a highway and edge 2 is a small sub lane along the highway. The network can be represented pictorially as shown in Figure 2.

Figure 2: Two nodes A and B connected by two edges, termed the highway and the sub lane

Assuming that there are 10 cars, the time it takes to go from point A to point B using the highway is a constant 10 units. If using the sub lane, the time depends linearly on the number of cars present in the sub lane, i.e., if there is just 1 car then 1 minute, if there are 3 cars then 3 minutes, and so on. The equilibrium of the system is achieved when every car takes the sub lane, as from a duration point of view there is no incentive to use the highway. This equilibrium is called a "Nash equilibrium", where every player (i.e., car) acts selfishly without thinking about the entire system consisting of the other players. It can be proved mathematically that the best course of action would be for 5 cars to take the highway and the other 5 cars to take the sub lane. The complete mathematical solution can be found in the work of Skinner, B. (2009) [1]. The difference between the overall system payoff in the Nash equilibrium and the payoff when everyone acts towards the optimum of the entire system is called the "price of anarchy".

### Similarity between basketball and network flow

Basketball, if viewed from a distance, is a network flow problem. Each player is a node, and every pass is a possible _edge_ (or arc) between the players. If a player takes the final shot, then the basket mounted on the backboard is a _sink_ (or end node). There are infinitely many sequences which a team can opt for in an offensive position. The number would be much larger if we considered the defensive actions taken by the opposition. In the scope of this work, we will only consider the offensive action in the game of basketball. In network form, a generic gameplay in basketball can be represented as shown in Figure 3. Every directional edge from one node to another is a completed pass. If we draw a similarity between the traffic network and basketball, then every passing sequence starts with a player, say node A; there are some intermediate nodes and edges between them. Finally, a player takes the shot to score a basket; the basket is like node B in the traffic network discussed in the section above. It might feel like the best possible passing sequence would be the one that gives the highest possible chance of success.
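The worked traffic example above can be checked numerically; the following is a minimal sketch of the computation (the travel-time functions are exactly those stated above, everything else is illustrative):

```python
# Braess-style example: 10 cars choose between a highway (constant time of 10
# units) and a sub lane whose per-car time equals the number of cars on it.
N_CARS = 10

def total_travel_time(n_sublane):
    """Total system travel time when n_sublane cars take the sub lane."""
    n_highway = N_CARS - n_sublane
    return n_highway * 10 + n_sublane * n_sublane  # each sub-lane car takes n_sublane

# Nash equilibrium: every car selfishly takes the sub lane.
nash_total = total_travel_time(N_CARS)                      # 100

# Social optimum: brute-force search over all splits.
best_split = min(range(N_CARS + 1), key=total_travel_time)  # 5
optimum_total = total_travel_time(best_split)               # 5*10 + 5*5 = 75

print(f"price of anarchy: {nash_total / optimum_total:.2f}")  # ~1.33
```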
It might be a sequence where the player with the best shooting accuracy takes the final shot, but we will show later that such a choice of passing sequence is equivalent to the selfish "Nash equilibrium" and does not guarantee the best efficiency from a team point of view.

### Previous Research

There is very limited research in the field of optimizing gameplay in basketball. Most of the research aims to find the best sequence of passing or the trajectory of shooting to maximize total points, without considering the changes in players' shooting capability once the game starts. Moreover, nearly all the work focuses on using machine learning (see Javadpour et al. [3]) and tries to achieve a result that is close to the Nash equilibrium, which might not be the most optimal result in a real-world scenario. There are very few research pieces when it comes to using game theory to optimize the performance of the team. Work done by Brian Skinner (2009) [1] discusses using game theory to optimize a team's performance in basketball. This research explains how game theory can be used to achieve the true global optimum for a basketball team.

Figure 3: Passes between players are represented by directional arrows; the final player who takes the shot is marked 'x', and there is a node for the basket, which acts like the sink node in a flow network.

### Research gaps

As discussed in the previous section, most of the research around optimizing a team's performance is in the machine learning domain. The only drawback of the Skinner, B. [1] work is that it just shows that it is not always a good option to pass the ball to the player with the best shooting accuracy in a game. The paper discusses the shooting behavior of just one player, with all other players on the team having a constant shooting behavior. The research broadly focuses on a single player in a multi-player system. There is a need to explore domains like game theory to optimize team performance in basketball, and to extend the work of Skinner, B. to a multi-player system where every player has an independent shooting behavior. It is important to understand how different players with different shooting behaviors can collaborate in a team to maximize the team's utility, to understand different strategies, and to compare the payoffs each of those strategies can deliver. This paper will try to cover all the gaps discussed above.

## 3 Methodology

### Terminologies

Before diving into the details, it is important to understand the terminology that will be used in this work. Most of these are basketball terms that will be used in the later sections. There are a few terms that are not generally used in basketball but are required to understand the later sections.

1. Free throw attempts (FTA): When a player gets an opportunity to score from the free throw line without any interference from the opposition. These are awarded because of a foul committed by an opposition player.
2. Field goal attempts (FGA): Any attempt to score points by shooting the basketball into the opponent's basket. Field goals can be attempted from various distances, and they are the fundamental way to score points in the game.
3. Points scored (PS): The total points scored by a player or the team. In this paper it will always be the points scored by a player unless stated otherwise.
4. Total shots (TS): The summation of FTA and FGA of a player.
5. True shooting percentage (TS%): This is an advanced statistic, best thought of as a field goal percentage adjusted for free throws and field goal shots (see Kubatko et al., 2007 [4]). TS% is defined by the formula given in equation 1.

\[TS\%=\frac{0.5\times(\text{points scored})}{(\text{field goal attempts})+0.44\times(\text{free throw attempts})}\tag{1}\]

6. Fraction of team shots (FTS): To calculate the fraction or percentage of team shots taken by each player per game (equation 2), we used the total time the player played in a game, the total shots the player took, and the total team shots. This was calculated for each game.

\[\mathit{FTS}=\left(\frac{\text{player shots}/\text{game}}{\text{team shots}/\text{game}}\right)\times\left(\frac{48\ \text{minutes}/\text{game}}{\text{player minutes}/\text{game}}\right)\tag{2}\]

These are all the technical terms that will be used in the later sections.

### Dataset

Data was collected for the Washington Wizards, an NBA team based out of Washington DC. Data about overall player performance, team performance, and individual player statistics on a game-by-game basis was collected for the 2022-23 season. The data was collected from _https://www.basketball-reference.com/_, which accumulates the data of teams and players. Each player's data had 30 features for each match day. After careful analysis, we reduced the number of required features to five: minutes played, games started, field goal attempts (FGA), free throw attempts (FTA), and points scored. We further calculated TS% and FTS for each player on a game-to-game basis.

### Players' shooting behavior

Calculating a player's shooting behavior is a very difficult task, and it is impacted by many factors. First and foremost, the defensive action of the opposing team plays a significant role in deciding a player's shooting behavior. It can also be impacted by factors that are beyond the scope of the game. In the book "Basketball on Paper" (2004) [5], author Dean Oliver envisioned an inverse relationship between the TS% and FTS of a player. This relationship remains a highly theoretical concept, but the inverse relationship is the closest imitation of the real-world scenario. Using the TS% and FTS calculated earlier, we find a linear function that inversely relates the two factors for every player of the Washington Wizards. The linear relationship between TS% and FTS for Bradley Beal is shown in Figure 4.

Figure 4: The graph shows the inverse relationship between TS% and FTS. The slope and y-intercept of the linear relationship are given in the graph title along with the player's name. The slope and intercept are stored for every player and define the player's shooting behavior, i.e., shot accuracy.
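As a concrete illustration, the per-game quantities in Eqs. (1) and (2) are straightforward to compute from scraped box-score features; the sketch below assumes hypothetical column names (pts, fga, fta, minutes, team_shots) rather than the paper's actual ones.

```python
import numpy as np
import pandas as pd

# Minimal sketch of the per-game TS% (Eq. 1) and FTS (Eq. 2) calculations.
def add_shooting_features(df: pd.DataFrame) -> pd.DataFrame:
    # Eq. (1): true shooting percentage, adjusted for free throws
    df["ts_pct"] = 0.5 * df["pts"] / (df["fga"] + 0.44 * df["fta"])
    # Eq. (2): fraction of team shots, normalized to a full 48-minute game
    player_shots = df["fga"] + df["fta"]          # "total shots" per Sec. 3.1
    df["fts"] = (player_shots / df["team_shots"]) * (48.0 / df["minutes"])
    return df

# Per-player linear shooting behavior: TS% as a (decreasing) function of FTS.
def fit_behavior(player_games: pd.DataFrame):
    slope, intercept = np.polyfit(player_games["fts"], player_games["ts_pct"], 1)
    return slope, intercept
```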
We can define the TS% of a player as a function of the percentage of shots taken (i.e., \(f_{player\ A}(x)\)). Using the TS% from equation (3), the utility for player A is given by equation (4):

\[(Utility)_{player\ A}=x_{A}\,f_{player\ A}(x_{A})\tag{4}\]

In a team of five players, each with their own shooting behavior, we can create the objective function F that is to be maximized as

\[F=x_{1}f_{1}(x_{1})+x_{2}f_{2}(x_{2})+x_{3}f_{3}(x_{3})+x_{4}f_{4}(x_{4})+x_{5}f_{5}(x_{5})\tag{5}\]

In equation 5, F is the objective function that we want to maximize, \(x_{i}\) is the percentage of team shots player \(i\) takes in a game, and \(f_{i}\) is the shooting behavior function that we calculated earlier. There are some constraints on the variables used in the objective function that are to be considered. The sum of the shot percentages taken by all players should be equal to 1 (equation 6). The shot percentage of any player should not exceed 40% of the overall team shots in a game (equation 7); this value was determined from past behavior observed across teams in different seasons. The utility contributed by each player should never be negative (equation 8).

\[x_{1}+x_{2}+x_{3}+x_{4}+x_{5}=1\tag{6}\]

\[0.0\leq x_{i}\leq 0.40\tag{7}\]

\[x_{i}f_{i}(x_{i})\geq 0\tag{8}\]

We can maximize the objective function while staying within the bounds of the constraints. The value of the objective function is what the team can achieve by taking the players' shooting behavior into consideration.

## 4 Results

We ran the simulation model for every combination of five players who are part of the team and have played at least a defined number of games. Two groups of players were created. Group 1 had players who started at least 30 games (regular starters), and group 2 had all the players who were part of the team roster and had played (as distinct from started) a certain number of games. For each of these groups we evaluated four different scenarios in which a team can operate and compared the payoff a team (group of five players) can get. The most obvious scenario is where the player with the best shooting ability (the player with the highest intercept value) takes the largest share of the shots. We then moved to a scenario where every player tries to contribute an equal utility value to the final objective function, and a scenario where every team member takes an equal percentage of shots, i.e., 20% of total team shots each. Finally, we concluded with the proposed model defined above, which considers the constraints and the shooting behavior model calculated earlier.

### Group of regular starters

Only 7 players started more than 30 games, an observation that aligns with what we have seen in earlier seasons. We evaluated the results for all four scenarios for every combination of 5 players. Our proposed strategy model was better by 2% than the second-best approach, in which the player with the best intercept value takes the largest percentage of shots; that scenario is not very realistic in the real world but can be taken as the best-case baseline. The proposed strategy model was better by 9% than the strategy where every player takes an equal percentage of shots, which is much closer to the real-world scenario. The comparison of the different strategies can be seen in Figure 5.
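The constrained maximization of Eq. (5) subject to Eqs. (6)-(8) can be set up with an off-the-shelf solver. The following is a minimal sketch of one way to do so; the slopes and intercepts are placeholder values, not the fitted Wizards data.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder shooting behaviors f_i(x) = slope_i * x + intercept_i (Eq. 3).
slopes = np.array([-0.9, -0.7, -0.8, -0.6, -0.5])
intercepts = np.array([0.70, 0.62, 0.65, 0.58, 0.55])

def neg_objective(x):
    # Eq. (5): F = sum_i x_i * f_i(x_i), negated because we minimize
    return -np.sum(x * (slopes * x + intercepts))

constraints = [
    {"type": "eq", "fun": lambda x: np.sum(x) - 1.0},                 # Eq. (6)
    {"type": "ineq", "fun": lambda x: x * (slopes * x + intercepts)}, # Eq. (8)
]
bounds = [(0.0, 0.40)] * 5                                            # Eq. (7)

x0 = np.full(5, 0.20)  # start from an equal shot distribution
res = minimize(neg_objective, x0, bounds=bounds, constraints=constraints)
print("optimal shot fractions:", np.round(res.x, 3))
print("team payoff F:", -res.fun)
```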
### Group of all players on the team roster

There are a total of 14 players on the team roster who satisfied our initial filtering of players based on factors like minutes played and having played at least 10 games (not necessarily started). This gives a total of 2002 different combinations of players.

Figure 5: Payoffs for different strategies for the group of regular starters. The proposed strategy, which considers the players' shooting behavior, is shown in red. The utility value for the proposed model is 0.678.

Figure 6: Payoffs for different strategies for the group of all the players in the roster. The utility value for the proposed model in this case is 0.717 (shown in red).

Our proposed model was better by 4% than the second-best approach, where the best player takes the largest percentage of team shots. The result was much more prominent for scenarios that imitate the real-world situation: our strategy was 12% better than the strategy where every player takes an equal percentage of shots. The comparison of the different strategies and our proposed strategy is shown in Figure 6. The result obtained for this group was much more prominent.

## 5 Conclusion

In the first part, we looked at the existing research that has been done in the area and at the potential avenues and gaps in the work done at the intersection of sports and game theory. The second part covered the methodology, starting with the concepts required to understand the work; we then delved into the data used in the work. Later, we focused on modelling the players' shooting behavior using terms derived from player stats and linear regression, derived the optimization function, and walked through the constraints applied to the function. In the final section, we looked at the results and compared the proposed strategy with the other strategies. There are many external factors that affect the performance of a team in basketball. This work does not take into consideration the defensive action that the opposition takes against the team with the ball; defensive action is surely an important factor that affects the performance of the team. We only looked at the final arc of a passing sequence, i.e., when the player takes a final shot that may or may not result in an increase of the score. This paper provides a foundation and introduces optimization techniques for applying game theory concepts to basketball. We showed a similarity between network flow and basketball, thereby extending the concept of the "price of anarchy" to basketball to find the optimum distribution of shot percentages among the players of a team to maximize the points in a game. Sports in general is a domain with a lot of external influence, and this makes the analysis of sports very difficult. This paper provides a new perspective on improving the performance of basketball teams by finding the optimum distribution of shot attempts in the team to maximize points scored, a small step towards a more accurate quantitative analysis.
2309.05893
Numerical Study of Distorted Tulip Flame Propagation in Confined Systems
Understanding the dynamics of premixed flames that propagate in confined systems is important in a wide range of applications. The study of premixed flames propagating in a closed channel covers a variety of complexities related to flame ignition, laminar flame development, and strong non-linear interaction between the flame and the surrounding walls. Accordingly, to study the dynamics of premixed flames propagating in closed channels, numerical simulations of the propagation of distorted tulip flames are carried out in this work. More specifically, a set of fully reactive compressible transport equations are solved here using the high-order PPM. A 21-step chemical kinetic mechanism is employed to model the chemical kinetics and the energy release in an air-hydrogen mixture. Computational mesh independence studies are carried out in this work by both refining grid elements and employing different levels of adaptive mesh refinement (AMR). The main results show that the classic tulip flame behavior evolves into a distorted one. Indeed, two consecutive collapses of the flame front are observed, which are related to pressure waves and the presence of reverse flow. It is particularly found that the pressure wave produced by the interaction of the flame skirt with the side walls reduces the flame velocity and contributes to the formation of tulip flames. This is consistent with the reduction in both flame area and pressure gradient at the flame tip. Furthermore, the collapse of flame cups is associated with the formation of the vortex near the channel side walls and the increase of pressure waves.
Fernando Illacanchi, Sebastian Valencia, Cesar Celis, Andres Mendiburu, Luis Bravo, Prashant Khare
2023-09-12T00:34:59Z
http://arxiv.org/abs/2309.05893v1
# Numerical Study of Distorted Tulip Flame Propagation in Confined Systems

###### Abstract

Understanding the dynamics of premixed flames that propagate in confined systems is important in a wide range of applications. The study of premixed flames propagating in a closed channel covers a variety of thermochemical complexities related to flame ignition, laminar flame development, and strong non-linear interaction between the flame and the surrounding walls. Accordingly, to study the dynamics of premixed flames propagating in closed channels, numerical simulations of the propagation of distorted tulip flames are carried out in this work. All the numerical simulations are performed using the open-source computational tool PeleC, which is part of the Exascale Computing Project (ECP). More specifically, the fully reactive compressible Navier-Stokes equations are solved here using the high-order PPM (piecewise parabolic method). A 21-step chemical kinetic mechanism is employed to model the chemical kinetics and the energy release in a stoichiometric air/hydrogen mixture. Computational mesh independence studies are carried out in this work by both refining grid elements and employing different levels of adaptive mesh refinement (AMR). The final mesh employed here features an element size of 1/96 cm with 5 levels of refinement performed based on density gradients. The main results show that the classic tulip flame behavior evolves into a distorted one. Indeed, two consecutive collapses of the flame front are observed, which are related to pressure waves and the presence of reverse flow. Important aspects of the flame formation and propagation process analyzed include (i) the initial evolution of the tulip flame and its comparison with previous experimental and analytical results, (ii) the propagation of acoustic waves and its influence on flame evolution, and (iii) the formation of the distorted tulip flame and the collapse of flame cups. It is particularly found that the pressure wave produced by the interaction of the flame skirt with the side walls reduces the flame velocity and contributes to the formation of tulip flames. This is consistent with the reduction in both flame area and pressure gradient at the flame tip. Furthermore, the collapse of flame cups is associated with the vortex formation near the channel side walls and the increase of pressure waves.

**Keywords:** Premixed flames, Distorted tulip flame, Flame formation and propagation, AMR.

## 1 Introduction

Understanding the dynamics of premixed flames and their relationship with more complex phenomena such as deflagration to detonation transition (DDT) is essential for a variety of engineering applications, including pressure gain systems such as rotating detonation engines (RDEs) (Dos et al., n.d.; Xiao et al., 2011). Due to the steep cost of and limitations in experimental measurements, numerical simulations today provide a viable alternative that allows investigation of the underpinning thermochemical mechanisms. Flame propagation in closed and half-open channels is a complicated dynamic process because many physical phenomena play a key role during the flame ignition, the development of the laminar flame, and the interaction of the flame with the lateral walls reflecting pressure waves (Xiao et al., 2015). For instance, laminar flame propagation is dominated by intrinsic instabilities such as the Darrieus-Landau (DL), thermal-diffusive, and Rayleigh-Taylor (RT) instabilities (Chung, 2006; Xiao et al., 2015).
Several past studies of premixed flames propagating in closed tubes discussed a series of flame shape changes such as spherical, finger, concave, and convex shapes. During the associated flame propagation processes, two main flame configurations are usually identified: (i) tulip and (ii) distorted tulip flames. The first one is a concave flame front with cusps at the lateral walls pointing toward the unburned mixture (Xiao et al., 2017a). The first photograph of this flame was obtained by Ellis in 1928 (Ellis, 1928), and it was first named a tulip flame by Salamandra in 1959, who noticed the flame aspect ratio required to successfully reproduce the flame inversion (length/diameter \(>2\) for closed tubes) (Clanet & Searby, 1996). The transition to a tulip flame is accompanied by a flame surface area reduction and a velocity decrease. Over the years, different physical mechanisms have been proposed to explain the flame front inversion from a convex shape to a concave one. For instance, based on experimental work, Ponizy et al. (2014) proposed that the tulip flame is purely governed by a hydrodynamic process without any dependency on intrinsic instabilities. In contrast, after carrying out a theoretical and experimental study, Clanet and Searby (1996) suggested that the flame inversion is governed by Taylor instabilities at the flame skirt right after the flame front inversion. The second flame configuration, the distorted tulip flame (DTF), was observed more recently by Xiao et al. (2011) in a hydrogen/air mixture. In the referred work, it is concluded that this flame can be observed in both closed and half-open channels, but in the latter, the DTF is reproducible only for equivalence ratios in the range \(0.84\leq\varphi\leq 4.22\). Notice that a DTF forms from consecutive collapses of the cusps near the channel lateral walls right after a well-noticeable tulip flame. The formation of this flame is usually associated with intrinsic instabilities such as RT instabilities and with pressure waves generated by the interaction between the flame skirt and the duct side walls (Xiao et al., 2015). Nevertheless, various complex physical phenomena are involved in the formation of a DTF, such as those coming from viscosity effects and other intrinsic instabilities. Accordingly, in the present work, a numerical study of propagating flames in closed tubes is carried out. More specifically, using a 21-step chemical mechanism, this numerical study aims to elucidate the dynamics of flame formation and propagation, especially that prevailing in tulip and distorted tulip flames. Likewise, this study assesses the capabilities of the computational tool PeleC to properly describe the reactive flows accounted for here. Summarizing what follows, Sections 2 and 3 describe, respectively, the mathematical and numerical models employed here, while Sections 4 and 5 present the main results obtained in this work and the conclusions drawn from them.

## 2 Mathematical Modeling

The numerical simulations carried out here involve the solution of the 2D fully compressible reactive Navier-Stokes equations, coupled with a 21-step kinetic mechanism developed by Li et al. (2004) to describe the chemical kinetics. Notice that this kinetic mechanism is suited for nitrogen as the bath gas. Accordingly, the governing equations used in this work to describe the reacting flow (Henry de Frahan et al., 2022; Marc T & Jon S Rood, n.d.; Poinsot & Veynante, n.d.)
read as follows,

\[\frac{\partial\rho}{\partial t}+\frac{\partial\rho u_{i}}{\partial x_{i}}=0 \tag{1}\]

\[\frac{\partial\rho Y_{k}}{\partial t}+\frac{\partial\big{(}\rho(u_{i}+V_{k,i})Y_{k}\big{)}}{\partial x_{i}}=\dot{\omega}_{k} \tag{2}\]

\[\frac{\partial\rho u_{j}}{\partial t}+\frac{\partial p}{\partial x_{j}}+\frac{\partial(\rho u_{i}u_{j})}{\partial x_{i}}=\frac{\partial\tau_{ij}}{\partial x_{i}}+\rho\sum_{k=1}^{N}Y_{k}f_{k,j} \tag{3}\]

\[\frac{\partial(\rho E)}{\partial t}+\frac{\partial(\rho Eu_{i}+pu_{i})}{\partial x_{i}}=\frac{\partial\big{(}u_{i}\tau_{ij}\big{)}}{\partial x_{j}}-\frac{\partial q_{i}}{\partial x_{i}}+\dot{\omega}_{T}+\rho\sum_{k=1}^{N}Y_{k}f_{k,i}\big{(}u_{i}+V_{k,i}\big{)} \tag{4}\]

\[\dot{\omega}_{T}=-\sum_{k=1}^{N}\Delta h_{f,k}^{0}\,\dot{\omega}_{k} \tag{5}\]

\[q_{i}=-k\frac{\partial T}{\partial x_{i}}+\rho\sum_{k=1}^{N}h_{k}Y_{k}V_{k,i} \tag{6}\]

\[\tau_{ij}=-\frac{2}{3}\mu\frac{\partial u_{k}}{\partial x_{k}}\delta_{ij}+\mu\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right) \tag{7}\]

\[E=\sum_{k=1}^{N}Y_{k}E_{k}(T) \tag{8}\]

\[p=\frac{RT\sum_{k=1}^{N}Y_{k}/W_{k}}{\bar{v}-b_{m}}-\frac{a_{m}}{\bar{v}(\bar{v}+b_{m})} \tag{9}\]

where \(\rho,u_{i},Y_{k},t,\dot{\omega}_{k},p,E,\dot{\omega}_{T},R,W_{k}\) and \(T\) stand for density, velocity, mass fraction, time, reaction rate of species \(k\), pressure, total energy, heat release by combustion, universal gas constant, molecular weight of species \(k\), and temperature, respectively. The diffusive velocity of species \(k\) along the \(i\) direction, \(V_{k,i}\), appears in the mass conservation equation for species \(k\), Eq. (2). The viscous stress tensor, \(\tau_{ij}\), is defined in turn in Eq. (7), where \(\mu\) is the dynamic viscosity. In the linear momentum equation, Eq. (3), the body force \(f_{k,j}\) acting on species \(k\) along the \(j\) direction is also considered. Notice as well that the energy conservation equation, Eq. (4), includes the power produced by the body forces acting on the species, \(\rho\sum_{k=1}^{N}Y_{k}f_{k,i}(u_{i}+V_{k,i})\). In addition, the energy-diffusive term \(q_{i}\), Eq. (6), comprises the heat diffusion given by Fourier's law and a second term associated with the diffusion of species with different enthalpies. To close the system involving the compressible Navier-Stokes equations, a gas equation of state is employed. In this work, the Soave-Redlich-Kwong (SRK) equation of state, Eq. (9), has been utilized, where \(\bar{v}\) is the specific volume and \(a_{m}\) and \(b_{m}\) represent the mixture attraction and repulsion terms, respectively. Transport properties, including dynamic viscosity, mass diffusivity, and thermal diffusivity, are calculated in turn using the relationships developed by Ern and Giovangigli (1995),

\[\mu=\left(\sum_{m=1}^{N}X_{m}\,\mu_{m}^{\alpha}\right)^{1/\alpha},\quad\text{with }\alpha=6 \tag{10}\]

\[k=\left(\sum_{m=1}^{N}X_{m}\,k_{m}^{\alpha}\right)^{1/\alpha},\quad\text{with }\alpha=1/4 \tag{11}\]

\[D_{m,mix}=\frac{1-Y_{m}}{\sum_{j\neq m}X_{j}/D_{mj}} \tag{12}\]

Finally, as indicated above, the combustion of the stoichiometric hydrogen/air mixtures considered here is described using a 21-step chemical kinetic mechanism, which includes 8 chemical species: \(H_{2},O_{2},H_{2}O,H,O,OH,HO_{2},H_{2}O_{2}\).

## 3 Numerical Approach

The numerical simulations are carried out here using the open-source computational tool PeleC (Marc T & Jon S Rood, n.d.), which provides a platform for combustion and turbulence-chemistry interaction research in the exascale computing era.
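As a standalone illustration of the mixture-averaged rules in Eqs. (10)-(12), the sketch below evaluates them directly; it is not PeleC code, and the species arrays it expects are placeholders rather than actual hydrogen/air transport data.

```python
import numpy as np

def mixture_viscosity(X, mu_k, alpha=6.0):
    """Eq. (10): mu = (sum_m X_m * mu_m^alpha)^(1/alpha), with alpha = 6."""
    return float(np.sum(X * mu_k**alpha) ** (1.0 / alpha))

def mixture_conductivity(X, k_k, alpha=0.25):
    """Eq. (11): same combination rule, with alpha = 1/4."""
    return float(np.sum(X * k_k**alpha) ** (1.0 / alpha))

def mixture_diffusivity(m, X, Y, D_binary):
    """Eq. (12): D_{m,mix} = (1 - Y_m) / sum_{j != m} (X_j / D_{mj})."""
    mask = np.arange(len(X)) != m
    return float((1.0 - Y[m]) / np.sum(X[mask] / D_binary[m, mask]))
```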
Since it allows focusing the computational resources on specific flow regions, a block-structured adaptive mesh refinement (AMR) method is also employed. Notice that the AMR processes performed do not obey a fixed relationship between coarser and finer grids; that is, the refinements vary over time as a function of a flag or tracer such as the temperature gradient (Zhang et al., 2019). In terms of numerical schemes, the PPM (piecewise parabolic method) extensively described in (Colella & Woodward, 1984) is used in this work. This scheme, an extension of the Godunov-type methods first introduced by van Leer (van Leer, 1979), is well suited for computing strong shocks.

### Computational domain and boundary conditions

A closed tube, 4 cm wide and 28 cm long, is numerically studied in this work. The corresponding 2D computational domain is shown in Figure 1. More specifically, following previous works (Li et al., 2021), a half tube with a symmetry axis is considered here. Therefore, the upper boundary condition (\(y=2\) cm) corresponds to a symmetry condition. In addition, to focus exclusively on the flame propagation dynamics, all the walls are considered adiabatic, non-slip, and reflecting. The ignition is performed in turn using a semi-circular pocket of hot gas at a specified temperature, \(T_{ignition}\). The radius of this hot gas pocket is determined from the theoretical study by Bychkov et al. (2007), where a dimensionless analysis for the early stages of the flame propagation process is proposed.

Figure 1: Closed tube computational domain. All walls are adiabatic, non-slip, and reflecting. Dimensions in cm.

### Mesh refinement criteria

The PeleC solver (Marc T & Jon S Rood, n.d.) includes an adaptive mesh refinement (AMR) method, which allows the dynamic creation of grid levels based on various tagging criteria such as species mass fractions or density gradients (Marc T & Jon S Rood, n.d.; Zhang et al., 2019). The associated grid refinement is performed according to criteria defined by the user, and the solver allows an arbitrary number of refinement levels. In this work, two tagging criteria, 5 levels of refinement, and a refinement ratio of 8 for each level have been employed. Following verification examples of the solver (Marc T & Jon S Rood, n.d.), \(HO_{2}\) has been used as the chemical species that characterizes the reaction zone. Thus, mesh refinements have been performed where the mass fraction of \(HO_{2}\) is above a specified value. Similarly, due to the density jump at the flame front, an additional mesh refinement criterion based on the density ratio has also been utilized, allowing an optimization of the computational cost and a reduction of the numerical errors coming from the initial discretization of the flame front (Figure 2). These criteria are expressed as follows,

\[\max\left(\left|\frac{f_{i+1,j}}{f_{i,j}}\right|,\left|\frac{f_{i,j+1}}{f_{i,j}}\right|,\left|\frac{f_{i,j}}{f_{i-1,j}}\right|,\left|\frac{f_{i,j}}{f_{i,j-1}}\right|\right)\geq u \tag{13}\]

\[f_{i,j}\geq u \tag{14}\]

where \(f_{i,j}\) is a computed parameter at a given control volume \((i,j)\) and \(u\) is the corresponding threshold. Eq. (13) is applied to the density jump at the flame front, with a configured ratio of 1.02. Finally, the \(HO_{2}\) mass fraction threshold in Eq. (14) is equal to \(2.5\times 10^{-4}\).
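A minimal standalone sketch of the tagging logic in Eqs. (13)-(14) is given below; PeleC implements equivalent criteria internally, so this illustrative version (which, via np.roll, wraps around at the array edges) is only meant to show the mechanics.

```python
import numpy as np

def tag_cells(rho, y_ho2, ratio_thresh=1.02, ho2_thresh=2.5e-4):
    """Tag cells for refinement per Eqs. (13)-(14).

    rho, y_ho2: 2-D arrays of density and HO2 mass fraction on one AMR level.
    """
    tags = y_ho2 > ho2_thresh                      # Eq. (14): reaction zone
    for axis in (0, 1):                            # x and y directions
        for shift in (-1, 1):                      # both neighbors
            ratio = np.roll(rho, shift, axis=axis) / rho
            # Eq. (13): flag if the density ratio to any neighbor is large
            tags |= np.maximum(ratio, 1.0 / ratio) >= ratio_thresh
    return tags
```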
## 4 Results and Discussions

### Chemical kinetic mechanism

To reproduce the major characteristics of air/hydrogen mixtures, previous works involving flame propagation in closed ducts mostly employed ideal one-dimensional steady-state flame models with one-step global reactions (Gamezo et al., 2007; Xiao et al., 2015). This type of model, widely validated experimentally in (Oran & Gamezo, 2007) for instance, has been mainly employed in large eddy simulation (LES) contexts (Han et al., 2019) to reduce the associated computational cost. In the present work, finite rate chemistry is accounted for through the use of a 21-step chemical kinetic mechanism with nitrogen as the main diluent (Marc T & Jon S Rood, n.d.). Accordingly, Figure 3 shows the temporal evolution of the ratios between the temperature and density of the unburned mixture and the burnt one. Notice that the variation in time of both the temperature and density ratios, whose initial values agree with the input model parameters defined in (Gamezo et al., 2007; Xiao et al., 2015), comes from the compressibility effects caused by the overall pressure and temperature increase in the closed channel. Additionally, the temporal variation of the flame thickness at the leading flame-tip is shown in Figure 4. The flame thickness is computed from the temperature profile as \(\delta_{L}^{0}=\frac{T_{2}-T_{1}}{\max\left|\partial T/\partial x\right|}\) (Poinsot & Veynante, n.d.). The rapid decrease of this parameter is due to the behavior of the density and temperature ratios, as well as to the increase of the overall pressure over time, also shown in Figure 4. These results emphasize that the employed chemical kinetic mechanism properly describes the main characteristics of the air/hydrogen reacting mixture studied here.

Figure 2: Details of the computational mesh at the start of the numerical simulations.

### Analytical and numerical results of flame propagation in early stages

The acceleration mechanism of premixed flames in open channels was previously studied experimentally by Clanet and Searby (1996). Similarly, Bychkov et al. (2007) proposed an analytical model for the acceleration of premixed flames at the early stages, accounting for cylindrical channels with slip adiabatic walls. This analytical model employs dimensionless coordinates for position, \((r,z)/R\), velocity, \((u_{r},u_{z})/U_{f}\), and time, \(\tau=U_{f}t/R\), where \(U_{f}\) is the laminar flame speed. In addition, it considers an incompressible mixture that ignites at the center of the closed wall and propagates towards the open end. In particular, the referred analytical model accurately describes the initial stages of a propagating premixed flame, that is, the initial hot pocket of gas that evolves into a finger-shaped flame until the flame skirt touches the lateral walls. In their model, Bychkov et al. (2007) established reduced times for each main stage of the flame propagation process. Accordingly, the spherical shape holds until the flame skirt moves halfway to the side wall, \(\tau_{sph}=\frac{1}{2\alpha}\), where \(\alpha=\sqrt{\theta(\theta-1)}\) and \(\theta=\rho_{u}/\rho_{b}\) is the expansion factor, defined as the density ratio of the unburned and burnt gas (Bychkov et al., 2007). The reduced time at which the flame skirt touches the lateral wall is in turn defined as

\[\tau_{wall}=\frac{1}{2\alpha}\ln\left(\frac{\theta+\alpha}{\theta-\alpha}\right). \tag{15}\]

Later, Xiao et al.
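As a small illustration of the flame-thickness definition above, the thermal thickness can be extracted from any 1-D temperature profile through the flame; the profile below is a placeholder tanh shape, not simulation output.

```python
import numpy as np

# delta_L = (T2 - T1) / max|dT/dx| on a 1-D temperature profile (placeholder).
x = np.linspace(0.0, 1.0e-2, 2000)                 # position, m
T1, T2 = 300.0, 2400.0                             # unburned / burnt temperature, K
T = T1 + 0.5 * (T2 - T1) * (1.0 + np.tanh((x - 5.0e-3) / 2.0e-4))

delta_L = (T2 - T1) / np.max(np.abs(np.gradient(T, x)))
print(f"flame thickness: {delta_L * 1e3:.3f} mm")  # ~0.4 mm for this profile
```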
(Xiao et al., 2012) suggested a scaled time for the tulip flame phenomenon,

\[\tau_{tulip}=\tau_{inv}+\tau_{wall}\,, \tag{16}\]

where \(\tau_{inv}=\lambda/\alpha\) is the period between \(\tau_{wall}\) and \(\tau_{tulip}\). Finally, the position of the leading flame-tip is defined as (Bychkov et al., 2007)

\[Z_{tip}=\frac{\theta}{4\alpha}\left[\exp(2\alpha\tau)-\exp(-2\alpha\tau)\right]=\frac{\theta}{2\alpha}\sinh(2\alpha\tau). \tag{17}\]

This theoretical model has been validated in the past using the experimental data obtained by Clanet and Searby (1996), and the agreement has been relatively good (Bychkov et al., 2007). Figure 5 and Figure 6 show, respectively, a comparison of the reduced leading flame-tip position and displacement velocity between the described theoretical model and the numerical results obtained using PeleC. As noticed from these figures, initially the analytical and numerical results show good agreement. However, since the theoretical model assumes constant pressure, incompressible flow, and a constant burning velocity (Xiao et al., 2012), it tends to overestimate the velocity and, consequently, the position of the leading flame-tip (Figure 5). Thus, the analytical model is only valid for the early stages, where intrinsic instabilities, pressure waves, and turbulence do not play a key role in the flame propagation process. From Figure 6, due to the interaction of the flame front with the reflected pressure wave originating at the right-end wall, at a reduced time of \(\tau=0.147\) the displacement velocity computed numerically differs greatly from the analytical one. This is an expected result because pressure wave interactions are not considered in the analytical model proposed by Bychkov et al. (2007). It is worth noticing here that the combustion process studied in this work generates weak pressure waves that propagate out ahead of the flame front and are reflected by the tube lateral walls. However, these initial waves are not strong enough to greatly alter the flame propagation dynamics (Xiao et al., 2015). The first pressure wave reflected from the right end wall travels back through the channel and interacts with the flame front at \(1.28\ ms\), as shown in Figure 7. This reflected wave, unlike the initial ones coming from the lateral walls, considerably reduces the displacement velocity of the leading flame-tip, from 30 m/s to 15 m/s (Figure 6). Once the reflection reaches the flame front, the pressure wave passes through the reaction zone and strongly increases the local vorticity, mainly at the flame front. At the same time, the flame front generates a weak reflected wave. Finally, the first reflected wave is sent back by the left-end wall of the tube into the unburned mixture and, as shown in Figure 7 at 1.29 ms, the flame front immediately reflects a weak pressure wave. Figure 8 highlights the influence of intrinsic instabilities such as the Darrieus-Landau (DL) instability on flame dynamics (Clavin & Searby, 2016). In premixed propagating flames, the difference in density between the fresh and burnt gases is responsible for the piston effect that pushes the fresh gas away towards the right-end wall. Figure 4 shows in particular that the expansion ratio varies according to the overall pressure. Moreover, there are some local pressure variations due to the interaction between the flame front and the reflected waves. Additionally, there are nonlocal hydrodynamic and thermo-diffusive instabilities which are intrinsic to the flame front.
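The reduced times and the tip trajectory of Eqs. (15)-(17) are simple to evaluate; a minimal sketch follows, where the expansion factor \(\theta=8\) is a placeholder typical of stoichiometric hydrogen/air rather than the exact simulation value.

```python
import numpy as np

theta = 8.0                                        # placeholder expansion factor
alpha = np.sqrt(theta * (theta - 1.0))

tau_sph = 1.0 / (2.0 * alpha)                      # end of the spherical stage
tau_wall = np.log((theta + alpha) / (theta - alpha)) / (2.0 * alpha)  # Eq. (15)

tau = np.linspace(0.0, tau_wall, 50)
z_tip = theta / (2.0 * alpha) * np.sinh(2.0 * alpha * tau)            # Eq. (17)

print(f"tau_sph = {tau_sph:.3f}, tau_wall = {tau_wall:.3f}")
print(f"scaled tip position when the skirt touches the wall: {z_tip[-1]:.2f}")
```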
For instance, the flow streamlines are deflected by the curvature of the flame front (DL instabilities). Thus, as shown in Figure 8, the vorticity in the burnt gas is concentrated mainly at the back of the reaction zone, where the curvature is the highest. It is worth noticing that the maximum value of vorticity is concentrated in the reaction zone (thermal-diffusive instabilities). This perturbation increases over time and generates wrinkles at the flame front, as shown in Figure 10.

### Overall evolution and shape change of the flame

Figure 9 shows Schlieren photographs (left) and numerical predictions (right) of flame temperatures at representative times. The flame propagation is usually divided into five stages: (i) spherical flame, (ii) finger-shaped flame, (iii) flame skirt touching the lateral walls, (iv) tulip flame (TF), and (v) distorted tulip flame (DTF). After the flame ignition, the flame front expands uniformly and, due to the combustion process, pressure waves that travel out ahead of the flame front are generated. These pressure waves reach the lateral walls and are reflected, creating a series of crisscrossed lines representing local pressure increases. More specifically, in the first stage of the flame propagation process, the flame front expands without any effects from the lateral walls. Then, at \(t=0.52\ ms\), when the flame front reaches halfway to the tube lateral wall (Bychkov et al., 2007), the transition from spherical to finger shape takes place. In the second stage, due to the enclosure of the lateral walls (Kurdyumov & Matalon, 2015), the flame front is elongated in the axial direction and the flame surface area increases considerably; consequently, so does the displacement velocity. The flame front first touches the tube side wall at \(t=2.34\ ms\). This flame-wall interaction generates pressure waves that travel along the tube and interact with the flame front. The displacement velocity of the leading tip is consequently reduced considerably due to the generated pressure waves and the surface reduction. During this third stage, there is a constant production of pressure waves by the interaction of the flame skirt with the lateral walls. As shown in Figure 9, the flattened flame front is formed at \(t=3.3\ ms\); thereafter, the flame front inversion takes place. This last phenomenon occurs mainly due to both the reverse flow generated at the central region and the hydrodynamic effects of the lateral pressure waves. The tulip flame, characterized by a well-pronounced cusp pointing to the right end wall, is formed at \(t=4.13\ ms\). After the formation of the TF, secondary cusps are formed at the flame front near the lateral walls, which move towards the central axis. The Schlieren images show the referred motion inside the burnt gas after the formation of the flattened flame front (\(t=3.3\ ms\)). In the central region, the flow travels in the direction opposite to the leading flame-tip and forms a mushroom structure due to compression of the lighter burnt gas.

### Generation of pressure waves

Following previous works (Xiao et al., 2015, 2017a), the generation of pressure waves and their interaction with the flame front in premixed flames has also been analyzed in this work. Accordingly, Figure 10 shows temperature (left) and pressure (right) fields at the third stage of the flame propagation process, the finger-shaped flame, at which the flame front touches the lateral walls and generates pressure waves.
Indeed, after the flame skirt reaches the side wall at \(t=2.34\ ms\), a semi-circular rarefaction wave that propagates back and forth through the channel is generated. This rarefaction wave reaches the leading flame front at \(t=2.44\ ms\), which locally increases the pressure and reduces the displacement velocity.

Figure 9: Sequence of Schlieren photographs (left) and numerical predictions (right) of temperature fields showing the evolution of premixed air/hydrogen mixtures in a closed channel of 4 cm x 28 cm.

The contact between the flame skirt and the side walls also generates a reverse flow in the burnt region (Figure 15). Additionally, the flame surface area reduction considerably reduces the displacement velocity (Xiao et al., 2015). When the flame front elongates due to the enclosure of the lateral walls, local wrinkles along the flame front surface are formed. Specifically, at the time the flame skirt reaches the lateral wall, three wrinkles generate the rarefaction waves, Figure 10 (\(t=2.34\ ms\)). The second interaction between the flame front and the lateral walls generates a stronger rarefaction, Figure 10 (\(t=2.72\ ms\)), which, similar to the first rarefaction wave, decelerates the leading flame tip more intensively. Finally, there is a third flame-wall interaction that generates a weak wave. This is produced near the time of formation of the flattened flame and does not considerably impact the flame propagation dynamics.

### Flame propagation dynamics

In this section, the flame propagation dynamics is discussed in terms of the leading flame-tip, which is defined as the point of the flame front nearest to the right-end wall. Thus, unlike previous works (X. Li et al., 2021; Xiao et al., 2013, 2017b), where the leading tip initially moves along the channel centerline, the y-axis position of the leading tip in this work moves slightly away from the centerline. This occurs because of the interaction between the flame front and the pressure waves present in the flow (Figure 7), i.e., the displacement velocity is temporarily reduced due to the pressure waves that strike the leading flame-tip. Consequently, the position of the leading flame-tip moves down. After the flame inversion, the flame front near the lateral walls moves ahead of the one at the centerline; thus, the leading flame-tip moves near the sidewalls, where the cusps are formed. To illustrate this point, Figure 12 shows the temporal evolution of both the position of the leading flame-tip and the displacement velocity. Notice that the flame position here is defined as the maximum distance of the reaction zone from the left end wall, and the origin of the coordinate system is set at the center of the left end wall. The displacement velocity is thus defined as the variation of the flame position over time, computed by a centered difference. As shown in Figure 14, and in agreement with the results in (Xiao et al., 2013), it is worth emphasizing that the flame propagation velocity fluctuates greatly in time during the combustion process. These local accelerations and decelerations lead to the formation of wrinkles at the flame front, and these perturbations are amplified along the flame development (Clavin & Searby, 2016; Xiao et al., 2013). However, the overall velocity profile is described by a well-defined curve that depicts the development of the flame. Initially, due to the flame front surface increase, the flame accelerates rapidly until the formation of the finger-shaped flame.
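The displacement-velocity estimate described above amounts to differentiating the tip-position history; a minimal sketch, with a placeholder trajectory in place of the actual simulation output:

```python
import numpy as np

t = np.linspace(0.0, 5.0e-3, 500)        # time, s
x_tip = 0.02 * np.sinh(800.0 * t)        # placeholder tip-position history, m

# np.gradient uses centered differences in the interior (one-sided at the ends)
v_tip = np.gradient(x_tip, t)            # displacement velocity, m/s
```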
At \(t=1.28\ ms\), the first reflected wave from the end wall reaches the flame front and strongly decreases the displacement velocity (Figure 14). In contrast, there is a temporary strong increase of the pressure growth. Unlike in previous works (Xiao et al., 2012), this first reflected wave plays a key role in the initial development of the flame since it retards the time at which the flame skirt touches the lateral walls. The flame skirt touches the side walls at \(t=2.34\ ms\), and this flame-wall interaction generates a rarefaction wave that reaches the flame front. These waves and the surface area reduction govern the dynamics of the flattened flame. A second rarefaction wave is generated at \(t=2.72\ ms\), and this reinforces and extends the flame deceleration until \(t=3.03\ ms\). At this time, the flattened flame is clearly visible (Figure 9) and the inversion of the flame starts, that is, the flame front near the side walls moves ahead of the flame at the centerline. According to Markstein (1953), flame inversion is due to the deceleration of the flame front and the interaction of pressure waves.

Figure 10: Temperature (left) and pressure (right) contours, and pressure wave generation.

In the present work, no strong pressure waves are acting in the closed tube; however, the pressure waves generated during the development of the TF are comparable. After the inversion of the flame front, and in concordance with the formation of distorted tulip flames, the velocity profile changes periodically. Moreover, the pressure growth accompanies the displacement velocity, i.e., a positive slope of the pressure growth is associated with an increase in displacement velocity and, conversely, a negative slope indicates a decrease in velocity. At \(t=4.33\ ms\), the tulip flame is already formed and the formation of the DTF, characterized by an increase in velocity, starts. The formation of the first DTF takes about 1.11 ms, whereas the second one is formed in a shorter period, 0.98 ms. These results are in agreement with the experimental ones discussed in (Xiao et al., 2013). Further discussions about the relationship between pressure oscillations and velocity in closed tubes are included in (Gonzalez, 1996). Figure 13 shows the temporal evolution of the vorticity and the pressure growth at the leading flame front. It is noticeable from this figure that at \(t=0.52\ ms\), i.e., when the flame shape starts to elongate along the x-axis, the vorticity undergoes an abrupt increase. This occurs because the associated gas expansion leads to the production of vorticity. More specifically, the burnt gas behind the flame front is always rotational and depends on the radius of curvature. Additionally, the streamlines at the flame front are deflected due to the conservation of tangential momentum and the increase in the normal velocity due to mass conservation, the Darrieus-Landau (DL) instability. The referred deflection becomes stronger when the curvature radius increases. This phenomenon is explained in detail in (Clavin and Searby, 2016). At \(t=1.30\ ms\), the first reflected wave strikes the flame front and amplifies the vorticity. Additionally, a wrinkle is created at the flame front, especially at the most curved region. Similar to the velocity, the vorticity is coupled with the pressure growth at later flame stages. Hence, the formation of TF and DTF is usually associated with an increase in velocity and vorticity and a reduction in pressure growth.
The formation of wrinkles or cellular deformation is akin to Rayleigh-Taylor (RT) instabilities (Gonzalez, 1996; Xiao et al., 2013). According to Xiao et al. (2013), RT instabilities are responsible for DTF formation at later stages and depend on the aspect ratio \(\alpha\), defined as the ratio between the tube length and radius. Notice as well that RT instabilities are generated by the misalignment of pressure and density gradients (Xiao et al., 2017). Finally, Figure 11 shows the temporal evolution of the instantaneous pressure growth at the flame front and the pressure. This figure, unlike the overall pressure growth shown in Figure 14, where a Savitzky-Golay filter (Press and Teukolsky, 1990) is applied to reduce oscillations, highlights the coupling between the flame propagation and the production of pressure waves. Initially, the pressure does not change abruptly. After the interaction of the reflected waves with the flame front, there is an increase in the instantaneous pressure growth. Nevertheless, this pressure growth increase does not affect the propagation dynamics of the leading flame-tip. After the flame inversion, the amplitude of the pressure growth increases considerably, which agrees with previous numerical findings (Xiao et al., 2015).

### Formation of tulip (TF) and distorted tulip (DTF) flames

The formation of tulip flames involves the inversion of the flame front from a finger shape to a concave one with a cusp pointing to the right end wall (Ponizy et al., 2014). As highlighted above, there are several factors that affect the propagation dynamics of premixed flames. Figure 15 shows the evolution of the streamlines near the flame front. After the flame skirt touches the tube lateral walls, the fluid inside the burnt region moves in two opposite directions, Figure 15 (\(t=2.53\ ms\)): the fluid near the flame front expands towards the fresh gas, whereas the fluid near the left end wall moves in the opposite direction. As shown in Figure 15 at \(t=2.60\ ms\), this reverse flow in the burnt gas generates a single vortex near the side walls, which moves towards the flame front. The referred vortex reaches the flame front at \(t=2.90\ ms\), that is, when the flame front starts to flatten. The phenomenon just described is crucial for tulip flame formation. In addition, at \(t=3.17\ ms\), the vortex under discussion generates a reverse flow in the fresh gas region. The influence of the pressure gradient on this reverse flow was observed by Kurdyumov and Matalon (2015) in their study of open narrow channels. The vortex formed creates a strong coupling between the burnt and the fresh gases. Thus, reverse flows dominate the flame propagation dynamics at the centerline. Indeed, according to Xiao et al. (2015), reverse flows are responsible for both the flame inversion and the tulip flame formation. Figure 16 shows in turn velocity distributions and streamlines during the formation of the TF and DTF. It is first observed from this figure that the vortex generated when the flame skirt touches the walls accompanies the flame propagation, and the magnitude of the velocity at the vortex regions changes according to the flame development stage. Thus, at \(t=4.25\ ms\), when the tulip flame is being formed, the flow velocity near the lateral wall is around \(3500\) cm/s, the maximum velocity in the tube, and the burnt gas pushes the flame in the lower region.
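The smoothing step mentioned above is a standard one-liner; the sketch below applies a Savitzky-Golay filter to a noisy placeholder signal (the window length and polynomial order are illustrative choices, not those of the paper):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 6.0e-3, 1200)                                # time, s
dpdt_raw = np.sin(2.0e3 * t) + 0.3 * rng.standard_normal(t.size)  # placeholder

# Smooth the raw pressure-growth signal before plotting (cf. Figure 14)
dpdt_smooth = savgol_filter(dpdt_raw, window_length=51, polyorder=3)
```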
At \(t=4.31\ ms\), the velocity at the side wall is reduced considerably, to \(2000\) cm/s, and, at \(t=4.39\ ms\), this flow velocity is about \(750\) cm/s. Consequently, in agreement with Figure 14, the tulip flame is fully formed when the displacement velocity of the leading flame tip is completely decelerated. In contrast, the flow velocity at the centerline increases when the leading flame tip decelerates and decreases when the flow velocity near the lateral wall increases. These results agree with previous studies such as the one by Matalon and Metzener (2001), which suggests that TF formation is due to the large vortex formed at the early stages of flame development, fostered by the confinement of the lateral walls. The formation of the first DTF follows a mechanism similar to the one described for the TF. The interaction of the reverse flow, vortex, and flame front produces the conditions required for its formation. After the sudden deceleration of the leading flame-tip and the increase in reverse flow velocity in the burnt region, a strong burnt-gas flux is created at the bottom of the channel that pushes the fresh gas toward the right end wall, Figure 16 (\(t=4.72\ ms\)). As the velocity difference between the burnt and fresh gas fluxes increases, the vortex generated contributes to the growth of the first DTF. Additionally, a vortex is generated behind the flame front, which interacts with its surroundings and produces wrinkles along the flame surface, Figure 16 (\(t=4.82\ ms\)). These wrinkles become more intense as the displacement velocity increases. However, it is observed that the cusp formed at the bottom wall is the one that governs the flame propagation dynamics. Finally, at \(t=4.93\ ms\), compared to the flow at the side wall, the reverse flow increases its magnitude considerably, so the displacement velocity decelerates. This mechanism repeats and forms a second cusp (Figure 14). The oscillation period of the velocity profile at later flame development stages defines the formation of DTF (Xiao et al., 2013). Notice that for the formation of the first DTF here, the oscillation period was \(1.10\ ms\). Figure 15: Streamlines and velocity magnitude distribution during a flame inversion. ## 5 Conclusions The propagation of premixed flames in closed channels was numerically studied in this work. More specifically, to gain insight into the associated reactive flow dynamics, the formation of tulip and distorted tulip flames was analyzed. All numerical simulations were carried out with the open-source computational tool PeleC. In particular, a fully compressible reactive 2D Navier-Stokes equation system, coupled with a 21-step chemical kinetic mechanism, was solved. The numerical simulations were performed using the piecewise parabolic method (PPM) as the numerical scheme and employing adaptive mesh refinement (AMR). The interaction between pressure waves, vorticity, and flame fronts was particularly evaluated. The results obtained here highlight flame dynamics similar to those discussed in prior experimental and numerical studies: the initial spherical flame first evolves into an elongated one, and then into tulip and distorted tulip flames. 
Nevertheless, unlike other numerical results obtained in the past, where the flame front is modeled as a hydrodynamic discontinuity (thickened flame model), the ones obtained in this work show the formation of wrinkles (usually associated with DL and thermo-diffusive instabilities), especially at the initial stages of flame development and where the radius of curvature of the flame front is larger. Notice that the streamline deflection caused by the wrinkles tends to intensify the initial flame deformation. Likewise, it has also been observed that the pressure waves reflected by the channel walls interact with the flame front, locally increasing the vorticity; however, the propagation dynamics of the leading flame-tip is not affected by these pressure waves. In addition, the obtained results illustrate the formation of a relatively large-scale vortex after the flame skirt touches the channel side walls. This vortex then expands and travels towards the flame front. As a result, a reverse flow is generated at the channel centerline whose velocity magnitude increases over time, which causes the displacement velocity of the leading flame-tip to decrease sharply. This mechanism creates favorable conditions to produce a flame inversion. Furthermore, as a result of the pressure gradient, a vortex ahead of the flame front is also generated, which increases the reverse flow velocity. Figure 16: Streamlines and velocity magnitude distribution near the flame front during the formation of a DTF. Thus, the reverse flow is the main mechanism responsible for the flame inversion. However, another mechanism also fosters TF formation, namely Rayleigh-Taylor instabilities. These instabilities are produced by the pressure waves created by the interaction between the flame skirt and the channel lateral walls. DTF formation, in turn, initiates with a strong reverse advection motion at the center of the channel due to the large vortex. Additionally, a strong vortex motion is also produced behind the flame front, which results in a temporary positive displacement in the burnt region and the formation of multiple wrinkles along the flame surface. This vortex is dispelled by the pressure waves reflected by the lateral walls. This formation mechanism is repeated to form a second DTF, but with a different formation period. Overall, the propagation dynamics of premixed flames are mainly influenced by hydrodynamic and thermo-diffusive instabilities, and by the interaction of reflected pressure waves with the curved flame front. This interaction predominates in the formation of TF and DTF, since it triggers unstable regimes, such as those characterized by the presence of Rayleigh-Taylor instabilities. ## 6 Acknowledgements This work has been supported by the US Army Research Laboratory under Research Grant No. W911NF-22-1-0275. Luis Bravo was supported by the US Army Research Laboratory 6.1 Basic research program in propulsion sciences.
2309.13029
Memory-augmented conformer for improved end-to-end long-form ASR
Conformers have recently been proposed as a promising modelling approach for automatic speech recognition (ASR), outperforming recurrent neural network-based approaches and transformers. Nevertheless, in general, the performance of these end-to-end models, especially attention-based models, is particularly degraded in the case of long utterances. To address this limitation, we propose adding a fully-differentiable memory-augmented neural network between the encoder and decoder of a conformer. This external memory can enrich the generalization for longer utterances since it allows the system to store and retrieve more information recurrently. Notably, we explore the neural Turing machine (NTM) that results in our proposed Conformer-NTM model architecture for ASR. Experimental results using Librispeech train-clean-100 and train-960 sets show that the proposed system outperforms the baseline conformer without memory for long utterances.
Carlos Carvalho, Alberto Abad
2023-09-22T17:44:58Z
http://arxiv.org/abs/2309.13029v1
# Memory-augmented conformer for improved end-to-end long-form ASR ###### Abstract Conformers have recently been proposed as a promising modelling approach for automatic speech recognition (ASR), outperforming recurrent neural network-based approaches and transformers. Nevertheless, in general, the performance of these end-to-end models, especially attention-based models, is particularly degraded in the case of long utterances. To address this limitation, we propose adding a fully-differentiable memory-augmented neural network between the encoder and decoder of a conformer. This external memory can enrich the generalization for longer utterances since it allows the system to store and retrieve more information recurrently. Notably, we explore the neural Turing machine (NTM) that results in our proposed Conformer-NTM model architecture for ASR. Experimental results using Librispeech train-clean-100 and train-960 sets show that the proposed system outperforms the baseline conformer without memory for long utterances. Carlos Carvalho\({}^{1,2}\), Alberto Abad\({}^{1,2}\)\({}^{1}\)INESC-ID, Lisbon, Portugal \({}^{2}\)Instituto Superior Tecnico, Universidade de Lisboa, Portugal [email protected], [email protected] **Index Terms**: conformer, end-to-end speech recognition, neural Turing machine, memory-augmented neural networks, long-form speech ## 1 Introduction Traditional speech recognition systems rely on sophisticated and individual components, including acoustic, pronunciation and language models (LMs) [1]. In contrast, end-to-end (E2E) speech recognition systems rely on a single deep neural network that learns to map an input sequence of features or raw audio to the corresponding labels; usually, characters or subwords [2, 3]. Because of this simplicity and, in some situations, superior performance over traditional systems, E2E systems have become a favoured approach for automatic speech recognition (ASR) [4, 5]. Some widely used E2E approaches are based on connectionist temporal classification (CTC) [6], recurrent neural network transducers (RNN-T) [7] and attention-based encoder-decoders (AEDs) [4]. The transformer [8] architecture is an AED-based system that uses self-attention to capture long-range interactions. Nevertheless, a transformer has more difficulty extracting fine-grained local feature patterns than convolutional neural networks (CNNs) [9]. For this reason, conformers [9] have been proposed as an approach for E2E ASR, which outperform RNN-based approaches and transformers since they can model the global and local dependencies of an audio sequence by combining CNNs with transformers. Nonetheless, E2E ASR methods, particularly AED-based procedures, are known to degrade in performance on long utterances when trained on short utterances [10, 11]. Moreover, long-form transcription is crucial for creating continuous transcriptions of real-world scenarios, like lectures, meetings, and video captions (e.g. YouTube). The problem of long-form speech has been addressed in some previous works by simulating training on longer utterances [12, 13]. For example, [12] proposed a method where the transformer or conformer accepts multiple consecutive utterances simultaneously and predicts an output for the last utterance only. This procedure is repeated with a sliding window using one-utterance shifts to recognise the whole recording. 
Another solution is to segment the audio in advance using a separate voice activity detector (VAD) based approach [14], or an E2E model that learns to predict segment boundaries [15]. The E2E experiment proposed in [15] relies on human-created heuristics to insert end-of-segment tokens in utterances at training time so that the model can learn to predict those tokens. Only a few works try to improve the generalisation of E2E ASR systems to long speech without requiring some pre-processing stage or changing how the model trains and decodes compared to traditional E2E ASR. For instance, [16] proposes the replacement of self-attention with fast attention, which improves the model generalisation ability for long utterances. In contrast to the works mentioned above, we hypothesise that adding a memory-augmented neural network (MANN) in between the encoder and decoder module - like a neural Turing machine (NTM) [17] - may be a convenient method to enrich the learning capacity of a conformer, contributing to increasing the network's generalisation for longer utterances without the need for any _ad hoc_ pre-processing or optimisation in training or decoding. Indeed, NTMs have demonstrated superior performance over long short-term memory cells (LSTMs) in several learning tasks. Moreover, to our knowledge, few works have investigated the use of MANNs for the E2E ASR task. In particular, NTM has been used to perform unsupervised speaker adaptation in [18] by storing i-vectors [19] and then reading from the memory to combine the resulting read vector with the hidden vectors of the encoder of the listen, attend and spell (LAS) architecture [4]. However, in that work, the write operation of the NTM was not explored, therefore not taking advantage of the full potential of the external memory. In this work, we propose incorporating a MANN based on NTM to improve the generalisation of the offline E2E ASR system to long sentences. We refer to this newly proposed ASR architecture as Conformer-NTM. This proposed model and the state-of-the-art (SOTA) conformer baseline (without memory) are trained on Librispeech [20] 100 hours clean and 960 hours. Then, we use the test clean and other partitions to evaluate the overall performance of all models. We follow this with an ablation study, testing the models with different utterance lengths (long and very long). Our results show that, with the external memory, the Conformer-NTM model generalises better to longer utterances. Notice that while the focus of this work is on offline ASR settings, the proposed MANN is also expected to complement streaming ASR approaches that address the long-form ASR problem [10, 21, 22]. The rest of the paper is organised as follows. Section 2 summarises the MANN system. Section 3 introduces the proposed approach. In Section 4, we describe the experimental evaluation and obtained results, and in Section 5, we provide some concluding remarks and possible directions for future work. ## 2 Memory-augmented neural networks MANNs refer to a class of neural networks equipped with external memory that can help improve the learning capability of the neural network [17, 23, 24]. Examples of MANNs are the NTM [17], described below, and the differentiable neural computer (DNC) [23]. ### NTM model NTMs [17] can read and write arbitrary content to memory cells without supervision by using a soft-attention mechanism. Moreover, they are fully differentiable, which makes it possible to train them in an E2E fashion. 
The overall architecture of the NTM is composed of a controller network, e.g., a deep neural network or RNN, that receives inputs, reads vectors and emits outputs in response. The controller reads from and writes to an external memory matrix via a set of parallel read and write heads. Additionally, the controller network emits a set of vectors and values for each individual read (e.g., \(\mathbf{k}_{t}\), \(\beta_{t}\), \(g_{t}\), \(\mathbf{s}_{t}\) and \(\gamma_{t}\)) and write head (e.g., \(\mathbf{a}_{t},\mathbf{e}_{t}\), \(\mathbf{k}_{t}\), \(\beta_{t}\), \(g_{t}\), \(\mathbf{s}_{t}\) and \(\gamma_{t}\)), detailed below, to help in the reading and writing operations. The memory is a matrix \(\mathbf{M}\in R^{N\times W}\), where \(N\) is the number of memory locations (rows) and \(W\) is the vector size of each memory location (columns). The read operation at time step \(t\) is the weighted sum of all memory locations, i.e., \[\mathbf{r}_{t}=\sum_{i}\mathbf{w}_{t}^{read}(i)\mathbf{M}_{t}(i), \tag{1}\] where \(\mathbf{w}_{t}^{read}(i)\) is the weight associated with row \(i\), and \(\mathbf{M}_{t}(i)\) is the memory vector from row \(i\). The weights sum to one. The write operation at time step \(t\) contains two main steps. The first step is an erase phase, i.e., \[\mathbf{M}_{t}(i)^{{}^{\prime}}=\mathbf{M}_{t-1}(i)[\mathbf{1}-\mathbf{w}_{t} ^{write}(i)\mathbf{e}_{t}], \tag{2}\] where \(\mathbf{1}\) is a vector of ones and \(\mathbf{e}_{t}\in R^{W}\) is an erase vector. The second step is an add phase: \[\mathbf{M}_{t}(i)=\mathbf{M}_{t}(i)^{{}^{\prime}}+\mathbf{w}_{t}^{write}(i) \mathbf{a}_{t}, \tag{3}\] where \(\mathbf{a}_{t}\in R^{W}\) is an add vector. The weights mentioned above for reading and writing are computed using the same addressing mechanism in parallel for each of the two heads. For this reason, we explain the addressing process in general terms. Overall, the addressing mechanism combines two main addressing mechanisms: _content-based addressing_ and _location-based addressing_. The first step to computing the weights is to measure the similarity between \(\mathbf{k}_{t}\), outputted by the controller, and each entry of the memory, \(\mathbf{M}_{t}(i)\), by using cosine similarity: \[K[\mathbf{u},\mathbf{v}]=\frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|_{2} \left\|\mathbf{v}\right\|_{2}}. \tag{4}\] By applying cosine similarity and softmax over the rows, \(\mathbf{M}_{t}(i)\), the weights for the content-based addressing mechanism are computed as follows: \[\mathbf{w}_{t}^{c}(i)=softmax(\beta_{t}K[\mathbf{k}_{t},\mathbf{M}_{t}])_{i}, \tag{5}\] where \(\beta_{t}\), outputted by the controller, is a positive scalar parameter that determines how concentrated the content weight vector should be. The following three steps are focused on location-based addressing. The second stage creates \(\mathbf{w}_{t}^{g}\) by interpolating \(\mathbf{w}_{t}^{c}\) with the weight vector from the last time step, \(\mathbf{w}_{t-1}\), using \(g_{t}\in(0,1)\), also outputted by the controller. This interpolation operation allows the system to learn when to use or ignore content-based addressing. Next, for the focus of the weights to be shifted to different rows, the controller emits a shift weighting vector, \(\mathbf{s}_{t}\), that defines a normalised distribution over the allowed integer shifts. Each element in this vector gives the degree to which different integer shifts can occur. 
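For illustration, the following is a minimal NumPy sketch of the read, write, and content-based addressing operations in Eqs. (1)-(5); the memory size, key, and other controller outputs are placeholders rather than values from this work, and the circular shift and sharpening steps described next are omitted.

```python
import numpy as np

def content_addressing(M, k, beta):
    # Eqs. (4)-(5): cosine similarity between key k and every memory row,
    # scaled by beta and normalised with a softmax over the rows.
    sim = (M @ k) / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    e = np.exp(beta * sim - np.max(beta * sim))
    return e / e.sum()

def read(M, w):
    # Eq. (1): weighted sum of memory rows.
    return w @ M

def write(M, w, erase, add):
    # Eqs. (2)-(3): erase phase followed by add phase.
    M = M * (1.0 - np.outer(w, erase))
    return M + np.outer(w, add)

N, W = 128, 32                      # illustrative memory size
M = np.random.rand(N, W)
k, beta = np.random.rand(W), 2.0    # controller outputs (placeholders)
w_c = content_addressing(M, k, beta)
g, w_prev = 0.7, np.ones(N) / N     # interpolation gate and previous weights
w_g = g * w_c + (1.0 - g) * w_prev  # gated weights, before shift and sharpening
M = write(M, w_g, np.random.rand(W), np.random.rand(W))
r = read(M, w_g)                    # read vector passed on to later layers
```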
The actual shift occurs with a circular convolution: \[\mathbf{w}_{t}^{*}(i)=\sum_{j=0}^{N-1}\mathbf{w}_{t}^{g}(j)\mathbf{s}_{t}(i-j). \tag{6}\] Next, there is a sharpening parameter \(\gamma_{t}\geq 1\), outputted by the controller, which controls the sharpness of the vector weights: \[\mathbf{w}_{t}(i)=softmax(\mathbf{w}_{t}^{*\gamma_{t}})_{i}. \tag{7}\] Finally, we have a weight vector, \(\mathbf{w}_{t}\), that determines where to read from and write to in memory, depending on the specific head. ## 3 E2E ASR with a MANN The main structure of our memory-based E2E ASR network is depicted in Figure 1. In this architecture, we add the proposed external memory in between the encoder and decoder of the Conformer system. At first, the input goes through the encoder module, which contains: a convolution subsampling module, a linear projection layer, a relative positional encoding module, N conformer blocks and, at last, a layer norm. Then, the output of the layer norm, \(\mathbf{h}\), goes through the external memory system. For each time step \(t\), the vector \(\mathbf{h_{t}}\) is transformed with a feedforward layer for each write and read head so that it is possible to obtain all vectors and values required for the reading and writing operations, as detailed in Section 2.1. Next, the attention weights for the read and write head are computed via the addressing mechanism as described in Section 2.1. Writing and reading occur at each time step, allowing the system to memorise long-term acoustic dependencies recurrently. After the read operation, the read vector is concatenated with the encoder output \(\mathbf{h}_{t}\). Then, this concatenated vector is transformed into a new output vector with the same size as \(\mathbf{h}_{t}\) by going through a feedforward layer. Next, the sequence of outputs of the memory goes into the decoder. In addition, the decoder also receives the transcription outputs shifted right as input, which go through an embedding layer and a positional encoding module. The decoder contains D transformer blocks. At last, the E2E ASR system combined with the external memory learns in a fully-differentiable way by utilising the joint CTC-attention objective [25]. ## 4 Experiments ### Data and Experimental Setup Our experiments use the Librispeech corpus exclusively. We train with the train-clean-100 subset from Librispeech, with 28539 utterances and 585 speakers, and the train-960 set containing 281241 utterances and 5466 speakers. We use the dev clean and dev other sets, with 5567 utterances and 188 speakers. Finally, we report word-error-rate (WER) and character-error-rate (CER) results for both test clean and test other, with 2620/2939 utterances and 87/90 speakers, respectively. We also perform an ablation study varying the distribution of the test data conditions, especially considering the utterance length (long and very long). For the long setting, we created subsets from the test clean and the test other containing only the longest 100 utterances with the script subset_data_dir.sh, from Kaldi [26]. For the very long set, we used the original time segmentation information of the Librispeech corpus and concatenated the continuous segments present in the original test clean and the test other sets. Then, we selected the 100 longest concatenated segments using the same Kaldi script mentioned above, resulting in two new subsets: concat-clean and concat-other. 
Information about the average, minimum and maximum length of the utterances of these subsets is presented in Table 1. ESPnet2 [27] is the toolkit we use to implement and investigate our proposed methods. Also, we use a default ESPnet conformer recipe from Librispeech to run all our setups. The conformer baseline and the conformer-NTM models were trained on 2 NVIDIA GeForce RTX 3080 GPUs. For the 100-hour setup, all models were trained for 80 epochs, while for the 960-hour setting, the models were trained for 50 epochs. In both settings, an average of the ten best checkpoints on the dev set was used as the final model. Notice that the Conformer-NTM requires a longer training time compared to the baseline. The conformer baseline model extracts 80-dimensional FBANK acoustic features on the fly, followed by SpecAugment [28] and global mean and variance normalization. The raw input data is perturbed with speed factors of 0.9, 1.0 and 1.1. Additionally, the model uses as targets 5000 byte-pair encoding (BPE) [29] unigram units learned from the training data. The encoder is composed of one Conv2D module followed by 12 conformer blocks. The Conv2D includes two 2D-CNN layers with 256 channels, a kernel size of 3 x 3, a stride of size two and a ReLU activation. The decoder is composed of six transformer blocks. Both the encoder and decoder have four attention heads of dimension 256. The hidden dimension of the feedforward layer for the encoder and decoder is 256, while the output dimensions are 1024 and 2048, respectively. Also, the Adam optimizer was used with a learning rate of 0.002, a weight decay of 0.000001 and 15000 warmup steps. The number of batch bins for both training setups (train-clean-100 and train-960 hours) was 16 million. Also, the CTC weight was set to 0.3 for training and testing time. Regarding the external memories, we first experimented with a different number of rows (128, 256, 512) and columns (5, 8, 10, 40) for the NTM using the train-clean-100 hours training setup. The best configuration parameters discovered were 256 rows and ten columns. We also experimented with a different MANN, the DNC [23], which is the follow-up of the NTM system, but for the same chosen parameters, the NTM gave the best results in preliminary experiments. \begin{table} \begin{tabular}{c c|c c c} \hline **Set/Subset** & **Category** & Mean & Min. & Max. \\ \hline \hline train-clean-100 & full set & 12.69 & 1.41 & 24.53 \\ train-960 & full set & 12.30 & 0.83 & 29.74 \\ test clean & full set & 7.42 & 1.29 & 34.96 \\ test other & full set & 6.54 & 1.25 & 34.51 \\ \hline test clean & long - 100 & 23.59 & 19.86 & 34.96 \\ test other & long - 100 & 21.11 & 17.13 & 34.51 \\ \hline concat-clean & very long - 100 & 33.41 & 25.39 & 68.14 \\ concat-other & very long - 100 & 30.76 & 21.77 & 68.98 \\ \hline \end{tabular} \end{table} Table 1: _Mean, minimum (Min.) and maximum (Max.) duration in seconds of the training sets, the test clean, test other, and the subsets long - 100 and very long - 100._ Figure 1: _Conformer E2E ASR system with an external fully-differentiable memory._ At last, we used the KenLM toolkit [30] with Kneser-Ney smoothing to train a 6-gram LM. For that, we used the Librispeech LM corpus from Kaldi and applied the BPE model to transform all words into sub-word units. The beam size is 60, and the weight for the 6-gram LM is 1. Furthermore, we report results with and without LM. 
### Results Table 2 compares the performance of our proposed architecture, Conformer-NTM, versus the ESPnet conformer baseline, with and without an LM. Regarding the "Full Set" column results, without an LM, the Conformer-NTM slightly improves upon the Conformer baseline for test clean and test other in both training settings (train-clean-100 and train-960), except for the test other in the train-960 setting, where the Conformer-960h model achieves 6.7%/2.9% WER/CER and the Conformer-NTM-960h model achieves 6.8%/2.8% WER/CER. Decoding with the LM, our proposed models still achieve the lowest WERs and CERs compared to the baseline model, except for the train-960 setting, where the results in terms of WER are the same for the Conformer and Conformer-NTM. ### Analysis To examine the behaviour of the Conformer-NTM model for longer sentences, we created subsets from the test clean and the test other sets containing long and very long utterances as described in Section 4.1, and evaluated its performance under these conditions. #### 4.3.1 Long Utterances Regarding long utterances, without LM, we can observe from Table 2 ("Long - 100" column) that the Conformer-NTM model achieves the lowest WER and CER results compared to the baseline for both test clean and test other. For the train-clean-100 setting, the Conformer-NTM model improves the WER from 7.5% to 6.8% for the test clean and from 16.6% to 15.8% for the test other. For the train-960 setting, the Conformer-NTM model improves the WER from 3.3% to 3.0% for the test clean and from 6.5% to 5.8% for the test other. With LM, the Conformer-NTM model still obtains better results for both test clean and test other subsets, except for the train-960 setting in the test clean subset, where the result of the Conformer is equal to that of the Conformer-NTM in WER. #### 4.3.2 Very Long Utterances Concerning very long utterances, described in Section 4.1, we can observe from Table 2 ("Very Long - 100" column) that the baseline conformer results degrade further compared to those in the "Long - 100" column, mainly because the distribution of lengths in concat-clean and concat-other is more distant from the distribution of lengths present in the training and development data. Additionally, with and without LM, the Conformer-NTM improves all the WER and CER scores by a significant margin compared to the baseline conformer in both training settings, i.e., train-clean-100 and train-960. For instance, in the train-960 setting using LM, the Conformer-NTM achieves relative WER reductions of up to 58.1% and 26.5% for the concat-clean and concat-other sets, respectively. These improvements demonstrate that the MANN based on the NTM memory helps the E2E Conformer ASR system to generalise better to very long sentences not seen during training, without relying on any pre-processing of the data or changing training and decoding strategies compared to traditional E2E ASR. Furthermore, the presence of the LM does not affect the improvements of the Conformer-NTM over the conformer baseline. We hypothesise that the NTM memory learns to create more extended contexts at the acoustic level, which benefits the conformer decoder during inference. ## 5 Conclusions and Future Work In this work, we propose a new architecture, Conformer-NTM, that combines a MANN (based on the NTM) with a conformer for E2E ASR. 
We demonstrated that including the external memory is relevant for enhancing the performance of the E2E ASR system on long utterances. Also, we observed that the Conformer-NTM becomes more effective as the length distribution of the test data moves further away from that of the training data. Furthermore, in the presence of an LM and for very long utterances, the Conformer-NTM in the train-960 setting achieves a 58.1% relative WER reduction for the concat-clean set and a 26.5% relative WER reduction for the concat-other set when compared to the Conformer baseline. Our future work includes investigating the effect of the NTM on other SOTA E2E ASR architectures. \begin{table} \begin{tabular}{c c|c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{Full Set} & \multicolumn{2}{c|}{Long - 100} & \multicolumn{2}{c}{Very Long - 100} \\ \cline{3-8} **Model** & **LM** & test clean & test other & test clean & test other & concat-clean & concat-other \\ \hline Conformer-100h & - & 6.5 (2.6) & 17.4 (8.5) & 7.5 (3.4) & 16.6 (8.0) & 12.2 (6.5) & 24.5 (14.3) \\ \hline Conformer-NTM-100h & - & 6.4 (2.4) & 17.2 (8.4) & 6.8 (2.6) & 15.8 (7.3) & 9.2 (4.1) & 21.7 (12.2) \\ \hline Conformer-960h & - & 2.8 (1.0) & 6.7 (2.9) & 3.3 (1.2) & 6.5 (2.7) & 10.0 (5.8) & 15.3 (9.8) \\ \hline Conformer-NTM-960h & - & 2.7 (0.9) & 6.8 (2.8) & 3.0 (0.9) & 5.8 (2.1) & 4.3 (2.0) & 11.6 (6.9) \\ \hline \hline Conformer-100h & 6-gram & 5.3 (2.3) & 14.7 (7.6) & 6.3 (3.0) & 14.2 (7.1) & 10.0 (5.9) & 21.2 (12.9) \\ \hline Conformer-NTM-100h & 6-gram & 5.2 (2.1) & 14.5 (7.4) & 5.5 (2.3) & 13.6 (6.5) & 7.4 (3.7) & 18.9 (11.2) \\ \hline Conformer-960h & 6-gram & 2.4 (0.8) & 5.8 (2.5) & 2.7 (0.9) & 5.3 (2.4) & 9.3 (5.5) & 14.7 (9.7) \\ \hline Conformer-NTM-960h & 6-gram & 2.4 (0.8) & 5.9 (2.5) & 2.7 (0.8) & 4.8 (1.9) & 3.9 (1.8) & 10.8 (6.7) \\ \hline \hline \end{tabular} \end{table} Table 2: WERs [%] (CERs [%]) of the proposed E2E ASR models – trained with the train-clean-100 and train-960 sets from Librispeech – on the test clean and test other sets (full set, long - 100 and very long - 100).
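As a side note, the relative reductions quoted above follow directly from the WERs in Table 2; the short sanity check below (with values copied from the table) shows the arithmetic.

```python
def relative_reduction(baseline_wer, proposed_wer):
    # Relative WER reduction in percent.
    return 100.0 * (baseline_wer - proposed_wer) / baseline_wer

# Train-960 with the 6-gram LM (Table 2):
print(round(relative_reduction(9.3, 3.9), 1))    # concat-clean: 58.1
print(round(relative_reduction(14.7, 10.8), 1))  # concat-other: 26.5
```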
2309.15458
LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints
Integrating first-order logic constraints (FOLCs) with neural networks is a crucial but challenging problem since it involves modeling intricate correlations to satisfy the constraints. This paper proposes a novel neural layer, LogicMP, which performs mean-field variational inference over an MLN. It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency. By exploiting the structure and symmetries in MLNs, we theoretically demonstrate that our well-designed, efficient mean-field iterations effectively mitigate the difficulty of MLN inference, reducing the inference from sequential calculation to a series of parallel tensor operations. Empirical results in three kinds of tasks over graphs, images, and text show that LogicMP outperforms advanced competitors in both performance and efficiency.
Weidi Xu, Jingwei Wang, Lele Xie, Jianshan He, Hongting Zhou, Taifeng Wang, Xiaopei Wan, Jingdong Chen, Chao Qu, Wei Chu
2023-09-27T07:52:30Z
http://arxiv.org/abs/2309.15458v3
# LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints ###### Abstract Integrating first-order logic constraints (FOLCs) with neural networks is a crucial but challenging problem since it involves modeling intricate correlations to satisfy the constraints. This paper proposes a novel neural layer, LogicMP, which performs mean-field variational inference over an MLN. It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency. By exploiting the structure and symmetries in MLNs, we theoretically demonstrate that our well-designed, efficient mean-field iterations greatly mitigate the difficulty of MLN inference, reducing the inference from sequential calculation to a series of parallel tensor operations. Empirical results in three kinds of tasks over images, graphs, and text show that LogicMP outperforms advanced competitors in both performance and efficiency. ## 1 Introduction The deep learning field has made remarkable progress in the last decade, owing to the creation of neural networks (NNs) (Goodfellow et al., 2016; Vaswani et al., 2017). They typically use a feed-forward architecture, where interactions occur implicitly in the middle layers with the help of various neural mechanisms. However, these interactions do not explicitly impose logical constraints among prediction variables, resulting in predictions that often do not meet the structural requirements. This paper investigates the problem of incorporating _first-order logic constraints_ (FOLCs) into neural networks. An example of FOLCs can be found in the document understanding task (Jaume et al., 2019), which aims to segment the given tokens into blocks for a document image (Fig. 1(a)). We formalize the task into the binary coexistence prediction of token pairs \(\mathtt{C}(a,b)\in\{0,1\}\), where \(\mathtt{C}\) denotes the co-existence of tokens \(a,b\) (Fig. 1(b)). There is a FOLC about the transitivity of coexistence: when tokens \(a\) and \(b\) coexist in the same block, and tokens \(b\) and \(c\) coexist in the same block, then \(a\) and \(c\) must coexist, i.e., \(\forall a,b,c:\mathtt{C}(a,b)\land\mathtt{C}(b,c)\implies\mathtt{C}(a,c)\), which we refer to as the "transitivity rule". NNs generally predict \(\mathtt{C}(\cdot,\cdot)\) independently, failing to meet this FOLC (Fig. 1(c)), and the same applies to the advanced regularization method (Xu et al., 2018) (Fig. 1(d)). We aim to incorporate the transitivity rule into NNs so that the predicted result satisfies the logical constraint (Fig. 1(e)). FOLCs are also critical in many other real-world tasks, ranging from collective classification tasks over graphs (Richardson & Domingos, 2006; Singla & Domingos, 2005) to structured prediction over text (Sang & Meulder, 2003). Incorporating such FOLCs into neural networks is a long-standing challenge. Figure 1: The document understanding task predicts whether every two tokens coexist in the same block in an input document image (**a**). The FOLC regarding the transitivity of coexistence can be used to obtain the structured output. The ground truth (**b**) typically forms several squares for the segments. Both the NN (Xu et al., 2020) (**c**) and an advanced method (Xu et al., 2018) (**d**) struggle to meet the FOLC, with many coexistence variables incorrectly predicted. In contrast, LogicMP (**e**) is effective while maintaining modularity and efficiency. See Sec. 5 for complete experimental details. 
The main difficulty lies in modeling intricate variable dependencies among massive propositional groundings. For instance, for the transitivity rule with 512 tokens, 262K coexistence variables are mutually dependent in 134M groundings. Essentially, modeling FOLCs involves the weighted first-order model counting (WFOMC) problem, which has been extensively studied in the previous literature (den Broeck & Davis, 2012; Dalvi & Suciu, 2013; Gribkoff et al., 2014). However, it is provably #P-complete for even moderately complicated FOLCs (Dalvi & Suciu, 2013), such as the transitivity rule mentioned above. Markov Logic Networks (MLNs) (Richardson & Domingos, 2006) are a common approach to modeling FOLCs, which use joint potentials to measure the satisfaction of the first-order logic rules. MLNs inherit the hardness of WFOMC, and exact inference is difficult to achieve (Gribkoff et al., 2014). Although the MLN formalization allows for approximate inference, MLNs have long suffered from the absence of efficient inference algorithms. Existing methods typically treat the groundings individually and fail to utilize the structure and symmetries of MLNs to accelerate computation (Yedidia et al., 2000; Richardson & Domingos, 2006; Poon & Domingos, 2006). ExpressGNN (Qu & Tang, 2019; Zhang et al., 2020) attempts to combine MLNs and NNs using variational EM, but they remain inherently independent due to the inference method's inefficiency. Some lifted algorithms exploit the structure of MLNs to improve efficiency but are infeasible for neural integration due to their requirements of symmetric input (de Salvo Braz et al., 2005; Singla & Domingos, 2008; Niepert, 2012) or specific rules (den Broeck & Davis, 2012; Gribkoff et al., 2014). This paper proposes a novel approach called _Logical Message Passing_ (LogicMP) for general-purpose FOLCs. It is an efficient MLN inference algorithm and can be seamlessly integrated with any off-the-shelf NNs, positioning it as a neuro-symbolic approach. Notably, it capitalizes on the benefits of parallel tensor computation for efficiency and the plug-and-play principle for modularity. Fig. 2 illustrates the computational graph of LogicMP. Bare NNs (Fig. 2a) predict each output variable independently. LogicMP can be stacked on top of any encoding network as an efficient modular neural layer that enforces FOLCs in prediction (Fig. 2b). Specifically, LogicMP introduces an efficient mean-field (MF) iteration algorithm for MLN inference (Fig. 2c). This MF algorithm enables LogicMP's outputs to approximate the variational approximation of MLNs, ensuring that FOLCs can be encoded into LogicMP's inputs. In contrast to vanilla MF algorithms that rely on inefficient sequential operations (Wainwright & Jordan, 2008; Koller & Friedman, 2009), our well-designed MF iterations can be formalized as Einstein summation, thereby supporting parallel tensor computation. This formalization benefits from our exploitation of the structure and symmetries of MLNs (Sec. 3.2), which is supported by theoretical guarantees (Sec. 3.1). Figure 2: **A high-level view of LogicMP. NNs typically use the softmax layer for independent prediction (left), which can be replaced by a LogicMP layer encoding FOLCs (middle). LogicMP is implemented (right) by efficient mean-field iterations which leverage the structure of MLN (Sec. 3).** In Sec. 5, we demonstrate the versatility of LogicMP by evaluating its performance on various real-world tasks from three domains: visual document understanding over images, collective classification over graphs, and sequential labeling over text. First, we evaluate LogicMP on a real-world document 
understanding benchmark dataset (FUNSD) (Jaume et al., 2019) with up to 262K mutually-dependent variables and show that it outperforms previous state-of-the-art methods (Sec. 5.1). Notably, the results demonstrate that LogicMP can lead to evident improvements even when imposing a single FOLC on prediction variables, which is beyond the capacity of existing methods using arithmetic circuits (ACs). For the second task (Sec. 5.2), we conduct experiments on relatively large datasets in the MLN literature, including UW-CSE (Richardson and Domingos, 2006) and Cora (Singla and Domingos, 2005). Our results show that LogicMP achieves a speed-up of about 10x over competitive MLN inference methods, which enables larger-scale training for better performance. Finally, we evaluate LogicMP on a sequence labeling task (CoNLL-2003) (Sang and Meulder, 2003) and show that it can leverage task-specific rules to improve performance over competitors (Sec. 5.3). **Contributions.** Summarizing, we: _(i)_ Present a novel, modular, and efficient neural layer LogicMP, the first neuro-symbolic approach capable of encoding FOLCs. _(ii)_ Design an accelerated mean-field algorithm for MLN inference that leverages the structure and symmetries in MLNs, formalizing it as parallel computation and reducing the complexity from \(\mathcal{O}(N^{M}L^{2}D^{L-1})\) to \(\mathcal{O}(N^{M^{\prime}}L^{2})\) (\(M^{\prime}\leq M\)) (Sec. 3). For instance, LogicMP can incorporate FOLCs with up to 262K variables within 0.03 seconds, where AC-based methods fail during compilation. _(iii)_ Demonstrate its effectiveness and versatility in challenging tasks over images, graphs, and text, where LogicMP outperforms state-of-the-art neuro-symbolic approaches, often by a noticeable margin. ## 2 Markov Logic Networks An MLN is built upon a knowledge base (KB) \(\{E,R,O\}\) consisting of a set \(E=\{e_{k}\}_{k}\) of entities, a set \(R=\{r_{k}\}_{k}\) of predicates, and a set \(O\) of observations. Entities are also called constants (e.g., tokens). Each **predicate** represents a property on a relation, e.g., coexist (\(\mathbb{C}\)). With particular entities assigned to a predicate, we obtain a **ground atom**, e.g., \(\mathbb{C}(e_{1},e_{2})\) where \(e_{1}\) and \(e_{2}\) are two tokens. For a ground atom \(i\), we use a random variable \(v_{i}\) in the MLN to denote its status, e.g., \(v_{\mathcal{C}(e_{1},e_{2})}\in\{0,1\}\) denoting whether \(e_{1}\) and \(e_{2}\) coexist. The MLN is defined over all variables \(\{v_{i}\}_{i}\) and a set of first-order logic formulas \(F\). Each formula \(f\in F\) represents the correlation among the variables, e.g., \(\forall a,b,c:\mathbb{C}(a,b)\wedge\mathbb{C}(b,c)\implies\mathbb{C}(a,c)\), which is equivalent to \(\forall a,b,c:\neg\mathbb{C}(a,b)\vee\neg\mathbb{C}(b,c)\vee\mathbb{C}(a,c)\) by De Morgan's law. With particular entities assigned to the formula, we obtain a ground formula, aka **grounding**, e.g., \(\neg\mathbb{C}(e_{1},e_{2})\vee\neg\mathbb{C}(e_{2},e_{3})\vee\mathbb{C}(e_{ 1},e_{3})\). For a grounding \(g\), we use \(\mathbf{v}_{g}\) to denote the variables in \(g\), e.g., \(\{v_{\mathcal{C}(e_{1},e_{2})},v_{\mathcal{C}(e_{2},e_{3})},v_{\mathcal{C}(e_{ 1},e_{3})}\}\). In MLN, each \(f\) is associated with a weight \(w_{f}\) and a potential function \(\phi_{f}(\cdot):\mathbf{v}_{g}\rightarrow\{0,1\}\) that checks whether the formula is satisfied in \(g\). For each formula \(f\), we can obtain a set of groundings \(G_{f}\) by enumerating all assignments. We adopt the open-world assumption (OWA) and jointly infer all **unobserved facts**. Based on the KB and formulas, we express the MLN as follows: \[p(\mathbf{v}|O)\propto\exp(\underbrace{\sum_{i}\phi_{u}(v_{i})}_{neural\ semantics}+\underbrace{\sum_{f\in F}w_{f}\sum_{g\in G_{f}}\phi_{f}(\mathbf{v}_{g})}_{ symbolic\ FOLCs})\,, \tag{1}\] where \(\mathbf{v}\) is the set of unobserved variables. The second term is for symbolic FOLCs, where \(\sum_{g\in G_{f}}\phi_{f}(\mathbf{v}_{g})\) measures the number of satisfied groundings of \(f\). We explicitly express the first term to model the evidence of a single ground atom \(i\) in status \(v_{i}\) using the unary potential \(\phi_{u}(\cdot):v_{i}\rightarrow\mathcal{R}\). By parameterizing \(\phi_{u}\) with an NN, this formulation enables semantic representation, allowing external features, such as pixel values of an image, to be incorporated in addition to the KB. Note that \(\phi_{u}\) varies with different \(i\), but for the sake of simplicity, we omit \(i\) in the notation. ### Mean-field Iteration for MLN Inference MLN inference is a persistent and challenging problem, as emphasized in (Domingos and Lowd, 2019). In an effort to address this issue, we draw inspiration from CRFasRNN (Zheng et al., 2015) and employ the MF algorithm (Wainwright and Jordan, 2008; Koller and Friedman, 2009) to mitigate the inference difficulty, which breaks down the Markov network inference into multiple feed-forward iterations. Unlike the variational EM approach (Zhang et al., 2020), which requires additional parameters, MF does not introduce any extra parameters to the model. We focus on the MLN inference problem with a fixed structure (i.e., rules). The MF algorithm is used for MLN inference by estimating the marginal distribution of each unobserved variable. It computes a variational distribution \(Q(\mathbf{v})\) that best approaches \(p(\mathbf{v}|O)\), where \(Q(\mathbf{v})=\prod_{i}Q_{i}(v_{i})\) is a product of independent marginal distributions over all unobserved variables. Specifically, it uses multiple **mean-field iterations** to update all \(Q_{i}\) until convergence. Each mean-field iteration updates the \(Q_{i}\) in closed-form to minimize \(D_{KL}(Q(\mathbf{v})||p(\mathbf{v}|O))\) as follows (see derivation in App. A): 
For each formula \(f\), we can obtain a set of groundings \(G_{f}\) by enumerating all assignments. We adopt the open-world assumption (OWA) and jointly infer all **unobserved facts**. Based on the KB and formulas, we express the MLN as follows: \[p(\mathbf{v}|O)\propto\exp(\underbrace{\sum_{i}\phi_{u}(v_{i})}_{neural\ semantics}+\underbrace{\sum_{f\in F}w_{f}\sum_{g\in G_{f}}\phi_{f}(\mathbf{v}_{g})}_{ symbolic\ FOLCs})\,, \tag{1}\] where \(\mathbf{v}\) is the set of unobserved variables. The second term is for symbolic FOLCs, where \(\sum_{g\in G_{f}}\phi_{f}(\mathbf{v}_{g}))\) measures the number of satisfied groundings of \(f\). We explicitly express the first term to model the evidence of single ground atom \(i\) in status \(v_{i}\) using the unary potential \(\phi_{u}(\cdot):v_{i}\rightarrow\mathcal{R}\). By parameterizing \(\phi_{u}\) with an NN, this formulation enables semantic representation, allowing external features, such as pixel values of an image, to be incorporated in addition to the KB. Note that \(\phi_{u}\) varies with different \(i\), but for the sake of simplicity, we omit \(i\) in the notation. ### Mean-field Iteration for MLN Inference The MLN inference is a persistent and challenging problem, as emphasized in (Domingos and Lowd, 2019). In an effort to address this issue, we draw inspiration from CRFasRNN (Zheng et al., 2015) and employ the MF algorithm (Wainwright and Jordan, 2008; Koller and Friedman, 2009) to mitigate the inference difficulty, which breaks down the Markov network inference into multiple feed-forward iterations. Unlike the variational EM approach (Zhang et al., 2020), which requires additional parameters, MF does not introduce any extra parameters to the model. We focus on the MLN inference problem with a fixed structure (i.e., rules). The MF algorithm is used for MLN inference by estimating the marginal distribution of each unobserved variable. It computes a variational distribution \(Q(\mathbf{v})\) that best approaches \(p(\mathbf{v}|O)\), where \(Q(\mathbf{v})=\prod_{i}Q_{i}(v_{i})\) is a product of independent marginal distributions over all unobserved variables. Specifically, it uses multiple **mean-field iterations** to update all \(Q_{i}\) until convergence. Each mean-field iteration updates the \(Q_{i}\) in closed-form to minimize \(D_{KL}(Q(\mathbf{v})||p(\mathbf{v}|O))\) as follows (see derivation in App. A): \[Q_{i}(v_{i})\leftarrow\frac{1}{Z_{i}}\exp(\phi_{u}(v_{i})+\sum_{f\in F}w_{f} \sum_{g\in G_{f}(i)}\hat{Q}_{i,g}(v_{i}))\,, \tag{2}\] where \(Z_{i}\) is the partition function, \(G_{f}(i)\) is the groundings of \(f\) that involve the ground atom \(i\), and \[\hat{Q}_{i,g}(v_{i})\leftarrow\sum_{\mathbf{v}_{g_{-i}}}\phi_{f}(v_{i}, \mathbf{v}_{g_{-i}})\prod_{j\in g_{-i}}Q_{j}(v_{j}) \tag{3}\] is the **grounding message** that conveys information from the variables \(g_{-i}\) to the variable \(i\) w.r.t. the grounding \(g\). \(g_{-i}\) denotes the ground atoms in \(g\) except \(i\), e.g., \(g_{-\mathtt{C}(e_{1},e_{3})}=\{\mathtt{C}(e_{1},e_{2}),\mathtt{C}(e_{2},e_{3})\}\). ### Computational Complexity Analysis **Complexity Notation.** Although MF simplifies MLN inference, vanilla iteration remains computationally expensive, with its exponential complexity in the arity and length of formulas. Let us examine the time complexity of a single iteration using Eq. 2. 
Denote \(N\) as the number of constants in \(E\), \(M=\max_{f}|\mathcal{A}^{f}|\) as the maximum arity of formulas, \(L=\max_{f}|f|\) as the maximum length (number of atoms) of formulas, and \(D\) as the maximum number of labels of predicates (for typical binary predicates, \(D=2\); while for multi-class predicates in many tasks, \(D\) may be large). **Expectation calculation of grounding message.** The computation of a grounding message \(\hat{Q}_{i,g}(v_{i})\) in Eq. 3 involves multiplying \(\prod_{j\in g_{-i}}Q_{j}(v_{j})\) (which is \(\mathcal{O}(L)\)) for all possible values of \(\mathbf{v}_{g_{-i}}\) (which is \(\mathcal{O}(D^{L-1})\)), resulting in a complexity of \(\mathcal{O}(LD^{L-1})\). When \(D\) is large, the \(D^{L-1}\) factor dominates. **Aggregation of massive groundings.** Since the number of groundings \(|G_{f}|\) is \(\mathcal{O}(N^{M})\), and a grounding generates grounding messages for all involved variables, we have \(\mathcal{O}(N^{M}L)\) grounding messages. With the complexity of computing a grounding message being \(\mathcal{O}(LD^{L-1})\), the total time complexity of an MF iteration in Eq. 2 is \(\mathcal{O}(N^{M}L^{2}D^{L-1})\), which is exponential in \(M\) and \(L\). ## 3 Efficient Mean-field Iteration via LogicMP We make two non-trivial improvements to the vanilla MF iteration, enabling LogicMP to perform efficient MLN inference. (1) We find that the calculation of a single grounding message in Eq. 3 contains considerable unnecessary computation, and its time complexity can be greatly reduced (Sec. 3.1). (2) We further exploit the structure and symmetries in MLN to show that the grounding message aggregation in Eq. 2 can be formalized with Einstein summation notation. As a result, MF iterations can be efficiently implemented via parallel tensor operations, which fundamentally accelerates vanilla sequential calculations (Sec. 3.2). In the following, we will introduce several concepts of mathematical logic, such as clauses and implications (see more details in App. B). ### Less Computation per Grounding Message **Clauses** are the basic formulas that can be expressed as the disjunction of literals, e.g., \(f:=\forall a,b,c:\neg\mathtt{C}(a,b)\vee\neg\mathtt{C}(b,c)\vee\mathtt{C}(a,c)\). For convenience, we explicitly write the clause as \(f(\cdot;\mathbf{n})\) where \(n_{i}\) is the preceding negation of atom \(i\) in the clause \(f\), e.g., \(n_{\mathtt{C}(a,b)}=1\) due to the \(\neg\) ahead of \(\mathtt{C}(a,b)\). A clause corresponds to several equivalent **implications** where the premise implies the hypothesis, e.g., \(\mathtt{C}(a,b)\wedge\mathtt{C}(b,c)\implies\mathtt{C}(a,c)\), \(\mathtt{C}(a,b)\wedge\neg\mathtt{C}(a,c)\implies\neg\mathtt{C}(b,c)\), and \(\mathtt{C}(b,c)\wedge\neg\mathtt{C}(a,c)\implies\neg\mathtt{C}(a,b)\). Intuitively, the grounding message \(\hat{Q}_{i,g}\) in Eq. 3 w.r.t. \(g_{-i}\to i\) corresponds to an implication (e.g., \(\mathtt{C}(e_{1},e_{2})\wedge\mathtt{C}(e_{2},e_{3})\implies\mathtt{C}(e_{1},e_{ 3})\)). Since the grounding affects \(i\) only when the premise \(g_{-i}\) is true, most assignments of \(\mathbf{v}_{g_{-i}}\) that result in false premises can be ruled out in \(\sum_{\mathbf{v}_{g_{-i}}}\) in Eq. 3. **Theorem 3.1**.: _(Message of clause considers true premise only.) For a clause formula \(f(\cdot;\mathbf{n})\), the MF iteration of Eq. 2 is equivalent for \(\hat{Q}_{i,g}(v_{i})\leftarrow\mathbf{1}_{v_{i}=\neg n_{i}}\prod_{j\in g_{-i}} Q_{j}(v_{j}=n_{j})\)._ The proof can be found in App. C. 
Table 1 briefly illustrates the idea of the proof: for assignments of \(g_{-i}\) resulting in false premises, the potential can be ruled out since it makes no difference for various assignments of the hypothesis \(i\). Therefore, only the true premise \(\{v_{j}=n_{j}\}_{j\in g_{-i}}\) needs to be considered. Compared to Eq. 3, the complexity is reduced from \(\mathcal{O}(LD^{L-1})\) to \(\mathcal{O}(L)\). The formulas in conjunctive normal form (CNF) are the conjunction of clauses. The simplification can also be generalized to CNF for \(\mathcal{O}(L)\) complexity. The following theorem demonstrates this claim: **Theorem 3.2**.: _(Message of CNF = \(\sum\) message of clause.) For a CNF formula with distinct clauses \(f_{k}(\cdot;\mathbf{n})\), the MF iteration of Eq. 2 is equivalent for \(\hat{Q}_{i,g}(v_{i})\leftarrow\sum_{f_{k}}\mathbf{1}_{v_{i}=\neg n_{i}}\prod_ {j\in g_{-i}}Q_{j}(v_{j}=n_{j})\)._ See App. D for proof. This theorem indicates that the message of a CNF can be decomposed into several messages of its clauses. Therefore, we only need to consider the clause formulas. We also generalize the theorem for the formulas with multi-class predicates to benefit general tasks (App. E). ### Parallel Aggregation using Einstein Summation This subsection presents an efficient method for parallel message aggregation, i.e., \(\sum_{g\in G_{f}(i)}\hat{Q}_{i,g}(v_{i})\) in Eq. 2. In general, we can sequentially generate all propositional groundings of various formats in \(G_{f}(i)\) to perform the aggregation. However, the number of possible groundings can be enormous, on the order of \(\mathcal{O}(N^{M})\), and explicitly generating all groundings is infeasible in space and time. By exploiting the structure of MLN and treating the grounding messages of the same first-order logic rule symmetrically, LogicMP automatically formalizes the message aggregation of first-order logic rules into _Einstein summation_ (Einsum) notation. The Einsum notation indicates that aggregation can be achieved in parallel through tensor operations, resulting in acceleration by orders of magnitude. The virtue lies in the summation of the product, i.e., \(\sum_{g\in G_{f}(i)}\prod_{j\in g_{-i}}Q_{j}(v_{j}=n_{j})\) by Theorem 3.1, which indicates that the grounding message corresponds to the implication from the premise \(g_{-i}\) to the hypothesis \(i\). Due to the structure of MLN, many grounding messages belong to the same implication and share calculation symmetries, so we group them by their corresponding implications. The aggregation of grounding messages w.r.t. an implication amounts to integrating out some rule arguments, and we can formalize the aggregation into Einsum. For instance, the aggregation w.r.t. the implication \(\forall a,b,c:\mathtt{C}(a,b)\land\mathtt{C}(b,c)\implies\mathtt{C}(a,c)\) can be expressed as \(\mathtt{einsum}(\text{``}ab,bc\to ac\text{''},\mathbf{Q}_{\mathtt{C}}(\mathbf{1}),\mathbf{Q}_{\mathtt{C}}(\mathbf{1}))\) where \(\mathbf{Q}_{r}(\mathbf{v}_{r})\) denotes the collection of marginals w.r.t. predicate \(r\), i.e., \(\mathbf{Q}_{r}(\mathbf{v}_{r})=\{Q_{r(\mathcal{A}_{r})}(v_{r(\mathcal{A}_{r})} )\}_{\mathcal{A}_{r}}\) (\(\mathcal{A}_{r}\) is the arguments of \(r\)). Fig. 3 illustrates this process, where we initially group the variables by predicates and then use them to perform aggregation using parallel tensor operations (see App. F). 
\begin{table} \begin{tabular}{c|c c|c} \hline \hline \((v_{\mathtt{C}(e_{1},e_{2})},v_{\mathtt{C}(e_{2},e_{3})})\) & \(\phi_{f}\) at \(v_{\mathtt{C}(e_{1},e_{3})}=0\) & \(\phi_{f}\) at \(v_{\mathtt{C}(e_{1},e_{3})}=1\) & useful \\ \hline \((0,0)\) & \(1\) & \(1\) & ✗ \\ \((0,1)\) & \(1\) & \(1\) & ✗ \\ \((1,0)\) & \(1\) & \(1\) & ✗ \\ \((1,1)\) & \(0\) & \(1\) & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: For the grounding message of \(g\) w.r.t. \(\mathtt{C}(e_{1},e_{2})\land\mathtt{C}(e_{2},e_{3})\Rightarrow\mathtt{C}(e_{1},e_{3})\), only one assignment of \(g_{-\mathtt{C}(e_{1},e_{3})}\) makes a difference to \(\mathtt{C}(e_{1},e_{3})\), i.e., is useful. Figure 3: Instead of sequentially generating groundings (**left**), we exploit the structure of rules and formalize the MF iteration into Einstein summation notation, which enables parallel computation (**right**). We formalize the parallel aggregation as follows: **Proposition**.: _Let \([f,h]\) denote the implication of clause \(f\) with atom \(h\) being the hypothesis and \(\Phi_{u}(\mathbf{v}_{r})\) denote the collection of \(\phi_{u}(v_{i})\) w.r.t. predicate \(r\). For the grounding messages w.r.t. \([f,h]\) of a clause \(f(\mathcal{A}^{f};\mathbf{n}^{f})\) to its atom \(h\) with arguments \(\mathcal{A}^{f}\), their aggregation is equivalent to:_ \[\tilde{\mathbf{Q}}_{r_{h}}^{[f,h]}(\mathbf{v}_{r_{h}})\leftarrow\mathbf{1}_{\mathbf{v}_{r_{h}}=\neg n_{h}}\mathtt{einsum}(\text{``}...,\mathcal{A}_{r_{j\neq h}}^{f},...\rightarrow\mathcal{A}_{r_{h}}^{f}\text{''},...,\mathbf{Q}_{r_{j\neq h}}(n_{j\neq h}),...)\,, \tag{4}\] _where \(r_{h}\) is the predicate of \(h\), \(\mathcal{A}_{r_{h}}^{f}\) is the arguments of \(r_{h}\). The MF iteration of Eq. 2 is equivalent to:_ \[\mathbf{Q}_{r}(\mathbf{v}_{r})\leftarrow\frac{1}{\mathbf{Z}_{r}}\exp(\Phi_{u}(\mathbf{v}_{r})+\sum_{[f,h],r=r_{h}}w_{f}\tilde{\mathbf{Q}}_{r_{h}}^{[f,h]}(\mathbf{v}_{r_{h}}))\,. \tag{5}\] An additional benefit of using Einsum notation is that it indicates a way to simplify complexity in practical scenarios. Let us consider a chain rule \(\forall a,b,c,d:\mathtt{r}_{1}(a,b)\wedge\mathtt{r}_{2}(b,c)\wedge\mathtt{r}_{3}(c,d)\rightarrow\mathtt{r}_{4}(a,d)\). The complexity of \(\mathtt{einsum}(\text{``}ab,bc,cd\to ad\text{''},\mathbf{Q}_{r_{1}}(\mathbf{1}),\mathbf{Q}_{r_{2}}(\mathbf{1}),\mathbf{Q}_{r_{3}}(\mathbf{1}))\) is \(\mathcal{O}(N^{4})\). By **Einsum optimization**, we can reduce it to \(\mathcal{O}(N^{3})\). We compute \(\mathbf{Q}_{r_{1}}(\mathbf{1})\mathbf{Q}_{r_{2}}(\mathbf{1})\), which is \(\mathcal{O}(N^{3})\), to integrate the paths through \(b\), followed by multiplication with \(\mathbf{Q}_{r_{3}}(\mathbf{1})\), which is also \(\mathcal{O}(N^{3})\), to sum over \(c\). The complexity of any longer chain rule is \(\mathcal{O}(N^{3})\). Note that the Einsum optimization is almost free, as it can be done within milliseconds. This optimization method is not limited to chain rules and can be applied to other rules, which we demonstrate in App. G. For any rule, the optimized overall complexity is \(\mathcal{O}(N^{M^{\prime}}L^{2})\) where \(M^{\prime}\) is the maximum number of arguments in the granular operations (App. H), in contrast to the original one \(\mathcal{O}(N^{M}L^{2}D^{L-1})\). In the worst case, \(M^{\prime}\) equals \(M\), but in practice, \(M^{\prime}\) may be much smaller. 
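As an illustration of this Einsum optimization, the following is a minimal NumPy sketch (not the authors' implementation) comparing the naive four-index contraction for the chain rule above with the two-step evaluation that integrates out \(b\) and then \(c\); the marginals \(\mathbf{Q}_{r}(\mathbf{1})\) are random placeholders on an \(N\times N\) grid.

```python
import numpy as np

N = 64
Q1, Q2, Q3 = (np.random.rand(N, N) for _ in range(3))  # placeholder marginals

# Naive aggregation for r1(a,b) ∧ r2(b,c) ∧ r3(c,d) => r4(a,d): O(N^4).
msg_naive = np.einsum("ab,bc,cd->ad", Q1, Q2, Q3, optimize=False)

# Optimized evaluation: integrate out b, then c, as two O(N^3) products.
msg_fast = (Q1 @ Q2) @ Q3

assert np.allclose(msg_naive, msg_fast)
```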
```
**Algorithm 1** LogicMP
Input: grouped unary potentials \(\{\Phi_{u}(\mathbf{v}_{r})\}_{r}\), the formulas \(\{f(\mathcal{A};\mathbf{n})\}_{f}\) and rule weights \(\{w_{f}\}_{f}\), the number of iterations \(T\).
\(\mathbf{Q}_{r}(\mathbf{v}_{r})\leftarrow\frac{1}{\mathbf{Z}_{r}}\exp(\Phi_{u}(\mathbf{v}_{r}))\) for all predicates \(r\).
for \(t\in\{1,...,T\}\) do  \(\triangleright\) iterations
  for \(f\in F\) do  \(\triangleright\) formulas
    for \(h\in f\) do  \(\triangleright\) implications
      Obtain \(\tilde{\mathbf{Q}}_{r_{h}}^{[f,h]}(\mathbf{v}_{r_{h}})\) by Eq. 4.  \(\triangleright\) parallel
    end for
  end for
  Update \(\mathbf{Q}_{r}(\mathbf{v}_{r})\) by Eq. 5 for all predicates \(r\).
end for
return \(\{\mathbf{Q}_{r}(\mathbf{v}_{r})\}_{r}\).
```
## 4 Related Work **MLN inference.** MLNs are elegant Markov networks and inherently suitable for FOLCs, but they have been absent in the neuro-symbolic field for a long time due to the inference inefficiency. The most relevant work is ExpressGNN (Qu & Tang, 2019; Zhang et al., 2020), which made a preliminary attempt to combine MLNs with NNs via variational EM. Although both ExpressGNN and LogicMP are based on variational inference, they have clear differences: (1) LogicMP uses the MF algorithm, which permits closed-form iterations (Sec. 2.1). (2) LogicMP obtains essential acceleration by exploiting the structure and symmetries in MLNs (Sec. 3). (3) These enable LogicMP to be applied in general tasks, including computer vision (CV) and natural language processing (NLP) (Sec. 5). Conventional MLN inference methods perform inference either at the level of propositional logic or in a lifted way without performing grounding. The former is inefficient due to the complicated handling of the propositional graph, e.g., Gibbs sampling (Richardson & Domingos, 2006), MC-SAT (Poon & Domingos, 2006), BP (Yedidia et al., 2000). The latter consists of symmetric lifted algorithms, which become inefficient with distinctive evidence, such as lifted BP (Singla & Domingos, 2008) and lifted MCMC (Niepert, 2012), and asymmetric lifted algorithms, which often require specific formulas (den Broeck & Davis, 2012; Gribkoff et al., 2014) or evidence (Bui et al., 2012). LogicMP situates itself within the MLN community by contributing a novel and efficient MLN inference method. **Neuro-symbolic reasoning.** Typically, neuro-symbolic methods for logical constraints, e.g., semantic loss (SL) (Xu et al., 2018) and semantic probabilistic layer (SPL) (Ahmed et al., 2022), are rooted in probabilistic logic programming (PLP) that utilizes ACs. However, ACs are often limited to propositional logic and may be insufficient to handle FOLCs unless specific formulas are employed (den Broeck & Davis, 2012). Research applying ACs for FOLCs is currently ongoing in both the MLN and PLP fields, including probabilistic databases (Jha & Suciu, 2012) and asymmetric lifted inference (den Broeck & Niepert, 2015), but it remains a challenging problem. LogicMP exploits the calculation symmetries in MLN for efficient computation by parallel tensor operations. Consequently, LogicMP contributes to developing neuro-symbolic methods for FOLCs using MLNs. Notably, popular neuro-symbolic methods such as DeepProbLog (Manhaeve et al., 2018) and Scallop (Huang et al., 2021) also use ACs and are typically used under the closed-world assumption rather than OWA. 
## 5 Experiments

### Encoding FOLCs over Document Images

**Benchmark Dataset.** We apply LogicMP in a CV task, i.e., the information extraction task on the widely used FUNSD form understanding dataset (Jaume et al., 2019). The task involves extracting information from a visual document image, as shown in Fig. 1a, where the model needs to segment tokens into several blocks. The maximum number of tokens is larger than 512. The evaluation metric is the F1 score. The dataset details and general settings are provided in App. J.1.

**Our Method.** We formalize this task as matrix prediction as in (Xu et al., 2022). Each element in the matrix is a binary variable representing whether the corresponding two tokens coexist in the same block. A matrix with ground truth is shown in Fig. 1b. We adopt LayoutLM (Xu et al., 2020), a robust pre-trained Transformer, as the backbone to derive the vector representation of each token. The matrix \(\Phi_{u}\) is predicted by dot-multiplying each pair of token vectors. We call this model _LayoutLM-Pair_. Independent classifiers often yield unstructured predictions (Fig. 1c), but we can constrain the output via the transitivity of coexistence, i.e., tokens \(a,c\) must coexist when tokens \(a,b\) coexist and tokens \(b,c\) coexist. Formally, we denote the predicate as \(\mathcal{C}\) and the FOLC as \(\forall a,b,c:\mathcal{C}(a,b)\wedge\mathcal{C}(b,c)\implies\mathcal{C}(a,c)\). LogicMP applies this FOLC to LayoutLM-Pair. Each experiment is performed 8 times, and the average score is reported. See App. J.2 for more details.

**Compared Methods.** We compare LogicMP to several robust information extraction methods, including _BIOES_ (Xu et al., 2020), _SPADE_ (Huang et al., 2021), and _SpanNER_ (Fu et al., 2021). We also compare LogicMP to other neuro-symbolic techniques. _SL_ (Xu et al., 2018) is the abbreviation of Semantic Loss, which enforces constraints on predictions by compiling an AC and using it to compute a loss that penalizes joint predictions violating constraints. However, compiling the AC for all variables (up to 262K) is infeasible. Therefore, we use a non-rigorous relaxation (_SLrelax_), i.e., penalizing every triplet and summing the penalties via the parallel method proposed in Sec. 3.2. _SPL_ (Ahmed et al., 2022) models the joint distribution using ACs, but the same relaxation as SL cannot be applied since all variables must be jointly modeled in SPL.

**Main Results.** Table 2 shows the results on the FUNSD dataset, where "full" incorporates all blocks and "long" excludes blocks with fewer than 20 tokens. Upon integrating the FOLC using LogicMP, we observe consistent improvements in both metrics, particularly a 7.3% relative increase in "long" matches. This is because the FOLC leverages other predictions to revise low-confidence predictions for distant pairs, as shown in Fig. 1. However, SL and SPL both fail in this task. While attempting to ground the FOLC and compile the AC using PySDD (Darwiche, 2011), we found that compilation fails when the sequence length exceeds 8 (App. J.3). In contrast, LogicMP can perform joint inference within 0.03 seconds using just 3 tensor operations (App. J.4) with a single additional parameter. SLrelax is beneficial but is outperformed by LogicMP. Additionally, LogicMP is compatible with SLrelax since LogicMP is a neural layer and SLrelax is a learning method with logical regularization. Combining them further improves performance. More visualizations are attached in App. J.5.
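As an illustration of how this FOLC acts on the pairwise predictions, the following is a minimal sketch of our own: it shows only the head-directed message of the transitivity rule applied to an \(N\times N\) matrix of unary logits, whereas the actual LogicMP layer also passes messages to the body atoms.

```
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def transitivity_updates(phi_u, w=1.0, iters=5):
    """Mean-field-style updates for C(a,b) ^ C(b,c) => C(a,c).

    phi_u: (N, N) unary logits for C(i, j) = 1, e.g., from pairwise
           token representations. Returns refined marginals Q(C = 1).
    """
    q = sigmoid(phi_u)
    for _ in range(iters):
        # Message to each head C(a,c): expected truth of the rule body,
        # summed over the intermediate token b (one einsum per iteration).
        head_msg = np.einsum("ab,bc->ac", q, q)
        q = sigmoid(phi_u + w * head_msg)
    return q
```

Under this simplification, low-confidence long-range pairs are promoted when many intermediate tokens support them, which matches the revision behavior for distant pairs described above.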
### Encoding FOLCs over Relational Graphs

**Benchmark Datasets.** We evaluate LogicMP on four collective classification benchmark datasets, each with specific FOLCs. Smoke (Badreddine et al., 2022) serves as a sanity check (see results in App. K.5). Kinship (Zhang et al., 2020) involves determining relationships between people. UW-CSE (Richardson & Domingos, 2006) contains information about students and professors in the CSE department of UW. Cora (Singla & Domingos, 2005) involves de-duplicating entities using the citations between academic papers. It is noteworthy that Cora has 140+K mutually dependent variables and 300+B groundings, with only around 10K known facts. The dataset details and general settings are given in App. K. We conduct each experiment 5 times and report the average results.

**Compared Methods.** We compare with several strong MLN inference methods. _MCMC_ (Gilks et al., 1995; Richardson & Domingos, 2006) performs sampling over the ground Markov network. _BP_ (Yedidia et al., 2000) uses belief propagation instead. _Lifted BP_ (Singla & Domingos, 2008) groups the ground atoms in the Markov network. _MC-SAT_ (Poon & Domingos, 2006) performs sampling using boolean satisfiability techniques. _HL-MRF_ (Bach et al., 2017; Srinivasan et al., 2019) is the hinge-loss Markov random field method. _ExpressGNN_ denotes the graph neural network proposed in (Zhang et al., 2020), which is trained to fit the data. _ExpressGNN w/ GS_ denotes that ExpressGNN is trained to maximize the grounding scores, i.e., the satisfaction of the formulas for the groundings using sequential summation (i.e., ExpressGNN-E (Zhang et al., 2020)). Following ExpressGNN w/ GS, we adopt the OWA setting where all unobserved facts are latent variables to infer, and we use the area under the precision-recall curve (AUC-PR) as the performance evaluation metric.

**Our Method.** For a fair comparison with ExpressGNN w/ GS, we set the rule weights to 1 and use ExpressGNN as the encoding network to obtain the unary potentials \(\phi_{u}\). We stack LogicMP with 5 iterations over it. The encoding network is trained to approach the output of LogicMP, which regularizes the output of the encoding network with FOLCs. This learning approach is similar to ExpressGNN w/ GS, as discussed in App. K.4. The main advantage of using LogicMP is its computational efficiency, which enables larger-scale training for better performance.

**Main Results.** Fig. 5 shows the training efficiency of LogicMP, which is about 10 times better than that of ExpressGNN w/ GS, reducing the computational time per grounding to just 1 millisecond. Thus, we can scale the training from the original 16K (Zhang et al., 2020) to 20M groundings in a reasonable time. Surprisingly, we found that the performance of LogicMP steadily improved with more training (Fig. 5). This observation suggests that the performance of ExpressGNN w/ GS reported in the original work may be hindered by its inefficiency in performing sufficient training. Table 4 shows the AUC-PR results for the three datasets, with a mean standard deviation of 0.03 for UW-CSE and 0.01 for Cora. A hyphen in an entry indicates that the method either ran out of memory or exceeded the time limit (24 hours). Note that since lifted BP is guaranteed to give identical results to BP, the results of these two methods are merged into one row. LogicMP obtains almost perfect results on a small dataset (i.e., Kinship), demonstrating its ability to perform precise inference.
In addition, it performs much better than advanced methods on two relatively large datasets (i.e., UW-CSE and Cora), improving relatively by 173%/28% over ExpressGNN w/ GS. The improvement is due to its high efficiency, which permits more training within a shorter time (less than 2 hours). Without LogicMP, ExpressGNN w/ GS would take over 24 hours to consume 20M groundings.

**Ablation Study.** Fig. 4 also illustrates the efficiency ablation of the techniques discussed in Sec. 3. As compared to LogicMP, the parallel Einsum technique (Sec. 3.2) achieves significant acceleration, while the other improvements, i.e., Einsum optimization and RuleOut (Sec. 3.1), also enhance efficiency. Note that optimizing the Einsum is almost cost-free, taking only milliseconds for datasets with an argument size of less than 6. More comparison results are shown in App. K.5.

### Encoding FOLCs over Text

**Benchmark Dataset & Compared Methods.** We further verify LogicMP in an NLP task, i.e., the sequence labeling task. We conduct experiments on the well-established CoNLL-2003 benchmark (Sang & Meulder, 2003). The task assigns a named entity tag to each word, such as B-LOC, where B is "Beginning" out of BIOES and LOC stands for "location" out of 4 entity categories. This experiment aims not to achieve state-of-the-art performance but to show that specific FOLCs can also be applied. The compared methods use the bi-directional LSTM (BLSTM) as the backbone and employ different techniques, including _SLrelax_ and logic distillation (_LogicDist_) (Hu et al., 2016).

**Our Method.** For a fair comparison, we also use BLSTM as the backbone and stack LogicMP on BLSTM to integrate the following FOLCs used in LogicDist. (1) **Adjacent rule**: The BIOES schema contains constraints for adjacent labels, e.g., the successive label of B-PER cannot be O-PER. We explicitly declare the constraints as several adjacent logic rules, such as \(\forall i:\mathtt{label}(i)\in\{\text{B/I-PER}\}\Leftrightarrow\mathtt{label}(i+1)\in\{\text{I/E-PER}\}\), where \(\mathtt{label}(i)\) is the multi-class predicate of the \(i\)-th token label (see the extension for multi-class predicates in App. E). (2) **List rule**: we exploit a task-specific rule to inject prior knowledge from experts. Specifically, named entities in a list are likely to be in the same categories, e.g., "Barcelona" and "Juventus" in "1. Juventus, 2. Barcelona, 3....". The corresponding FOLC is \(\forall i,j:\mathtt{label}(i)\in\{\text{B/I/E-LOC}\}\wedge\mathtt{samelist}(i,j)\Leftrightarrow\mathtt{label}(j)\in\{\text{B/I/E-LOC}\}\), where \(\mathtt{samelist}(i,j)\) indicates the coexistence of two tokens in a list.

**Main Results.** Table 3 presents our experimental results, with the rule-based methods listed at the bottom. Along with the BLSTM baselines, LogicMP outperforms SLrelax and LogicDist, where "p" denotes BLSTM and "q" post-regularizes the output of BLSTM. These methods implicitly impose constraints during training, which push the decision boundary away from logically invalid prediction regions. In contrast, LogicMP always explicitly integrates the FOLCs into the BLSTM output. For samples with a list structure, LogicMP improves F1 from 94.68 to 97.41.
Table 4: AUC-PR on Kinship, UW-CSE, and Cora. The best results are in bold. "-" means failure.

## 6 Conclusion

We presented a novel neuro-symbolic model, LogicMP, an efficient method for MLN inference principally derived from the MF algorithm. LogicMP can act as a neural layer since the computation is fully parallelized through feed-forward tensor operations. By virtue of MLNs, LogicMP is able to integrate FOLCs into any encoding network. The output of LogicMP is the (nearly) optimal combination of the FOLCs from the MLN and the evidence from the encoding network. The experimental results over various fields demonstrate the efficiency and effectiveness of LogicMP. A limitation of LogicMP is the incapability of using the existential quantifier, and we leave this direction to future work.
2309.16151
Ab-initio insights into the physical properties of XIr3 (X = La, Th) superconductors: A comparative analysis
Here we report the structural, elastic, bonding, thermo-mechanical, optoelectronic and superconducting state properties of recently discovered XIr3 (X = La, Th) superconductors utilizing the density functional theory (DFT). The elastic, bonding, thermal and optical properties of these compounds are investigated for the first time. The calculated lattice and superconducting state parameters are in reasonable agreement with those found in the literature. In the ground state, both the compounds are mechanically stable and possess highly ductile character, high machinability, low Debye temperature, low bond hardness and significantly high melting point. The thermal conductivities of the compounds are found to be very low, which suggests that they can be used for thermal insulation purposes. The population analysis and charge density distribution map confirm the presence of both ionic and covalent bonds in the compounds, with ionic bonds playing the dominant role. The calculated band structure and DOS profiles indicate metallic character. Unlike the significant anisotropy observed in the elastic and thermal properties, all the optical constants of these compounds exhibit almost isotropic behavior. The optical constants correspond very well with the electronic band structure and DOS features. We have estimated the superconducting transition temperature of the compounds in this work.
Md. Sajidul Islam, Razu Ahmed, M. M. Hossain, M. A. Ali, M. M. Uddin, S. H. Naqib
2023-09-28T03:54:13Z
http://arxiv.org/abs/2309.16151v1
# Ab-initio insights into the physical properties of \(X\)Ir\({}_{3}\) (\(X=\) La, Th) superconductors: A comparative analysis

###### Abstract

Here we report the structural, elastic, bonding, thermo-mechanical, optoelectronic and superconducting state properties of recently discovered \(X\)Ir\({}_{3}\) (\(X=\) La, Th) superconductors utilizing the density functional theory (DFT). The elastic, bonding, thermal and optical properties of these compounds are investigated for the first time. The calculated lattice and superconducting state parameters are in reasonable agreement with those found in the literature. In the ground state, both the compounds are mechanically stable and possess highly ductile character, high machinability, low Debye temperature, low bond hardness and significantly high melting point. The thermal conductivities of the compounds are found to be very low, which suggests that they can be used for thermal insulation purposes. The population analysis and charge density distribution map confirm the presence of both ionic and covalent bonds in the compounds, with ionic bonds playing the dominant role. The calculated band structure and DOS profiles indicate metallic character of \(X\)Ir\({}_{3}\) compounds. It is also found from the DOS profile that the Ir 5\(d\)-states dominate near the Fermi level. Unlike the significant anisotropy observed in the elastic and thermal properties, all the optical constants of \(X\)Ir\({}_{3}\) compounds exhibit almost isotropic behavior. The optical parameters' profiles correspond very well with the electronic band structure and DOS features. We have estimated the superconducting transition temperature (\(T_{c}\)) of \(X\)Ir\({}_{3}\) compounds in this work. The calculated values of \(T_{c}\) for LaIr\({}_{3}\) and ThIr\({}_{3}\) compounds are 4.91 K and 5.01 K, respectively. We have compared and contrasted all the physical properties of \(X\)Ir\({}_{3}\) (\(X=\) La, Th) compounds in this study.

DFT calculations; Elastic properties; Thermo-mechanical properties; Optoelectronic properties; Superconductivity

## 1 Introduction

Superconductivity is one of the most interesting and significant topics in both theoretical and experimental condensed matter physics. Nowadays, the investigation of superconductors has piqued the interest of modern society owing to their features suitable for a wide range of applications. Many binary systems made of rare earth and transition metals have been found to exhibit superconductivity. The addition of rare earth and 5\(d\) transition metal elements such as Ir (Iridium) can create a compound that displays superconductivity and other attractive
2309.13001
Joint $p$-Values for Higher-Powered Bayesian Model Checking with Frequentist Guarantees
We introduce a joint posterior $p$-value, an extension of the posterior predictive $p$-value for multiple test statistics, designed to address limitations of existing Bayesian $p$-values in the setting of continuous model expansion. In particular, we show that the posterior predictive $p$-value, as well as its sampled variant, become more conservative as the parameter dimension grows, and we demonstrate the ability of the joint $p$-value to overcome this problem in cases where we can select test statistics that are negatively associated under the posterior. We validate these conclusions with a pair of simulation examples in which the joint $p$-value achieves substantial gains to power with only a modest increase in computational cost.
Collin Cademartori
2023-09-22T17:04:12Z
http://arxiv.org/abs/2309.13001v2
# Joint \(p\)-Values for Higher-Powered Bayesian Model Checking with Frequentist Guarantees

###### Abstract

We define an extension of the posterior predictive \(p\)-value for multiple test statistics and establish a bound on its frequency under the assumption of model correctness. We argue that the conservativity of the posterior predictive \(p\)-value increases with model dimension, and we demonstrate the ability of the joint \(p\)-value to overcome this problem in many cases. We also compare the joint \(p\)-values to other alternative \(p\)-values designed to have higher power and show that the joint \(p\)-value can achieve similar performance for model rejection while maintaining more favorable computational and interpretive properties.

## 1 Introduction

Checking the adequacy of a statistical model is an essential step in almost any applied modeling workflow (Gelman et al., 2020; van de Schoot et al., 2021; Gabry et al., 2019; Blei, 2014). When a model's assumptions have not been tested against their observable consequences, inferences about unobservable quantities obtained through such models must be interpreted skeptically. However, the process of checking a model is often not straightforward, and in the Bayesian setting in particular it is subject to a number of confusions. For instance, we find substantial disagreement in the literature over questions such as:

1. Is our goal to subject our model to the strongest possible test of its compatibility with (some feature of) the data, in order to have the best possible chance of rejecting it? Or is our goal to generate assessments of fitness which can provide useful information for how we might improve our model? We will term the former the _rejection goal_, which is strongly advocated for by Robins et al. (2000). This perspective is explicitly rejected by Gelman (2003), who advocates for focusing on model compatibility over correctness. This closely tracks with our latter goal, which we will refer to as the _discovery goal_.
2. Do we need to know the frequency properties of our model checking procedures in order to interpret their output? Or can we achieve the relevant goals by using "purely" Bayesian calculations? And would using frequency calculations undermine the Bayesian consistency or validity of our analysis? Arguments for the importance of frequency information can be found in Robins et al. (2000) and Bayarri and Berger (2000), whereas arguments in the opposite direction are given in Gelman (2013) and Gelman (2003).

In the first question, the distinction between the goals of model rejection and model discovery can be arbitrarily sharp. An oracle which provides a yes or no answer, for any proposed model, to the question of whether it is a valid description of the true data generating process gives us 100% power against any alternative. But such a binary oracle offers little help in diagnosing the source of the model's inaccuracy or in finding plausible directions for improvement. In light of Box's famous adage - "all models are wrong, but some are useful" - it has been argued that most of our focus should be placed on the discovery goal. In the Bayesian setting, this is most commonly achieved by comparing observed data to simulations from the model's posterior predictive distribution - i.e. by performing a posterior predictive check.
In numerical form, this leads to the posterior predictive \(p\)-value, but advocates of the posterior predictive check often recommend qualitative visual checks for their higher density of information (Gabry et al., 2019). In this setting, concerns over frequency properties are either not relevant (in the case of the \(p\)-value, which can be interpreted directly as a posterior probability) or not well-defined (in the case of visual assessments, where no formal decision process exists). However, when we pursue the rejection goal, frequency evaluations become much more relevant. Meng (1994) showed that the posterior predictive \(p\)-value has a frequency distribution which is stochastically less variable than uniform (under sampling from the prior predictive distribution). As a consequence, the frequency of a given posterior predictive \(p\)-value is usually less than its nominal value, and sometimes substantially so. If we test the model by comparing the \(p\)-value to some threshold, then such tests will be conservative or underpowered compared to a test using the corresponding frequency. Moreover, it has been observed that the size of this power deficit can be quite large in practice (Steinbakk and Storvik, 2009; Zhang, 2014; Yano et al., 2001). This paper makes two arguments:

1. The rejection goal can become practically relevant even in a workflow that takes the discovery goal as its primary concern.
2. We can effectively pursue the rejection goal by testing multiple statistics simultaneously with a joint \(p\)-value that achieves a balance of computational tractability and finite sample performance which is often lacking in other alternatives to the posterior predictive \(p\)-value.

Pursuit of the discovery goal usually involves the construction of many models, each designed to improve fitness in response to a check of a previous model. This is, for instance, the role that model checking plays in many statistical workflow guidelines. However, this process of model multiplication must eventually terminate, at least temporarily, due either to the diminishment of identifiable routes for further improvement or the need to use the model for some downstream task. We therefore want to evaluate, at any given time in the model building process, the risks of stopping at that time. In particular, we may reasonably wish to judge if our current model is acceptable for some task. In many cases, a model will be unacceptable if it is demonstrably incompatible with the relevant features of our observed data. Thus, tools that address the rejection goal are directly relevant to the choice of these stopping times. By contrast, tools oriented towards the discovery goal may not be helpful, particularly when we consider stopping because our model discovery tools have become less informative.

As a general strategy to obtain higher power for model rejection, we propose computing a posterior predictive \(p\)-value for a collection of test statistics \(\mathcal{T}=\{T_{1},\ldots,T_{d}\}\):

\[\mathsf{joint-}p_{\mathcal{T}}(\mathbf{y})=\mathbb{E}_{p(\mathbf{y}_{\text{rep}}|\mathbf{y})}\mathbb{1}\left\{T_{1}(\mathbf{y}_{\text{rep}})>T_{1}(\mathbf{y})\text{ and }T_{2}(\mathbf{y}_{\text{rep}})>T_{2}(\mathbf{y})\text{ and }\cdots\ T_{d}(\mathbf{y}_{\text{rep}})>T_{d}(\mathbf{y})\right\}. \tag{1}\]

This proposal differs substantially from existing approaches, which have focused on calibrating \(p\)-values to have an exactly or approximately known frequency distribution.
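Given draws from the posterior predictive distribution, (1) is estimated by a simple Monte Carlo average over the simultaneous exceedances; a minimal sketch (the function and variable names are our own):

```
import numpy as np

def joint_p(T_obs, T_rep):
    """Monte Carlo estimate of the joint p-value in (1).

    T_obs: array of shape (d,), the d test statistics on the observed data.
    T_rep: array of shape (S, d), the statistics on S draws from the
           posterior predictive distribution p(y_rep | y).
    """
    # A replication counts only if it exceeds the observation on ALL statistics.
    return np.mean(np.all(T_rep > T_obs[None, :], axis=1))
```

Because the joint exceedance probability shrinks with \(d\), resolving small nominal values may require more replications \(S\) than a single-statistic check.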
The key idea behind our approach is that testing many statistics at once can substantially increase the difficulty of the model check (i.e. generates much lower nominal values), which allows us to obtain large improvements to power even with relatively loose information about the underlying frequencies in many cases. This joint \(p\)-value can be much easier to compute in practice than the most powerful calibration-based model checks, and it enjoys finite-sample guarantees that simpler methods cannot provide.

### Outline

This paper is organized as follows. In Section 2, we present our main argument for the necessity of tools specialized for model rejection even within the framework of a discovery-first workflow. Section 3 reviews existing approaches for model rejection within a Bayesian framework and compares them to our proposed joint \(p\)-value. Here we prove our main result - a simple extension of Lemma 1 in Meng (1994) which provides a bound on the frequency of any given joint posterior predictive \(p\)-value. We validate that this strategy can obtain power gains from the joint structure of test statistics by studying our bound under various copula models of test statistic dependence in Section 4. Section 5 then presents a numerical experiment in which we compare our joint \(p\)-value to a number of alternatives, demonstrating that our method achieves a practically useful trade-off between interpretability, power, and computational tractability. Finally, Section 6 discusses the role of our method in a crowded landscape of model checking tools and considers directions for future work.

## 2 Model rejection with \(\mathsf{post}{-}p\)

We now present a systematic argument for why a discovery-first modeling workflow should take the rejection goal seriously and why the usual discovery-focused tools cannot be used for this purpose. One of the most commonly used methods for Bayesian model rejection is to compare the posterior predictive \(p\)-value (\(\mathsf{post}{-}p\)) to some threshold. The classic argument against using this procedure for model rejection - that it is overly conservative - can be formalized using the concept of convex order. For distributions \(p,q\), we say that \(p\) is less than \(q\) in convex order (\(p\ll q\)) if, for \(X\sim p\), \(Y\sim q\), and any convex function \(\psi\), we have that

\[\mathbb{E}\psi(X)\leq\mathbb{E}\psi(Y). \tag{2}\]

Meng showed that \(\mathsf{post}{-}p\) is dominated in convex order by a uniform variable. To demonstrate this, let \(p_{T}(\mathbf{y},\boldsymbol{\theta})\) be the \(p\)-value computed with respect to \(p(\mathbf{y}\mid\boldsymbol{\theta})\), i.e.

\[p_{T}(\mathbf{y},\boldsymbol{\theta})=\mathbb{E}_{p(\mathbf{y}_{\mathrm{rep}}\mid\boldsymbol{\theta})}\mathbb{1}\left\{T(\mathbf{y}_{\mathrm{rep}})\geq T(\mathbf{y})\right\}. \tag{3}\]

Then we have that

\[\mathbb{E}_{p(\mathbf{y})}\psi\left(\mathsf{post}{-}p_{T}(\mathbf{y})\right)=\mathbb{E}_{p(\mathbf{y})}\psi\left(\mathbb{E}_{p(\boldsymbol{\theta}|\mathbf{y})}p_{T}(\mathbf{y},\boldsymbol{\theta})\right)\overset{(a)}{\leq}\mathbb{E}_{p(\boldsymbol{\theta})}\mathbb{E}_{p(\mathbf{y}|\boldsymbol{\theta})}\psi\left(p_{T}(\mathbf{y},\boldsymbol{\theta})\right)\overset{(b)}{=}\mathbb{E}\psi(U), \tag{4}\]

where \(U\) is a uniform random variable.
Here, \((a)\) follows by Jensen's inequality, and \((b)\) follows from the definition of \(p_{T}(\mathbf{y},\boldsymbol{\theta})\) and the fact that any \(p\)-value has a uniform distribution under its assumed sampling distribution (by the probability integral transform). Roughly, this convex ordering means that \(\mathsf{post}{-}p\) will tend to have a distribution that is more peaked around \(0.5\), and thus it will commonly be true that

\[f_{T}(\alpha)\overset{\mathrm{def}}{=}\mathbb{P}\left(p_{T}(\mathbf{y})\leq\alpha\right)<\alpha \tag{5}\]

for sufficiently small values of \(\alpha\). We can thus see that for any threshold \(p^{*}\), when (5) holds, the test that rejects when \(p_{T}(\mathbf{y})<p^{*}\) has lower power than the test that rejects when \(f_{T}\left(p_{T}(\mathbf{y})\right)<p^{*}\). The Bayesian who does not want to be concerned with frequency calculations may reasonably wonder at this point whether this claimed power deficit will be an issue in practice. Indeed, this argument does not show that \(\mathsf{post}{-}p\) is useless for model rejection. Meng also showed that

\[\mathbb{P}\left(p_{T}(\mathbf{y})\leq\alpha\right)\leq 2\alpha \tag{6}\]

for all \(\alpha\). Thus, when \(p_{T}(\mathbf{y})\) is sufficiently small, we will still have sufficient information to reject the model on frequentist grounds without the need to compute or approximate \(f_{T}(p_{T}(\mathbf{y}))\). Indeed, many examples show that \(\mathsf{post}{-}p\) can work quite well for this purpose, and one can always choose a more skeptical threshold if power is a substantial concern. Of course, the viability of this strategy relies entirely on _how_ non-uniform \(\mathsf{post}{-}p\) is in any given case. If \(\mathsf{post}{-}p\) becomes severely non-uniform and is sharply peaked around \(0.5\), then the only way to achieve significance levels that aren't extremely conservative may be to place the nominal threshold at levels so large (e.g. \(>0.4\)) that they would never be recommended absent direct evidence of this degree of peakedness (since they would result in unreasonably large significance levels in other cases). Likewise, it has been observed that large variation in the conservativity of \(\mathsf{post}{-}p\) across models and test quantities undermines its consistent interpretation [10]. In short, we can expect \(\mathsf{post}{-}p\) to give reasonable rejection performance only when it is consistently not-too-severely non-uniform.

### Conservativity of \(\mathsf{post}{-}p\) and discovery-driven model expansion

In light of the above arguments, it is clear that we need an understanding of how non-uniform \(\mathsf{post}{-}p\) may be in practice to adjudicate the relevant concerns. Examining (4), we can see that the degree of non-uniformity is entirely controlled by the size of the gap in the inequality \((a)\). It is well-known that the gap in Jensen's inequality can be bounded above and below as

\[\sigma_{\mathbf{y}}^{2}\frac{\inf\psi^{\prime\prime}}{2}\leq\mathbb{E}_{p(\boldsymbol{\theta}|\mathbf{y})}\psi\left(p_{T}(\mathbf{y},\boldsymbol{\theta})\right)-\psi\left(\mathbb{E}_{p(\boldsymbol{\theta}|\mathbf{y})}p_{T}(\mathbf{y},\boldsymbol{\theta})\right)\leq\sigma_{\mathbf{y}}^{2}\frac{\sup\psi^{\prime\prime}}{2}, \tag{7}\]

where \(\sigma_{\mathbf{y}}^{2}=\operatorname{Var}\left[p_{T}(\mathbf{y},\boldsymbol{\theta})\mid\mathbf{y}\right]\). Thus, the non-uniformity of \(p_{T}(\mathbf{y})\) is controlled by the average size of \(\sigma_{\mathbf{y}}^{2}\).
We claim that, for at least some \(T\), we should expect \(\sigma_{\mathbf{y}}^{2}\) to increase throughout a discovery-driven modeling workflow. To formalize this claim, we begin with the following assumption.

**Workflow Assumption.**_In a modeling workflow that emphasizes an open-ended process of model criticism and model improvement, our models will tend to become more complex and require higher-dimensional parameter spaces in order to accommodate those features of the data which are observed empirically but are not accounted for in our existing models._

This assumption of model improvement as model expansion may not always hold, for instance if we move from a generic initial model to a more specialized model designed with particular domain knowledge. Nevertheless, we believe this assumption is valid in many settings, as model improvement often requires accounting for unanticipated sources of variation (e.g., overdispersion, random effects), which, ceteris paribus, results in models that are larger than their predecessors. We now formalize the notion of a model expansion so that we can study its effects on \(\sigma_{\mathbf{y}}^{2}\).

**Definition 1** (Model Expansion).: _A model \(p(\mathbf{y},\boldsymbol{\theta},\boldsymbol{\lambda})\) defined with additional parameter \(\boldsymbol{\lambda}\in\overline{\mathbb{R}}^{k}\) is an expansion of base model \(p_{\mathrm{base}}(\mathbf{y},\boldsymbol{\theta})\) if_

\[p_{\mathrm{base}}(\mathbf{y},\boldsymbol{\theta})=p(\mathbf{y},\boldsymbol{\theta}\mid\boldsymbol{\lambda}_{0})\text{ for some }\boldsymbol{\lambda}_{0}\in\overline{\mathbb{R}}^{k}, \tag{8}\]

_where \(\overline{\mathbb{R}}=[-\infty,\infty]\)._

In words, \(p\) is an expansion of \(p_{\mathrm{base}}\) if it embeds \(p_{\mathrm{base}}\) as a conditional distribution. Our workflow assumption can be formalized as the proposition that a discovery-driven modeling workflow will tend to produce models which are expansions of previous models. Furthermore, when passing from a base model to an expanded model in this way, we can see by the law of total variance that

\[\operatorname{Var}_{p}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y}\right] =\mathbb{E}\left\{\operatorname{Var}_{p}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y},\boldsymbol{\lambda}\right]\mid\mathbf{y}\right\}+\operatorname{Var}_{p}\left\{\mathbb{E}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y},\boldsymbol{\lambda}\right]\mid\mathbf{y}\right\}\]
\[=\operatorname{Var}_{p_{\mathrm{base}}}\left[p_{T}(\mathbf{y},\boldsymbol{\theta})\mid\mathbf{y}\right]+\Delta+\operatorname{Var}_{p}\left\{\mathbb{E}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y},\boldsymbol{\lambda}\right]\mid\mathbf{y}\right\},\]

where we define

\[\Delta=\mathbb{E}\left\{\operatorname{Var}_{p}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y},\boldsymbol{\lambda}\right]\mid\mathbf{y}\right\}-\operatorname{Var}_{p}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y},\boldsymbol{\lambda}=\boldsymbol{\lambda}_{0}\right].\]

We note that the second equality follows from the fact that \(p_{\mathrm{base}}(\boldsymbol{\theta}\mid\mathbf{y})=p(\boldsymbol{\theta}\mid\mathbf{y},\boldsymbol{\lambda}=\boldsymbol{\lambda}_{0})\) and \(p_{\mathrm{base}}(\mathbf{y}\mid\boldsymbol{\theta})=p(\mathbf{y}\mid\boldsymbol{\theta},\boldsymbol{\lambda}=\boldsymbol{\lambda}_{0})\).
In any given model expansion, \(\Delta\) may be positive or negative, as \(\operatorname{Var}_{p}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y},\boldsymbol{\lambda}\right]\) may vary arbitrarily over the support of \(p(\boldsymbol{\lambda}\mid\mathbf{y})\). On the other hand, we clearly always have \(\operatorname{Var}_{p}\left\{\mathbb{E}\left[p_{T}(\mathbf{y},(\boldsymbol{\theta},\boldsymbol{\lambda}))\mid\mathbf{y},\boldsymbol{\lambda}\right]\mid\mathbf{y}\right\}\geq 0\). Thus, this identity along with (7) strongly suggests that \(\sigma_{\mathbf{y}}^{2}\) and the non-uniformity of \(\mathsf{post}{-}p\) tend to increase through the process of model expansion.

We also note that this problem is not exclusive to \(\mathsf{post}{-}p\). If the posterior predictive \(p\)-value is highly non-uniform, then we should expect similar posterior predictive checks, such as replication plots, to be problematic for purposes of model rejection as well. A check which produces replications that appear visually similar to the observed data \(20\%\) of the time would usually be considered a positive result for the proposed model. It may just as easily be true that if the model were correct, such a visual check would produce data similar to the observed data in a much higher proportion of replications. In short, visual checks can be conservative in the same way as numerical \(p\)-values.

We draw two conclusions from these observations. First, the above shows that when our tools for model discovery lead us to larger models, they also tend to lead toward models that are harder to reject with observable data insofar as our \(\mathsf{post}{-}p\) values become increasingly more conservative. While we may be willing to accept a trade-off in favor of tools that emphasize discovery over rejection all else equal, we believe few applied researchers would be comfortable with an arbitrarily high and increasing risk of selecting nearly unfalsifiable models. If this is correct, then this indicates a need to take the rejection goal seriously as an independent concern in model checking. Second, these calculations show that existing and common model checking tools such as \(\mathsf{post}{-}p\) are not suited to the rejection goal at least without some modification. Instead, what is needed is a model checking tool for which the difficulty of the assessment can be scaled to match the complexity of the model appropriately. In the next section, we review some existing proposals for remedying the non-uniformity of \(\mathsf{post}{-}p\) and introduce our proposed method, the joint \(p\)-value.

## 3 \(p\)-Values for Model Rejection

We now consider possible methods for partly remedying the difficulties associated with the posterior predictive \(p\)-value as a tool for model rejection. We begin with attempts to derive \(p\)-values which have exactly or approximately uniform distributions, and then turn to our proposed joint \(p\)-value.

### Exactly and Approximately Calibrated \(p\)-Values

Hjort et al. (2006) propose to overcome the conservativity of \(\mathsf{post}{-}p\) by plugging it into (an estimate of) its distribution function, which, by the probability integral transform, will result in a uniformly distributed quantity when the model is correctly specified.
In particular, if \(H\) is the distribution function of \(\mathsf{post}{-}p_{\mathbf{y}}\) with respect to the prior predictive distribution, then we can estimate \(H\) by the empirical distribution function \(\hat{H}(p)=\frac{1}{S}\sum_{s=1}^{S}\mathbb{1}\left\{\mathsf{post}{-}p_{\mathbf{y}_{\mathrm{rep}}^{(s)}}\leq p\right\}\), where \(\left\{\mathbf{y}_{\mathrm{rep}}^{(s)}\right\}_{s=1}^{S}\overset{iid}{\sim}p(\mathbf{y})\) is a sample from the prior predictive distribution. The calibrated posterior predictive \(p\)-value is then

\[\mathsf{cal}-p_{\mathbf{y}}=\hat{H}\left(\mathsf{post}{-}p_{\mathbf{y}}\right). \tag{9}\]

This calibration step fully resolves the conservativity problem when \(H\) is well-estimated. However, the computation of \(\hat{H}\) generally requires sampling from \(p(\boldsymbol{\theta}\mid\mathbf{y}_{\mathrm{rep}}^{(s)})\) separately for each \(s=1,\ldots,S\). This can quickly become computationally infeasible for moderate \(S\) if the model is sufficiently complex. Thus, other methods have been proposed that trade exact calibration for approximate calibration and better computational properties.

Bayarri and Berger (1999) propose \(p\)-values which are Bayesian in the sense that they account for posterior uncertainty but which enjoy reduced conservativity relative to \(\mathsf{post}{-}p\) by having a uniform frequency distribution in appropriate asymptotics. The key idea for achieving asymptotic uniformity comes from the observation that \(\mathsf{post}{-}p\) involves a double use of the data whereby the posterior "sees" the statistic \(T\) against which it will subsequently be tested. This artificially reduces the difficulty of the test, leading to conservativity. This diagnosis is partly justified by considering tests with ancillary statistics \(T\). Since these have distributions which are independent of \(\boldsymbol{\theta}\), the posterior contains no information about \(T\), and \(\mathsf{post}{-}p\) becomes exactly uniform for such \(T\). The proposed \(p\)-values thus attempt to formalize the idea of "removing" the information in \(T\) from the posterior before testing. The first of these is the conditional predictive \(p\)-value, defined for a test statistic \(T\) as

\[\mathsf{cond}{-}p_{T}(\mathbf{y})=\mathbb{P}_{p\left(\mathbf{y}_{\mathrm{rep}}\mid\hat{\boldsymbol{\theta}}_{T}\right)}\left(T(\mathbf{y}_{\mathrm{rep}})\geq T(\mathbf{y})\right), \tag{10}\]

where we define \(\hat{\boldsymbol{\theta}}_{T}=\arg\max p\left(\mathbf{y}\mid\boldsymbol{\theta},T(\mathbf{y})\right)\) as the \(T\)-conditional maximum likelihood estimate of \(\boldsymbol{\theta}\), and

\[p\left(\mathbf{y}_{\mathrm{rep}}\mid\hat{\boldsymbol{\theta}}_{T}\right)=\int p\left(\mathbf{y}_{\mathrm{rep}}\mid\boldsymbol{\theta},T(\mathbf{y})\right)p\left(\boldsymbol{\theta}\mid\hat{\boldsymbol{\theta}}_{T}\right)d\boldsymbol{\theta}. \tag{11}\]

The key idea in this definition is that \(\hat{\boldsymbol{\theta}}_{T}\) should capture as much of the information about \(\boldsymbol{\theta}\) contained in the data as possible while excluding the information in \(T\). When \(\hat{\boldsymbol{\theta}}_{T}\) is sufficient for \(\boldsymbol{\theta}\), \(\mathsf{cond}{-}p_{T}\) is exactly uniform. However, forming and conditioning on the conditional MLE is often computationally difficult. We can instead try to remove the information contained in \(T\) from the posterior directly by conditioning \(T\) out of the likelihood.
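Returning to the explicit calibration (9): in small conjugate models, where the posterior is available in closed form, the double simulation is perfectly feasible. The following is a runnable sketch for a toy normal-normal model (the model, the test statistic, and all names are our own illustration):

```
import numpy as np

rng = np.random.default_rng(0)
n, tau, sigma = 20, 1.0, 1.0

def T(y):
    # Test statistic: largest absolute observation.
    return np.abs(y).max()

def post_p(y, n_draws=2000):
    # Normal-normal conjugacy: theta | y ~ N(m, v) in closed form.
    v = 1.0 / (1.0 / tau**2 + n / sigma**2)
    m = v * y.sum() / sigma**2
    theta = rng.normal(m, np.sqrt(v), size=n_draws)
    y_rep = rng.normal(theta[:, None], sigma, size=(n_draws, n))
    return np.mean(np.abs(y_rep).max(axis=1) >= T(y))

def prior_predictive():
    return rng.normal(rng.normal(0.0, tau), sigma, size=n)

# Estimate H, the prior predictive CDF of post-p, by replication.
p_rep = np.array([post_p(prior_predictive()) for _ in range(500)])

def cal_p(y):
    # Plug the observed post-p into the estimated CDF, as in (9).
    return np.mean(p_rep <= post_p(y))
```

For models where \(p(\boldsymbol{\theta}\mid\mathbf{y}_{\mathrm{rep}}^{(s)})\) requires MCMC, each of the \(S\) replications needs its own posterior fit, which is exactly the cost that the approximate methods discussed next avoid.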
Removing the information in \(T\) by conditioning it out of the likelihood results in Bayarri and Berger's partial predictive \(p\)-value:

\[\mathrm{part}{-}p_{T}\left(\mathbf{y}\right)=\mathbb{P}_{p\left(\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y}\setminus T(\mathbf{y})\right)}\left(T(\mathbf{y}_{\mathrm{rep}})>T(\mathbf{y})\right), \tag{12}\]

where we define the partial posterior and posterior predictive distributions as

\[p(\boldsymbol{\theta}\mid\mathbf{y}\setminus T(\mathbf{y}))\propto p(\mathbf{y}\mid\boldsymbol{\theta},T(\mathbf{y}))p(\boldsymbol{\theta}),\quad p(\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y}\setminus T(\mathbf{y}))=\int p(\mathbf{y}_{\mathrm{rep}}\mid\boldsymbol{\theta})p(\boldsymbol{\theta}\mid\mathbf{y}\setminus T(\mathbf{y}))d\boldsymbol{\theta}. \tag{13}\]

Since \(T(\mathbf{y})\) is determined exactly by \(\mathbf{y}\), the partial posterior differs from the posterior by a factor proportional to \(p(T(\mathbf{y})\mid\boldsymbol{\theta})^{-1}\). That these \(p\)-values approximately succeed in removing the conservativity problem is justified by Theorem 2 of Robins et al. (2000), which implies that \(\mathsf{cond}{-}p\) and \(\mathsf{part}{-}p\) both have asymptotically uniform frequency distributions under sampling models of the form

\[p\left(\mathbf{y}\mid\boldsymbol{\theta},\psi_{n}\right)=\prod_{i=1}^{n}p_{i}\left(\mathbf{y}_{i}\mid\boldsymbol{\theta},\psi_{n}\right), \tag{14}\]

where \(\psi_{n}\in\mathbb{R}\) is a one-dimensional nuisance parameter. Robins et al. also propose a number of other methods for deriving approximately calibrated \(p\)-values which depend on either modifications of the test statistic \(T\) or on approximate recalibrations of simpler \(p\)-values such as \(\mathsf{post}{-}p\). We do not treat these approaches in detail here since any generally available computational speedups relative to \(\mathsf{part}{-}p\) and \(\mathsf{cond}{-}p\) are usually achieved by exploiting some aspect of the asymptotics of (14), which we argue below is an overly limiting model in many cases. The interested reader can consult Robins et al. (2000) for details. Because \(\mathsf{part}{-}p\) and \(\mathsf{cond}{-}p\) have identical asymptotic performance under (14) and \(\mathsf{part}{-}p\) is generally easier to compute, we will focus all subsequent comparisons on \(\mathsf{part}{-}p\).

While \(\mathsf{part}{-}p\) is also usually less computationally expensive than the explicit calibration required for \(\mathsf{cal}-p\), it can still suffer from substantial computational costs when \(p(T(\mathbf{y})\mid\boldsymbol{\theta})^{-1}\) is not analytically available, which is usually the case when the model is sufficiently complex. We are unaware of a scheme for estimating this quantity in general other than estimating \(p(T(\mathbf{y})\mid\boldsymbol{\theta})\) with a kernel density estimator and inverting the result (which is the recommended strategy in Bayarri and Berger (2000)). Such kernel density estimates can be highly inefficient in the tails of the density, leading to explosive errors in the inverse. A strategy which retains asymptotic uniformity but is almost universally easy to compute is given by the sampled posterior predictive \(p\)-value (Gosselin, 2011; Johnson, 2007b, 2004). Unlike the previous approaches, this method generates a random \(p\)-value by first drawing a sample \(\widetilde{\boldsymbol{\theta}}\) from the posterior distribution \(p(\boldsymbol{\theta}\mid\mathbf{y})\) and then computing a \(p\)-value with respect to \(p(\mathbf{y}_{\text{rep}}\mid\widetilde{\boldsymbol{\theta}})\).
In symbols:

\[\mathsf{sampled}\!-\!p_{\mathbf{y}}=\mathbb{P}_{p(\mathbf{y}_{\text{rep}}|\widetilde{\boldsymbol{\theta}})}\left(T(\mathbf{y}_{\text{rep}})\geq T(\mathbf{y})\right),\quad\widetilde{\boldsymbol{\theta}}\sim p(\boldsymbol{\theta}\mid\mathbf{y}). \tag{15}\]

The posterior predictive \(p\)-value is just the expected value of \(\mathsf{sampled}\!-\!p\) over the posterior distribution. Estimating (15) by a Monte Carlo average is generally extremely fast since sampling from \(p(\mathbf{y}_{\text{rep}}\mid\widetilde{\boldsymbol{\theta}})\) is trivial for most models. By not aggregating over the posterior, and in particular by exploiting the skew of the distribution of \(\mathsf{post}\!-\!p\) when the model is misspecified, the resulting sampled \(p\)-value is also able to obtain asymptotic uniformity. However, relative to \(\mathsf{part}\!-\!p\), \(\mathsf{sampled}\!-\!p\) can be more conservative on average preasymptotically, and can be substantially more conservative for any single realization when the variance is non-negligible. Together, \(\mathsf{cal}-p\), \(\mathsf{part}\!-\!p\), and \(\mathsf{sampled}\!-\!p\) represent a spectrum of Bayesian \(p\)-values which trade more or less computational ease for more or less reduction in conservativity.

### Joint \(p\)-Values

The exactly and approximately calibrated \(p\)-values of the last section were based on the idea that posterior predictive checks can be too easy when we fail to set our thresholds for comparison relative to the corresponding frequency distribution. Thus, calibrating the \(p\)-values allows us to set thresholds appropriately to maintain a certain level of difficulty. On the other hand, Meng's bound (6) tells us that the miscalibration problem is asymmetric. If our nominal \(p\)-value is so small that twice that value is below our threshold, then we can still confidently reject our model on frequentist grounds. We can try to increase the difficulty of our tests, and thus the likelihood of smaller nominal \(p\)-values, by modifying our choice of test quantity. One way to achieve this is with ancillary statistics, which yield posterior predictive \(p\)-values that are exactly uniform (Gelman, 2013). However, discovering ancillary (or approximately ancillary) statistics is often difficult. And if our workflow assumption above holds, then we expect the discovery of ancillary statistics to become more difficult as our model size increases and our sampling distributions accommodate a greater variety of data behaviors. Since our primary concern is constructing tests for model rejection which can scale with model complexity, this is particularly worrying. Similarly, the use of pivotal discrepancy measures (which may depend on parameters \(\boldsymbol{\theta}\) as well as data \(\mathbf{y}\)) has been proposed since it is easier to calibrate the corresponding \(p\)-values (Gelman et al., 1996; Johnson, 2007a; Yuan and Johnson, 2011). But pivotal quantities may not exist when the observed data are not independent given the parameters, and there are no guarantees that pivotal quantities exist which quantify any particular feature of interest even when this assumption holds. Another method for increasing the difficulty of model checks is to hold out some portion of the data with which the test quantity is computed and then compare this quantity to the model fit to the remainder of the data (Vehtari and Lampinen, 2002; Stern and Cressie, 2000).
Like the exactly calibrated \(p\)-value, this approach requires repeated sampling from the posterior distribution for different sets of observed data, which is often prohibitively computationally expensive. Many faster approximate procedures have been proposed, but none of these can be applied successfully across all types of models (Li et al., 2015, 2017; Marshall and Spiegelhalter, 2003).

A more general and easily-applied approach for generating harder tests of our models is obtained by using many test statistics at once. If \(\mathcal{T}=\{T_{s}\}_{s=1}^{d}\) is a collection of test statistics, then the corresponding joint posterior predictive \(p\)-value is

\[\mathsf{joint}\!-\!p_{\mathcal{T}}(\mathbf{y})=\mathbb{E}_{p(\mathbf{y}_{\text{rep}}|\mathbf{y})}\mathbb{1}\left\{T_{1}(\mathbf{y}_{\text{rep}})>T_{1}(\mathbf{y})\text{ and }T_{2}(\mathbf{y}_{\text{rep}})>T_{2}(\mathbf{y})\text{ and }\cdots T_{d}(\mathbf{y}_{\text{rep}})>T_{d}(\mathbf{y})\right\}. \tag{16}\]

An obvious problem with using a joint \(p\)-value is that we expect its observed value to shrink towards \(0\) as \(d\) increases even if the proposed model is correct. Furthermore, the joint \(p\)-value no longer satisfies Meng's bound (6). The first step towards making \(\mathsf{joint}\!-\!p\) useful is thus a simple generalization of Meng's bound which applies to multiple test statistics.

**Theorem 1** (Frequency Bound for \(\mathsf{joint}\!-\!p\)).: _For any level \(\alpha\in[0,1]\), we have that_

\[\mathbb{P}_{p(\mathbf{y})}\left(\mathsf{joint}\!-\!p_{\mathcal{T}}(\mathbf{y})\leq\alpha\right)\leq\inf_{s\in[\alpha,1]}\frac{\int_{0}^{s}F(t)dt}{s-\alpha}, \tag{17}\]

_where \(F\) is the cumulative distribution function of the random variable_

\[\mathbb{E}_{p(\mathbf{y}_{\text{rep}}|\boldsymbol{\theta})}\mathbb{1}\left\{T_{1}(\mathbf{y}_{\text{rep}})>T_{1}(\mathbf{y})\text{ and }T_{2}(\mathbf{y}_{\text{rep}})>T_{2}(\mathbf{y})\text{ and }\cdots T_{d}(\mathbf{y}_{\text{rep}})>T_{d}(\mathbf{y})\right\}. \tag{18}\]

Proof.: See Appendix A.

To directly estimate the cumulative distribution function of \(\mathsf{joint}\!-\!p\) or \(\mathsf{post}\!-\!p\) requires repeated simulation of the posterior predictive distribution \(p(\mathbf{y}_{\text{rep}}\mid\mathbf{y})\) for each draw of \(\mathbf{y}\) from the model. In all but the simplest models, this requires sampling from \(p(\boldsymbol{\theta}\mid\mathbf{y})\) for each such \(\mathbf{y}\), which will often be prohibitively expensive. Theorem 1 shows that we can bound the cumulative distribution function of \(\mathsf{joint}\!-\!p\) by an optimum involving the cumulative distribution function of (18), which can be simulated directly with draws from \(p(\mathbf{y}_{\text{rep}}\mid\boldsymbol{\theta})\). Thus, Theorem 1 establishes that \(\mathsf{joint}\!-\!p\) can be interpreted for purposes of model rejection. But we have yet to establish that \(\mathsf{joint}\!-\!p\) improves on \(\mathsf{post}\!-\!p\) for rejection purposes in general. Since we expect \(F\) to increase more sharply at \(0\) as \(d\) increases, the bound (17) will generally get worse with increasing \(d\) for a fixed level of the joint \(p\)-value. Nevertheless, this bound can still provide value over \(\mathsf{post}\!-\!p\) for rejection purposes if the nominal \(p\)-value falls fast enough with \(d\).
This can occur, for instance, when the values of \(\boldsymbol{\theta}\) for which \(p(\mathbf{y}\mid\boldsymbol{\theta})\) best fits each \(T_{s}\in\mathcal{T}\) lie in mostly distinct subsets of the parameter space. In particular, define

\[\boldsymbol{\Theta}_{s,\alpha}=\{\boldsymbol{\theta}\in\boldsymbol{\Theta}\mid\mathbb{P}_{p(\mathbf{y}_{\text{rep}}|\boldsymbol{\theta})}(T_{s}(\mathbf{y}_{\text{rep}})\geq T_{s}(\mathbf{y}))\geq\alpha\}. \tag{19}\]

Each \(\boldsymbol{\Theta}_{s,\alpha}\) can be thought of as the subspace corresponding to data generating processes for which the observed \(T_{s}\) is not atypical. If the \(\boldsymbol{\Theta}_{s,\alpha}\) each have sufficient posterior probability for moderate \(\alpha\), then the corresponding \(\mathsf{post}\!-\!p\) values will be too large to reject. Nevertheless, if the \(\boldsymbol{\Theta}_{s,\alpha}\) also have small overlap, then the nominal value of \(\mathsf{joint}\!-\!p\) can be vanishingly small. In such a case, the bound (17) may still be sufficient to reveal the lack of fit. This situation is illustrated in Figure 1.

Figure 1: A schematic representation of how the marginal posterior predictive \(p\)-values can be relatively large while the joint \(p\)-value is small. In the left panel, because the \(\boldsymbol{\Theta}_{s,0.2}\) have posterior probability \(0.3\), \(\mathsf{post}\!-\!p_{T_{s}}\) is bounded below by \(0.3\times 0.2=0.06\). In the right panel, because the intersection of the \(\boldsymbol{\Theta}_{j,0.01}\) has posterior probability less than \(0.04\), \(\mathsf{joint}\!-\!p\) is bounded above by \(0.01\times 0.96+1\times 0.04<0.05\).

### Computation and interpretation of \(p\)-values

We now turn to a comparison of \(\mathsf{cal}-p\), \(\mathsf{part}\!-\!p\), \(\mathsf{sampled}\!-\!p\), and \(\mathsf{joint}\!-\!p\) in terms of ease of use and interpretive power for model rejection. For our comparison of computational difficulty, we focus only on \(\mathsf{part}\!-\!p\) and \(\mathsf{joint}\!-\!p\) since \(\mathsf{sampled}\!-\!p\) generally poses no computational challenges and \(\mathsf{cal}-p\) is usually the least computationally feasible option.

#### 3.3.1 Computing \(\mathsf{part}\!-\!p\) and \(\mathsf{joint}\!-\!p\)

The nominal value of \(\mathsf{joint}\!-\!p\) can be estimated for any \(\mathcal{T}\) in the same manner as \(\mathsf{post}\!-\!p\) is estimated for a single statistic. Because \(\mathsf{joint}\!-\!p\) will concentrate near \(0\) as \(d\) increases, the estimation of \(\mathsf{joint}\!-\!p\) may require a greater number of simulations from \(p(\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y})\) in order to resolve the order of magnitude to acceptable accuracy. However, in practice it often suffices to retain a fixed number of posterior draws \(\boldsymbol{\theta}_{i}\) and to take multiple draws from \(p(\mathbf{y}\mid\boldsymbol{\theta}_{i})\) for each \(i\). Thus, the increase in computational overhead from this step is usually modest. The estimation of the corresponding frequency bound (17) is generally more taxing, as we will usually not know the cumulative distribution function \(F\) in closed form. However, this function can be estimated with inexpensive Monte Carlo simulations of the joint model. Algorithm 1 describes the procedure, repeatedly estimating the empirical CDF of the random variable (18) conditional on \(\boldsymbol{\theta}\) and then aggregating the results.
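A compact sketch of that procedure, written from the description above (the function names and the grid search are our own, not the paper's code):

```
import numpy as np

def estimate_F(sample_prior, sample_data, stats,
               n_prior=200, m_sampling=500, l_estimate=200):
    """Monte Carlo draws whose empirical CDF estimates F in Theorem 1.

    sample_prior() -> theta; sample_data(theta) -> y;
    stats(y) -> length-d array of test statistics.
    """
    draws = []
    for _ in range(n_prior):
        theta = sample_prior()
        # Statistics on replicated datasets from p(y_rep | theta).
        T_rep = np.array([stats(sample_data(theta))
                          for _ in range(m_sampling)])
        for _ in range(l_estimate):
            T_obs = stats(sample_data(theta))  # a fresh "observed" dataset
            # One draw of the random variable (18), conditional on theta.
            draws.append(np.mean(np.all(T_rep > T_obs, axis=1)))
    return np.sort(np.array(draws))

def freq_bound(F_draws, alpha, n_grid=200):
    """Grid search for inf_{s in (alpha, 1]} int_0^s F(t) dt / (s - alpha)."""
    ts = np.linspace(0.0, 1.0, 1001)
    F = np.searchsorted(F_draws, ts, side="right") / len(F_draws)
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(ts))])
    ss = np.linspace(alpha + 1e-3, 1.0, n_grid)
    return float(np.min(np.interp(ss, ts, cum) / (ss - alpha)))
```

Note that, within each \(\boldsymbol{\theta}\), the same replicated statistics are reused across the inner loop, which is exactly the source of the dependence across the \(\hat{p}^{(n,l)}\) discussed next.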
Importantly, we only sample from the prior and sampling distributions, never from the posterior, which significantly speeds up the Monte Carlo operations compared to exact calibration. Once we have our estimate \(\hat{F}\), we can estimate the bound (17) by performing a grid search for the optimum of \(\int_{0}^{s}\hat{F}(t)dt/(s-\alpha)\). Since \(\hat{F}\) is a one-dimensional function and \([\alpha,1]\) is compact, the last step can generally be performed very quickly. When the nominal observed value of \(\mathsf{joint}\!-\!p\) is small, we will need a high resolution estimate of \(F\) near \(0\) in order to accurately estimate the optimum, and this can require large values of \(M_{\text{sampling}}\) and \(L_{\text{estimate}}\). Because the complexity of the algorithm scales as

\[N_{\text{prior}}\times M_{\text{sampling}}\times L_{\text{estimate}},\]

the cost of estimating \(F\) will almost always dominate the computation. In practice, this cost can be substantially reduced by taking advantage of the fact that the computation can be carried out in parallel over the samples \(\{\boldsymbol{\theta}_{n}\}\). The reader may also notice that the \(\hat{p}^{(n,l)}\) in Algorithm 1 are not independent across \(1\leq l\leq L_{\text{estimate}}\). While this does introduce correlation in the errors, the estimator remains asymptotically unbiased. In particular, Barbe et al. (1996) showed under weak regularity conditions that the \(\sqrt{L_{\text{estimate}}}\Big{(}\hat{F}^{(n)}-F^{(n)}\Big{)}\) converge in distribution to centered Gaussian processes, where \(F^{(n)}\) is the CDF of (18) conditional on \(\boldsymbol{\theta}_{n}\).

The computation of \(\mathsf{part}\!-\!p\) is simpler but more subtle. In all but the simplest cases, we must sample from the partial posterior predictive distribution \(p(\mathbf{y}_{\text{rep}}\mid\mathbf{y}\setminus T(\mathbf{y}))\) in order to estimate \(\mathsf{part}\!-\!p\). This will usually be achieved by sampling first from the partial posterior \(p(\boldsymbol{\theta}\mid\mathbf{y}\setminus T(\mathbf{y}))\), which can be done either through direct simulation or by importance resampling draws from the total posterior with the unnormalized weights \(1/p(T(\mathbf{y})\mid\boldsymbol{\theta})\). Whatever our strategy for sampling the partial posterior, we will generally need an estimate of \(p(T(\mathbf{y})\mid\boldsymbol{\theta})\), as this will only be available analytically for the simplest models and test statistics. In Bayarri and Berger (2000), it is recommended that kernel density estimation can be applied when the sampling distribution of \(T\) is unknown. In theory the required simulation is straightforward, since sampling from \(p(T(\mathbf{y})\mid\boldsymbol{\theta})\) is as simple as sampling from \(p(\mathbf{y}\mid\boldsymbol{\theta})\) and computing \(T\) on each sample. Like computing the bound for \(\mathsf{joint}\!-\!p\), this requires a double simulation whereby we first sample \(N_{\text{post}}\) values of \(\boldsymbol{\theta}_{n}\sim p(\boldsymbol{\theta}\mid\mathbf{y})\), and then sample \(M_{\text{sampling}}\) values of \(\mathbf{y}_{m}\sim p(\mathbf{y}\mid\boldsymbol{\theta}_{n})\) for each \(1\leq n\leq N_{\text{post}}\). In practice, we often take \(N_{\text{post}}\) much smaller than \(N_{\text{prior}}\), but sampling once from \(p(\boldsymbol{\theta}\mid\mathbf{y})\) can be much more expensive than sampling once from \(p(\boldsymbol{\theta},\mathbf{y})\). The greater difficulty in computing \(\mathsf{part}\!-\!p\) lies in the need for potentially intractably large values of \(M_{\text{sampling}}\).
Such large values are needed, for instance, when the observed value of \(T(\mathbf{y})\) lies in the tail of the distribution \(p(T(\mathbf{y})\mid\boldsymbol{\theta})\) for some values of \(\boldsymbol{\theta}\) which are probable under the posterior. Because our sampling will generally depend on the inverse of this density, estimating these tails accurately can be essential to avoid explosively large weights. However, kernel density estimation is extremely inefficient in the tails and can systematically underweight tail probabilities with commonly used kernels. Various strategies may be available to stabilize the tail estimation, but we are not aware of any general methods that can succeed reliably without further assumptions or information about the underlying distribution.

#### 3.3.2 Interpreting \(p\)-values

The (approximately) calibrated \(p\)-values and joint predictive \(p\)-values face different trade-offs in interpretation. The frequency bound (17) will always be more conservative than the exactly calibrated \(p\)-value. And as we will see in Section 5, computational intensity tends to trade off with the conservativity of the corresponding \(p\)-value. However, the bound (17) holds in total generality and makes no assumptions about asymptotics or exchangeability. We regard this as a substantial benefit of \(\mathsf{joint}{-}p\), as the availability of interpretable frequencies is the key property of any model rejection tool. The asymptotic uniformity of \(\mathsf{part}{-}p\) and \(\mathsf{sampled}{-}p\) allows them to be interpreted directly (without intermediate bounds) as a frequency in sufficiently nice cases, but this interpretation is limited both by the applicability of the asymptotic model as well as our ability to judge whether we have sufficient data to reliably use the asymptotic approximation. For instance, the asymptotic model (14) for \(\mathsf{part}{-}p\) assumes both conditional independence as well as a shared parameter vector of fixed dimension. This framework is violated by models parametrized by a vector which grows in dimension with the data (e.g. local parameters in hierarchical models and HMM hidden states), and by models with non-independent sampling distributions (e.g. moving average models). Furthermore, when our workflow assumption is satisfied, we anticipate that the dimension of the parameter vector will increase as the modeling process proceeds. Consequently, even if the asymptotic assumption appears potentially valid in our initial models, the process of model expansion is likely to erode that validity. Since we were motivated by the problem of finding model rejection tools which are robust in the setting of model expansion, this issue is particularly concerning. A lesser but not irrelevant benefit of \(\mathsf{joint}{-}p\) is that it is interpretable as a tail probability with respect to the posterior predictive distribution of the fully fitted model. The partial predictive \(p\)-value can only be interpreted directly as a probability with respect to the partial posterior, and it may be unclear how to translate conclusions about this partial posterior to the full posterior. For instance, when the test statistic \(T\) is sufficient for all model parameters, the partial posterior reduces to the prior, and the partial predictive \(p\)-value reduces to the prior predictive \(p\)-value. And unlike all of the other \(p\)-values considered, the random \(\mathsf{sampled}{-}p\) is not a function of the model, observed data, and test statistics, complicating its interpretation as a measure of evidence.
## 4 Validating \(\mathsf{joint}{-}p\) with non-positively associated extremes

We now examine the behavior of \(\mathsf{joint}{-}p\) under different assumptions about how our extreme exceedances \(T_{s}(\mathbf{y}_{\mathrm{rep}})\geq T_{s}(\mathbf{y})\) are associated under the posterior predictive distribution \(p(\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y})\). We test the behavior of our bound under two conditions: an easier condition in which our test statistics are non-negatively associated under the true model, which we can study with exact computations, and a harder condition in which the test statistics can be non-positively associated, which we study with simulation experiments using a parametric model for the copula of the statistics.

### An easier case: non-negatively associated test statistics

Our main purpose is to establish quantitative evidence under reasonable assumptions for the intuition given after Theorem 1, viz., that our frequency bound (17) will in fact shrink to \(0\) as \(d\to\infty\) when our extreme exceedances are not positively associated and have corresponding marginal \(p\)-values which are not too large. First, we must establish precisely what we mean by non-positively and non-negatively associated test statistics. To do this, we first generalize our definition of \(F\) from Theorem 1 to arbitrary random variables. In particular, if \((Y_{1},\ldots,Y_{d})\) are random variables with joint cumulative distribution function \(\Phi(y_{1},\ldots,y_{d})\), then the cumulative distribution function \(F_{\Phi}\left(t\right)\) of the random variable \(\Phi(Y_{1},\ldots,Y_{d})\) is the Kendall function associated to the distribution \(\Phi\). If we denote the joint CDFs associated to \((-T_{1},\ldots,-T_{d})\) under \(p(\mathbf{y}\mid\boldsymbol{\theta})\) by \(\Phi_{\boldsymbol{\theta}}\), then \(F(t)=\mathbb{E}_{p(\boldsymbol{\theta})}F_{\Phi_{\boldsymbol{\theta}}}(t)\), and we can study the behavior of our frequency bound (17) by studying the Kendall functions \(F_{\Phi_{\boldsymbol{\theta}}}(t)\). (We negate the test statistics in constructing the Kendall function simply to keep the inequality direction consistent with (17), but nothing of importance is changed since this direction is arbitrary.) Furthermore, if \(F_{\Phi_{1}},F_{\Phi_{2}}\) are two Kendall functions, then \(\Phi_{2}\) is larger than \(\Phi_{1}\) in positive \(K\)-dependence order (\(\Phi_{1}\prec_{\mathrm{PKD}}\Phi_{2}\)) if \(F_{\Phi_{1}}(t)\geq F_{\Phi_{2}}(t)\) for all \(t\in[0,1]\)(Caperaa et al., 1997). To see that this ordering is related to the dependence structure of the distributions \(\Phi\), we note that joint extremes for the corresponding random variables \(\{Y_{i}\}_{i=1}^{d}\) become more likely as the variables become more positively associated, the probability \(\Phi(Y_{1},\ldots,Y_{d})\) becomes larger on average, and \(F_{\Phi}(t)\) falls. This idea can be formalized somewhat by noting that \(\Phi_{1}\prec_{\mathrm{PKD}}\Phi_{2}\) implies that \(\tau_{1}\leq\tau_{2}\), where \(\tau_{i}\) is the value of Kendall's tau associated to \(\Phi_{i}\). In two dimensions, Kendall's \(\tau\) is given by the formula \[\tau=\mathbb{E}\mathrm{sign}\left[\left(Y_{1}-Y_{1}^{\prime}\right)\left(Y_{2 }-Y_{2}^{\prime}\right)\right],\] where \((Y_{1},Y_{2}),(Y_{1}^{\prime},Y_{2}^{\prime})\stackrel{{ iid}}{{\sim}}\Phi\). This definition can be generalized to higher dimensions, where it measures an overall level and direction of association between the random variables \(\{Y_{i}\}_{i=1}^{d}\) (Joe, 1990).
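For intuition, the two-dimensional formula can be estimated directly by Monte Carlo; the sketch below assumes a hypothetical `sampler` returning iid draws from \(\Phi\).

```python
import numpy as np

def kendall_tau_mc(sampler, n_pairs=100_000, seed=1):
    """Monte Carlo estimate of Kendall's tau in two dimensions:
    tau = E sign[(Y1 - Y1')(Y2 - Y2')] over iid pairs from Phi.
    `sampler(n, rng)` is a hypothetical stand-in returning an (n, 2) array."""
    rng = np.random.default_rng(seed)
    y, y_prime = sampler(n_pairs, rng), sampler(n_pairs, rng)
    return np.mean(np.sign((y[:, 0] - y_prime[:, 0]) *
                           (y[:, 1] - y_prime[:, 1])))
```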
We will say simply that \((Y_{1},\ldots,Y_{d})\sim\Phi\) are positive \(K\)-dependent if \(\Psi\prec_{\text{PKD}}\Phi\), i.e. if \(F_{\Phi}(t)\leq F_{\Psi}(t)\) for all \(t\), where \(\Psi\) is the joint CDF corresponding to independent random variables \(U_{1},\ldots,U_{d}\). Because the Kendall function \(F_{\Psi}\) is independent of the marginal distributions of the \(U_{i}\), we may take them to be uniform on \([0,1]\). This simplification allows for direct calculation of \(F_{\Psi}\), which is shown in Barbe et al. (1996) to be given by the formula \[F_{\Psi}(t)=t\left[1+\sum_{i=1}^{d-1}\frac{\log(1/t)^{i}}{i!}\right]. \tag{20}\] With these facts established, we can now see that if our test statistics are positively associated in the sense of being positive \(K\)-dependent under \(p(\mathbf{y}\mid\boldsymbol{\theta})\) for each \(\boldsymbol{\theta}\), then our frequency bound (17) is further upper bounded by \[\inf_{s\in[\alpha,1]}\frac{\int_{0}^{s}F_{\Psi}(t)dt}{s-\alpha}. \tag{21}\] We study the behavior of this bound under the following assumptions on our posterior predictive \(p\)-values: 1. The extremal exceedances for our test statistics \(T_{s}\) are non-positively associated: \[\mathbb{P}\left(T_{s}(\mathbf{y}_{\text{rep}})\geq T_{s}( \mathbf{y})\mid T_{1}(\mathbf{y}_{\text{rep}})\geq T_{1}(\mathbf{y}),\ldots,T_ {s-1}(\mathbf{y}_{\text{rep}})\geq T_{s-1}(\mathbf{y})\right)\] (22) \[\leq\mathbb{P}\left(T_{s}(\mathbf{y}_{\text{rep}})\geq T_{s}( \mathbf{y})\right),\] (23) 2. The posterior predictive \(p\)-values are upper bounded by some number \(p\in(0,1)\): \[\mathsf{post}-p_{T_{s}}\left(\mathbf{y}\right)\leq p\text{ for all }1\leq s\leq d.\] (24) Under these conditions, we clearly have that \(\mathsf{joint}-p(\mathbf{y})\leq p^{d}\). In order to assess the performance of \(\mathsf{joint}-p\) under these assumptions, we plot the relation between (21) and the marginal bound \(p\) for dimensions \(d=2,\ldots,6\) in Figure 2. For each value of the bound \(p\), the level monotonically decreases as the number of test statistics increases. This suggests that, in this setting, the joint \(p\)-value is asymptotically successful in the sense that we will eventually reject the model if we have enough test statistics with non-positively associated observed extremal exceedances. It is also apparent that the efficiency of this procedure depends strongly on the marginal \(p\)-value bound \(p\). For smaller values of \(p\), passing from two to three test statistics is sufficient to halve the resulting level of the test, but for larger values, the drop is less than a fifth. This may be particularly troubling since those cases where \(p\) is larger are exactly those in which \(\mathsf{post}-p\) is least capable of model rejection. We note, however, that this represents a worst-case scenario in the sense that we make no assumptions about the gap in the inequalities (22). In practice, when our extremal exceedances are non-positively associated, we usually observe nominal values of \(\mathsf{joint}-p\) smaller than \(p^{d}\), and thus we can often obtain bounds that fall below the corresponding curve in Figure 2. We also note that these conclusions continue to hold if we relax the assumption of positive \(K\)-dependence for all \(\boldsymbol{\theta}\) to the assumption that \[\mathbb{E}_{p(\boldsymbol{\theta})}F_{\Phi_{\boldsymbol{\theta}}}(t)\leq F_{ \Psi}(t)\text{ for }t<\epsilon\] for some \(\epsilon>0\).
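The independence Kendall function (20) and the bound (21) are straightforward to evaluate numerically; the sketch below (grid sizes are arbitrary choices) reproduces the kind of curves shown in Figure 2.

```python
import numpy as np
from math import factorial

def F_independent(t, d):
    """The Kendall function (20) for d independent components."""
    t = np.asarray(t, dtype=float)
    return t * sum(np.log(1.0 / t) ** i / factorial(i) for i in range(d))

def pkd_level_bound(alpha, d, grid=20_000):
    """Numerically evaluate (21): the infimum over s in (alpha, 1] of
    int_0^s F_Psi(t) dt / (s - alpha).  Grid sizes are arbitrary."""
    t = np.linspace(1e-12, 1.0, grid)
    running_int = np.cumsum(F_independent(t, d)) * (t[1] - t[0])
    s = np.linspace(alpha, 1.0, grid + 1)[1:]     # avoid s = alpha exactly
    return np.min(np.interp(s, t, running_int) / (s - alpha))

# e.g. pkd_level_bound(0.2**3, d=3) approximates the d = 3 curve of
# Figure 2 at marginal bound p = 0.2, taking alpha = p^d as in the text.
```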
### A harder case: non-positively associated test statistics

The results of the last section suggest that our proposed joint \(p\)-value works when our test statistics are non-negatively associated under the model in the sense that, when our observed extreme exceedances are non-positively associated, the resulting frequency bound for \(\mathsf{joint}{-}p\) shrinks to \(0\) as \(d\) grows. The situation for \(\mathsf{joint}{-}p\) is harder, however, when the test statistics are non-positively associated even under the proposed model. In this case, smaller joint \(p\)-values are more common under the proposed model, so rejecting the model becomes harder. Furthermore, investigating the behavior of \(\mathsf{joint}{-}p\) in this setting is more challenging since there is no general upper bound available for the average Kendall function \(F\). Instead, we use a parametric model of the test statistics \(\{T_{s}\}_{s=1}^{d}\) to investigate the performance of \(\mathsf{joint}{-}p\). In particular, we use a copula model, which specifies only the dependence structure of the statistics while leaving the marginal distributions arbitrary. As noted above, this is tractable since the Kendall function depends only on this copula. We need our copula model to be defined in arbitrarily many dimensions and to allow for modeling negative association between the test statistics, particularly in the tails of their distribution. The Gaussian copula is a natural choice in this setting since it is definable in any dimension and can be parametrized to represent negative association between each component variable. It is also critical that the Gaussian copula has zero tail dependence - a measure of the positive association between component variables when one takes an extreme value - since the behavior of the Kendall function near \(0\) is particularly sensitive to these extremes. The \(t\) copula, for instance, can be parametrized to represent negative overall association, but the resulting copulas always exhibit positive dependence in the tails (Joe, 2014). As a result, the corresponding Kendall functions are dominated by \(F_{\Psi}\) near zero. The Gaussian copula \(\Phi^{G}\) is parametrized by a correlation matrix \(R\), which we will take to have constant off-diagonal terms equal to \(-v/(d-1)\), where \(d\) is the number of test statistics. When \(v=1\), this is the minimum value that results in a valid correlation matrix \(R\) when the off-diagonal entries are constant. In light of the above, we view this as a reasonably hard case for \(\mathsf{joint}{-}p\), since we assume that every pair of test statistics is negatively associated in the proposed model and we thus have that \(\Phi^{G}\prec_{\text{PKD}}\Psi\). There is no known analytic expression for the Kendall function of the Gaussian copula, so we estimate it using the empirical CDF of \(\Phi^{G}\left(U_{1}^{(i)},\ldots,U_{d}^{(i)}\right)\), where \((U_{1}^{(i)},\ldots,U_{d}^{(i)})\) are Monte Carlo samples from \(\Phi^{G}\). Plugging this estimate in for \(F\) in (17), we can compute frequency bounds with \(\alpha=p^{d}\) for varying values of the marginal \(p\)-value bound \(p\) and the level of negative association \(v\).

Figure 2: Upper bounds on the level of the joint \(p\)-value test for non-negatively associated observed extremal exceedances versus the bound on the posterior predictive \(p\)-values of the test statistics separately, for various numbers of test statistics considered. The level decreases with the number of test statistics and with the bound on the marginal \(p\)-values.

Figure 3 plots the resulting bounds against dimension \(d=2,3,4\) for \(p=0.1,0.2,0.4\) and \(v=0.1,0.3,0.5\). The effectiveness of \(\mathsf{joint}{-}p\) in this setting is now contingent on the combination of \(p\) and \(v\). In the first column, for \(p=0.1\), the joint \(p\)-value continues to work well in the sense that the frequency bound decreases with \(d\) fast enough to be significant at the \(0.1\) level for just three test statistics and for all tested values of \(v\). In the second column, for \(p=0.2\), our bound falls with \(d\) and is below the corresponding bound for \(\mathsf{post}{-}p\) for \(v=0.1\) and \(v=0.3\), indicating that the joint \(p\)-value is improving on our ability to reject the model. But for \(v=0.5\) the bound actually increases with \(d\), indicating that our proposed procedure is no longer able to use the negative association in the observed extremal exceedances to reject the model. When \(p\) increases to \(0.4\), we find that the frequency bound increases with \(d\) regardless of the value of \(v\). We conclude that \(\mathsf{joint}{-}p\) can still provide value for model rejection when our test statistics are negatively associated under the proposed model, but, for this to be possible and efficient, we need either that the corresponding marginal \(p\)-values are not too large or that the test statistics must not be too negatively associated under the assumed model.

Figure 3: Upper bounds on the frequency of the joint \(p\)-value against dimension for negatively associated test statistics, for varying bounds on the marginal posterior predictive \(p\)-values (columns) and varying levels of negative dependence (rows).

## 5 Comparing Bayesian \(p\)-values in a simulation example

We now turn to a comparison between \(\mathsf{joint}{-}p\), \(\mathsf{part}{-}p\), \(\mathsf{sampled}{-}p\), and \(\mathsf{cal}-p\) in a simple simulation example. We take our observed data to be a random sample of size \(N=70\) from a \(\mathsf{beta}\left(1,1.5\right)\) distribution. We will assume a misspecified model with a \(\mathsf{beta}\left(\theta,\theta\right)\) sampling distribution and a uniform prior over \(\left[0,3\right]\). While our assumed sampling distribution is symmetric for all values of \(\theta\), the true data generating distribution is substantially skewed to the right, as shown in Figure 4. We test our assumed model with statistics \(T_{1}\) and \(T_{2}\) taken to be the \(0.05\) and \(0.95\) sample quantiles respectively. Qualitatively, values of \(\theta\) closer to zero yield sampling distributions that better match the observed lower quantile but overshoot the observed upper quantile. Similarly, larger values of \(\theta\) better match the observed upper quantile but now overshoot the observed lower quantile. If the posterior splits the difference between these regions of parameter space, then we should expect that both observed quantiles will be lower than what is typical in posterior predictive replications from the assumed model. Indeed, this is precisely what we see when we compute the probabilities of \(T_{s}(\mathbf{y}_{\text{rep}})\leq T_{s}(\mathbf{y})\) for \(s=1,2\), yielding posterior predictive \(p\)-values of \(\approx 0.07\) for both test statistics. Computing our bound for \(\mathsf{joint}-p\) and \(\mathsf{cal}-p\) requires estimating distribution functions of certain exceedance probabilities, and computing \(\mathsf{part}-p\) requires estimating the partial posteriors (13) for \(T_{1}\) and \(T_{2}\).
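As a compact sketch of how these posterior predictive quantities can be simulated, the following uses a grid posterior (adequate since \(\theta\) is one-dimensional); the realized values depend on the simulated dataset, with the paper's realized data giving \(\approx 0.07\) marginally and \(0.0028\) jointly.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)
N = 70
y = beta(1, 1.5).rvs(N, random_state=rng)        # simulated "observed" data
T_obs = np.quantile(y, [0.05, 0.95])             # the two test statistics

# Grid posterior: theta ~ Uniform(0, 3), beta(theta, theta) likelihood.
theta_grid = np.linspace(1e-3, 3, 2000)
log_lik = np.array([beta(t, t).logpdf(y).sum() for t in theta_grid])
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# Posterior predictive replications and lower-tail exceedances
# P(T_s(y_rep) <= T_s(y)), matching the direction used in the text.
thetas = rng.choice(theta_grid, size=4000, p=post)
T_rep = np.array([np.quantile(beta(t, t).rvs(N, random_state=rng),
                              [0.05, 0.95]) for t in thetas])
post_p = (T_rep <= T_obs).mean(axis=0)           # two marginal post-p values
joint_p = (T_rep <= T_obs).all(axis=1).mean()    # nominal joint-p
print(post_p, joint_p)
```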
Figure 5 displays the estimated distribution function \(\hat{F}\) around the nominal joint \(p\)-value \(\alpha=0.0028\) along with the optimization objective \(\int_{0}^{s}\hat{F}(t)dt/(s-\alpha)\). This shows in particular that we have estimated the distribution \(F\) to sufficient resolution around \(\alpha\) to trust our estimated bound. Plots of estimated intermediate quantities for \(\mathsf{part}-p\) and \(\mathsf{cal}-p\) are given in the appendix. Figure 6 displays estimates of the various candidate \(p\)-values for this problem. Because the nominal joint \(p\)-value is two orders of magnitude smaller than either of the \(\mathsf{post}-p\) values, we achieve a frequency upper bound which is comparable to the partial \(p\)-value and calibrated \(p\)-value for the upper quantile. The partial \(p\)-value and calibrated \(p\)-value for the lower quantile are about half the size of the joint \(p\)-value bound, reflecting the fact that our bound may substantially overestimate the true frequency of a nominal joint \(p\)-value. However, while the partial \(p\)-value is only asymptotically uniform, our frequency bound holds without asymptotic assumptions. And while the calibrated \(p\)-value is exactly uniform, it requires repeated simulation from the posterior predictive distribution for various values of \(\mathbf{y}\), whereas our bound requires repeated simulation from the joint model, which is typically orders of magnitude faster. Compared to the sampled \(p\)-value, our frequency bound is less than the median sampled \(p\)-value for either statistic. More importantly, the sampled \(p\)-value is a random quantity and is larger than our frequency bound more than \(60\%\) of the time for either statistic. Figure 7 displays the survival function of the sampled \(p\)-values for \(T_{1}\) and \(T_{2}\) along with various other \(p\)-values, showing that the sampled \(p\)-value is only less conservative on average than the posterior predictive \(p\)-value. Thus, while the sampled \(p\)-value is much easier to compute than our bound, it is more conservative in this problem and, like the partial \(p\)-value, is only guaranteed to be uniform asymptotically for general test statistics.

Figure 4: Solid: the \(\mathsf{beta}(1,1.5)\) distribution from which our observed data was generated. Dashed: our assumed \(\mathsf{beta}(\theta,\theta)\) sampling distribution for \(\theta=0.5,1,1.5,3\).

Figure 5: Left: the estimated distribution function of joint extremal exceedances (18) with vertical line indicating the nominal joint \(p\)-value. Right: the optimization objective on the right-hand side of (17) for a range of \(s\) with horizontal line at the optimum.

Unsurprisingly, at about a fifth the magnitude, the joint \(p\)-value bound is a substantial improvement over the corresponding frequency bounds for the individual posterior predictive \(p\)-values. Overall, \(\mathsf{joint}-p\) displays a useful balance of trade-offs in this problem. It is substantially less conservative than the alternatives which are easier to compute, and it comes within a factor of two of the more powerful alternatives while being easier to compute and offering preasymptotic guarantees.

## 6 Discussion and Future Work

The limitations of the posterior predictive \(p\)-value for purposes of model rejection have been established in the literature since at least Meng (1994) - nearly thirty years prior to the present time of writing.
And, as we have discussed, many alternatives with better power and conservativity properties have been proposed in the intervening time. Yet, as far as we can tell, none of these has succeeded in establishing itself as a widely used, recommended, and implemented alternative. In light of this, the reader might question the productivity of proposing another alternative to \(\mathsf{post}-p\). Indeed, we are sympathetic to this viewpoint, and we have therefore aimed to shape our new proposal around a diagnosis of the present state of relative stasis. Disagreement over the purpose of model checking may contribute significantly to this stasis insofar as it limits our ability to form a consensus around any particular set of tools. Some have rejected the sort of model checking that we consider here as fundamentally anti-Bayesian due to the way the data are used to update our beliefs outside of the posterior distribution. Others who see a role for frequentist considerations in Bayesian modeling nevertheless have argued for prioritizing the discovery goal of model checking, deemphasizing the limitations of common tools like \(\mathsf{post}-p\) for the rejection goal.

Figure 6: Candidate \(p\)-values and corresponding bounds (in parentheses, where applicable) for \(T_{1}\) and \(T_{2}\) equal to the \(0.05\) and \(0.95\) sample quantiles. The partial and calibrated \(p\)-values give the strongest evidence against the model, followed by the joint \(p\)-value, median sampled \(p\)-value, and posterior predictive \(p\)-value.

Figure 7: Survival functions for the sampled \(p\)-value computed for test statistics \(T_{1}\) and \(T_{2}\) equal to the \(0.05\) and \(0.95\) sample quantiles respectively. The solid, dashed and dotted lines represent the corresponding posterior predictive, joint, and calibrated \(p\)-values respectively. The sampled \(p\)-value is less conservative than \(\mathsf{post}-p\) on average in this problem, but more conservative than \(\mathsf{joint}-p\) and \(\mathsf{cal}-p\) on average.

A more subtle reason that alternatives to \(\mathsf{post}{-}p\) have not been widely adopted may be a tension that limits their usefulness over \(\mathsf{post}{-}p\) in practice. In particular, all but the most computationally expensive alternatives maximize their power and have known frequency properties only asymptotically. As we showed, the conservativity of \(\mathsf{post}{-}p\) increases as model complexity increases, as in the process of model expansion. But this setting of growing parameter dimension and fixed data size also threatens the reasonability of asymptotic approximations. By estimating non-asymptotic frequency bounds, our proposal aims to overcome this problem and to allow consistent frequentist interpretations across modeling contexts. Furthermore, by combining multiple statistics to increase the difficulty of the corresponding tests, we have shown that we can overcome the conservativity inherent to such bounds and obtain substantially higher power than corresponding bounds for \(\mathsf{post}{-}p\). In response to disagreements over the purpose of model checking, we again emphasize that model checking is necessarily a big tent containing distinct goals, no one of which can universally take priority over the others. We view \(\mathsf{joint}{-}p\) as a tool specialized to the goal of model rejection, and thus as a complement rather than a substitute for tools already widely in use, which may be better suited for other model checking tasks such as the discovery of alternative models.
The practical applicability of \(\mathsf{joint}{-}p\) may be limited when the computations involved become overly burdensome. When the nominal joint \(p\)-value (1) is extremely small, estimating the CDF \(F\) in the bound (17) to high accuracy around this nominal value may become very difficult. Particularly troubling is the fact that our Algorithm 1 may spend substantial resources estimating \(F\) globally, whereas we normally only require an estimate near 0 to obtain a good upper bound on (17). Thus, finding a means of reducing this seemingly extraneous computation to increase the efficiency of estimating our bound would be a useful direction for future work.
2309.05663
Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips
We tackle the task of reconstructing hand-object interactions from short video clips. Given an input video, our approach casts 3D inference as a per-video optimization and recovers a neural 3D representation of the object shape, as well as the time-varying motion and hand articulation. While the input video naturally provides some multi-view cues to guide 3D inference, these are insufficient on their own due to occlusions and limited viewpoint variations. To obtain accurate 3D, we augment the multi-view signals with generic data-driven priors to guide reconstruction. Specifically, we learn a diffusion network to model the conditional distribution of (geometric) renderings of objects conditioned on hand configuration and category label, and leverage it as a prior to guide the novel-view renderings of the reconstructed scene. We empirically evaluate our approach on egocentric videos across 6 object categories, and observe significant improvements over prior single-view and multi-view methods. Finally, we demonstrate our system's ability to reconstruct arbitrary clips from YouTube, showing both 1st and 3rd person interactions.
Yufei Ye, Poorvi Hebbar, Abhinav Gupta, Shubham Tulsiani
2023-09-11T17:58:30Z
http://arxiv.org/abs/2309.05663v1
# Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips

###### Abstract

We tackle the task of reconstructing hand-object interactions from short video clips. Given an input video, our approach casts 3D inference as a per-video optimization and recovers a neural 3D representation of the object shape, as well as the time-varying motion and hand articulation. While the input video naturally provides some multi-view cues to guide 3D inference, these are insufficient on their own due to occlusions and limited viewpoint variations. To obtain accurate 3D, we augment the multi-view signals with generic data-driven priors to guide reconstruction. Specifically, we learn a diffusion network to model the conditional distribution of (geometric) renderings of objects conditioned on hand configuration and category label, and leverage it as a prior to guide the novel-view renderings of the reconstructed scene. We empirically evaluate our approach on egocentric videos across 6 object categories, and observe significant improvements over prior single-view and multi-view methods. Finally, we demonstrate our system's ability to reconstruct arbitrary clips from YouTube, showing both \(1^{st}\) and \(3^{rd}\) person interactions.

## 1 Introduction

Our hands allow us to affect the world around us. From pouring the morning coffee to clearing the dinner table, we continually use our hands to interact with surrounding objects. In this work, we pursue the task of understanding such everyday interactions in 3D. Specifically, given a short clip of a human interacting with a rigid object, our approach can infer the shape of the underlying object as well as its (time-varying) relative transformation w.r.t. an articulated hand (see Fig. 1 for sample results). This task of recovering 3D representations of hand-object interactions (HOI) has received growing interest. While initial approaches [4, 16, 22, 75, 24] framed it as a 6-DoF pose estimation task for known 3D objects/templates, subsequent methods have tackled the reconstruction of a priori unknown objects [33, 89, 24]. Although single-view 3D reconstruction approaches can leverage data-driven techniques to reconstruct 3D from HOI images [96, 24, 89], these approaches cannot obtain precise reconstructions given the fundamentally limited nature of the single-view input. On the other hand, current video-based HOI reconstruction methods primarily exploit multi-view cues and rely on purely geometry-driven optimization for reconstruction. As a result, these methods are suited for in-hand scanning where a user carefully presents exhaustive views of the object of interest, but they are not applicable to our setting as aspects of the object may typically be unobserved. Towards enabling accurate reconstruction given short everyday interaction clips, our approach (DiffHOI) unifies the data-driven and the geometry-driven techniques. Akin to the prior video-based reconstruction methods, we frame the reconstruction task as that of optimizing a video-specific temporal scene representation. However, instead of purely relying on geometric reprojection errors, we also incorporate data-driven priors to guide the optimization. In particular, we learn a 2D diffusion network which models the distribution over plausible (geometric) object renderings conditioned on estimated hand configurations. Inspired by recent applications in text-based 3D generation [39, 61], we use this diffusion model as a generic data-driven regularizer for the video-specific 3D optimization.
We empirically evaluate our system across several first-person hand-object interaction clips from the HOI4D dataset [44], and show that it significantly improves over both prior single-view and multi-view methods. To demonstrate its applicability in more general settings, we also show qualitative results on arbitrary interaction clips from YouTube, including both first-person and third-person clips.

## 2 Related Works

**Reconstructing Hand-Object Interactions.** Hands and objects inherently undergo mutual occlusions, which makes 3D reconstruction extremely ill-posed during interactions. Hence, many works [1, 5, 14, 22, 23, 54, 77, 92] reduce the problem to 6D pose estimation by assuming access to instance-specific templates. Their frameworks of 6D pose optimization can be applied to both videos and images. Meanwhile, template-free methods follow two paradigms for videos and images. On one hand, video-based methods take in synchronized RGB(D) videos [18, 31, 70, 71, 91] or monocular videos [19, 83, 29] and fuse observations into a canonical 3D representation [51]. This paradigm does not use any prior knowledge and requires all regions to be observed in some frames, which is often not true in everyday video clips. On the other hand, methods that reconstruct HOI from single images [7, 8, 24, 38, 89] leverage learning-based priors to reconstruct more general objects. While they are able to generate reasonable per-frame predictions, it is not trivial to aggregate information from multiple views in one sequence and generate a time-consistent 3D shape. Our work is template-free and unifies both geometry-driven and data-driven methods.

**Generating Hand-Object Interactions.** Besides reconstructing the ongoing HOIs, many works have explored generating plausible HOIs in different formulations. Some works model their joint distributions [6, 28, 33, 72]. Works that are usually called affordance or grasp prediction study the conditional distribution that generates hands/humans given objects/scenes, in 3D representations [11, 30, 37, 2, 73] or 2D images [10, 36, 80, 90]. Recently, some other works explore the reverse problem of generating plausible scenes for a given human/hand pose [88, 3, 94, 86, 60]. In our work, we follow the latter formulation to learn an image-based generative model of hand-held objects given hands, since hand pose estimation [67] is good enough to bootstrap the system.

**Neural Implicit Fields for Dynamic Scenes.** Neural implicit fields [48, 49, 55] are flexible representations that can capture diverse shapes with various topologies and can be optimized with 2D supervision via differentiable rendering [79, 87]. To extend them to dynamic scenes, a line of work optimizes per-scene representations with additional general motion priors [38, 46, 56, 57, 62]. Though these methods can synthesize highly realistic novel views from nearby angles, they struggle to extrapolate viewpoints [15]. Another line of work incorporates category-specific priors [84, 85, 50, 76, 82] to model articulations. These methods typically work on single deformable objects of the same category, such as quadrupeds or drawers. In contrast, we focus on hand-object interactions across six rigid categories, where the dynamics are mostly due to changing spatial relations and hand articulations.

**Distilling diffusion models.** Diffusion models [64, 25] have made significant strides in text-to-image synthesis. They can also be quickly adapted to take additional inputs or generate outputs in other domains [93, 27].
Recent works have shown that these image-based diffusion models can be distilled [61, 78] into 3D scene representations for generation or reconstruction [13, 32, 41, 47, 81, 95]. Inspired by them, we also adopt a conditional diffusion model for guiding 3D inference of HOIs. However, instead of learning a prior over appearance, we use a diffusion model to learn a prior over a-modal geometric renderings.

## 3 Method

Given a monocular video of a hand interacting with a rigid object, we aim to reconstruct the underlying hand-object interaction, _i.e_. the 3D shape of the object and its pose in every frame, along with per-frame hand meshes and camera poses. We frame the inference as per-video optimization of an underlying 3D representation. While the multiple frames allow leveraging multi-view cues, they are not sufficient as the object of interest is often only partially visible in everyday video clips, due to limited viewpoints and mutual occlusion. Our key insight is to incorporate both view consistency across multiple frames and a data-driven prior on the HOI geometry. The learned interaction prior captures both category priors, _e.g_. mugs are generally cylindrical, and hand priors, _e.g_. pinched fingers are likely to hold thin handles. We train a conditional diffusion model for the prior, which guides the reconstructed HOI during per-video optimization. More specifically, given a monocular video \(\hat{I}^{t}\) with corresponding hand and object masks \(\hat{M}^{t}\equiv(\hat{M}^{t}_{h},\hat{M}^{t}_{o})\), we aim to optimize a HOI representation (Sec. 3.1) that consists of a time-persistent implicit field \(\mathbf{\phi}\) for the rigid object, a time-varying morphable mesh for the hand \(H^{t}\), the relative transformation between hand and object \(T^{t}_{h\to o}\), and time-varying camera poses \(T^{t}_{c\to h}\). The optimization objective consists of two terms (Sec. 3.3): a reprojection error from the estimated original viewpoint, and a data-driven prior term that encourages the object geometry to appear more plausible given category and hand information when viewed from another viewpoint. The prior is implemented as a diffusion model conditioned on a text prompt \(C\) about the category and renderings of the hand \(\pi(H)\) with geometry cues (Sec. 3.2). It denoises the rendering of the object \(\pi(O)\) and backpropagates the gradient to the 3D HOI representation by score distillation sampling (SDS) [61].

### HOI Scene Representation

**Implicit field for the object.** The rigid object is represented by a time-persistent implicit field \(\mathbf{\phi}\) that can handle unknown topology and has shown promising results when optimizing for challenging shapes [79, 85, 87]. For every point in the object frame, we use multi-layer perceptrons to predict the signed distance function (SDF) to the object surface, \(s=\mathbf{\phi}(\mathbf{X})\).

**Time-varying hand meshes.** We use a pre-defined parametric mesh model, MANO [66], to represent hands across frames. The mesh can be animated by low-dimensional parameters and thus better captures structured motions, _i.e_. hand articulation. We obtain hand meshes \(H^{t}\) in a canonical hand wrist frame by rigging MANO with 45-dim pose parameters \(\mathbf{\theta}^{t}_{A}\) and 10-dim shape parameters \(\mathbf{\beta}\), _i.e_. \(H^{t}=\text{MANO}(\mathbf{\theta}^{t}_{A},\mathbf{\beta})\). The canonical wrist frame is invariant to wrist orientation and only captures finger articulations.
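To make the representation concrete, below is a minimal PyTorch sketch of the persistent object field and its rigid transport into the hand frame used in the composition described next; the MLP width, depth, and activation are our own illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ObjectSDF(nn.Module):
    """Minimal MLP signed-distance field for the time-persistent object
    shape phi; width/depth here are illustrative, not the paper's exact
    configuration."""
    def __init__(self, hidden=256, depth=8):
        super().__init__()
        layers, dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.Softplus(beta=100)]
            dim = hidden
        layers.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x_obj):                 # (..., 3) points, object frame
        return self.net(x_obj).squeeze(-1)    # signed distances s

def sdf_in_hand_frame(phi, x_hand, T_h2o):
    """phi^t(x) = phi(T_{h->o}^t x): query the rigid field with points in
    the hand-wrist frame, given a 4x4 homogeneous pose T_h2o."""
    ones = torch.ones_like(x_hand[..., :1])
    x_h = torch.cat([x_hand, ones], dim=-1)               # homogeneous coords
    x_obj = (x_h @ T_h2o.transpose(-1, -2))[..., :3]      # transform points
    return phi(x_obj)
```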
**Composing to a scene.** Given the time-persistent object representation \(\mathbf{\phi}\) and a time-varying hand mesh \(H^{t}\), we then compose them into a scene at time \(t\) such that they can be reprojected back to the image space from the cameras. Prior works [59, 21, 23] typically track the 6D object pose directly in the camera frame \(T_{c\to o}\), which requires an object template to define the object pose. In our case, since we do not have access to object templates, the object pose in the camera frame is hard to estimate directly. Instead, we track the object pose with respect to the hand wrist \(T^{t}_{h\to o}\) and initialize it to identity. This is based on the observation that the object of interest usually moves together with the hand and undergoes "common fate" [69]. A point in the rigid object frame can be related to the predicted camera frame by composing the two transformations, camera-to-hand \(T^{t}_{c\to h}\) and hand-to-object \(T^{t}_{h\to o}\). As a notational convention, we denote the implicit field transformed to the hand frame at time \(t\) as \(\mathbf{\phi}^{t}(\cdot)\equiv\mathbf{\phi}(T^{t}_{h\to o}\cdot)\). Besides modeling camera extrinsics, we also optimize per-frame camera intrinsics \(\mathbf{K}^{t}\) to account for zoom-in effects, cropping operations, and inaccurate intrinsics estimation. In summary, given a monocular video with corresponding masks, the parameters to be optimized are \[\mathbf{\phi},\mathbf{\beta},\mathbf{\theta}^{t}_{A},T^{t}_{h\to o},T^{t}_{c\to h},\mathbf{K}^{t} \tag{1}\]

Figure 2: **Method Overview:** We decompose the HOI scene into 1) a rigid time-persistent implicit field \(\phi\) for the object, 2) hand meshes parameterized by hand shape \(\beta\) and articulations \(\theta^{t}_{A}\), and 3) their time-varying relative poses \(T^{t}_{h\to o}\). We define the camera poses \(T^{t}_{c\to h}\) in the hand frame. The scene representation is optimized with respect to a reprojection term from the original views \(v^{t}\) and a data-prior term from novel views \(v^{\prime}\).

**Differentiable Rendering.** To render the HOI scene into an image, we separately render the object (using volumetric rendering [87]) and the hand (using mesh rendering [43, 58]) to obtain geometry cues. We then blend their renderings into HOI images by their rendered depth. Given an arbitrary viewpoint \(v\), both differentiable renderers produce geometry images including mask, depth, and normal images, \(G_{h}\equiv(M_{h},D_{h},N_{h})\) and \(G_{o}\equiv(M_{o},D_{o},N_{o})\). To compose them into a semantic mask \(M_{HOI}\) that is later used to calculate the reprojection loss, we softly blend the individual masks by their predicted depth. Similar to blending two-layer surfaces in mesh rendering, the final semantic masks can be computed by alpha blending: \(M=B(M_{h},M_{o},D_{h},D_{o})\). Please refer to the supplementary material for the full derivation of the blending function \(B\).
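The paper defers the exact form of \(B\) to its supplementary material; the following is one plausible minimal sketch (our assumption, not the paper's derivation) that realizes the described behavior with a soft depth test.

```python
import torch

def blend_masks(M_h, M_o, D_h, D_o, tau=0.01):
    """Hypothetical soft blending B(M_h, M_o, D_h, D_o): where hand and
    object overlap, weight each mask by a sigmoid of the depth gap so
    the nearer surface dominates; tau controls the softness.
    Returns per-pixel (hand, object, background) probabilities."""
    w_h = torch.sigmoid((D_o - D_h) / tau)     # ~1 where the hand is nearer
    hand = M_h * (1 - M_o) + M_h * M_o * w_h
    obj = M_o * (1 - M_h) + M_h * M_o * (1 - w_h)
    bg = (1 - M_h) * (1 - M_o)                 # the three channels sum to 1
    return torch.stack([hand, obj, bg], dim=0)
```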
### Data-Driven Prior for Geometry

When observing everyday interactions, we do not directly observe all aspects of the object because of occlusions and limited viewpoint variability. Despite this, we aim to reconstruct the 3D shape of the full object. To do so, we rely on a data-driven prior that captures the likelihood of a common object geometry given its category and the hand interacting with it, \(p(\mathbf{\phi}^{t}|H^{t},C)\). More specifically, we use a diffusion model which learns a data-driven distribution over geometry renderings of objects given those of hands and the category: \[\log p(\mathbf{\phi}^{t}|H^{t},C)\approx\mathbb{E}_{v\sim V}\log p( \pi(\mathbf{\phi}^{t};v)|\pi(H^{t};v),C) \tag{2}\] where \(v\sim V\) is a viewpoint drawn from a prior distribution, \(C\) is the category label, and \(\pi\) is the rendering function. Since this learned prior operates only in the geometry domain, there is no domain gap when transferring the prior across daily videos with complicated appearances. We first pretrain this diffusion model with large-scale ground truth HOIs and then use the learned prior to guide per-sequence optimization (Sec. 3.3).

**Learning a prior over a-modal HOI geometry.** Diffusion models are a class of probabilistic generative models that gradually transform noise from a tractable (Gaussian) distribution into a complex (e.g. real image) data distribution. Diffusion models are supervised to capture the likelihood by de-noising corrupted images. During training, they take in corrupted images with a certain amount of noise \(\sigma_{i}\) along with conditions and learn to reconstruct the signals [26]: \[\mathcal{L}_{\text{DDPM}}[\mathbf{x};\mathbf{c}]=\mathbb{E}_{\epsilon\sim\mathcal{N} (\mathbf{0},\mathbf{I}),i}\|\mathbf{x}-D_{\mathbf{\psi}}(\mathbf{x}_{i},\sigma_{i},\mathbf{c})\|_{2}^ {2} \tag{3}\] where \(\mathbf{x}_{i}\) is a linear combination of the signal \(\mathbf{x}\) and noise \(\epsilon\), while \(D_{\mathbf{\psi}}\) is the denoiser. In our case, as shown in Fig. 3, the diffusion model de-noises the a-modal geometry rendering of an object given a text prompt and the hand. Additionally, the diffusion model is also conditioned on the rendering of the uv-coordinates of the MANO hand \(U_{h}\), because it can better disambiguate whether the hand palm faces front or back. More specifically, the training objective is \(\mathcal{L}_{\text{diff}}=\mathcal{L}_{\text{DDPM}}[G_{o};C,G_{h},U_{h}]\). The text prompt comes from a text template: "an image of a hand holding {_category_}".

**Implementation Details.** When we train the diffusion model with renderings of ground truth HOIs, we draw viewpoints with rotations from the uniform distribution in \(\text{SO}(3)\). We use the backbone of a text-to-image model [52] with cross attention and modify it to diffuse 5-channel geometry images (3 for normal, 1 for mask and 1 for depth). We initialize the weights from the image-conditioned diffusion model [52] pretrained with large-scale text-image pairs. The additional channels in the first layer are loaded from the average of the pretrained weights.
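As a schematic illustration of this training stage, the sketch below instantiates the reconstruction objective (3) on a batch of five-channel geometry images; the linear noise schedule and the `denoiser` interface are simplified stand-ins, not the actual backbone of [52].

```python
import torch
import torch.nn.functional as F

def ddpm_geometry_loss(denoiser, G_o, cond, n_steps=1000):
    """One stochastic evaluation of the reconstruction objective (3) on a
    batch of 5-channel geometry images (3 normal + 1 mask + 1 depth).
    `denoiser(x_t, sigma, cond)` is a placeholder for the conditional UNet."""
    b = G_o.shape[0]
    t = torch.randint(0, n_steps, (b,), device=G_o.device)
    alpha_bar = 1.0 - (t.float() + 0.5) / n_steps         # toy schedule, in (0, 1)
    scale = alpha_bar.sqrt().view(b, 1, 1, 1)
    sigma = (1.0 - alpha_bar).sqrt().view(b, 1, 1, 1)
    eps = torch.randn_like(G_o)
    x_t = scale * G_o + sigma * eps                       # corrupted input
    x0_pred = denoiser(x_t, sigma.flatten(), cond)        # reconstruct signal
    return F.mse_loss(x0_pred, G_o)
```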
### Reconstructing Interaction Clips in 3D

After learning the above interaction prior, at inference time, given a short monocular clip with semantic masks of hand and object, we optimize a per-sequence HOI representation to recover the underlying hand-object interactions. We do so by differentiable rendering of the 3D scene representation from the original views and from random novel views. The optimization objectives consist of the following terms.

**Reprojection error.** First, the HOI representation is optimized to explain the input video. We render the semantic mask of the scene from the estimated cameras for each frame and compare the rendered semantic masks (considering hand-object occlusion) with the ground truth masks: \(\mathcal{L}_{\text{reproj}}=\sum_{t}\|M^{t}-\hat{M}^{t}\|_{1}\).

**Learned prior guidance.** Meanwhile, the scene is guided by the learned interaction prior to appear more plausible from novel viewpoints, following Score Distillation Sampling (SDS) [61]. SDS treats the output of a diffusion model as a critic to approximate the gradient step towards more likely images without back-propagating through the diffusion model, for compute efficiency: \[\mathcal{L}_{SDS}=\mathbb{E}_{v,\epsilon,i}[w_{i}\|\pi(\mathbf{\phi}^{t})-\hat{G }_{o}^{i}\|_{2}^{2}] \tag{4}\] where \(\hat{G}_{o}^{i}\) is the reconstructed signal from the pre-trained diffusion model. Please refer to the relevant works [47, 61] or the supplementary for full details.

Figure 3: **Geometry-informed Diffusion Model:** The diffusion model takes in a noisy geometry rendering of the object, the geometry rendering of the hand, and a text prompt, to output the denoised geometry rendering of the object.

**Other regularization.** We also include two regularization terms: an Eikonal loss [17] that encourages the implicit field \(\mathbf{\phi}\) to be a valid distance function, \(\mathcal{L}_{\text{eik}}=(\|\nabla_{\mathbf{X}}\mathbf{\phi}\|_{2}-1)^{2}\), and a temporal loss that encourages the hand to move smoothly with respect to the object, \(\mathcal{L}_{\text{smooth}}=\sum_{t}\|T_{h\to o}^{t}H^{t}-T_{h\to o}^{t-1}H^{t-1}\|_{2}^{2}\).

**Initialization and training details.** While the camera and object poses are learned jointly with the object shape, it is crucial to initialize them to coarse positions [40]. We use FrankMocap [67], an off-the-shelf hand reconstruction system, to initialize the hand parameters, camera-to-hand transformations, and camera intrinsics. More specifically, FrankMocap predicts finger articulation \(\mathbf{\theta}_{A}^{t}\), wrist orientation \(\mathbf{\theta}_{w}^{t}\), and a weak perspective camera. The last two are used to compute the camera-to-hand transformation and the intrinsics of a full perspective camera. See the appendix for the derivation. We initialize the object implicit field to a coarse sphere [87] and the object poses \(T_{h\to o}^{t}\) to identity, such that the initial object sits roughly around the hand palm. The per-frame hand pose estimation sometimes fails badly in challenging frames due to occlusion and motion blur. We run a lightweight trajectory optimization on wrist orientation to correct such catastrophic failures. The optimization objective encourages smooth joint motion across frames while penalizing the difference to the per-frame prediction, _i.e_. \(\mathcal{L}=\|H(\mathbf{x}^{t})-H(\hat{\mathbf{x}}^{t})\|+\lambda\|H(\mathbf{x}^{t+1})-H( \mathbf{x}^{t})\|\), where \(\lambda\) is \(0.01\). Please see the appendix for full details.

## 4 Experiment

We first train the diffusion model on the egocentric HOI4D [44] dataset and visualize its generations in Section 4.1. Then, we evaluate the reconstruction of hand-object interactions quantitatively and qualitatively on the held-out sequences and compare DiffHOI with two model-free baselines (Section 4.2). We then analyze the effects of the category prior and the hand prior, ablate the contribution of each geometry modality, and analyze robustness to initial prediction errors (Section 4.3). In Section 4.4, we discuss how DiffHOI compares with template-based methods. Lastly, in Section 4.5, we show that our method is able to reconstruct HOI from in-the-wild video clips, in both first-person and third-person views.

**Dataset and Setup.** HOI4D is an egocentric dataset consisting of short video clips of hands interacting with objects. It is collected in controlled environments and recorded by head-worn RGBD cameras. Ground truth is provided by fitting the 6D poses of scanned objects to the RGBD videos.
We use all of the 6 rigid object categories of portable size (mug, bottle, kettle, knife, toy car, bowl). To train the diffusion model, we render one random novel viewpoint for each frame, resulting in 35k training points. We test the object reconstruction on held-out instances, two sequences per category. All baselines and our method use the segmentation masks from ground truth annotations and, where required, the hand poses from the off-the-shelf prediction system [67]. For the in-the-wild setting, we test on clips from EPIC-KITCHENS [12] videos and casual YouTube videos downloaded from the Internet. The segmentation masks are obtained using an off-the-shelf video object segmentation system [9].

### Visualizing Data-Driven Priors

We show conditional generations by the pre-trained diffusion model in Fig. 4. Given the geometry rendering of the hand (only visualizing surface normals), as well as a text prompt, we visualize 4 different generations from the diffusion model. The middle row shows the generated surface normals of the objects and the bottom row visualizes the generated object masks overlayed on top of the given hand masks, for a better view of the hand-object relations.

Figure 4: **Generations from conditional diffusion model:** Given the geometry rendering of hand \(G_{h}\) (only showing surface normals) and a text prompt \(C\), we visualize 4 different generations from the diffusion model. Middle row shows the generated surface normals of the objects and bottom row visualizes the generated object masks overlayed on the given hand masks. Note the left and middle column share the same text condition while middle and right column share the same hand condition.

Please see the appendix for additional examples and visualizations of all modalities. The generated objects match the category information in the prompt, while the generations are diverse in position, orientation, and size. Yet, all of the hand-object interactions are plausible, _e.g_. different generated handles all appear at the tip of the hand. Comparing the middle and right examples, different category prompts lead to different generations given the same hand rendering. With the same prompt but different hands (left and middle), the generated objects flip their orientation accordingly. In summary, Fig. 4 indicates that the learned prior is aware of both the hand prior and the category-level prior, and is hence informative for guiding 3D reconstruction from clips.

### Comparing Reconstructions on HOI4D

**Evaluation Metric.** We evaluate the object reconstruction errors. Following prior works [20, 29], we first align the reconstructed object shape with the ground truth by Iterative Closest Point (ICP), allowing scaling. Then we compute Chamfer distance (CD) and F-score [74] at \(5mm\) and \(10mm\), and report the mean over 2 sequences for each category. Chamfer distance focuses more on global shape and is affected by outliers, while F-score focuses on local shape details at different thresholds [74].
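For reference, a small sketch of these two metrics under common conventions (exact definitions, e.g. squared vs. unsquared distances, vary across papers); the inputs are assumed to be ICP-aligned point clouds in millimeters.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred_pts, gt_pts, thresholds=(5.0, 10.0)):
    """Point-cloud metrics sketch: symmetric Chamfer distance plus
    F-score at the given distance thresholds (in mm)."""
    d_p2g = cKDTree(gt_pts).query(pred_pts)[0]   # pred -> GT NN distances
    d_g2p = cKDTree(pred_pts).query(gt_pts)[0]   # GT -> pred NN distances
    chamfer = d_p2g.mean() + d_g2p.mean()
    fscores = {}
    for tau in thresholds:
        precision, recall = (d_p2g < tau).mean(), (d_g2p < tau).mean()
        f = 0.0 if precision + recall == 0 else \
            2 * precision * recall / (precision + recall)
        fscores[f"F@{tau:g}mm"] = f
    return chamfer, fscores
```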
**Baselines.** While few prior works tackle our challenging setting - 3D HOI reconstruction from casual monocular clips without knowing the templates - the closest works are two template-free methods from Huang _et al_. [29] (HHOR) and Ye _et al_. [89] (iHOI). HHOR is proposed for in-hand scanning. It optimizes a deformable semantic implicit field to jointly model hand and object. HHOR captures the dynamics by a per-frame warping field, while no prior is used during optimization. iHOI is a feed-forward method and reconstructs 3D objects from single-view images by learning the hand prior between hand poses and object shapes. The method does not leverage category-level priors and does not consider time-consistency of shapes. We finetune their pretrained model to take in segmentation masks. We evaluate their result by aligning their predictions with the ground truth for each frame and report the average across all frames.

\begin{table} \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Mug} & \multicolumn{3}{c}{Bottle} & \multicolumn{3}{c}{Kettle} & \multicolumn{3}{c}{Bowl} & \multicolumn{3}{c}{Knife} & \multicolumn{3}{c}{ToyCar} & \multicolumn{3}{c}{Mean} \\ \cline{2-22} & \(F@5\) & \(F@10\) & \(CD\) & \(F@5\) & \(F@10\) & \(CD\) & \(F@5\) & \(F@10\) & \(CD\) & \(F@5\) & \(F@10\) & \(CD\) & \(F@5\) & \(F@10\) & \(CD\) & \(F@5\) & \(F@10\) & \(CD\) & \(F@5\) & \(F@10\) & \(CD\) \\ \hline HHOR [29] & 0.18 & 0.37 & 7.0 & 0.26 & 0.56 & 3.1 & 0.12 & 0.30 & 11.3 & 0.31 & 0.54 & 4.2 & **0.71** & 0.93 & **0.6** & 0.26 & 0.59 & 1.9 & 0.31 & 0.55 & 4.7 \\ iHOI [89] & 0.44 & 0.71 & 2.1 & 0.48 & 0.77 & 1.5 & 0.21 & 0.45 & 6.3 & 0.38 & 0.64 & 3.1 & 0.33 & 0.68 & 2.8 & 0.66 & 0.95 & 0.5 & 0.42 & 0.70 & 2.7 \\ Ours (DiffHOI) & **0.64** & **0.86** & **1.0** & **0.54** & **0.92** & **0.7** & **0.43** & **0.77** & **1.5** & **0.79** & **0.98** & **0.4** & 0.50 & **0.95** & 0.8 & **0.83** & **0.99** & **0.3** & **0.62** & **0.91** & **0.8** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison with baselines:** We compare our method along with prior works HHOR [29] and iHOI [89] on the HOI4D dataset and report object reconstruction error in \(F@5\)mm and \(F@10\)mm scores and Chamfer Distance (\(CD\)).

Figure 5: **Qualitative evaluation on HOI4D:** We show reconstruction by our method (DiffHOI) along with two baselines [29, 89] in the image frame (left) and another novel view with (top right) or without (bottom right) hand. Please see the project website for reconstruction videos.

### Ablation Studies

We ablate our system carefully to analyze the contribution of each component. Besides the object reconstruction errors in the aligned object-centric frame, we further evaluate the hand-object _arrangement_ by reporting the Chamfer distance of objects in the hand frame, _i.e_. \(CD_{h}\equiv CD(T^{t}_{o\to h}O,\hat{T}^{t}_{o\to h}\hat{O})\). We only report the mean value in the main paper. Please refer to the supplementary for category-wise results.

**How does each learned prior help?** We analyze how the category and hand priors affect reconstruction by training two more diffusion models conditioned only on the text prompt or only on hand renderings, respectively. We also compare with the variant without optimizing \(\mathcal{L}_{\text{SDS}}\) (no prior). As reported quantitatively, we find that _the category prior helps object reconstruction while the hand prior helps the hand-object relation (Tab. 2)_. Combining them both results in the best performance. We highlight an interesting qualitative result of reconstructing the bowl in Fig. 6.
Neither prior can reconstruct the concave shape on its own - the hand pose alone is not predictive enough of the object shape while only knowing the object to be a bowl cannot make the SDS converge to a consensus direction that the bowl faces. Only knowing _both_ can the concave shapes be recovered. This example further highlights the importance of both priors. **Which geometry modality matters more for distillation?** Next, we investigate how much each geometry modality (mask, normal, depth) contributes when distilling them into 3D shapes. Given the same pretrained diffusion model, we disable one of the three input modalities in optimization by setting its weight on \(\mathcal{L}_{\text{SDS}}\) to 0. As visualized in Fig. 7, the surface normal is the most important modality. Interestingly, the model collapses if not distilling surface normals and even performs worse than the no-prior variant. Without distillation on masks, the object shape becomes less accurate probably because binary masks predict more discriminative signals on shapes. Relative depth does not help much with global object shape but it helps in aligning detailed local geometry (\(F@5\)) and aligning the object to hand (\(F@10\)). **How robust is the system to hand pose prediction errors?** We report the object reconstruction performance when using GT vs predicted hand pose in Tab. 4, and find that our system is robust to some prediction error. Moreover, even if we artificially degrade the prediction by doubling the error, our performance remains better than the baselines (Tab. 1). We also report the hand pose estimation metrics and find that our optimization improves the initial predictions (in parentheses). ### Comparing with Template-Based Methods We compare with HOMAN [23], a representative template-based method that optimizes object 6D poses and hand articulations with respect to reprojection error and multiple interaction objectives including contact, intersection, distance, relative depth, temporal smoothness, _etc_. We show quantitative and qualitative results in Tab. 5 and 8. Note that evaluating HOMAN in terms of object \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{Object Reconstruction} & \multicolumn{2}{c}{Hand Estimation} \\ \cline{2-6} & \(F@5\uparrow\) & \(F@10\uparrow\) & \(CD\downarrow\) & MPJPE \(\downarrow\) & AUC \(\uparrow\) \\ \hline GT & 0.68 & 0.91 & 0.75 & – & – \\ Prediction* & 0.62 & 0.91 & 0.77 & 26.9(28.4) & 0.49(0.47) \\ Pred. Error \(\times 2\) & 0.63 & 0.87 & 1.01 & 40.7(44.6) & 0.31(0.27) \\ \hline \hline \end{tabular} \end{table} Table 4: **Error analysis against hand pose noise: * marks our unablated method. Numbers in parentheses are per-frame prediction errors before optimization.** \begin{table} \begin{tabular}{l c c c|c} \hline \hline & \(F@5\uparrow\) & \(F@10\uparrow\) & \(CD\downarrow\) & \(CD_{h}\downarrow\) \\ \hline HOMAN-GT & 1.00 & 1.00 & 0.00 & 84.3 \\ HOMAN-average & 0.76 & 0.94 & 0.48 & 120.9 \\ HOMAN-furthest & 0.49 & 0.78 & 1.33 & 157.9 \\ Ours(DiffHOI) & 0.62 & 0.91 & 0.78 & 48.7 \\ \hline \hline \end{tabular} \end{table} Table 5: **Comparison with template-based baseline: Quantitative results on the HOI4D dataset for object reconstruction error in the object-centric frame (\(F@5\), \(F@10\), \(CD\)) and for hand-object alignment (\(CD_{h}\)). 
Figure 8: **Comparing with template-based method:** We show reconstructions in the image frame (top) and from a novel view (bottom) by our method and by HOMAN [23] when provided with the ground-truth template, a random template, and the most dissimilar template in the training split.

Note that evaluating HOMAN in terms of object reconstruction is equivalent to evaluating templates, since the objects are aligned in the object-centric frame. We first report the average object reconstruction errors when optimizing with different templates from the training set. While the gap indicates potential room for template-free methods to improve object shapes, DiffHOI compares favorably with some of the templates in the training set. Nevertheless, when evaluating the objects in the hand frame, DiffHOI outperforms HOMAN by a large margin. The numbers, along with the visualizations in Fig. 8, indicate that template-based methods, even when optimized with multiple objectives to encourage interactions, still struggle to place objects in the context of hands, especially for subtle parts like handles. Furthermore, optimizing with random templates degrades performance significantly, highlighting the inherent drawback of template-based methods: they demand accurate templates.

### Reconstructing In-the-Wild Video Clips

Lastly, we show that our method can be directly applied to more challenging video clips. In Fig. 9 (top), we compare our method with iHOI [89]. iHOI predicts reasonable shapes from the front view but fails on transparent objects like the plastic bottle, since it was never trained on such appearances. In contrast, our method transfers better to in-the-wild sequences, as the learned prior takes in only geometry cues. In Fig. 9 (bottom), we visualize more results from our method. By incorporating learned priors, our method is robust to mask prediction inaccuracy, occlusion from irrelevant objects (the onion occludes the knife blade), truncation of the HOI scene (bowl at the bottom left), _etc_. Our method also works across ego-centric and third-person views, since the learned prior is trained with uniformly sampled viewpoints. The reconstructed shapes vary from thin objects like knives to larger objects like kettles.

## 5 Conclusion

In this work, we propose a method to reconstruct hand-object interactions from daily video clips without any object templates. Our method is the first to tackle this challenging setting. We represent the HOI scene by a model-free implicit field for the object and a model-based mesh for the hand. The scene is optimized with respect to reprojection error and a data-driven geometry prior that captures the object shape given category information and hand poses. Both of these modules are shown to be critical for successful reconstruction. Despite the encouraging results, there are several limitations: the current method can only handle small hand-object motions in short video clips of up to a few (\(\sim\)5) seconds, and the reconstructed objects still miss fine shape details. Despite these challenges, we believe that our work takes an encouraging step towards a holistic understanding of human-object interactions in everyday videos.

Acknowledgements. The authors thank Di Huang for the HHOR comparison. We thank Dandan Shan and Sudeep Dasari for helping with the EPIC-KITCHENS dataset. We also thank Sudeep Dasari, Hanzhe Hu, and Helen Jiang for detailed feedback on the manuscript.
Yufei was partially supported by the NVIDIA fellowship.

Figure 9: **Qualitative evaluation on in-the-wild video clips:** We show reconstructions by our method (DiffHOI) and iHOI [89] in the image frame (left) and a novel view with (top right) or without (bottom right) hand. Please see the project website for reconstruction videos.
2309.11616
Digital twins kinetics of virtual free-radical copolymerization of vinyl monomers with stable radicals. 1. Methyl methacrylate
The first experience of virtual free-radical polymerization of a set of vinyl monomers in the framework of the digital twins (DTs) concept (Polymers 2023, 15, 2999) is extended to methyl methacrylate, copolymerized with small additives of stable radicals such as fullerene C60 and TEMPO. The virtualization of chemical processes is based on the assumption of basic chain reactions that lay the foundation of polymerization. In the current study, a set of elementary reactions, covering the initial stage of the polymerization, is considered. The reactions are the most suitable for quantum chemical treatment. The calculations, covering about 30 DTs, were carried out using a semi-empirical version of the unrestricted Hartree-Fock approximation. The main energy and spin-density parameters of the ground state of the DTs are determined. The barrier profiles of three DTs were calculated, which laid the foundation for determining the activation energy of the studied processes. The decisive role of spins in the formation of the transition states of these processes is confirmed. The two stable radicals behave quite differently with respect to the polymerization of the studied monomer. The action of fullerene C60 concerns mainly the capturing of the initiating free radicals, anchoring them to the molecule body and thus slowing the main polymerization process. In contrast, TEMPO effectively captures monomer radicals, manifesting itself as a killer of the polymerization and thus providing the appearance of an induction period in practice. The obtained virtual kinetic data are in full agreement with experimental reality.
Elena F. Sheka
2023-09-20T20:02:44Z
http://arxiv.org/abs/2309.11616v2
Digital twins' kinetics of virtual free-radical copolymerization of vinyl monomers with stable radicals. 1. Methyl methacrylate

###### Abstract

The first experience of virtual free-radical polymerization of a set of vinyl monomers in the framework of the digital twins (DTs) concept (_Polymers_ 2023, 15, 2999) is extended to methyl methacrylate, copolymerized with small additives of stable radicals such as fullerene C\({}_{60}\) and TEMPO. The virtualization of chemical processes is based on the assumption of basic chain reactions that lay the foundation of polymerization. In the current study, a set of elementary reactions, covering the initial stage of the polymerization, is considered. The reactions are the most suitable for quantum chemical treatment. The calculations, covering about 30 DTs, were carried out using a semi-empirical version of the unrestricted Hartree-Fock approximation. The main energy and spin-density parameters of the ground state of the DTs are determined. The barrier profiles of three DTs were calculated, which laid the foundation for determining the activation energy of the studied processes. The decisive role of spins in the formation of the transition states of these processes is confirmed. The two stable radicals behave quite differently with respect to the polymerization of the studied monomer. The action of fullerene C\({}_{60}\) concerns mainly the capturing of the initiating free radicals, anchoring them to the molecule body and thus slowing the main polymerization process. In contrast, TEMPO effectively captures monomer radicals, manifesting itself as a 'killer' of the polymerization and thus providing the appearance of an induction period in practice. The obtained virtual kinetic data are in full agreement with experimental reality.

Keywords: vinyl monomers; free radical polymerization; digital twins approach; energy graphs; activation energy; free-radical copolymerization; stable radicals; methyl methacrylate; fullerene C\({}_{60}\); TEMPO

## 1 Introduction

Free-radical polymerization (FRP) of vinyl monomers is the most studied chemical process, and the introduction of weak additions of complementary stable radicals into its reactors has always been under the constant attention of researchers [1]. Focused on optimizing the technology for obtaining the final polymer, the researchers, having not found a noticeable effect of these radicals on the final product, came to a consensus about their weak practical significance. The situation changed drastically when fullerene C\({}_{60}\) was introduced into the polymerization reactor as an additional stable radical (see the monographic review [2] and references therein). The effect turned out to be so strong that the authors of Ref. [3] had to speak about observing the free-radical copolymerization (FRCP) of the studied monomer (xylylene) with C\({}_{60}\). The finding was confirmed by other researchers, and FRCP soon turned into a broad area of new polymer science. The issue has been studied by several groups and for various vinyl monomers (xylylene [3], styrene [4-12], methyl methacrylate [7, 13, 14], 4-vinylbenzoic acid [15], maleic anhydride [16], etc.). Studies of this first wave of activity revealed two main features that accompany the involvement of fullerene in the FRCP. The first concerns the fullerene-provoked inhibition of the reaction at its initial stage, while the second is related to the branched, star-like structures of polymer molecules anchored on the fullerene.
Studies of the second wave of activity [17-21] were mainly aimed at elucidating the details of the fullerene-inhibition kinetics, with respect to which a wide range of opinions has been observed. The vision of the final stage of the reaction, concerning star formation [22], has been generally accepted until now. The discrepancy in understanding the inhibitory role of C\({}_{60}\) fullerene in the FRCP is naturally associated with the fundamental basis of this process, which is figuratively well described by a 'pineapple-on-plantation' picture [23] caused by the multivariable intermolecular interaction between the molecular substances that fill a polymerizing chemical reactor. Each of the potential pair interactions may prevent polymer-chain growth from being the winner of the strong competition. The latter is controlled by the kinetics of the occurring reactions and determines the winner as the one with the fastest kinetics, all other things being equal. 'Ceteris paribus' includes such main reaction parameters as solvent, concentration of initial reagents, temperature, reactor design, and so forth. On the other hand, the distinctive characteristics of the reactions, such as the types of initiating and stable radicals as well as monomers, are carriers of the information required for the determination of the thermodynamic and kinetic parameters of the considered reactions.

In spite of the large variation of the results obtained, there is a possibility to reveal certain trends concerning the inhibitory effects of stable radicals on the FRP. Let us consider these trends using the example of reactions carried out with the maximum approximation to the 'ceteris paribus' format. Fundamental studies [20, 21] involved two different neat monomers (methyl methacrylate (MMA) and styrene (St)); two initiating free radicals, \(AIBN^{\star}\) and \(BP^{\star}\), produced in the course of the thermal decomposition of 2,2'-azobisisobutyronitrile (AIBN) and benzoyl peroxide (BP), respectively; as well as two complementary stable radicals, C\({}_{60}\) and TEMPO. The obtained results concerning the kinetics of the initial stage of the FRP of the mentioned monomers made it possible to reveal three main trends. The first concerns the FRCP of a monomer with C\({}_{60}\), the conversion rate of which is reduced several times in the presence of fullerene with respect to that of the neat monomer. This trend is characteristic of the (C\({}_{60}\)+MMA) FRCP. The second trend is associated with the presence of an induction period (IP) at the initial stage of the studied FRCP against the background of a practically unchanged conversion rate of the neat monomer. This trend is observed for the (TEMPO+MMA), (TEMPO+St), as well as (C\({}_{60}\)+St) FRCPs. The third trend concerns the similarity and difference in the reactions of the two neat monomers to the presence of the stable radicals C\({}_{60}\) and TEMPO. The main goal of the current study is to replace these verbal characteristics with digital kinetic parameters that numerically describe the observed empirical patterns.

The revealed trends provide rich food for thought, on the one hand, and clearly outline the field of activity of virtual polymerization, on the other. Obviously, the virtual confirmation of the selected trends concerns the foundations of the FRCP mechanism. The implementation of a virtual game involving all the discussed trends, previously unrealistic, is possible today within the framework of a new modeling paradigm - the Digital Twins (DTs) one [23-25].
This approach makes it possible to consider the totality of the presented trends under the 'other things being equal' conditions and to reveal those features of the stable radicals that determine their distinctive roles in the FRP of vinyl monomers.

The successful implementation of the game paradigm is determined by how clearly the playing field is configured and the game characters are defined. The configuration of the playing field in this work is based on the chain-reaction concept that lays the foundation of the complex polymerization process, thus presenting it as a well-traced sequence of superpositional elementary reactions [26, 27]. From a theoretical viewpoint, such a vision of polymerization is the most favorable for using quantum-chemical (QC) techniques for its virtual consideration, reducing it to the consideration of individual elementary reactions [28, 29]. The theory of elementary reactions and their QC consideration has been going on for many decades [30-34], and the only complaint about the certain limitations of these studies is that the highly welcome QC calculations of sets of one-type reactions did not become widespread. The main novelty of the DT concept concerns just this key point, since a large array of elementary reactions, which constitutes the playing field in the current study, allows one to clearly distinguish one-type reactions performed under the same conditions, followed by a comparative analysis of their results, accompanied by the establishment of reliable trends. The status of 'the same conditions' implies the same QC treatment, absolute zero temperature, and a vacuum medium.

The characters of the virtual game under discussion involve a large number of DTs, which are participants in diverse elementary reactions. Their number and the type of their molecular compositions are determined by the number and type of the elementary reactions to be considered. The latter are schematically presented in Table 1 in terms of the relevant DTs. The notations \(M\) and \(R\) are related to vinyl monomers of different types and to free radicals. Stable radicals, both the monotargeted TEMPO and the multitargeted C\({}_{60}\), are marked with \(F\).
\begin{table}
\begin{tabular}{|c|l|c|l|} \hline
**Reaction mark** & **Reaction equation** & **Rate constant** & **Reaction type** \\ \hline
_(1)_ & \(R^{*}+M\to RM^{*}\) & \(k_{i}\) & generation of monomer radical \\ \hline
_(2)_ & \(RM^{*}+(n-1)M\to RM_{n}^{*}\) & \(k_{p}\) & generation of oligomer radical, polymer chain growth \\ \hline
_(3a)_ & \(F+M\to FM\) & \(k_{2m}^{f}\) & two-dentate grafting of monomer on C\({}_{60}\) \\ \hline
_(3b)_ & \(F+M\to FM^{*}\) & \(k_{1m}^{f}\) & one-dentate grafting of monomer on C\({}_{60}\), generation of monomer radical \\ \hline
_(4)_ & \(FM^{*}+(n-1)M\to FM_{n}^{*}\) & \(k_{p}^{f}\) & generation of oligomer radical, polymer chain growth \\ \hline
_(5)_ & \(F+R^{*}\to FR\) & \(k_{r}^{f}\) & free radical grafting on C\({}_{60}\) \\ \hline
_(6)_ & \(F+RM^{*}\to F(RM)\) & \(k_{mr}^{f}\) & monomer radical grafting on C\({}_{60}\) \\ \hline
_(7)_ & \(F+RM_{n}^{*}\to F(RM_{n})\) & \(k_{or}^{f}\) & oligomer radical grafting on C\({}_{60}\) \\ \hline
_(8)_ & \(FR+RM^{*}\to F(RRM)\) & \(k_{rmr}^{f}\) & sequential grafting of free radical and monomer radical on C\({}_{60}\) \\ \hline
_(9)_ & \(F(RM)+R^{*}\to F(RMR)\) & \(k_{mrr}^{f}\) & sequential grafting of monomer radical and free radical on C\({}_{60}\) \\ \hline
_(10)_ & \(F+nR^{*}\to FR_{n}\) & \(k_{nr}^{f}\) & grafting of \(n\) free radicals on C\({}_{60}\) \\ \hline
_(11)_ & \(F+nRM^{*}\to F(RM)_{n}\) & \(k_{nmr}^{f}\) & grafting of \(n\) monomer radicals on C\({}_{60}\) \\ \hline
_(12)_ & \(FR_{n}+RM_{m}^{*}\to F(R_{n}RM_{m})\) & \(k_{mrmrr}^{f}\) & sequential grafting of \(n\) free radicals and an \(m\)-oligomer radical on C\({}_{60}\) \\ \hline
_(13)_ & \(F(RM_{n})+RM_{m}^{*}\to F((RM)_{n}RM_{m})\) & \(k_{mmrmrr}^{f}\) & sequential grafting of \(n\) monomer radicals and an \(m\)-oligomer radical on C\({}_{60}\) \\ \hline
\end{tabular}
\({}^{1)}\) \(F\), \(M\), \(R^{*}\) mark the digital twins of fullerene C\({}_{60}\), a vinyl monomer, and a free radical; \(RM^{*}\) and \(RM_{n}^{*}\) do the same for monomer and oligomer radicals, respectively.
\end{table} Table 1: Elementary reactions of the free-radical copolymerization of vinyl monomers with C\({}_{60}\)\({}^{1}\)

The reaction list does not present all of the potential elementary events (say, reactions of chain transfer and others are skipped), while being quite complete in view of the empirical trends under study. The first common characteristic of the reactions listed in Table 1 is their radical character. However, they are therewith distinctly divided into two groups that cover the association reactions (1) and (2), uniting a free radical with a monomer, and the grafting reactions (3)-(13), which in the case of C\({}_{60}\) are fullerene derivatization reactions of different kinds. The products of the first-group reactions, as well as those of reactions (3b) and (4), are free radicals, while those of the second group are either stable species or stable radicals in the cases when \(F\) presents TEMPO or C\({}_{60}\), respectively. Over thirty years ago, these stable radicals were called fullerenyls [35]. In the decades since, this name has taken root [36-39], and we will use it in what follows. Reaction (1) is the cornerstone of the entire polymerization process, determining its feasibility.
By selecting the most successful participants in this reaction empirically, the researchers opted for monotargeted free radicals such as \(AIBN^{\star}\) and \(BP^{\star}\), while the stable radical TEMPO was found unsuitable for this role. As for C\({}_{60}\), the first wave of polymer researchers, who introduced fullerene into polymerization and were confident in its radical nature, made repeated attempts to detect reaction (3b), accompanied by the growth of a polymer chain attached to the fullerene (reaction (4)) [3-16, 40]. The fate of reaction (3b) depends on the detailed configuration of the intermolecular junction between C\({}_{60}\) and a monomer. The latter is configured by two \(sp^{2}\)C-C bonds, one belonging to the fullerene and the other presenting the vinyl group of the monomer. Accordingly, the junction can be configured as either two-dentate or one-dentate. While the first configuration causes the formation of a [2x2] cycloaddition monoderivative stable radical \(FM\), similar to the parent C\({}_{60}\), the second results in the formation of a fullerene-grafted monomer radical \(FM^{\star}\), similar to \(RM^{\star}\). Accordingly, reaction (2) describes the polymer chain growth initiated by a free radical, while reaction (4) describes the polymerization of the monomer once it is grafted on the fullerene. Although the existence of reaction (3b) was suspected in a number of cases, a confident conclusion was not made, and this reaction, as well as reaction (4), was classified as unlikely. Nevertheless, we will take these reactions into account in what follows.

Reaction (5) reveals the capturing of free radicals \(R^{\star}\) by stable ones, while reaction (6) does the same but with the monomer radical \(RM^{\star}\). Both reactions evidently affect the monomer polymerization, decreasing the number of initiating free radicals in the first case and completely terminating the polymerization in the second. The complete set of reactions (1)-(6) governs the initial stage of the monomer polymerization, dividing the latter into the FRP (reactions (1) and (2)) and the FRCP (reactions (3)-(6)). Reaction (7) completes the polymerization process by the capturing of the oligomer radical \(RM_{n}^{\star}\) with a stable radical, which stops the growth of the polymer chain. Reactions (8)-(11) involve all other variations of the intermolecular interaction of a stable radical, mainly fullerene, with the active players of the polymerization game, such as the sequential grafting of a free radical and a monomer radical (8) and vice versa (9), as well as the multiple grafting of either free radicals (10) or monomer radicals (11) on the C\({}_{60}\). The last two reactions, (12) and (13), complete the vision of the grafting of an oligomer radical on a fullerene, a part of whose carbon atoms is occupied by grafted free radicals (12) or monomer radicals (13). A detailed consideration of reactions (1), (2), and (7) has been performed recently [23]. The obtained results will be widely used in what follows. Reactions (3)-(6) are the main goal of the current study. Reactions (8)-(13) concern the formation of a star-like disposition of polymer chains on the fullerene and will be discussed briefly.

The elementary-reaction concept of the FRP/FRCP provides one more common feature. This concerns the standard energy graph applicable to this kind of reactions.
Actually, each such reaction proceeds between a pair of reagents, the energy of intermolecular interaction between which follows a typical graph when the components of the pair approach each other along the chosen direction taken as the reaction coordinate (see Figure 1). As seen in the figure, the graph includes the total energy of the equilibrium reactant community \(Y\), \(E(Y)\), the energy of the product of the \(Y\)-pair interaction, \(E(X)\), and the energy of the transition state of the molecular complex under consideration, \(E_{TS}(X\leftrightarrow Y)\).

Figure 1: Universal energy graph of the pairwise intermolecular interaction in the system of many possibilities.

As for the kinetics of the reaction, its standard description concerns the rate constant, \(k(T)\), which is expressed through the Arrhenius equation (Eq. 1) [31-34] \[k(T)=A\,e^{-E_{a}/k_{B}T}. \tag{1}\] Here \(A\) is a complex frequency factor, while \(E_{a}\) presents the activation energy, which is either the energy of the \(X\)-product dissociation, \(E_{ad}\), or that of the \(Y\)-pair association, \(E_{aa}\). There is also one more important energetic parameter - the reaction enthalpy, \(\Delta H\), or coupling energy, \(E_{cpl}=E(X)-E(Y)\). The main difficulty in the evaluation of the \(k(T)\) value is the highly complicated nature of the frequency factor \(A\). Its determination concerns basic problems of the rotational-vibrational dynamics of polyatomic molecules, such as the great number of both vibrational and rotational degrees of freedom as well as their anharmonicity. However, for one-type elementary reactions, the \(A\) factor is expected to vary weakly [28, 31-34], so that the activation energy becomes the governing parameter. Its value can be determined by building the barrier profiles of either the association or the dissociation of the molecular pairs \(Y\) and \(X\), respectively.
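A concrete reading of Eq. (1): if the frequency factor \(A\) is common to a pair of one-type reactions, their rate-constant ratio is fixed by the activation energies alone. A minimal sketch follows (with \(E_{a}\) given per mole, so the gas constant \(R\) replaces the Boltzmann constant; the example values anticipate Table 2 below).

```python
# Ratio of Arrhenius rate constants for equal frequency factors A (Eq. (1)).
import math

R_GAS = 1.987e-3  # gas constant, kcal/(mol K)

def arrhenius_ratio(Ea_1, Ea_2, T):
    """k_1/k_2 at temperature T for two reactions sharing the same A."""
    return math.exp(-(Ea_1 - Ea_2) / (R_GAS * T))

# Capture of AIBN* radicals by C60 (Ea = 9.398 kcal/mol) vs. the first MMA
# chain-growth step (Ea = 18.715 kcal/mol) at T = 333 K, as in Ref. [20]:
print(arrhenius_ratio(9.398, 18.715, 333.0))  # ~1.3e6, capture wins by far
```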
## 3 Digital twins concept and digital twins of reagents under discussion

A general algorithm of the DT concept can be presented schematically as [24, 25] \[Digital\ twins\to Virtual\ device\to IT\ product.\] Here, the DTs are the molecular models under study, the virtual device is a carrier of the selected software, and the IT product covers a large set of computational results related to the DTs under different actions in the light of the explored software. The quality of the IT product highly depends on how broadly and deeply the designed DTs cover the knowledge concerning the object under consideration and how adequate the virtual device is to the peculiarities of this object. The concept is completely free from the statistical and random errors that accompany real experiments and is open to any modification. The IT product in the current study presents the structural, thermodynamic, and kinetic parameters that, in terms of the universal energy graph, accompany any of the elementary reactions discussed in the previous section. The digital twins of the current study present the vinyl monomer MMA; the free radicals \(AIBN^{\star}\) and \(BP^{\star}\); two stable radicals - TEMPO and fullerene C\({}_{60}\); and a number of final products, among which are various fullerenyls, designed for each elementary reaction particularly. The virtual device in the current study is the CLUSTER-Z1 software [41, 42] implementing the AM1 version of the semi-empirical unrestricted Hartree-Fock (UHF) approach [43]. The program has proven highly efficient for open-shell electronic systems such as fullerenes [44, 45], graphene molecules [46], and stable radicals [47, 48]. A detailed discussion concerning the choice of proper software for the virtual FRP of vinyl monomers is presented elsewhere [23]. The digital twins of fullerenyls were designed based on the spin chemistry of fullerene C\({}_{60}\) [44, 45], a short sketch of which is given in the Supporting Material.

## 4 Virtual free-radical copolymerization of methyl methacrylate with stable radicals

### Fundamental grounds

A set of cross-elementary reactions in the form of a matrix, the boundaries of which are mapped with their participants, turned out to be the most convenient for the simultaneous presentation and discussion of a large number of possibilities and results. In a certain sense, matrix Table 2 is a visualization of the 'pineapple-on-the-plantation' picture [23] in the realities of the FRP of methyl methacrylate and its copolymerization with fullerene C\({}_{60}\) and TEMPO. The corresponding 'matrix elements' are evidently divided into three groups plus an individual one, marked with different colors. Yellow elements present the elementary reactions and their final products that govern the initial stage of the MMA FRP. Elements in light blue describe the reactions and final products related to the FRCP of MMA with TEMPO. Faint-pink elements do the same with respect to fullerene C\({}_{60}\). The individual light-gray element is related to the interaction of TEMPO with C\({}_{60}\). The first third of the table lists the final products of the paired elementary reactions (1)-(6), following the designations of both the reactions and their products accepted in Table 1. Duplicate matrix elements are omitted. The matrix as a whole describes the case when the FRP of the monomer is mainly provided by the free-radical initiator \(AIBN^{\star}\), while the FRCP of the monomer occurs because of the introduction of TEMPO and C\({}_{60}\) into the virtual reactor. As seen from this part of the table, the reactions and/or products can be divided into two groups. Members of the first group are rooted on the monomer (M) and concern the generation of the different monomer-based radicals listed in the first row. Those are the dimer radical \(R^{A}M_{2}^{\star}\), ensuring the growth of the polymer chain of the monomer M; two monomer radicals, \(R^{A}M^{\star}\) and \(R^{T}M^{\star}\), which determine the beginning of the chain growth due to the initiator radicals \(AIBN^{\star}\) (\(R^{A\star}\)) and TEMPO (\(R^{T\star}\)); as well as the monomer radical \(FM^{\star}\), which determines the potential initiation of the generation and growth of a polymer chain anchored to the fullerene. All other matrix elements are related to the FRCP of MMA with the two stable radicals. Thus, the reactions and/or products of the second row express the threat of a complete forbiddance of the polymer chain growth caused by capturing the leading monomer radical \(R^{A}M^{\star}\). The first reaction, \((R^{A}M)_{2}\), concerns its dimerization. Reactions \(R^{A}R^{A}M\) and \(R^{T}R^{A}M\) describe the loss of the radical ability of the monomer radical \(R^{A}M^{\star}\) because of the attachment of the free radicals \(R^{A\star}\) and \(R^{T\star}\), respectively. Reaction \(FR^{A}M\) corresponds to the case when C\({}_{60}\) acts as a killer of the FRP.
The three last members describe the interaction of the main free radical \(R^{A\star}\) with the other two, which evidently leads to their additional consumption and, consequently, to a slowing of the FRCP.

The second third of the table presents the thermodynamic characteristics of the reactions described above in terms of the energies, \(\Delta H\), or coupling energies, \(E_{cpl}\), of their final products. The last third does the same but in terms of the activation energies of the corresponding reactions, \(E_{aa}=E_{a}\). The bold data are obtained by constructing the barrier profiles of the dissociation of the final products of the relevant elementary reactions. These profiles present typical energy graphs of the kind shown in Figure 1. Curve 1 in Figure 2 describes the barrier profile of the dissociation of the dimer radical \(R^{A}M_{2}^{\star}\), thus revealing the activation energy governing the kinetics of the first step of the MMA polymerization propagation. Curve 2 plots the main parameter that is responsible for 'killing' this propagation with the free radical \(R^{T\star}\), while curve 3 exhibits the activity of fullerene C\({}_{60}\) in capturing the free radical \(R^{A\star}\). The other data in the matrix cells are evaluated using the Evans-Polanyi-Semenov (EPS) relations [23, 32, 49] that linearly couple the \(E_{cpl}\) and \(E_{a}\) of one-type elementary reactions providing the polymerization of vinyl monomers [23]. As shown there, these relations proved reliable in the case of the reactions generating monomer radicals and dimer radicals.

\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
 & \(R^{A}M^{\star}\)\({}^{1)}\) & AIBN\({}^{\star}\) \(R^{A\star}\) & TEMPO \(R^{T\star}\) & C\({}_{60}\) \(F\) \\ \hline
\multicolumn{5}{|c|}{**Final products**} \\ \hline
\(M\) & \(R^{A}M_{2}^{\star}\) & \(R^{A}M^{\star}\) & \(R^{T}M^{\star}\) & \(FM\) (2-dentate) / \(FM^{\star}\) (1-dentate) \\ \hline
\(R^{A}M^{\star}\) & \((R^{A}M)_{2}\) & \(R^{A}R^{A}M\) & \(R^{T}R^{A}M\) & \(FR^{A}M\) \\ \hline
\(R^{A\star}\) & - & - & \(R^{T}R^{A}\) & \(FR^{A}\) \\ \hline
\(R^{T\star}\) & - & - & - & \(FR^{T}\) \\ \hline
\multicolumn{5}{|c|}{**Coupling energies, \(E_{cpl}\), kcal/mol**} \\ \hline
\(M\) & -7.732 (2) & 1.68 (1) & not formed & -19.097 (2-dentate) / 29.748, 0.96 \(e\) (1-dentate) \\ \hline
\(R^{A}M^{\star}\) & -52.843 & -33.635 & -17.42 & 0.113 \\ \hline
\(R^{A\star}\) & - & - & 5.001 & -20.137 \\ \hline
\(R^{T\star}\) & - & - & - & 0.98 \\ \hline
\multicolumn{5}{|c|}{**Activation energies, \(E_{a}\), kcal/mol**\({}^{2)}\)} \\ \hline
\(M\) & **18.715** (2), 16.319 (2), 12.562 (3), 10.815 (4) & - & not formed & \(E_{ad}\gg E_{aa}\) \\ \hline
\(R^{A}M^{\star}\) & - & - & 9.35 & not formed \\ \hline
\(R^{A\star}\) & - & - & \(E_{ad}>E_{aa}\) & **9.398** \\ \hline
\(R^{T\star}\) & - & - & - & not formed \\ \hline
\end{tabular}
\({}^{1)}\) Digits in brackets mark the number of monomers in the oligomer chain. \({}^{2)}\) Bold data are related to the barrier profiles in Figure 2; the rest of the data are calculated using the EPS relations presented in [23].
\end{table} Table 2: Elementary reactions and their final products supplemented with the virtual thermodynamic and kinetic data related to the FRCP of methyl methacrylate with the stable radicals TEMPO and C\({}_{60}\)
Actually, the EPS-based \(E_{a}\) value for the MMA dimer radical of the current study (top datum in the first cell of the bottom part of Table 2) is well consistent with the one that follows from the barrier profile shown in Figure 2.

In the previous study [23], where the detailed view of energy graphs as barrier profiles was given for the first time, the spin origin of the transition states was established. The latter was evidenced by the fact that the position of the energy-graph maxima coincides with \(R^{C-C}_{crit}\) of 2.11\(\pm\)0.1 Å, which determines the maximum length of the \(sp^{3}\)C-C bond, above which the bond becomes radicalized, thus revealing the start of its breaking [50, 51]. This bond played the role of the reaction coordinate in all the cases studied earlier [23]. As seen in Figure 2, the same is true for the \(FR^{A}\) case, the barrier-profile maximum of which is located at 2.08 Å. In contrast, the \(sp^{3}\)C-O bond is central for the intermolecular junctions in the \(R^{A}M_{2}^{\star}\) and \(R^{T}R^{A}M\) complexes. Expectedly, the maximum positions of their barrier profiles should differ from those provided by the breaking of the \(sp^{3}\)C-C bond. Actually, these positions constitute 1.97 and 2.01 Å, which correlates well with the \(R^{C-O}_{crit}\) of 2.05 Å related to the dissociation of the \(sp^{3}\)C-O bond of ethylene glycol presented at the bottom of Figure 2. Evidently, \(R^{C-O}_{crit}\), as well as \(R^{C-C}_{crit}\), varies with the atomic surrounding, which is proved by the two values presented above.

### Virtual and real kinetics of the initial stage of the FRP of methyl methacrylate and its FRCP with TEMPO and C\({}_{60}\)

The numerical parameters of the virtual thermodynamics and kinetics of the FRP of methyl methacrylate and its FRCP with TEMPO and C\({}_{60}\), collected in this study, are presented in Table 2.

Figure 2: (a) Barrier profiles of the dissociation of the \(R^{A}M_{2}^{\star}\) (1), \(R^{T}R^{A}M\) (2) and \(FR^{A}\) (3) products (see text). (b) \(N_{D}(R)\) graphs related to the dissociation of the \(sp^{3}\)C-C bond of ethane and the \(sp^{3}\)C-O bond of ethylene glycol [51]. UHF AM1 calculations.
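The non-bold \(E_{a}\) entries of Table 2 are produced from linear EPS relations of the form \(E_{a}=\alpha E_{cpl}+\beta\). A minimal sketch of this procedure is given below; the data arrays and resulting coefficients are placeholders, not the fitted values of Ref. [23].

```python
# Evans-Polanyi-Semenov (EPS) fit and application (placeholder data, see above).
import numpy as np

def fit_eps(e_cpl_known, e_a_known):
    """Least-squares fit of E_a = alpha * E_cpl + beta over one-type reactions."""
    alpha, beta = np.polyfit(e_cpl_known, e_a_known, 1)
    return alpha, beta

def eps_activation_energy(e_cpl, alpha, beta):
    """Predict E_a for a reaction whose coupling energy E_cpl is known."""
    return alpha * e_cpl + beta

# Hypothetical usage: fit on reactions with computed barrier profiles, then
# predict E_a for the remaining one-type reactions from their E_cpl values.
alpha, beta = fit_eps([-7.7, -12.0, -16.5], [18.7, 15.9, 13.2])
print(eps_activation_energy(-10.0, alpha, beta))
```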
The equilibrium DT structures associated with the matrix elements of the table are divided into two groups: those related to the FRCP of MMA with C\({}_{60}\) only (Figure 3) and those corresponding to the case when both stable radicals are involved in the polymerization process (Figure 4). Graphical inserts in the figures present experimental summary data on the kinetics of the relevant processes.

The molecular structures in Figure 3a present the virtual FRP of MMA, starting with the creation of the monomer radical \(R^{A}M^{\star}\) and proceeding with the sequential generation of the oligomer radicals \(R^{A}M^{\star}_{n+1}\) with \(n\) from 1 to 3. This group of DTs is described by the yellow data of Table 2. Evidently, in general, the MMA FRP is fully similar, in both the complicated structure configuring with growing \(n\) and the energy parameters \(E_{cpl}\) and \(E_{a}\) listed in Table 2, to that typical of all other single-group vinyl monomers [23], while differing in details. The MMA vinyl group forms an \(sp^{2}\)C-CH\({}_{2}\) bond, the sole carbon atom of which willingly generates a stable intermolecular \(sp^{3}\)C-C bond with the target carbon of the \(AIBN^{\star}\), in contrast to the carbon atom of the methylene group, the coupling of which with the \(R^{A\star}\) target is thermodynamically not profitable. The intermolecular junction between the free radical and the monomer is provided by the mentioned \(sp^{3}\)C-C bond, while the methylene group of the vinyl becomes the carrier of the radicalized atom, serving as a new target for the next attacking action. A detailed description of all the features of the vinyl monomer FRP can be found in Ref. [23].

When fullerene \(F\) joins the company of \(M\) and \(R^{A\star}\), the intermolecular interaction stimulates its contact with the two partners first, the result of which is exhibited in Figure 3b. Evidently, the \(M\) and \(F\) contact is provided by the tight cooperation of the \(sp^{2}\)C-C bonds of the MMA vinyl group and one such bond of the C\({}_{60}\). As discussed earlier, this two-bond contact can be either [2x2]-cycloaddition two-dentate or one-dentate. The first results in the formation of a stable fullerene monoderivative, fullerenyl \(FM\), the reaction thermodynamics of which is quite favorable. The radical properties of the species are governed by the pool of unpaired electrons \(N_{D}\), modified by the monomer anchoring and distributed over all the cage atoms [45]. Concerning the polymerization, this product presents just a side effect causing a negligible consumption of the monomer due to the small concentration of C\({}_{60}\). However, when the interbond contact between the monomer and the fullerene is one-dentate, the MMA vinyl bond reacts to the fullerene presence similarly to the case of a free radical such as \(R^{A\star}\), generating a typical free radical on the methylene group and thus forming the fullerene-based monomer radical \(FM^{\star}\). The radicalization of the methylene constitutes 0.96 \(e\), so that this radical is highly similar to \(R^{A}M^{\star}\) and, ideally, could lead to the propagation of the monomer polymerization. Evidently, the decisive word belongs to the energy parameters. As seen in Table 2, the \(E_{cpl}\) of the monomer radical \(FM^{\star}\) is positive, indicating that its formation is thermodynamically not profitable. Additionally, the large \(E_{cpl}\) value means that the corresponding energy graph is characterized by \(E_{ad}\gg E_{aa}\), making the formation of the radical kinetically impossible.

The interaction of C\({}_{60}\) with the free radical \(R^{A\star}\) occurs much more simply and results in the formation of a standard fullerene derivative, fullerenyl \(FR^{A}\), the radical properties of which are governed by a modified pool \(N_{D}\), presented in Figure S1b. Therefore, the capturing of free radicals by fullerene in the current chemical reactor is thermodynamically quite favorable. In contrast, a similar action of the fullerene with respect to the monomer radical \(R^{A}M^{\star}\) (reaction \(FR^{A}M\)) was unsuccessful, since none of the numerous attempts to link this radical with C\({}_{60}\) was positive, so that no stable \(FR^{A}M\) structure was obtained.

Turning to a discussion of the kinetics of the considered elementary reactions in the \(E_{a}\) part of Table 2, we should pay attention to the data presented in yellow and faint pink. The former are related to the FRP of MMA, while the latter to the FRCP of MMA with C\({}_{60}\).
As seen from the table, the former process is governed by activation energies filling the range from 11 to 24 kcal/mol. The values are typical of all the studied vinyl monomers [23] and are kinetically quite favorable for the experimental implementation of their FRP to be successful. Actually, curve 1 in Figure 3c represents the empirical conversion of the MMA monomer array in the course of its FRP initiated with \(AIBN^{\star}-R^{A\star}\). The corresponding elementary reactions include the initiation of the monomer radical and the successive growth of the polymer chain. The rates of the corresponding reactions, \(k_{i}\) and \(k_{p}\), respectively, can be considered as reference.

Coming back to Table 2, we cannot ignore the fact that one more process occurs in the current chemical reactor, namely, the capturing of the acting free radicals by fullerene C\({}_{60}\), the activation energy of which is lower than any of the former ones promoting the FRP of MMA. The equilibrium structure of the relevant final product \(FR^{A}\) is shown in Figure 3b. This kinetically favorable event may strongly decrease the current concentration of free radicals, thus slowing the main FRP. It is this slowing that is clearly seen in Figure 3c, when a small fraction of C\({}_{60}\), equal to 1/100 of the monomer mass, is added to the reactor. As can be seen from the figure, the linear growth of the reference FRP conversion is preserved, but its rate is three times lower. Obviously, an increase in the C\({}_{60}\) concentration lifts the rate of free-radical removal from the MMA polymerization, which is confirmed experimentally by a further, roughly twofold decrease in the monomer conversion rate upon a twofold increase in the C\({}_{60}\) concentration (cf. curves 2 and 3 in Figure 3c). The generation of \(FR^{A}\) fullerenyls (the \(R^{A\star}\) anchoring) can be not only single but also multiple. Therefore, the role of C\({}_{60}\) in its FRCP with MMA is to adsorb free radicals \(R^{A\star}\) in the course of an extended \(R^{A\star}\)-polyderivatization of C\({}_{60}\). The kinetic predominance of this elementary reaction is exhibited as the retention of a linear monomer conversion at a markedly lower rate.

Figure 3: Equilibrium structures of the digital twins of the FRCP of MMA with fullerene C\({}_{60}\). (a) Oligomer radicals \(R(M)_{n+1}^{\star}\) for \(n\) from 1 to 3. (b) Fullerenyls \(FM\), \(FM^{\star}\), and \(FR^{A}\). Small yellow, gray and red balls mark hydrogen, common and target carbon atoms, respectively. Larger green and blue balls depict nitrogen and oxygen atoms. UHF AM1 calculations. (c) Kinetics of MMA polymerization initiated with 2.0 x 10\({}^{-2}\) mol/L AIBN at 333 K in the presence of different fullerene C\({}_{60}\) concentrations: (_1_) 0, (_2_) 1.0 x 10\({}^{-3}\), and (_3_) 2.0 x 10\({}^{-3}\) mol/L. The data are adapted from [20].
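The kinetic predominance just described can be condensed into a toy steady-state estimate. It is an illustration only, with hypothetical rate values: initiating radicals \(R^{A\star}\) are partitioned between chain initiation and capture by C\({}_{60}\), and the conversion rate scales with the surviving fraction, in the spirit of curves 1-3 of Figure 3c.

```python
# Toy partitioning of initiating radicals between chain initiation and C60
# capture (hypothetical rate values; an illustration, not the paper's model).
def surviving_radical_fraction(k_init_M, k_capture, c60):
    """Fraction of generated R* that initiates chains rather than grafts on C60."""
    return k_init_M / (k_init_M + k_capture * c60)

k_init_M = 1.0      # effective initiation channel, arbitrary units
k_capture = 2000.0  # effective C60-capture channel, arbitrary units
for c60 in (0.0, 1.0e-3, 2.0e-3):
    print(c60, surviving_radical_fraction(k_init_M, k_capture, c60))
# With these illustrative numbers the conversion rate drops about threefold at
# [C60] = 1e-3 mol/L and further at 2e-3 mol/L, qualitatively as in Fig. 3c.
```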
The complication of the reference FRP of MMA by the addition of another stable radical, \(R^{T\star}\), instead of C\({}_{60}\) leads to a completely different behavior of the MMA polymerization. According to the data listed in Table 2 and marked with light blue, radical \(R^{T\star}\) does not initiate the formation of the monomer radical \(R^{T}M^{\star}\) (Figure 4a), capable of polymerizing the monomer independently from \(R^{A\star}\). In contrast to C\({}_{60}\), TEMPO's ability to capture the free radicals \(AIBN^{\star}-R^{A\star}\) (see Figure 4b) is not thermodynamically favorable and is kinetically unreal due to \(E_{ad}>E_{aa}\). The only meaningful elementary reaction concerning the triad, consisting of the monomer and the two radicals \(R^{A\star}\) and \(R^{T\star}\), is reaction \(R^{T}R^{A}M\): the removal of the monomer radical \(R^{A}M^{\star}\) from the further polymerization process by its adsorption on radical \(R^{T\star}\). The final product of this reaction is shown in Figure 4c and reveals a strong coupling of radical \(R^{T\star}\) with the monomer radical \(R^{A}M^{\star}\). Thus, 'killing' the monomer radicals \(R^{A}M^{\star}\) is the main role of TEMPO in the FRCP of MMA with TEMPO. Evidently, the killing prevents the propagation of the MMA polymer chain, thus terminating the monomer polymerization, which provides the appearance of the IP on the conversion-versus-time plot. Evidently, the IP duration is determined by the radical \(R^{T\star}\) concentration, since the FRP of MMA cannot start until all this radical mass is consumed. Thereafter, a standard FRP of MMA proceeds, which is revealed by the same conversion rate of the reference and post-IP processes (cf. curves 1 and 1' in Figure 4e).

Figure 4: Free-radical copolymerization of MMA with C\({}_{60}\) and TEMPO. Equilibrium structures of the digital twins related to the elementary reactions \(R^{T}M\) (a), \(R^{T}R^{A}\) (b), \(R^{T}R^{A}M\) (c), and \(FR^{T}\) (d) (see the reaction nominations in Table 2). Small yellow and gray balls mark hydrogen and common carbon atoms, respectively. Larger green and blue balls depict nitrogen and oxygen atoms. UHF AM1 calculations. (e) Conversion of the MMA in the course of polymerization initiated with \(2.0\times 10^{-2}\) mol/L AIBN at 333 K: (1) in the absence of TEMPO, (1') in the presence of \(1.0\times 10^{-3}\) mol/L TEMPO, (2) in the presence of \(1.0\times 10^{-3}\) mol/L of fullerene C\({}_{60}\) but the absence of TEMPO, and (2') in the presence of \(1.0\times 10^{-3}\) mol/L of both fullerene C\({}_{60}\) and TEMPO. The data are adapted from [20].

The joint addition of TEMPO (\(R^{T\star}\)) and C\({}_{60}\) into the chemical reactor is expected to be a superposition of the effects presented by curves 1 and 2 of Figure 3c and by curves 1 and 1' of Figure 4e if there is no reaction between the two radicals. As seen in Table 2, the potential reaction \(FR^{T}\) does not occur (see the reaction final product in Figure 4d), so that the mentioned superposition is highly probable. As seen in Figure 4e, it is really observed, and the pairs of curves 1-1' and 2-2' just duplicate each other, while differing in the conversion rates, the same for each pair, caused by the fullerene presence. Taken as a whole, the experimental picture, covering the FRP of MMA as well as the FRCPs of MMA with TEMPO and C\({}_{60}\), reliably shows that the elementary reactions of the FRP of MMA, by themselves and in the presence of additional stable radicals such as TEMPO and C\({}_{60}\), are independent and, thus, superpositional.
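The TEMPO trend can also be illustrated dynamically. The following toy Euler integration, with hypothetical rate constants, reproduces the qualitative shape of the TEMPO curves in Figure 4e: while the TEMPO pool is being consumed (for a time of roughly \(T_{0}/I\)) almost no conversion occurs, after which conversion proceeds at the reference rate.

```python
# Toy kinetic model of the TEMPO-induced induction period (hypothetical rates).
def conversion(T0, I=1e-3, k_t=1e4, k_tr=1e2, k_p=1.0, t_end=100.0, dt=1e-3):
    """Euler integration of monomer M, monomer radicals R, and TEMPO T."""
    M, R, T = 1.0, 0.0, T0
    for _ in range(int(t_end / dt)):
        dR = I - k_t * R * T - 2.0 * k_tr * R * R  # generation, capture, termination
        dT = -k_t * R * T                          # TEMPO consumed by captured radicals
        dM = -k_p * R * M                          # propagation consumes monomer
        R = max(R + dR * dt, 0.0)
        T = max(T + dT * dt, 0.0)
        M = M + dM * dt
    return 1.0 - M  # conversion reached by t_end

print(conversion(T0=0.0))   # reference FRP: no induction period
print(conversion(T0=0.05))  # induction period of ~50 time units, then same rate
```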
## 5 Conclusion

The concept of digital twins, which has successfully demonstrated itself in the virtualization of the free-radical polymerization of vinyl monomers [23], is used in this work to determine the mechanism of the influence of additional stable radicals on the above process. The success of the implementation of the concept is determined by the reliability of the basic concept considering polymerization as a chain reaction consisting of a set of independent elementary reactions occurring superpositionally. In this view, polymerization is the product of a kinetic competition won within a set of elementary reactions. The quantum-chemical approximation used provides the activation energy of the reactions, labeled by their final products, as the main numerical kinetic parameter. In this work, the family of vinyl monomers is represented by methyl methacrylate, which undergoes rapid polymerization when initiated by the free radical \(AIBN^{\star}\). The stable-radical additives were TEMPO and fullerene C\({}_{60}\). The latter stimulate their copolymerization with the monomer, which has a significant impact on the course of the main FRP reaction. The FRP of the monomer and its FRCP with the stable radicals were divided into a network of elementary reactions, each of which was numerically analyzed in terms of the standard energy graph of reaction theory. The results obtained are as follows.

1. A virtual examination of the full set of elementary reactions gives a complete picture of the mechanism of the FRP and FRCP in the coordinates of the chemical reagents used.
2. The MMA FRP, virtualized for the first time in this work, is allowed and proceeds in a manner similar to that of the other vinyl monomers.
3. Despite the large selection of potential interactions and reactions in the virtual chemical reactor, fullerene additives are kinetically most favorable only for the capture and adsorption of \(AIBN^{\star}\) free radicals. This capture affects the number of free radicals and leads to a decrease in the conversion rate of the main monomer in the presence of fullerene. Moreover, an increase in the fullerene concentration is accompanied by a concomitant further decrease in conversion, which is reliably observed experimentally.
4. Unlike fullerene, for TEMPO the kinetically most favorable reaction is the capture of a monomer radical. This entrapment prevents the propagation of the monomer polymerization, which leads to the occurrence of an induction period on the time-dependent conversion plot of the monomer. In full accordance with this prediction, the experimental dependences of the monomer conversion in the case of its FRCP with TEMPO show an extended induction period.
5. The reasons that determine the experimentally observed multiple decrease in the conversion rate of MMA in the presence of C\({}_{60}\) additives and the appearance of an extended induction period in the MMA FRCP in the presence of TEMPO have been explained for the first time.
6. In the initial period of polymerization, the single-target nature of TEMPO, in contrast to the multi-target nature of fullerene C\({}_{60}\), does not have a fundamental effect on the polymerization process. However, while in the case of TEMPO the radical is fully worked out in the initial period, in the case of C\({}_{60}\) the fullerenyls formed in the initial period continue to participate in the further formation of the final polymer product, providing their multitarget carbon structure for anchoring the growing polymer chains.

## References

* [1] _Handbook of Radical Polymerization_, Eds. K. Matyjaszewski, T.P. Davis. John Wiley and Sons: Hoboken, 2002.
* [2] _Fullerene Polymers. Synthesis, Properties and Applications_, Eds. N. Martin, F. Giacalone. Wiley-VCH: Weinheim, 2009.
* [7] Camp, A. G.; Lary, A.; Ford, W. T. Free-radical copolymerization of methyl methacrylate and styrene with C\({}_{60}\). _Macromolecules_ 1995, **28**, 7959-7961.
* [9] Cao, T.; Webber, S.E. Radical copolymerization of styrene and C\({}_{60}\). _Macromolecules_ 1995, **28**, 3741-3743.
* [10] Geckeler, K.E.; Arsalani, N. Synthesis and properties of hydrophilic polymers. 4. Preparation and characterization of poly(oxyethylene) telechelics with different aromatic termini. _J. Macromol. Sci., Part A: Pure Appl. Chem._ 1996, **33**, 1165-1179.
* [11] Steward, D.; Imrie, C. T. Role of C\({}_{60}\) in the free radical polymerisation of styrene. _Chem. Commun._ 1996, **13**, 1383-1384.
* [13] Ford, W. T.; Graham, T. D.; Mourey, T. H. Incorporation of C\({}_{60}\) into poly(methyl methacrylate) and polystyrene by radical chain polymerization produces branched structures. _Macromolecules_ 1997, **30**, 6422-6429.
* [14] Ford, W. T.; Nishioka, T.; McCleskey, S. C.; Mourey, T. H.; Kahol, P. Structure and radical mechanism of formation of copolymers of C\({}_{60}\) with styrene and with methyl methacrylate. _Macromolecules_ 2000, **33**, 2413-2423.
* [17] Kurmaz, S.V.; Pyryaev, A.N.; Obraztsova, N.A. Effect of fullerene on the radical homo- and copolymerization of N-vinylpyrrolidone and (di)methacrylates. _Polym. Sci., Ser. B_ 2011, **53**, 497-504.
* [19] Atovmyan, E. G. On the relationship between the fullerene reactivity and degree of substitution. _Russ. Chem. Bull., Int. Ed._ 2017, **66**, 567-570.
* [20] Yumagulova, R. Kh.; Kuznetsov, S. I.; Diniakhmetova, D. R.; Frizen, A. K.; Kraikin, V. A.; Kolesov, S. V. On the initial stage of the free-radical polymerizations of styrene and methyl methacrylate in the presence of fullerene C\({}_{60}\). _Kinetics and Catalysis_ 2016, **57**, 380-387.
* [21] Yumagulova, R. Kh.; Kolesov, S.V. Specific features of reactions between fullerene C\({}_{60}\) and radicals stabilized by conjugation in the process of radical polymerization. _Bulletin of Bashkir University_ 2020, **25**, 47-51. DOI: 10.33184/bullet-bsu-2020.1.8.
* [22] Harris, P. J. F. Fullerene polymers: A brief review. _C_ 2020, **6**, 71. DOI: 10.3390/c6040071.
* [23] Sheka, E.F. Virtual free radical polymerization of vinyl monomers in view of digital twins. _Polymers_ 2023, **15**, 2999.
* [24] Rasheed, A.; San, O.; Kvamsdal, T. Digital twins: Values, challenges and enablers from a modeling perspective. _IEEE Access_ 2020. DOI: 10.1109/ACCESS.2020.2970143.
* [25] Sheka, E.F. Digital Twins in graphene's technology. arXiv:2208.14926.
* [26] Starkweather, H.W.; Taylor, G.B. The kinetics of the polymerization of vinyl acetate. _JACS_ 1930, **52**, 4708-4714.
* [27] Semenov, N.N. _Tsepnye Reaktsii_ (Chain Reactions). Goskhimizdat: Moskva, 1934 (in Russian).
* [28] Bagdasar'yan, Kh.S. _Teoriya radikal'noi polimerizatsii_ (Free-Radical Polymerization Theory). Nauka: Moscow, 1966 (in Russian).
* [29] Gol'dfein, M.D.; Kozhevnikov, N.V.; Trubnikov, A.V. _Kinetika i mekhanizm regulirovaniya protsessov obrazovaniya polimerov_ (Kinetics and Control of Polymerization Processes). Saratov Gos. Univ.: Saratov, 1989.
* [30] Pross, A. _Theoretical and Physical Principles of Organic Reactivity_. Wiley: New York, 1995.
* [31] Denisov, E. T. _Konstanty skorosti gomoliticheskikh zhidkofaznykh reaktsii_ (Rate Constants of Homolytic Liquid-Phase Reactions). Nauka: Moskva, 1971.
* [32] Heuts, J.P.A. Theory of radical reactions. In _Handbook of Radical Polymerization_, Eds. K. Matyjaszewski, T.P. Davis. John Wiley and Sons: Hoboken, 2002, pp. 1-76.
* [33] Denisov, E. T.; Sarkisov, O. M.; Likhtenshtein, G. I. _Chemical Kinetics: Fundamentals and Recent Developments_. Elsevier: Amsterdam, 2003.
* [34] Denisov, E.T.; Afanas'ev, I.B. _Oxidation and Antioxidants in Organic Chemistry and Biology_.
CRC Press, Taylor and Francis Group: Boca Raton, Florida, 2005.
* [35] Krusic, P.; Wasserman, E.; Keizer, P.; Morton, J.; Preston, K. Radical reactions of C\({}_{60}\). _Science_ 1991, **254**, 1183-1185.
* [36] Morton, J.; Negri, R. F.; Preston, K. F. Review of recent EPR and theoretical studies on the addition of free radicals to C\({}_{60}\) and C\({}_{70}\). _Magn. Res. Chem._ 1995, **33**, 20-27.
* [37] Tumanskii, B.; Kalina, O. _Radical Reactions of Fullerenes and their Derivatives_. Kluwer: Dordrecht, 2002.
* [38] Sabirov, D.Sh.; Garipova, R.R.; Bulgakov, R.G. Density functional theory study on the decay of fullerenyl radicals RC\({}_{60}\)\({}^{\bullet}\), ROC\({}_{60}\)\({}^{\bullet}\), and ROOC\({}_{60}\)\({}^{\bullet}\) (R = tert-butyl and cumyl) and polarizability of the formed fullerene dimers. _J. Phys. Chem. A_ 2013, **117**, 13176-13181.
* [39] Diniakhmetova, D. R.; Yumagulova, R. Kh.; Kolesov, S. V. Interaction of fullerene C\({}_{60}\) with benzoyl peroxide at initiation of radical polymerization. _Bulletin of Bashkir University_ 2020, **25**, 52-57. DOI: 10.33184/bullet-bsu-2020.1.9.
* [40] Star-shaped polymers with a fullerene core. In _Fullerene Polymers. Synthesis, Properties and Applications_, Eds. N. Martin, F. Giacalone. Wiley-VCH: Weinheim, 2009, pp. 97-127.
* [41] Zayets, V. A. _CLUSTER-Z1: Quantum-Chemical Software for Calculations in the s,p-Basis_. Inst. Surf. Chem. Nat. Ac. Sci. of Ukraine: Kiev, 1990 (in Russian).
* [42] Berzigiyarov, P.K.; Zayets, V.A.; Ginzburg, I.Ya.; et al. NANOPACK: Parallel codes for semiempirical quantum chemical calculations of large systems in the _sp_- and _spd_-basis. _Int. J. Quantum Chem._ 2002, **88**, 449-462.
* [43] Dewar, M.J.S.; Zoebisch, E.G.; Healey, E.F.; Stewart, J.J.P. AM1: A new general purpose quantum mechanical molecular model. _J. Amer. Chem. Soc._ 1985, **107**, 3902-3909.
* [44] Sheka, E.F. Chemical susceptibility of fullerenes in view of Hartree-Fock approach. _Int. J. Quant. Chem._ 2007, **107**, 2803-2816.
* [45] Sheka, E.F. _Fullerenes: Nanochemistry, Nanomagnetism, Nanomedicine, Nanophotonics_. CRC Press, Taylor and Francis Group: Boca Raton, 2011.
* [46] Sheka, E.F. _Spin Chemical Physics of Graphene_. Pan Stanford: Singapore, 2018.
* [47] Sheka, E. F. Virtual vibrational spectrometry of stable radicals: necklaced graphene molecules. _Nanomat._ 2022, **12**, 597.
* [48] Sheka, E.F. Digital Twins solve the mystery of Raman spectra of parental and reduced graphene oxides. _Nanomat._ 2022, **12**, 4209.
* [49] Dossi, M.; Storti, G.; Moscatelli, D. Initiation kinetics in free-radical polymerization: Prediction of thermodynamic and kinetic parameters based on ab initio calculations. _Macromol. Theory Simul._ 2010, **19**, 170-178.
* [50] Sheka, E.F. Stretching and breaking of chemical bonds, correlation of electrons, and radical properties of covalent species. _Adv. Quant. Chem._ 2015, **70**, 111-161.
* [51] Sheka, E.F. Private Archive of Computed Data, 2016; partially published in [46, 49].

**Digital twins' kinetics of virtual free-radical copolymerization of vinyl monomers with stable radicals. 1. Methyl methacrylate**

Elena F. Sheka

Institute of Physical Researches and Technology, Peoples' Friendship University of Russia (RUDN University), 117198 Moscow, Russia; [email protected]

**Supporting Material**

Digital twins of fullerenyls were designed based on the spin chemistry (Sch) of fullerene C\({}_{60}\) [1-5].
According to the latter, the molecule is a stable radical carrying unpaired electrons with the total number \(N_{D}=9.6\) \(e\). These electrons are distributed over the carbon atoms of the molecule in accordance with the partial fractions \(N_{DA}\) on each atom, represented by the histogram in Figure S1a.

Figure S1: (a) \(N_{DA}\) distribution over the cage atoms (histogram), the same arranged in the \(max\to min\) format (stepped red curve), and free valence \(V_{free}\) (curve with dots); different colors in the insert distinguish the six atomic groups shown by the \(Z\to A\) graph. (b) \(sp^{2}\)C-C bond distribution of the fullerene C\({}_{60}\) (histogram) and of the fullerenyl \(FR^{A}\) (curve with dots; the insert presents the equilibrium structure of the species, see Table 2); the red curve plots the \(N_{DA}\) of the fullerenyl.

\(N_{DA}\) is a quantitative indicator of the atomic chemical susceptibility (ACS) of atom A, which is in full agreement with the corresponding free valence of the atom, \(V_{A}^{free}\) (see the curve with dots in the figure). Arranged in the \(max\to min\) format (stepped red curve), the \(N_{DA}\) values reveal five groups of atoms, each consisting of 12 atoms and characterized by a constant \(N_{DA}\) value within the group. These groups of atoms are shown in different colors in the structural insert of the figure. The light-gray atoms, which form the two central hexagons, represent six identical \(sp^{2}\)C-C pairs, the first to enter any type of reaction involving C\({}_{60}\).

One of the main concepts of the fullerene Sch is the controlling role of \(N_{DA}\) over the process of the stepwise polyderivatization of the fullerene molecule [1-5]. Retuned after each completed step of attaching the corresponding addend, the \(N_{DA}\) distribution reveals the maximum value that determines the target atom for the next attack. Several examples of such computational polyderivatization of the C\({}_{60}\) fullerene, confirmed by experimental data, are given in the monograph [4]. In what follows, the matter concerns various C\({}_{60}\) monoderivatives. In all the cases, atom 33 was selected as the target one. As seen in Figure S1b, anchoring the addend to this atom causes a drastic elongation of one covalent bond (black curve), transferring it from the \(sp^{2}\) type of 1.423 Å to the \(sp^{3}\) one of 1.534 Å. Because of the closed structure of the neat molecule, this elongation is accompanied by a reconstruction of all the covalent bonds, which, remaining of the \(sp^{2}\) type, change their lengths. This results in the reconstruction of the \(N_{DA}\) distribution of the neat C\({}_{60}\) (red curve), thus revealing the highest value of 0.55 \(e\) at atom 22, which is the partner of atom 33 in the broken \(sp^{2}\)C-C bond. Atom 22 becomes the best target for the next anchoring and preserves this role for small addends such as individual atoms, OH, COOH, and other groups [4]. Molecular addends may not be added to this atom because of steric constraints. In that case, the target atoms suitable for the addition should be sought among the next \(N_{DA}\) top-list atoms, anchoring to which is free from the steric constraints. When the selection is over and the necessary computations are performed, the newly reconstructed \(N_{DA}\) distribution is analyzed to choose the next potential targets among the top-list cage atoms suitable for the next addition without steric constraints. The next-step derivatization proceeds similarly until the total pool of unpaired electrons \(N_{D}\) is consumed [1-5].
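The stepwise selection logic described above reduces to a simple greedy loop, sketched below for the control flow only. The functions recompute_n_da and sterically_blocked are hypothetical stand-ins for the UHF AM1 recomputation of the \(N_{DA}\) distribution and for the steric check; no real quantum-chemical call is implied.

```python
# Greedy target selection for stepwise C60 polyderivatization (control-flow
# sketch only; the callbacks stand in for the actual UHF AM1 computations).
def next_target(n_da, blocked):
    """n_da: {atom: ACS value}; blocked: set of sterically excluded atoms."""
    candidates = [atom for atom in n_da if atom not in blocked]
    return max(candidates, key=n_da.get) if candidates else None

def polyderivatize(n_da, recompute_n_da, sterically_blocked, n_d_total, tol=0.1):
    attached = []
    while sum(n_da.values()) > tol * n_d_total:  # unpaired-electron pool remains
        target = next_target(n_da, sterically_blocked(attached))
        if target is None:
            break
        attached.append(target)                  # anchor the addend at the target
        n_da = recompute_n_da(attached)          # retune N_DA after each step
    return attached
```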
2301.13623
Unimodular Gravity in Covariant Formalism
In this short note we study unimodular gravity in the Weyl-De Donder formalism. We find the corresponding Hamiltonian and study the consequences of the unimodular constraint for the conjugate covariant momenta. We also find the covariant Hamiltonian for the Henneaux-Teitelboim unimodular action and study the corresponding equations of motion.
J. Kluson, B. Matous
2023-01-31T13:30:58Z
http://arxiv.org/abs/2301.13623v1
###### Abstract In this short note we study unimodular gravity in the Weyl-De Donder formalism. We find the corresponding Hamiltonian and study the consequences of the unimodular constraint for the conjugate covariant momenta. We also find the covariant Hamiltonian for the Henneaux-Teitelboim unimodular action and study the corresponding equations of motion. **Unimodular Gravity in Covariant Formalism** J. Kluson\({}^{\dagger}\) and B. Matous\({}^{\ddagger}\)1 Footnote 1: Email addresses: J. Kluson: [email protected], B. Matous: [email protected] \({}^{\dagger}\)_Department of Theoretical Physics and Astrophysics, Faculty of Science,_ _Masaryk University, Kotlarska 2, 611 37, Brno, Czech Republic_ \({}^{\ddagger}\)_North-Bohemian Observatory and Planetarium in Teplice,_ _Kopernikova 3062, 415 01, Teplice, Czech Republic_ ## 1 Introduction and Summary Unimodular gravity was first introduced by A. Einstein in his paper [3], published in 1916. In this work the unimodular constraint \(\sqrt{-g}=1\) was used as a gauge-fixing condition of general diffeomorphism invariance in order to simplify calculations. It was then shown in [1, 2] that imposing this condition before the variation of the Einstein-Hilbert action leads to traceless equations of motion. As we review below, these equations of motion are classically equivalent to the general relativity equations of motion, with the crucial difference that the cosmological constant appears as an integration constant rather than a true cosmological constant. This fact brings new hope for solving the cosmological constant problem, which was however questioned in [4], where it was argued that quantum corrections make the cosmological constant ultraviolet sensitive in unimodular gravity as well. On the other hand, it is important to stress that no definitive conclusions have been reached yet regarding this problem, and unimodular gravity is still very intensively studied; for some works devoted to unimodular gravity, see for example [7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 22, 23, 24, 25, 26]. One of the most interesting aspects of unimodular gravity is the number of physical degrees of freedom. Naively, the unimodular constraint \(\sqrt{-g}=1\) reduces the number of independent components of the metric to nine, which could suggest that the number of physical degrees of freedom is smaller than in general relativity. On the other hand, unimodular gravity is invariant under restricted diffeomorphisms. Taking these two aspects together, we find that the number of local physical degrees of freedom is the same as in ordinary general relativity. This fact was proved with the help of the Hamiltonian analysis of unimodular gravity performed in [16, 17, 18, 19, 20, 21]. As was shown in these papers, the standard analysis of unimodular gravity, based on the \(D+1\) splitting of the target space-time, is rather non-trivial and illustrates the complexity of the canonical analysis of systems with constraints. One could then ask how unimodular gravity could be described in the covariant canonical formalism known as Weyl-De Donder theory [27, 28]. The key point of this formulation is that we treat all partial derivatives as equivalent when we define conjugate momenta. For example, if we have a scalar field \(\phi\) with Lagrangian density in \(D+1\) dimensional spacetime equal to \(\mathcal{L}=-\frac{1}{2}\eta^{ab}\partial_{a}\phi\partial_{b}\phi-V(\phi)\), we define the conjugate momentum as 3 Footnote 3: We define \(\eta_{ab}=\operatorname{diag}(-1,1,\ldots,1),a,b=0,1,\ldots,D\). 
\[\pi^{a}=\frac{\partial\mathcal{L}}{\partial\partial_{a}\phi}=-\eta^{ab} \partial_{b}\phi\.\] The covariant canonical Hamiltonian density is then defined as \[\mathcal{H}=\pi^{a}\partial_{a}\phi-\mathcal{L}=-\frac{1}{2}\pi_{a}\eta^{ab} \pi_{b}+V(\phi)\.\] Clearly such a form of the Hamiltonian density preserves the diffeomorphism invariance of the theory. This approach is known as multisymplectic field theory, see for example [29, 30, 31]; for a review, see [32], and for a recent interesting application of this formalism in string theory, see [33, 34]. It is clear that such a covariant canonical formalism is especially suitable for manifestly covariant theories, as for example general relativity. In fact, the covariant canonical formalism of general relativity was found a long time ago by P. Horava [35]. This analysis was recently generalized to the case of \(F(R)\) gravity in [37] and further elaborated in [38]. In this paper we apply this formalism to the unimodular theory of gravity in \(D+1\) dimensions. This is a non-trivial task due to the well known complexity of the canonical analysis of unimodular gravity in the non-covariant formalism. Further, it is also very interesting to study this system since it contains a primary unimodular constraint, and it is a non-trivial question how to deal with such systems in the covariant canonical formalism. In more detail, we include this primary constraint in the action with a corresponding Lagrange multiplier. Then we derive the corresponding equations of motion. Using these equations of motion we find that the unimodular constraint implies another constraint on the canonically conjugate momenta. Then we show that this constraint is equivalent to the vanishing of the trace of the Christoffel symbols, which is a characteristic property of the unimodular theory of gravity [10]. This is a nice and non-trivial result. On the other hand, the Lagrange multiplier corresponding to the primary constraint cannot be determined, as in the non-covariant canonical formalism, by imposing the condition of the preservation of the secondary constraint, due to the fact that the equations of motion for the conjugate momenta are in the form of a divergence of these momenta. For that reason we determine this multiplier in the same way as in the Lagrangian formalism, by calculating the trace of the equations of motion. As a result we obtain equations of motion that are traceless and that do not depend on the cosmological constant, which is in agreement with the Lagrangian formulation of unimodular gravity. As the second step in our analysis we find a covariant canonical formulation of the Henneaux-Teitelboim formulation of unimodular gravity [16]. In this case we again identify the covariant Hamiltonian together with a set of primary constraints. Then we consider the canonical form of the action and determine the corresponding equations of motion. Solving these equations of motion we find that the Lagrange multiplier is an integration constant. In this case we reproduce results well known from the Lagrangian analysis. Nevertheless, we believe this is a nice and interesting application of the covariant canonical analysis to constrained systems. Let us outline our results and suggest possible extensions of this work. We found a covariant Hamiltonian formalism for unimodular gravity. First of all we determined the covariant Hamiltonian for the general relativity action in \(D+1\) dimensions, where we again introduced the variable \(f^{ab}=\sqrt{-g}g^{ab}\). 
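As a quick consistency check of the formalism just recalled, note that for the scalar example the covariant Hamilton equations \[\partial_{a}\phi=\frac{\partial\mathcal{H}}{\partial\pi^{a}}=-\eta_{ab}\pi^{b}\,\quad\partial_{a}\pi^{a}=-\frac{\partial\mathcal{H}}{\partial\phi}=-\frac{dV}{d\phi}\] reproduce, after eliminating \(\pi^{a}\), the expected field equation \(\eta^{ab}\partial_{a}\partial_{b}\phi=\frac{dV}{d\phi}\), i.e. the Klein-Gordon equation; this is the standard way the Weyl-De Donder equations reduce to the Lagrangian ones.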
We would like to stress the importance of the introduction of \(f^{ab}\), since it was not a priori known whether \(f^{ab}\) is suitable for the formulation of gravity in a space-time of dimension different from 4. Then we imposed the unimodular constraint using the Lagrange multiplier method and studied the corresponding equations of motion. We found that the consistency of the theory demands that the trace of the conjugate momenta vanishes. We then showed that this is a characteristic property of unimodular gravity when we pass to the Lagrangian formalism. Finally, we found the covariant Hamiltonian for the Henneaux-Teitelboim formulation of unimodular gravity. We identified the primary constraints of the theory and then studied the equations of motion that follow from the canonical form of the action. We showed that they precisely reproduce the Lagrangian equations of motion, which is a nice consistency check of the covariant canonical formalism. The analysis presented in this paper suggests that the covariant Hamiltonian formalism is very close to the Lagrangian formalism, and that in some situations the covariant Hamiltonian formalism is more suitable than the Lagrangian one, as for example in the study of the thermodynamic properties of horizons [36]. It is also clear that there are more systems that could be analysed with the help of the covariant canonical formalism. One possibility is to study Weyl invariant gravity in this formalism. Another possibility would be to analyse theories of gravity with higher derivatives, where the classical canonical analysis is very complicated, see for example [40]. We hope to return to these problems in the future. This paper is organized as follows. In the next section (2) we review the properties of unimodular gravity. Then in section (3) we proceed to the covariant canonical formulation of this theory. Finally, in section (4) we perform the covariant canonical formulation of Henneaux-Teitelboim unimodular gravity. ## 2 Brief Review of Unimodular Gravity In this section we review basic facts about unimodular gravity. For a recent, very nice and more detailed review, see for example [5, 6]. Unimodular gravity is a theory with the constraint \(\sqrt{-g}=1\). Clearly such a condition has consequences for the allowed diffeomorphism transformations. In fact, let us consider a general transformation of coordinates \[x^{\prime a}=x^{a}+\xi^{a}(x) \tag{1}\] that implies the inverse relation \[x^{a}=x^{\prime a}-\xi^{a}(x)\approx x^{\prime a}-\xi^{a}(x^{\prime})+{\cal O }(\xi^{2})\, \tag{2}\] where \(a,b,c=0,1,\ldots,D\). Under these transformations the metric \(g_{ab}\) transforms as \[g^{\prime}_{ab}(x)=g_{ab}(x)-\partial_{c}g_{ab}(x)\xi^{c}(x)-g_{ac}(x)\partial _{b}\xi^{c}(x)-\partial_{a}\xi^{c}(x)g_{cb}(x) \tag{3}\] which implies the following variation of the metric \[\delta g_{ab}(x)=g^{\prime}_{ab}(x)-g_{ab}(x)=-g_{ac}\partial_{b}\xi^{c}-\partial_{ a}\xi^{c}g_{cb}-\partial_{c}g_{ab}\xi^{c}\] so that the variation of the square root of the determinant of the metric is equal to \[\delta\sqrt{-\det g}=-\left(\partial_{a}\xi^{a}+\frac{1}{2}g^{ab}\partial_{c}g_{ab}\xi^{c}\right)\sqrt{-\det g}. 
\tag{4}\] In the case of unimodular gravity this variation should vanish, and hence we obtain the following condition on \(\xi^{a}\): \[\nabla_{a}\xi^{a}=\partial_{a}\xi^{a}+\frac{1}{2}g^{ac}\partial_{d}g_{ca}\xi^ {d}=0\.\] The most straightforward way to find an action for unimodular gravity is to consider the standard Einstein-Hilbert action with the unimodular constraint added, \[S=\frac{1}{16\pi}\int d^{D+1}x[\sqrt{-g}(R-2\bar{\Lambda})+\Lambda(\sqrt{-g}-1 )]+S_{matt}\, \tag{6}\] where \(\Lambda\) is a Lagrange multiplier whose variation enforces the unimodular condition, and \(\bar{\Lambda}\) is a constant. Performing the variation of the action (6) with respect to \(g^{ab}\) we obtain the following equations of motion \[\frac{1}{16\pi}(R_{ab}-\frac{1}{2}g_{ab}(R-2\bar{\Lambda}+\Lambda))=T_{ab}\, \tag{7}\] where \(T_{ab}\) is the matter stress-energy tensor defined as \[T_{ab}=-\frac{1}{\sqrt{-g}}\frac{\delta S_{matt}}{\delta g^{ab}}. \tag{8}\] The crucial point is that \(\Lambda\) is a Lagrange multiplier that should be determined as a consequence of the equations of motion. To do this we take the trace of equation (7) and express \(\Lambda\) as \[\Lambda=\frac{(1-D)}{1+D}R-\frac{32\pi}{D+1}T+2\bar{\Lambda}\,\quad T\equiv g^{ab}T_{ab}. \tag{9}\] Inserting this result into (7) we obtain \[R_{ab}-\frac{1}{D+1}g_{ab}R=16\pi(T_{ab}-\frac{1}{D+1}g_{ab}T). \tag{10}\] These equations of motion are trace-free and, most importantly, they do not contain any information about the cosmological constant \(\bar{\Lambda}\). It is important to stress that even the equations of motion of general relativity, without the unimodular constraint imposed, split into a set of trace-free equations of motion and one additional trace equation. To see this, consider the general relativity equations of motion \[R_{ab}-\frac{1}{2}g_{ab}(R-2\bar{\Lambda})=16\pi T_{ab}. \tag{11}\] Taking the trace of this equation we can express \(R\) as \[R=\frac{2}{1-D}(16\pi T-(D+1)\bar{\Lambda}). \tag{12}\] Note that with the help of this equation we can rewrite (11) in trace-free form \[R_{ab}-\frac{1}{D+1}Rg_{ab}=16\pi(T_{ab}-\frac{1}{D+1}Tg_{ab}). \tag{13}\] However, we should again stress that (12) determines \(R\) as a function of the trace of the matter stress-energy tensor and the true cosmological constant term in the Einstein-Hilbert action, while in the case of unimodular gravity we express \(\Lambda\) (which is a Lagrange multiplier and not a constant) as a function of \(R\), \(T\) and \(\bar{\Lambda}\), as follows from equation (9). In order to check the equivalence between unimodular gravity and ordinary general relativity we should be able to reproduce equation (12) in the case of unimodular gravity as well. We can do this by the following procedure. Consider the equations of motion (10) and rewrite them in the form \[R_{ab}-\frac{1}{2}g_{ab}R=16\pi(T_{ab}-\frac{1}{D+1}g_{ab}T)+\frac{1-D}{2(D+1) }Rg_{ab}. \tag{14}\] Now we apply the covariant derivative to both sides of the equation above and, using the fact that the covariant derivative of the Einstein tensor \(G_{ab}=R_{ab}-\frac{1}{2}g_{ab}R\) is zero, we get \[\frac{1}{D+1}\nabla_{b}(16\pi T-\frac{1-D}{2}R)=16\pi\nabla^{a}T_{ab}. \tag{15}\] If we consider an ordinary form of matter, the divergence of the stress-energy tensor is zero as a consequence of the _matter equations of motion_. 
Then the right side of the equation above is zero and the left side can be easily integrated, with the result \[R=\frac{2}{1-D}(16\pi T+\Omega)\, \tag{16}\] where \(\Omega\) now appears as a true integration constant rather than a cosmological constant imposed in the theory by hand. In other words, (16) is the last equation of motion of unimodular gravity and we have fully recovered the equivalence with general relativity, keeping in mind, however, that we still have to impose the condition \(\sqrt{-g}=1\) in the course of the calculations. Having performed this basic review of unimodular gravity, we proceed in the next section to its formulation in the covariant Hamiltonian formalism. ## 3 Covariant Hamiltonian Formalism For \(D+1\) dimensional Unimodular Gravity In this section we develop the covariant Hamiltonian formalism for unimodular gravity in \(D+1\) dimensions. As usual in the covariant formalism, we split the Einstein-Hilbert action into bulk and boundary terms. Since this procedure is well known, see for example [35, 36] and also the recent generalization to the case of \(F(R)\) gravity [37], we immediately write the final result \[{\cal L}={\cal L}_{bulk}+{\cal L}_{surf}\,\] \[{\cal L}_{bulk}=\frac{1}{16\pi}\sqrt{-g}[\Gamma^{h}_{dk}\Gamma^ {k}_{gh}g^{gd}-\Gamma^{f}_{fk}\Gamma^{k}_{gh}g^{gh}]+\] \[+\frac{1}{16\pi}\bar{\Lambda}\sqrt{-g}+\frac{1}{16\pi}\lambda( \sqrt{-g}-1)\equiv\] \[\equiv{\cal L}_{quad}+\frac{1}{16\pi}\bar{\Lambda}\sqrt{-g}+ \frac{1}{16\pi}\lambda(\sqrt{-g}-1)\,\] \[{\cal L}_{surf}=\frac{1}{16\pi}\partial_{j}[\sqrt{-g}(g^{ik} \Gamma^{j}_{ik}-g^{ij}\Gamma^{k}_{ik})]\, \tag{17}\] where \(\Gamma^{a}_{bc}\) are the Christoffel symbols \[\Gamma^{a}_{bc}=\frac{1}{2}g^{ad}(\partial_{b}g_{dc}+\partial_{c}g_{db}- \partial_{d}g_{bc})\, \tag{18}\] and \(\bar{\Lambda}\) is the cosmological constant. Note that the presence of the term with the Lagrange multiplier allows us to treat all components of the metric as independent. Now we are ready to proceed to the covariant Hamiltonian formulation of this theory. The main idea of this formalism is to treat all derivatives of the dynamical variables on an equal footing [27, 29, 35], in sharp contrast with the standard canonical formalism, where the time coordinate has an exceptional meaning. This is a very attractive idea, especially in the context of generally covariant theories, since it is sometimes very difficult to perform the \(D+1\) splitting of the space-time and of the corresponding dynamical fields. In the case of the covariant canonical formalism of gravity we define the momenta \(M^{cmn}\) conjugate to \(g_{mn}\) in the following way \[M^{cmn}=\frac{\partial{\cal L}_{bulk}}{\partial\partial_{c}g_{mn}}. \tag{19}\] Note that the momenta are defined by the bulk part of the Lagrangian density only, as follows from the fact that the equations of motion are derived by variation of the action with the metric and its derivative fixed on the boundary; for a careful discussion see [36]. 
Then from (17) we obtain \[M^{cmn}=\frac{1}{32\pi}\sqrt{-g}[g^{mk}\Gamma^{c}_{kd}g^{dn}+g^{ nk}\Gamma^{c}_{kd}g^{dm}-\] \[-g^{mn}\Gamma^{c}_{gh}g^{gh}-\Gamma^{f}_{fk}(g^{km}g^{cn}+g^{kn}g^ {cm})+g^{mn}g^{ck}\Gamma^{f}_{fk}]\] using \[\frac{\delta\Gamma^{k}_{gh}}{\delta\partial_{c}g_{mn}}=\frac{1}{ 4}(g^{ks}\delta^{c}_{g}(\delta^{m}_{s}\delta^{n}_{h}+\delta^{n}_{s}\delta^{m} _{h})+\] \[+g^{ks}\delta^{c}_{h}(\delta^{m}_{s}\delta^{n}_{g}+\delta^{n}_{s} \delta^{m}_{g})-g^{ks}\delta^{c}_{s}(\delta^{m}_{g}\delta^{n}_{h}+\delta^{n}_{ g}\delta^{m}_{h}))\] We could then formulate the covariant Hamiltonian formalism using the canonical variables \(g_{ab}\) and \(M^{cab}\). However, it turns out that the situation is much simpler when we introduce an alternative set of variables [35, 36] defined as \[f^{ab}=\sqrt{-g}g^{ab}. \tag{22}\] Then it is easy to see that the conjugate momenta are related by the chain rule \[N^{c}_{\ ab}=\frac{\partial{\cal L}_{quad}}{\partial\partial_{c}f^{ab}}=\frac{ \partial{\cal L}_{quad}}{\partial(\partial_{d}g_{mn})}\frac{\partial(\partial_{d}g_{ mn})}{\partial(\partial_{c}f^{ab})}\.\] From (22) we see that \(f^{ab}\) and \(g_{mn}\) are related by point transformations, so that \[\partial_{d}g_{mn}=\frac{\partial g_{mn}}{\partial f^{ab}}\partial_{d}f^{ab}. \tag{24}\] Then we have \[\frac{\partial(\partial_{d}g_{mn})}{\partial(\partial_{c}f^{ab})}=\frac{ \partial g_{mn}}{\partial f^{ab}}\delta^{c}_{d}\] and finally \[N^{c}_{\ ab}=\frac{\partial{\cal L}_{quad}}{\partial(\partial_ {c}g_{mn})}(-g_{mk}B^{kl}_{\ ab}g_{ln})\,\] where \[B^{kl}_{\ ab}=\frac{\delta g^{kl}}{\delta f^{ab}}=(-f)^{-\frac{1}{D-1}}\left( \frac{1}{2}(\delta^{k}_{a}\delta^{l}_{b}+\delta^{l}_{a}\delta^{k}_{b})-\frac{ 1}{D-1}f^{kl}f_{ab}\right)\,\] where we used the fact that \[-\det f\equiv-f=(-g)^{\frac{D+1}{2}}(-g)^{-1} \tag{28}\] and consequently \[\sqrt{-g}=(-f)^{\frac{1}{D-1}}\,\quad g^{ab}=(-f)^{-\frac{1}{D-1}}f^{ab}. \tag{29}\] Then using the previous form of \(M^{cmn}\) we obtain \[N^{c}_{\ ab}=\frac{\partial{\cal L}_{quad}}{\partial(\partial_ {c}g_{mn})}(-g_{mk}B^{kl}_{\ ab}g_{ln})=\] \[=-\frac{1}{32\pi}[2\Gamma^{c}_{\ ab}-\Gamma^{f}_{fa}\delta^{c}_{ b}-\Gamma^{f}_{fb}\delta^{c}_{a}]\. \tag{30}\] Note that this relation does not depend on the number of space-time dimensions. In order to find the corresponding Hamiltonian we should find the inverse relation between \(\Gamma^{a}_{bc}\) and \(N^{a}_{bc}\). Let us presume that it has the form \[\Gamma^{c}_{ab}={\bf A}N^{c}_{ab}+{\bf B}(N^{d}_{da}\delta^{c}_{b}+N^{d}_{bd} \delta^{c}_{a}). \tag{31}\] Inserting (31) into (30) we obtain \[N^{c}_{ab}=-\frac{1}{32\pi}(2{\bf A}N^{c}_{ab}+2{\bf B}(N^{d}_{ da}\delta^{c}_{b}+N^{d}_{bd}\delta^{c}_{a})-\] \[-({\bf A}+{\bf B}(D+2))N^{f}_{fa}\delta^{c}_{b}-({\bf A}+{\bf B}( D+2))N^{f}_{fb}\delta^{c}_{a})\] using \(\Gamma^{f}_{fa}=({\bf A}+{\bf B}(D+2))N^{f}_{fa}\). Comparing the left and right sides we obtain that \({\bf A}\) and \({\bf B}\) are equal to \[{\bf A}=-16\pi\,\quad{\bf B}=-\frac{{\bf A}}{D}. 
\tag{33}\] It is then easy to find the kinetic term of the covariant Hamiltonian for \(D+1\) dimensional unimodular gravity in the form \[{\cal H}_{kin}=\partial_{c}f^{ab}N^{c}_{ab}-{\cal L}_{quad}=16\pi \left[N^{b}_{cd}f^{da}N^{c}_{ab}-\frac{1}{D}N^{r}_{ra}f^{ab}N^{s}_{sb}\right]\,\] where we used the fact that \[\partial_{c}f^{ab}=\partial_{c}\sqrt{-g}g^{ab}+\sqrt{-g}\partial_{c}g^{ab}= \Gamma^{d}_{dc}f^{ab}-\Gamma^{a}_{cd}f^{db}-\Gamma^{b}_{dc}f^{da}\] together with the condition \(\nabla_{c}g^{ab}=0\), which implies \[\partial_{c}\sqrt{-g}=\Gamma^{d}_{dc}\sqrt{-g}\,\quad\partial_{c}g^{ ab}=-(\Gamma^{a}_{cd}g^{db}+\Gamma^{b}_{cd}g^{da})\.\] The final form of the covariant Hamiltonian for unimodular gravity contains terms with the unimodular constraint and the true cosmological constant \(\bar{\Lambda}\). The phase-space form of the action then reads \[S=\int d^{D+1}x(N^{c}_{ab}\partial_{c}f^{ab}-{\cal H}_{kin}-\frac{1}{16\pi}(- f)^{\frac{1}{D-1}}\bar{\Lambda}-\frac{1}{16\pi}\lambda((-f)^{\frac{1}{D-1}}-1))\, \tag{37}\] where \(\lambda\) is a Lagrange multiplier corresponding to the unimodular constraint. From the action above we determine the corresponding equations of motion by varying with respect to \(f^{ab},N^{c}_{ab}\) and \(\lambda\): \[\delta S=\int d^{D+1}x(\delta N^{c}_{ab}\partial_{c}f^{ab}+N^{c}_{ ab}\partial_{c}\delta f^{ab}-\] \[-\frac{\delta\mathcal{H}_{kin}}{\delta N^{c}_{ab}}\delta N^{c}_{ ab}-\frac{\delta\mathcal{H}_{kin}}{\delta f^{ab}}\delta f^{ab}-\] \[-\frac{1}{16\pi(D-1)}(\lambda+\bar{\Lambda})(-f)^{\frac{1}{D-1}} \delta f^{ab}f_{ab}-\delta\lambda((-f)^{\frac{1}{D-1}}-1))=0\, \tag{38}\] which implies the following equations of motion \[\partial_{c}f^{ab}=\frac{\delta\mathcal{H}}{\delta N^{c}_{ab}}\,\quad(-f)^{\frac{1}{D-1}}-1=0\,\] \[-\partial_{c}N^{c}_{ab}=\frac{\delta\mathcal{H}}{\delta f^{ab}}+ \frac{\lambda}{16\pi(D-1)}(-f)^{\frac{1}{D-1}}f_{ab}+\frac{\bar{\Lambda}}{16 \pi(D-1)}(-f)^{\frac{1}{D-1}}f_{ab}\, \tag{39}\] or explicitly \[\partial_{c}f^{ab}=16\pi[N^{a}_{cd}f^{db}+N^{b}_{cd}f^{da}-\frac{ 1}{D}(f^{bd}N^{s}_{sd}\delta^{a}_{c}+f^{ad}N^{s}_{sd}\delta^{b}_{c})]\,\] \[-\partial_{c}N^{c}_{ab}=\frac{16\pi}{2}(N^{d}_{ca}N^{c}_{bd}+N^{d }_{cb}N^{c}_{ad})-\] \[-\frac{16\pi}{D}N^{r}_{ra}N^{s}_{sb}+\frac{\lambda}{16\pi(D-1)}(-f )^{\frac{1}{D-1}}f_{ab}+\frac{\bar{\Lambda}}{16\pi(D-1)}(-f)^{\frac{1}{D-1}}f _{ab}\,\] \[(-f)^{\frac{1}{D-1}}-1=0\. \tag{40}\] Taking the trace of the second equation we can determine \(\lambda\) as \[\lambda=\frac{16\pi(D-1)}{(D+1)}(-\partial_{c}N^{c}_{ab}f^{ab}-16 \pi N^{d}_{ca}f^{ab}N^{c}_{bd}+\frac{16\pi}{D}N^{r}_{ra}f^{ab}N^{s}_{sb})- \bar{\Lambda}\, \tag{41}\] where we took into account the unimodular constraint, the last equation in (40). Then the equations of motion for \(N^{c}_{ab}\) take the form \[-\partial_{c}N^{c}_{ab}=\frac{16\pi}{2}(N^{d}_{ca}N^{c}_{bd}+N^{d}_ {cb}N^{c}_{ad})-\frac{16\pi}{D}N^{r}_{ra}N^{s}_{sb}+\] \[+\frac{1}{(D+1)}(-\partial_{j}N^{j}_{ik}f^{ik}-16\pi N^{d}_{ci}f^{ ik}N^{c}_{kd}+\frac{16\pi}{D}N^{r}_{ri}f^{ik}N^{s}_{sk})f_{ab}\. \tag{42}\] Clearly this equation is traceless, and all dependence on the cosmological constant \(\bar{\Lambda}\) disappears, which is the essence of unimodular gravity. 
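Indeed, the tracelessness can be verified directly: contracting (42) with \(f^{ab}\) and using \(f_{ab}f^{ab}=D+1\), the quadratic terms cancel and both sides reduce to \[-\partial_{c}N^{c}_{ab}f^{ab}=-\partial_{j}N^{j}_{ik}f^{ik}\,\] which holds identically, so the trace of (42) carries no independent information.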
On the other hand, let us calculate the trace of the first equation, which gives \[\partial_{c}f^{ab}f_{ab}=16\pi[N^{a}_{cd}f^{db}+N^{b}_{cd}f^{da}-\frac{1}{D}(f ^{bd}N^{s}_{sd}\delta^{a}_{c}+f^{ad}N^{s}_{sd}\delta^{b}_{c})]f_{ba} \tag{43}\] and can be simplified into the form \[\partial_{c}\ln(-f)=32\pi\left[\frac{D-1}{D}\right]N^{s}_{sc}\.\] Now taking into account the unimodular constraint we immediately get the condition \[N^{s}_{sc}=0 \tag{44}\] that can be interpreted as a secondary constraint. On the other hand, the condition (44) seems to be too strong, so we should discuss it in more detail. We begin with the recapitulation that unimodular gravity in the covariant Hamiltonian formalism is described by the canonically conjugate variables \(f^{ab},N^{c}_{ab}\), restricted by the unimodular condition together with (44). In order to find a proper interpretation of the constraint (44) it is instructive to derive the general relativity variables from \(f^{ab},N^{c}_{ab}\). As the first step let us consider a linear combination of \(N^{c}_{ab}\) that we denote as \(\Gamma^{c}_{ab}\) and which is given by the following prescription \[\Gamma^{c}_{ab}=-16\pi N^{c}_{ab}+\frac{16\pi}{D}(N^{d}_{da}\delta^{c}_{b}+N^ {d}_{bd}\delta^{c}_{a}). \tag{45}\] This can always be done, and we should stress that at this stage \(\Gamma^{c}_{ab}\) is not related to \(f^{ab}\) at all. Clearly \(\Gamma^{c}_{ab}=\Gamma^{c}_{ba}\). We then define a covariant derivative with \(\Gamma^{c}_{ab}\) as the connection coefficients. Let us further define \(g^{ab}\) and its inverse \(g_{ab}\) in the following way \[g^{ab}=f^{ab}(-f)^{\frac{1}{1-D}}\,\quad g_{ab}=f_{ab}(-f)^{\frac{1}{D-1}}. \tag{46}\] Let us then define the covariant derivative of \(g^{ab}\) as \[\nabla_{c}g^{ab}=\partial_{c}g^{ab}+\Gamma^{a}_{cd}g^{db}+\Gamma^{b}_{cd}g^{da}\, \tag{47}\] which, using (45), takes the form \[\nabla_{c}g^{ab}=(-f)^{\frac{1}{1-D}}\times\] \[\times[\partial_{c}f^{ab}-16\pi N^{a}_{cd}f^{db}-16\pi N^{b}_{cd }f^{da}+\frac{16\pi}{D}f^{bd}N^{r}_{dr}\delta^{a}_{c}+\frac{16\pi}{D}N^{r}_{dr }f^{da}\delta^{b}_{c}]=0\, \tag{48}\] where we used the first equation in (40), which also implies \(f_{mn}\partial_{c}f^{mn}=32\pi\frac{D-1}{D}N^{s}_{sc}\). Now, thanks to the equation \(\nabla_{c}g^{ab}=0\), we can express \(\Gamma^{a}_{bc}\) in the form of the Christoffel symbols \[\Gamma^{a}_{bc}=\frac{1}{2}g^{ad}(\partial_{b}g_{dc}+\partial_{c}g_{db}- \partial_{d}g_{bc}). \tag{49}\] On the other hand, let us return to the relation between \(\Gamma^{a}_{bc}\) and \(N^{a}_{bc}\), whose trace takes the form \[\Gamma^{f}_{fa}=\frac{32\pi}{D}N^{f}_{fa}\, \tag{50}\] so that the condition \(N^{s}_{sa}=0\) implies \[\Gamma^{s}_{sa}=0. \tag{51}\] At the same time, from (49) we obtain \[\Gamma^{f}_{fc}=\frac{1}{2}g^{fd}\partial_{c}g_{df}=\frac{1}{2}\partial_{c}\ln(-\det g)=0\, \tag{52}\] so that the condition \(N^{s}_{sc}=0\) is equivalent to the unimodular condition. It is important to stress that the fact that the unimodular constraint implies \(\Gamma^{s}_{sa}=0\) has not been widely appreciated, with the exception of the recent interesting paper [10], where it was stressed that the equivalence between general relativity and unimodular gravity is non-trivial. Rather, it was argued there that the natural geometry for unimodular relativity is equiprojective geometry [39]. We also see that the condition \(N^{s}_{sa}=0\) emerges naturally in the covariant canonical formalism of unimodular gravity. 
## 4 Covariant Form of Henneaux-Teitelboim Unimodular Gravity In this section we perform the covariant canonical analysis of the Henneaux-Teitelboim formulation of unimodular gravity, which has the form \[S=\frac{1}{16\pi}\int d^{D+1}x\,[\sqrt{-g}R+\lambda(\sqrt{-g}-\partial_{a}\tau^{a})]\, \tag{53}\] where \(\tau^{a}\) is a vector density and \(\lambda\) is a Lagrange multiplier. Now the equation of motion for \(\lambda\) implies \[\sqrt{-g}-\partial_{a}\tau^{a}=0 \tag{54}\] while the equation of motion for \(\tau^{a}\) leads to \[\partial_{a}\lambda=0. \tag{55}\] It is clear that the covariant Hamiltonian formulation of this theory is almost the same as in the previous case, with the difference that there is a momentum conjugate to \(\tau^{a}\). Writing \(\partial_{a}\tau^{a}=\partial_{b}\tau^{a}\delta^{b}_{a}\) we obtain the momentum conjugate to \(\tau^{a}\) equal to \[p^{b}_{a}=\frac{\delta{\cal L}}{\delta\partial_{b}\tau^{a}}=-\frac{1}{16\pi} \lambda\delta^{b}_{a}\, \tag{56}\] which can be interpreted as a set of primary constraints of the theory \[{\cal G}^{b}_{a}\equiv p^{b}_{a}+\frac{1}{16\pi}\lambda\delta^{b}_{a}. \tag{57}\] In fact, the bare Hamiltonian is defined as \[{\cal H}_{B}=p^{b}_{a}\partial_{b}\tau^{a}+\partial_{c}f^{ab}N^{ c}_{ab}-{\cal L}=\] \[=16\pi[N^{b}_{cd}f^{da}N^{c}_{ab}-\frac{1}{D}N^{r}_{ra}f^{ab}N^{s} _{sb}]-\frac{1}{16\pi}\lambda(-f)^{\frac{1}{D-1}}\] and we see that the dependence on the momenta \(p^{b}_{a}\) is missing. For that reason we should consider the Hamiltonian with the primary constraints included, \[{\cal H}_{T}=16\pi[N^{b}_{cd}f^{da}N^{c}_{ab}-\frac{1}{D}N^{r}_{ra }f^{ab}N^{s}_{sb}]-\] \[\frac{1}{16\pi}\lambda(-f)^{\frac{1}{D-1}}+\Gamma^{a}_{b}(p^{b}_ {a}+\frac{1}{16\pi}\lambda\delta^{b}_{a})\] and consider the corresponding equations of motion that arise from the variation of the canonical form of the action \[S=\int d^{D+1}x(\partial_{c}f^{ab}N^{c}_{ab}+p^{a}_{b}\partial_{a} \tau^{b}-16\pi[N^{b}_{cd}f^{da}N^{c}_{ab}-\frac{1}{D}N^{r}_{ra}f^{ab}N^{s}_{sb}] +\] \[+\frac{1}{16\pi}\lambda(-f)^{\frac{1}{D-1}}+\Gamma^{a}_{b}(p^{b}_ {a}+\frac{1}{16\pi}\lambda\delta^{b}_{a}))\] so that the equations of motion have the form \[\partial_{c}f^{ab}=16\pi[N^{a}_{cd}f^{db}+N^{b}_{cd}f^{da}-\frac{ 1}{D}(f^{bd}N^{s}_{sd}\delta^{a}_{c}+f^{ad}N^{s}_{sd}\delta^{b}_{c})]\,\] \[-\partial_{c}N^{c}_{ab}=\frac{16\pi}{2}(N^{d}_{ca}N^{c}_{bd}+N^{ d}_{cb}N^{c}_{ad})-\frac{16\pi}{D}N^{r}_{ra}N^{s}_{sb}+\frac{\lambda}{(D-1)}(-f)^{ \frac{1}{D-1}}f_{ab}\,\] \[(-f)^{\frac{1}{D-1}}+\Gamma^{a}_{a}=0\,\quad\partial_{b}\tau^{a}+ \Gamma^{a}_{b}=0\,\quad\partial_{a}p^{a}_{b}=0\,\quad p^{b}_{a}+\frac{1}{16\pi}\lambda \delta^{b}_{a}=0\. \tag{61}\] If we combine the first and the second equation on the third line we find \[(-f)^{\frac{1}{D-1}}=\partial_{a}\tau^{a}\, \tag{62}\] which has exactly the same form as equation (54). Taking further the partial derivative of the fourth equation on the third line we obtain \[\partial_{b}p^{b}_{a}=-\frac{1}{16\pi}\partial_{a}\lambda\, \tag{63}\] which, using the third equation on the same line, implies that \[\partial_{a}\lambda=0. \tag{64}\] This equation shows that \(\lambda\) is constant, so that it can be interpreted as an integration constant. It can then be argued, in the same way as in the previous section, that the equations (61) are equivalent to the Lagrangian equations of Henneaux-Teitelboim gravity. In other words, the covariant Hamiltonian description of Henneaux-Teitelboim gravity is equivalent to the corresponding Lagrangian description, which is a nice consistency check of the covariant canonical formalism. 
**Acknowledgement:** The work of JK is supported by the grant "Dualities and higher order derivatives" (GA23-06498S) from the Czech Science Foundation (GACR).
2309.08347
Reward Engineering for Generating Semi-structured Explanation
Semi-structured explanation depicts the implicit process of a reasoner with an explicit representation. This explanation highlights how available information in a specific query is utilised and supplemented with information a reasoner produces from its internal weights towards generating an answer. Despite the recent improvements in generative capabilities of language models, producing structured explanations to verify a model's true reasoning capabilities remains a challenge. This issue is particularly pronounced for not-so-large LMs (e.g., FLAN-T5-XXL). In this work, we first underscore the limitations of supervised fine-tuning (SFT) in tackling this challenge, and then introduce a carefully crafted reward engineering method in reinforcement learning (RL) to better address this problem. We investigate multiple reward aggregation methods and provide a detailed discussion which sheds light on the promising potential of RL for future research. Our proposed method on two semi-structured explanation generation benchmarks (ExplaGraph and COPA-SSE) achieves new state-of-the-art results.
Jiuzhou Han, Wray Buntine, Ehsan Shareghi
2023-09-15T12:10:03Z
http://arxiv.org/abs/2309.08347v2
# Reward Engineering for Generating Semi-structured Explanation ###### Abstract Semi-structured explanation depicts the implicit process of a reasoner with an explicit representation. This explanation highlights how available information in a specific query is supplemented with information a reasoner produces from its internal weights towards generating an answer. Despite the recent improvements in generative capabilities of language models, producing structured explanations to verify a model's true reasoning capabilities remains a challenge. This issue is particularly pronounced for not-so-large LMs, as the reasoner is expected to couple a sequential answer with a structured explanation which embodies both the _correct presentation_ and the _correct reasoning process_. In this work, we first underscore the limitations of supervised fine-tuning (SFT) in tackling this challenge, and then introduce a carefully crafted reward engineering method in reinforcement learning (RL) to better address this problem. We investigate multiple reward aggregation methods and provide a detailed discussion which sheds light on the promising potential of RL for future research. Our proposed reward on two semi-structured explanation generation benchmarks (ExplaGraph and COPA-SSE) achieves new state-of-the-art results. 1 Footnote 1: Our code is available at [https://github.com/Jiuzhouh/Reward-Engineering-for-Generating-SEG](https://github.com/Jiuzhouh/Reward-Engineering-for-Generating-SEG). ## 1 Introduction Language models have shown great capability in complex reasoning tasks (Touvron et al., 2023; Bubeck et al., 2023; Touvron et al., 2023; Chung et al., 2022; Brown et al., 2020; Yang et al., 2018; Lin et al., 2019). Despite their proficiency in generating accurate results, a comprehensive assessment of the models' true capabilities in reaching the correct output necessitates an explainable mechanism. In this spirit, generating structured explanations (Saha et al., 2021; Brassard et al., 2022) offers an effective pathway, as such explanations are explicit (as opposed to unstructured natural language explanations) in representing the intricate relationships between facts employed during reasoning, and are easier to evaluate. An example of an explanation graph is shown in Figure 1.

Figure 1: Given the belief and argument, the task is to predict the stance (support/counter) and generate an explanation graph representing the reasoning process. The explanation graph in the SFT+RL output is more expressive, and also contains an external commonsense concept.

Generation of a reasoning path in large language models (LLMs), like the GPT family (OpenAI, 2023), has been mostly orchestrated by Chain-of-Thought (CoT) (Kojima et al., 2022; Wei et al., 2022), and more recently Tree-of-Thought (Yao et al., 2023) and Graph-of-Thought (Yao et al., 2023) approaches. These approaches enable LLMs to generate an internal reasoning process before producing a response. While this has a proven impact on improving models' predictive capabilities, categorical evaluation of the reasoning steps in the unstructured textual format is difficult and non-trivial. One might assume that an ideal structured representation of explanation could also be produced by LLMs via in-context prompting, but it has been demonstrated that LLMs have major struggles in generating structured data even via few-shot prompting (Han et al., 2023). This necessitates more attention on the generation of structured explanations from language models. 
A more mature space of research has focused on investigating semi-structured explanation generation (SEG) via smaller language models. Saha et al. (2021) propose to use multiple models for predicting the answer, internal nodes, external nodes, and relations. Cui et al. (2023) incorporate a generative pre-training mechanism over synthetic graphs, aligning input pairs of text and graph, to improve the model's capability in generating semi-structured explanations. Both works train separate models for the prediction of the response and the generation of the explanation. However, it is reasonable to anticipate that even a moderate-size language model should have the ability to generate both answers and the necessary structured explanations. In this paper, we put the generation of structured explanations at the center of our focus. To investigate this further, we opted to employ a single moderate-size language model, FLAN-T5-XXL (13B) (Chung et al., 2022), and train it to undertake both response prediction and explanation generation. We first show the inadequacy of supervised fine-tuning (SFT) for equipping the backbone model for SEG. To mitigate this problem, we investigate the integration of reinforcement learning (RL) alongside the traditional SFT approach. Specifically, we design a reward engineering method in RL and explore multiple reward aggregation methods. We demonstrate the effectiveness of our proposed method on two SEG benchmarks, ExplaGraph (Saha et al., 2021) and COPA-SSE (Brassard et al., 2022), and achieve new state-of-the-art results. We also provide an in-depth discussion on RL use for SEG in particular, and hope the findings of our work shed light on both challenges and potentials of RL in the broader space of graph generation. ## 2 Semi-structured Explanation Structured explainability refers to a specific form of explanation that highlights the underlying decision-making processes of the model via an explicit representation of relationships between different reasoning factors. For a more comprehensive overview of datasets in this space, please see Saha et al. (2021). In this work, we focus on two question-answering tasks, ExplaGraph (Saha et al., 2021) and COPA-SSE (Brassard et al., 2022), and provide a brief overview of them in what follows. An example of each task is provided in Table 1, and a schematic instance is sketched after this section. **ExplaGraph.** Given a belief and an argument, the task requires a model to predict whether a certain _argument_ supports or counters a _belief_. Each instance in the data is also accompanied by a commonsense explanation graph which reveals the internal reasoning process involved in inferring the predicted stance. The explanation graph is a connected directed acyclic graph (DAG), in which the nodes are concepts (short English phrases) and the relations are chosen based on ConceptNet (Liu and Singh, 2004). Concepts are either internal (part of the belief or the argument) or external (part of neither but necessary for the explanation). Semantically, the explanation graphs are commonsense-augmented structured arguments that explicitly support or counter the belief. **COPA-SSE.** The semi-structured explanation in COPA-SSE need not be a DAG. The difficulty of these two tasks is that the model first needs to correctly understand the question and answer it, and then generate a reasonable and semantically correct semi-structured explanation. The answers are in the form of unstructured natural language, while the explanations are of structured format. 
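For concreteness, an ExplaGraph-style instance can be pictured as in the sketch below; the field values and the particular linearisation tokens are our own illustrative choices, not the exact conventions of the dataset.

```python
# Illustrative ExplaGraph-style instance; the values and the linearisation
# format are assumed for illustration, not the dataset's exact conventions.
instance = {
    "belief": "Factory farming should be banned.",
    "argument": "Factory farming causes a lot of animal suffering.",
    "stance": "support",
    "graph": [  # (head, relation, tail) with ConceptNet-style relations
        ("factory farming", "capable of", "animal suffering"),
        ("animal suffering", "is a", "cruelty"),
    ],
}

# One possible linearisation coupling the answer with the structured graph:
target = instance["stance"] + " " + "".join(
    f"({h}; {r}; {t})" for h, r, t in instance["graph"])
print(target)  # -> support (factory farming; capable of; ...)(...)
```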
Tasking a model to generate both modalities, as we will show in the experiment section, imposes a major challenge. ## 3 Reward Engineering for SEG Motivated by the success of reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022; Dubois et al., 2023; Touvron et al., 2023b) in LLMs, we propose to use RL for the semi-structured explanation generation task. To achieve this, we design a reward engineering method that incorporates different sources of reward. RLHF typically begins with a pre-trained LLM, which is fine-tuned with supervised learning on a downstream task, yielding the SFT model. It then contains two phases: the reward modelling phase and the RL fine-tuning phase. Our reward engineering is designed to improve the reward modelling phase. The objective of RL fine-tuning is to optimize the policy model against a reward model. In our work, we use proximal policy optimization (PPO) (Schulman et al., 2017). ### Reward Model In the reward modelling phase, given the input and a generated output, the reward model, \(R_{\phi}\), generates a single scalar representing its overall quality. To train a reward model, we first need to collect paired preference data. In this work, we generate the paired data using the SFT model, which is fine-tuned on the semi-structured explanation task. The SFT model generates outputs from the training data; we then pair each generated output with its corresponding reference. To improve the quality of the paired preference data, we filter out the pairs where the generated output is identical to the reference. In each pair, the reference is regarded as the preferred data. The filtered paired preference data is then used to train the reward model. ### Reward Metric In addition to collecting the reward from the reward model, we propose to collect another reward from evaluation metrics. This metric reward explicitly reflects the quality of the generated output and is naturally complementary to the reward from the reward model. Since the semi-structured explanation is represented as a set of triples (i.e., [head, relation, tail]), following previous work (Saha et al., 2021) we consider each triple as a sentence and use existing text matching metrics to calculate the graph matching score. Specifically, BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019) are extended to Graph-BLEU, Graph-ROUGE and Graph-BERTScore. ### Reward Aggregation The reward model \(R_{\phi}\) takes an input prompt \(x\) and a generated output \(y\), and produces a single scalar \(R_{\phi}(x,y)\). For the metric reward, given the generated output \(y\) and the reference \(y^{\prime}\), the evaluation metric \(R_{m}\) is used to calculate a metric score as the reward \(R_{m}(y,y^{\prime})\). To aggregate the two rewards, an important premise is that their orders of magnitude should not differ too much (e.g., 0.01 vs 100), otherwise the effect of one reward could be washed away. To regulate this, we explore various aggregation configurations for the final reward \(R(x,y,y^{\prime})\), \[R(x,y,y^{\prime})=\alpha R_{\phi}(x,y)+(1-\alpha)R_{m}(y,y^{\prime}) \tag{1}\] where \(\alpha\) is a coefficient controlling the weights of the different rewards. In the RL phase, we use the total reward to provide feedback to the language model. 
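To make the aggregation concrete, a minimal sketch is given below. Here `triple_f1` is a toy stand-in for the graph metrics of Section 3.2 (the actual experiments use Graph-BERTScore rather than exact match), and all names and numbers are illustrative.

```python
# Minimal sketch of the reward aggregation of Eq. (1). triple_f1 is a toy
# stand-in for a graph metric such as Graph-BERTScore (here: exact-match F1
# over triples treated as sentences); names and values are illustrative.
from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def triple_f1(pred: List[Triple], ref: List[Triple]) -> float:
    pred_set, ref_set = set(pred), set(ref)
    tp = len(pred_set & ref_set)
    if tp == 0:
        return 0.0
    p, r = tp / len(pred_set), tp / len(ref_set)
    return 2 * p * r / (p + r)

def total_reward(r_model: float, pred: List[Triple], ref: List[Triple],
                 alpha: Optional[float] = None) -> float:
    """alpha=None gives the unweighted aggregation R_phi + R_m; otherwise
    the weighted combination of Eq. (1) is applied."""
    r_metric = triple_f1(pred, ref)
    if alpha is None:
        return r_model + r_metric
    return alpha * r_model + (1 - alpha) * r_metric

pred = [("guns", "capable of", "crime"), ("crime", "is a", "danger")]
ref = [("guns", "capable of", "crime"), ("crime", "causes", "harm")]
print(total_reward(r_model=0.42, pred=pred, ref=ref))  # 0.42 + 0.5 = 0.92
```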
For the RL fine-tuning phase, we formulate the following optimization problem, \[\max_{\pi_{\theta}}\mathbb{E}_{x\sim\mathcal{D},y\sim\pi_{\theta }(y|x)}\left[R(x,y,y^{\prime})\right]\\ -\beta\mathbb{D}_{\mathrm{KL}}\left[\pi_{\theta}(y\mid x)\|\pi_{ \mathrm{ref}}(y\mid x)\right] \tag{2}\] where \(\beta\) is a KL coefficient controlling the deviation from the base reference policy \(\pi_{\mathrm{ref}}\), which is the initial SFT model. In practice, the language model policy \(\pi_{\theta}\) is also initialised to the initial SFT model. ## 4 Experiment ### Datasets ExplaGraph (Saha et al., 2021) contains 2,368/398/400 samples as training/dev/test set. Since the labels of the test set are not public, we provide the evaluation results on the dev set.2 As shown in Table 1, for ExplaGraph, the instruction we use is "_Predict the stance and generate an explanation graph given the belief and argument._" We concatenate the instruction with the belief and argument as input, and the output is a stance concatenated with a linearised explanation graph. COPA-SSE (Brassard et al., 2022) contains 1,000/500 samples as training/test set. Since each instance in COPA-SSE contains multiple human-rated semi-structured explanations, we only use the one with the highest rating score as the reference. For COPA-SSE, the instruction we use is "_Given the premise, choose from a or b and generate an explanation graph._" This instruction is concatenated with the premise and two options as input. The output is the answer along with a semi-structured explanation. ### Models For the supervised fine-tuning, we use FLAN-T5-XXL (13B) (Chung et al., 2022) as the backbone model. We conduct instruction-tuning on it using LoRA (Hu et al., 2022), which is a parameter-efficient fine-tuning method. FLAN-T5-XXL is an encoder-decoder model, which generally performs better than decoder-only models such as LLaMA (Touvron et al., 2023) in transduction tasks that need a deep understanding of the input (Fu et al., 2023). For reward modelling, we use LLaMA-7B. We first fine-tune the pre-trained LLaMA-7B on the task data; the reward model is then initialised from the fine-tuned LLaMA-7B checkpoint. This helps the reward model to better understand the input and improves performance. The training details are provided in Appendix A. ### Evaluation Metrics For ExplaGraph evaluation, we use the same evaluation metrics provided with ExplaGraph (Saha et al., 2021): Stance Accuracy (SA), Structural Correctness Accuracy of Graphs (StCA), Semantic Correctness Accuracy of Graphs (SeCA), Graph-BertScore (G-BS), Graph Edit Distance (GED), and Edge Accuracy (EA). For COPA-SSE evaluation, we use Answer Accuracy (AA), Triple Match F1 Score (T-F1), Graph Match F1 Score (G-F1), Graph-BertScore (G-BS), and Graph Edit Distance (GED). Detailed descriptions of the above metrics are provided in Appendix B. ### Results We present the evaluation results on the ExplaGraph dev set in Table 2, comparing with other baseline methods. The results show that SFT on FLAN-T5-XXL achieves higher SA and EA than all baseline methods. When only one reward is used in RL, the performance is largely improved. The metric reward we use is G-BERTScore and the KL coefficient \(\beta\) is 0.3 for RL, which is the best setting based on our experiments. Using the single metric reward \(R_{m}\) is more effective than using the reward model \(R_{\phi}\). 
The aggregation of \(R_{\phi}\) and \(R_{m}\) without using weights has the best performance among all settings, outperforming the baselines on five metrics by a large margin. The aggregation of the two rewards using weights performs even worse than using a single reward. We speculate that using weights decreases the effect of the two rewards, thus exerting an undesired influence on the RL. The evaluation results on COPA-SSE are shown in Table 3. Using RL can steadily improve the performance of the SFT model, especially when conducting reward aggregation without using weights. This is consistent with the results on the ExplaGraph dataset.

Table 2: The evaluation results on the ExplaGraph dev set. The weight factor \(\alpha\) used in the last setting is 0.9. Boldface shows the best result for a column, and arrows indicate the direction of improvement, i.e., \(\uparrow\): higher is better.

Table 3: The evaluation results on the COPA-SSE test set. The weight factor \(\alpha\) used in the last setting is 0.5. Boldface shows the best result for a column, and arrows indicate the direction of improvement, i.e., \(\uparrow\): higher is better.

| | AA\(\uparrow\) | T-F1\(\uparrow\) | G-F1\(\uparrow\) | G-BS\(\uparrow\) | GED\(\downarrow\) |
| --- | --- | --- | --- | --- | --- |
| FLAN-T5-XXL - SFT | 96.0 | 1.78 | 7.71 | 66.60 | 20.00 |
| + RL with only \(R_{\phi}\) | 96.6 | 1.95 | 11.67 | 67.86 | 18.74 |
| + RL with only \(R_{m}\) | 96.4 | 1.83 | 10.58 | 66.93 | 19.00 |
| + RL with \(R_{\phi}\), \(R_{m}\) without weights | **96.8** | **2.25** | **12.39** | **68.58** | **17.62** |
| + RL with \(R_{\phi}\), \(R_{m}\) with weights | 96.4 | 1.85 | 10.77 | 67.08 | 18.95 |

### Effect of Different Metrics in \(R_{m}\) In Section 3.2, we introduced three metrics, Graph-BLEU, Graph-ROUGE and Graph-BERTScore, which could be used to calculate \(R_{m}\). To probe the effect of these metrics, we conduct probing experiments on ExplaGraph. The results are shown in Table 4. Graph-BERTScore performs best among all metrics. We assume this is because BLEU and ROUGE are calculated using overlapping n-grams; for graph-structured data containing multiple triples, the calculation of n-grams becomes less meaningful. Graph-BERTScore, by contrast, is a semantic evaluation metric which remains useful for graph-structured data, thus leading to better performance in \(R_{m}\). ### Effect of KL Coefficient \(\beta\) In RL, the KL coefficient \(\beta\) is a significant parameter controlling the deviation from the SFT model. 
To investigate the effect of \(\beta\), we conduct experiments on the ExplaGraph dataset using different values of \(\beta\) (from 0.1 to 1.0). The results are demonstrated in Table 5. As \(\beta\) increases from 0.1, the performance becomes better until \(\beta\) exceeds 0.3. From 0.3 to 1.0, the performance goes down gradually, although these settings achieve the highest SA. In general, setting \(\beta\) to 0.3 leads to the best performance in both the ExplaGraph and COPA-SSE tasks. When \(\beta\) is small (e.g., 0.1) the new model deviates far from the old model. In this case, although there is a slight improvement, the model may also learn some undesired pattern to achieve higher rewards (i.e., reward hacking). As \(\beta\) increases, it forces the new model to remain close to the old model, leading to a steady improvement. When \(\beta\) is close to 1.0, the performance is almost identical to SFT. ### Effect of Weight Factor \(\alpha\) In our reward aggregation method, a weight factor \(\alpha\) is used to control the importance of the different rewards. Although the reward aggregation method without weights (i.e., removing \(\alpha\) and \(1-\alpha\)) performs better, here we investigate the effect of \(\alpha\) (from 0.1 to 1.0). The results are shown in Table 6. There is no explicit pattern in the results but, in general, larger values of \(\alpha\) result in better performance. This means that in reward aggregation the reward from the reward model \(R_{\phi}\) is more significant than the metric reward \(R_{m}\). ### Discussion During our experiments, an important finding is that RL is expected to help with inferring new information. This means that for tasks where the model needs to generate outputs containing inferred information, RL could be a promising method. For example, in the SEG task, the explanation graph may contain some external commonsense concepts, which should be inferred from the model's weights (implicit knowledge). This is usually difficult for the SFT model, especially when encountering unseen data at test time. However, RL provides more possibilities for the model to learn to infer these external concepts. Our preliminary experiments using RL for other Text-to-Graph generation tasks (i.e., WebNLG) indicated that RL may not be an effective method when SFT reaches the task performance ceiling.

Table 6: The evaluation results on the ExplaGraph dev set using different values of the weight factor \(\alpha\). The KL coefficient \(\beta\) used is 0.3 for all experiments.

| | SA\(\uparrow\) | StCA\(\uparrow\) | SeCA\(\uparrow\) | G-BS\(\uparrow\) | GED\(\downarrow\) | EA\(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| FLAN-T5-XXL - SFT | 91.71 | 46.98 | 35.18 | 36.14 | 0.66 | 31.23 |
| + RL, \(\alpha=0.1\) | 91.96 | 50.00 | 39.45 | 38.68 | 0.64 | 34.12 |
| + RL, \(\alpha=0.2\) | 92.46 | 46.48 | 36.18 | 35.82 | 0.67 | 31.22 |
| + RL, \(\alpha=0.3\) | **92.12** | 46.98 | 36.93 | 36.04 | 0.66 | 32.71 |
| + RL, \(\alpha=0.4\) | 91.71 | 52.76 | 39.95 | 40.83 | 0.62 | 35.59 |
| + RL, \(\alpha=0.5\) | 91.46 | 51.76 | 41.21 | 40.59 | 0.63 | 35.53 |
| + RL, \(\alpha=0.6\) | 91.71 | 57.58 | **42.46** | **44.43** | **0.59** | 37.85 |
| + RL, \(\alpha=0.7\) | 91.46 | 50.50 | 38.44 | 40.69 | 0.64 | 34.36 |
| + RL, \(\alpha=0.8\) | 91.71 | 48.99 | 35.18 | 39.65 | 0.65 | 32.58 |
| + RL, \(\alpha=0.9\) | 91.46 | **56.03** | **42.46** | 44.25 | **0.60** | **38.67** |
Table 5: The evaluation results on the ExplaGraph dev set using different values of the KL coefficient \(\beta\). For the reward aggregation in RL, we use the aggregation method without weights.

| | SA\(\uparrow\) | StCA\(\uparrow\) | SeCA\(\uparrow\) | G-BS\(\uparrow\) | GED\(\downarrow\) | EA\(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| FLAN-T5-XXL - SFT | 91.71 | 46.98 | 35.18 | 36.14 | 0.66 | 31.23 |
| + RL, \(\beta=0.1\) | 91.46 | 48.99 | 38.44 | 38.70 | 0.65 | 32.88 |
| + RL, \(\beta=0.2\) | 91.71 | 51.51 | 37.69 | 41.33 | 0.64 | 33.90 |
| + RL, \(\beta=0.3\) | 91.96 | **61.81** | **48.49** | **47.50** | **0.56** | **44.16** |
| + RL, \(\beta=0.4\) | **92.21** | 47.47 | 49.69 | 36.91 | 0.66 | 32.65 |
| + RL, \(\beta=0.5\) | **92.21** | 54.77 | 38.19 | 44.21 | 0.61 | 36.16 |
| + RL, \(\beta=0.6\) | **92.21** | 52.23 | 37.77 | 42.10 | 0.63 | 34.45 |
| + RL, \(\beta=0.7\) | **92.21** | 48.78 | 36.34 | 40.18 | 0.65 | 32.60 |
| + RL, \(\beta=0.8\) | **92.21** | 46.23 | 35.43 | 35.13 | 0.67 | 31.47 |
| + RL, \(\beta=0.9\) | **92.21** | 44.17 | 34.23 | 34.58 | 0.67 | 31.03 |
| + RL, \(\beta=1.0\) | **92.21** | 47.74 | 33.92 | 38.61 | 0.66 | 30.54 |

## 5 Conclusion In this work, we focused on the semi-structured explanation generation task and proposed to train a single model with SFT+RL to generate both answers and structured explanations. We highlighted the inadequacy of SFT in performing this complex task, and proposed a carefully designed reward engineering method in RL to better address this problem. We investigated different reward aggregation methods and conducted extensive experiments under different settings to better highlight the dynamics of the RL objective function and reward model choices. Our method achieves new SoTA results on two SEG benchmarks, ExplaGraph and COPA-SSE.
2309.07712
Normalized factorial moments of spatial distributions of particles in high multiplicity events: A Toy model study
In ultra-relativistic heavy-ion collisions a strongly interacting complex system of quarks and gluons is formed. The nature of the system so created and the mechanism of multi-particle production in these collisions may be revealed by studying the normalized factorial moments ($F_{{\rm{q}}}$) as a function of various parameters. The resilience of $F_{{\rm{q}}}$ moments studied using Toy model events shows that they are sensitive to the presence of dynamical fluctuations in the system and robust against uniform efficiencies in the data measurements. Results of this study serve as a suitable reference baseline for experimental and simulation studies.
Sheetal Sharma, Salman Khurshid Malik, Zarina Banoo, Ramni Gupta
2023-09-14T13:46:23Z
http://arxiv.org/abs/2309.07712v2
Normalized factorial moments of spatial distributions of particles in high multiplicity events: A Toy model study ###### Abstract In ultra-relativistic heavy-ion collisions a strongly interacting complex system of quarks and gluons is formed. The nature of the system so created and the mechanism of multi-particle production in these collisions may be revealed by studying the normalized factorial moments (\(F_{\rm q}\)) as a function of various parameters. The resilience of \(F_{\rm q}\) moments studied using Toy model events shows that they are sensitive to the presence of dynamical fluctuations in the system and robust against uniform efficiencies in the data measurements. Results of this study serve as a suitable reference baseline for experimental and simulation studies. ## I Introduction The Large Hadron Collider (LHC) at CERN, Switzerland [1], and the Relativistic Heavy-Ion Collider (RHIC) at BNL, USA [2], serve the purpose of studying the deconfined state of strongly interacting particles, the Quark-Gluon Plasma (QGP), and of unraveling its properties by colliding nuclei at ultra-relativistic energies [3; 4]. This state of matter can be obtained by colliding heavy nuclei at energies high enough that the energy density and temperature melt hadrons [5]. As the medium created cools down, a transition occurs from the QGP state to the hadronic state. One of the aims of these collider experiments is to probe the phase diagram of this strongly interacting matter and its properties [6; 7; 8]. In the phase diagram of a system, at a critical point, the correlation length of the system diverges and the system becomes scale invariant [9]. A study of critical behavior is vital for a deeper understanding of the properties of any system and for knowledge of the nature of the phase transition between different phases. At this point, the system exhibits large fluctuations in various observables [10; 11], and a study of these fluctuations provides a powerful means to understand the myriad characteristics of the system. Some of the features of the matter may be revealed only at very high energy density [12]. The charged particle multiplicity in heavy-ion collisions is a function of the energy and temperature. Fluctuations in the multiplicity distributions are one of the main observables of these collision experiments. In [13], it is suggested that the normalized factorial moments (NFMs) of the charged particle multiplicity distributions recorded using the colliders, as at the LHC, be analyzed for the study of bin-to-bin and event-to-event multiplicity fluctuations. A power-law behavior of the normalized factorial moments of the particle density fluctuations in spatial or momentum space with an increasing number of bins is termed intermittency [14; 15; 16]. This analysis technique has been applied to various systems at low energies in search of the QGP and to understand the quark-hadron phase transition [12]. However, investigations of heavy-ion collision data using this tool have led to no definitive conclusions on the critical point, the order of the phase transition, or the nature of multiplicity fluctuations [16], because of low bin multiplicities. The availability of data with high charged particle density per bin has generated renewed interest, such as at the STAR experiment at RHIC [17] and ALICE at CERN [18], in using this methodology to understand multiparticle production processes. 
A detailed study of the normalized factorial moments of the spatial distributions in high multiplicity events generated using the Toy model is reported here, as an extension of [19]. Section II introduces the Toy model event generation and details the analysis methodology. Observations and results of the analyses are discussed in section III, and a summary of the work is given in section IV. ## II Event generation and analysis methodology The processes leading to multiparticle production in heavy-ion collisions are still not fully understood. A number of methodologies and event generators, developed using theories and models, are studied to investigate the particle production mechanisms. In high-energy physics research, event generators are extensively employed by experimentalists to simulate experimental conditions and to understand outcomes of the experiments based on the known physics implemented in the models. Here, a baseline behavior of the normalized factorial moments, within the framework of the methodology proposed in [13], is investigated using events generated with a Toy model, to assess its suitability for analyzing heavy-ion collision data. Intermittency analysis, first proposed in 1986 [14] to investigate fluctuations in the pseudorapidity distributions of some cosmic ray events, has been suggested [13; 20; 21] for the high multiplicity data recorded by the detectors at the recent colliders, such as the LHC, to understand multiparticle production and the quark-hadron phase transition. The ALICE experimental setup at the LHC has a central barrel that includes the TPC, ITS, and TOF detectors. Together, these detectors measure the charged particles within a common angular phase space of pseudorapidity (\(|\eta|\leq 0.8\)) and full azimuthal angle (\(0\leq\varphi\leq 2\pi\)) [22; 23]. Toy model event samples are generated with charged particle multiplicity distributions similar to those recorded in the midrapidity region by the ALICE detector for low-\(p_{\mathrm{T}}\) tracks (\(<\) 2.0 GeV/c). Four event samples with multiplicity distributions as shown in Fig. 1, having mean multiplicities of 400, 900, 1300, and 1900, are simulated using the global random generator object (gRandom) provided by the ROOT [24] analysis framework, with the system clock as seed. Uniformly distributed particles are generated in the spatial phase space (\(\eta,\varphi\)), with \(|\eta|\leq 0.8\) and \(0\leq\varphi\leq 2\pi\), such that there are no bin-to-bin (spatial) fluctuations. Lattice QCD predicts large density fluctuations in a system at a critical point while it undergoes a phase transition. It is argued in [13] that the behavior of the NFM as a function of the number of bins is a good measure of fluctuations in the spatial configurations of the particles produced in heavy-ion collisions. This methodology, similar to the intermittency analysis proposed earlier in [14; 15; 25; 26], has been tested for some of the models at LHC energies [19; 27]. In the present work, a two-dimensional (2D) analysis is performed for the Toy model events, wherein for each event the kinematic phase space is partitioned into a square lattice with \(M_{\eta}\) and \(M_{\varphi}\) bins along \(\eta\) and \(\varphi\), respectively, as depicted in Fig. 2.
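For concreteness, this generation step might be sketched as follows (in Python/numpy rather than ROOT, purely as an illustration; the Gaussian spread of the event multiplicity is a hypothetical choice, since only the means of the distributions are specified above):

```python
import numpy as np

rng = np.random.default_rng()

def generate_toy_event(mean_multiplicity=900, rel_width=0.1):
    # Uniformly distributed tracks in |eta| <= 0.8 and 0 <= phi <= 2*pi,
    # so that there are no bin-to-bin (spatial) fluctuations by construction.
    # The Gaussian multiplicity spread (rel_width) is an assumption made for
    # illustration; the text only fixes the mean multiplicities (400-1900).
    n = max(1, int(rng.normal(mean_multiplicity, rel_width * mean_multiplicity)))
    eta = rng.uniform(-0.8, 0.8, size=n)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return eta, phi

# An event sample: a list of (eta, phi) track arrays.
events = [generate_toy_event() for _ in range(1000)]
```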
With \(M_{\eta}\!=\!M_{\varphi}\) there are \(M^{2}\) bins, such that the \(q^{\mathrm{th}}\) order normalized factorial moments (NFM), for a total of \(N\) events, are defined as \[F_{\mathrm{q}}(M)=\frac{\frac{1}{N}\sum_{e=1}^{N}\frac{1}{M^{2}}\sum_{i=1}^{M^{2}}f_{q}^{e}(n_{\mathrm{ie}})}{\left(\frac{1}{N}\sum_{e=1}^{N}\frac{1}{M^{2}}\sum_{i=1}^{M^{2}}f_{1}^{e}(n_{\mathrm{ie}})\right)^{q}} \tag{1}\] where \[f_{\mathrm{q}}^{e}(n_{\mathrm{ie}})=\prod_{j=0}^{q-1}(n_{\mathrm{ie}}-j), \tag{2}\] and \(n_{\mathrm{ie}}\) is the bin multiplicity (number of particles) in the \(i^{\mathrm{th}}\) bin of the \(e^{\mathrm{th}}\) event. \(q\) takes positive integer values \(\geq 2\). \(M\) is varied from 4 to 100, and \(q\) from 2 to 5, in integer steps. By definition, these normalized factorial moments (\(F_{\mathrm{q}}(M)\)) filter out statistical fluctuations: if the fluctuations in the spatial distributions of the particles are Poissonian, then \(F_{\mathrm{q}}(M)=1\) [14]. A power-law behavior of \(F_{q}\), for \(q\geq 2\), as a function of \(M^{\mathrm{D}}\) (D is the dimension of the phase space; in the present analysis D = 2), \[F_{\mathrm{q}}(M)\propto(M^{2})^{\phi_{\mathrm{q}}} \tag{3}\] then defines intermittency (also termed M-scaling), where \(\phi_{\mathrm{q}}>0\) is known as the intermittency or scaling index. A dependence of \(F_{\mathrm{q}}(M)\), for \(q>2\), on the second order factorial moment (\(F_{2}(M)\)) is proposed in [20; 28], where intermittency is studied within the framework of the Ginzburg-Landau formalism, such that \[F_{\mathrm{q}}(M)\propto(F_{2}(M))^{\beta_{\mathrm{q}}}, \tag{4}\] where \(\beta_{q}\) is an exponent that depends on the bin resolution, i.e., the number of bins \(M\). This scaling is termed F-scaling.
Figure 1: Multiplicity distributions for the four sets of event samples generated using the Toy model.
Figure 2: Graphic illustration showing the two-dimensional (\(\eta,\varphi\)) phase space partitioned into \(\mathrm{M}\times\mathrm{M}\) bins, and the mapping of the tracks of an event onto this partitioned phase space region (grid).
\(\beta_{q}\) and \(\phi_{q}\) depend on different critical parameters of the system; thus, F-scaling is independent of the M-scaling behavior. Relating \(\beta_{\rm q}\) to the order of the moment (\(q\)), a scaling exponent (\(\nu\)) can be obtained from the relation \[\beta_{\rm q}=(q-1)^{\nu}. \tag{5}\] The scaling exponent is independent of the specific values of the critical parameters and serves as a tool to examine the occurrence of the QCD phase transition [13; 20; 29]. The theoretical prediction for the value of \(\nu\) from Ginzburg-Landau (GL) theory [20] is 1.304, while calculations based on a two-dimensional Ising model give 1.0 [29]. The scaling behavior and the sensitivity of the NFM for the Toy model events are investigated here. Since various detector effects may distort the original signal, the efficiency correction method required to extract the true behavior of the NFM is also studied. ## III Observations and discussion ### Scaling of normalized factorial moments (NFM) An event-by-event analysis is performed, where an event is mapped onto a two-dimensional phase space partitioned into \(M^{2}\) bins. Event factorial moments are determined for q = 2, 3, 4, and 5 using Eq. 2. Normalized factorial moments (\(F_{q=2,3,4,5}(M)\)) are then calculated for the whole event sample, as defined in Eq. 1.
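As a concrete reference for the bookkeeping in Eqs. (1) and (2), the sketch below (numpy; not the analysis code used for this study) evaluates \(F_{\rm q}(M)\) for a sample of events given as \((\eta,\varphi)\) arrays, such as those produced by the previous sketch:

```python
import numpy as np

def factorial_moment(n, q):
    # f_q(n) = n (n - 1) ... (n - q + 1) of Eq. (2), applied bin by bin.
    out = np.ones_like(n, dtype=float)
    for j in range(q):
        out *= (n - j)
    return out

def nfm(events, M, q):
    # F_q(M) of Eq. (1): f_q averaged over the M x M bins and over events,
    # normalized by the q-th power of the same average of f_1.
    num, den, N = 0.0, 0.0, len(events)
    for eta, phi in events:
        counts, _, _ = np.histogram2d(eta, phi, bins=M,
                                      range=[(-0.8, 0.8), (0.0, 2.0 * np.pi)])
        num += factorial_moment(counts, q).mean()
        den += counts.mean()   # f_1(n) = n
    return (num / N) / (den / N) ** q
```

For the uniform events above, this returns values that are essentially independent of \(M\), reproducing the flat M-scaling discussed next.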
Fig. 3(a) shows the dependence of \(F_{q}\) on \(M^{2}\) in a log-log plot for an event sample with an average multiplicity of 900. Statistical uncertainties on the data points are estimated using the sub-sampling method. The second order NFM, \(F_{2}(M)\), is observed to be \(>1\), and as the value of \(q\) increases, the deviation of \(F_{q}\) from 1 increases. However, \(F_{q}\) shows no dependence on \(M\) (no power law as in Eq. 3): as \(M\) increases there is negligible change in \(F_{\rm q}\) (for all \(q\)), and thus no intermittency in the generated events. The absence of a significant rise in \(\ln F_{\rm q}\) with \(\ln M^{2}\) for the Toy model indicates the absence of self-similarity in the particle generation. F-scaling (Eq. 4), a \(\ln F_{q}(M)\) versus \(\ln F_{2}(M)\) plot for the same event sample, is shown in Fig. 3(b). For \(q>2\), \(F_{q}\) shows a linear dependence on \(F_{2}\), and line fits to these graphs give the slopes \(\beta_{q}\). The dimensionless scaling exponent (Eq. 5), which quantitatively characterizes the geometric spatial configurations of the Toy model events and is obtained from these slope values, is discussed in section III.3. ### Sensitivity of NFM to gauge fluctuations The independence of \(F_{\rm q}(M)\) of the number of bins (\(M^{2}\)) for the Toy model events, as observed above, can have two possible causes: either the observable \(F_{\rm q}(M)\) is not sensitive to bin-to-bin multiplicity fluctuations, or there are simply no bin-to-bin fluctuations in the events. The Toy model events have been simulated with uniform spatial distributions and no bin-to-bin fluctuations; thus, the behavior of the NFMs must also be investigated in the presence of fluctuations. To test the sensitivity of the observable to density fluctuations in the spatial configurations, fluctuations are added by hand (termed hereafter _artificial fluctuations_) to each event, and the scaling behavior of the normalized factorial moments is studied. The event sample that results after including artificial fluctuations in each Toy model event is termed the _modified Toy model_ event sample. To obtain such an event with artificial fluctuations, a number of tracks equal to 5% of the multiplicity of the event is added in some region of the phase space of the event. At the same time, an equal number of tracks is removed from the rest of the region, to keep the multiplicity distribution of the modified Toy model events the same as that of the original Toy model events. This makes some of the bins in the phase space more populated than the rest, thereby introducing multiplicity spikes in the spatial configurations, which replicates a system having critical fluctuations.
Figure 3: (Left) log-log plot of \(F_{\rm q}(M)\) vs \(M^{2}\) (M-scaling) for q = 2, 3, 4 and 5; (right) log-log plot of \(F_{\rm q}(M)\) vs \(F_{2}\) (F-scaling) for q = 3, 4 and 5, for the event sample with mean multiplicity \(\langle N\rangle=900\). Similar observations are made for the other event samples studied.
Figure 4: (Left) Spatial distribution of particles in the (\(\eta,\varphi\)) phase space of an event. (Right) Spatial distribution of particles of the same event with 5% of the tracks added in selected bins while an equal number of tracks is removed from the rest of the region (modified Toy model event).
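One way to implement this track reshuffling is sketched below (numpy; the hot-region boundaries are hypothetical, as the text does not specify which bins receive the extra tracks):

```python
import numpy as np

rng = np.random.default_rng()

def add_artificial_fluctuations(eta, phi, fraction=0.05,
                                hot_eta=(0.0, 0.2), hot_phi=(0.0, 0.5)):
    # Add tracks equal to `fraction` of the event multiplicity inside a small
    # "hot" region, and remove an equal number from the rest of the phase
    # space, so that the total multiplicity is unchanged.
    n_move = int(fraction * len(eta))
    outside = ~((eta >= hot_eta[0]) & (eta < hot_eta[1]) &
                (phi >= hot_phi[0]) & (phi < hot_phi[1]))
    drop = rng.choice(np.flatnonzero(outside), size=n_move, replace=False)
    keep = np.setdiff1d(np.arange(len(eta)), drop)
    eta = np.concatenate([eta[keep], rng.uniform(*hot_eta, size=n_move)])
    phi = np.concatenate([phi[keep], rng.uniform(*hot_phi, size=n_move)])
    return eta, phi
```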
Fig. 4 depicts the (\(\eta\), \(\varphi\)) lego plot of one such Toy model event (left panel), in which the phase space is partitioned into 40 bins, together with the same event after it is modified with artificial fluctuations (right panel). The analysis is performed on the modified Toy model events, and normalized factorial moments are determined for q = 2, 3, 4 and 5 with M varying from 4 to 100. The M-scaling behavior is studied; a log-log plot of \(F_{\text{q}}(\text{M})\) versus \(M^{2}\) is given in Fig. 5(a). A power-law dependence of \(F_{\text{q}}(\text{M})\) on \(M^{2}\) is observed: for high \(M\) values \(F_{\text{q}}\gg 1\), and this deviation increases as \(M\) increases. These results show that the NFM are sensitive to bin-to-bin multiplicity fluctuations and are thus a suitable measure to gauge fluctuations. ### Scaling exponent (\(\nu\)) and its dependence on multiplicity The scaling exponent (\(\nu\)), obtained from a line fit to the \(\ln\beta_{q}\) vs \(\ln(q-1)\) plot (Eq. 5), provides valuable insight into the underlying physics of heavy-ion collisions [20]. The scaling exponent for the Toy model events, with no bin-to-bin fluctuations, is \(1.603\pm 0.016\). For the modified Toy model events, the value of \(\nu\) is \(0.998\pm 0.004\), as shown in Fig. 6. Since the Toy model events have no preferred bins for track generation, even a small addition of tracks in a small region of phase space alters the bin multiplicities and gives large fluctuations in the spatial patterns, with \(\nu\approx 1.00\), the value observed for a two-dimensional Ising model with large fluctuations [12]. Multiplicity increases with collision energy, so it is interesting to ask whether there is any multiplicity, and hence energy, dependence of the scaling exponent (\(\nu\)). To investigate this, events with multiplicity distributions having means varying from 400 to 1900 (Fig. 1) are generated and analyzed to study the scaling behavior. The scaling exponent (\(\nu\)) is observed to be independent of the average multiplicity, as shown in Fig. 7, where the values predicted from models and theory are also shown. For the Toy model events, which do not have any bin-to-bin fluctuations, the scaling exponent differs from the one predicted for systems with critical fluctuations and is also independent of the number of particles produced (multiplicity). This underlines the importance of the scaling exponent as a pure number that quantifies the characteristic inherent fluctuations in the system.
Figure 6: Scaling exponent (\(\nu\)) obtained from a line fit to the \(\ln\beta_{\text{q}}\) vs \(\ln(q-1)\) plot for the Toy model events and the modified Toy model events.
Figure 7: Scaling exponent as a function of the average multiplicities of the four sets of Toy model events. Scaling exponent values predicted from other models are also shown.
### Resilience of NFM to efficiency corrections In heavy-ion collisions, the produced particles are recorded by detectors which may have some inefficiencies in detecting all the particles of interest. These limitations stem from non-optimal detector resolution and inefficiencies in the tracking routines. The measurements and calculations of the observable are thus affected by these inefficiencies and need to be corrected before any reasonable conclusions can be drawn. Monte Carlo event generators may be used to calculate the overall efficiencies.
For this, model-generated events are passed through the detector geometry and recorded, and track reconstruction routines are applied as in the experimental setup. The detector efficiency is the ratio of the number of tracks detected by the detector geometry to the number of generated tracks in the acceptance region. Depending on the conditions, detector efficiencies in the acceptance region can be binomial or non-binomial. The inefficiencies may alter the true values of the normalized factorial moments and hence also their behavior with the bin resolution, or the number of bins (\(M^{2}\)). Two-dimensional efficiency maps are determined for each \(M\) value. If \(\epsilon_{i}\) denotes the efficiency of the \(i^{\text{th}}\) bin of the partitioned phase space, then the normalized factorial moments, as defined in Eq. (1), are corrected for the efficiencies as \[F_{\text{q}}^{corr.}(M)=\frac{\frac{1}{N}\sum_{e=1}^{N}\frac{1}{M^{2}}\sum_{i=1}^{M^{2}}\frac{f_{q}^{e}(n_{\text{ie}})}{\epsilon_{i}^{q}}}{\left(\frac{1}{N}\sum_{e=1}^{N}\frac{1}{M^{2}}\sum_{i=1}^{M^{2}}\frac{f_{1}^{e}(n_{\text{ie}})}{\epsilon_{i}}\right)^{q}}, \tag{6}\] and \(F_{q}^{corr}\) is termed the efficiency-corrected normalized factorial moment. How efficiency corrections affect the NFM is studied for both uniform (binomial) and non-uniform (non-binomial) efficiencies in the acceptance region of the detectors. To create events with an 80% overall uniform efficiency of detecting particles, 20% of the tracks are uniformly removed from the Toy model events. The events so obtained are termed reconstructed-uniform (recU) events and contain 80% of the tracks of the Toy model events. The analysis is performed on these events, and the NFM \(F_{q}^{recU}\)(M) are obtained using Eq. 1. Two-dimensional efficiency maps are calculated for all \(M\) values, such that for each \(i^{\text{th}}\) bin of the phase space divided into \(M^{2}\) bins there is an efficiency value \(\epsilon_{i}\). For the recU events, the corrected normalized factorial moments, \(F_{q}^{corr.recU}\)(M), are calculated using Eq. 6. In Fig. 8, M-scaling plots for q = 2 are given for these three cases. It is observed that \(F_{2}\)(M) \(\approx\)\(F_{2}^{recU}\)(M) \(\approx\)\(F_{2}^{corr.recU}\)(M); therefore, the ratios \(F_{2}^{recU}\)/\(F_{2}\) and \(F_{2}^{corr.recU}\)/\(F_{2}\) are \(\approx\) 1, as shown in the lower panel of the figure. Thus, the true values of the NFM are not affected when the efficiencies are binomial in nature. In other words, normalized factorial moments are robust against binomial (uniform) efficiencies. For events with 80% non-uniform efficiency, 20% of the tracks are removed from a selected phase space region of the Toy model events, resulting in a set of events with non-binomial efficiencies. The set of events so obtained is termed reconstructed-non-uniform (recNU) events. The analysis is performed on the recNU events, and the normalized factorial moments (\(F_{q}^{recNU}\)(M)) are calculated using Eq. 1. As shown in Fig. 9, \(F_{q}\)(M) \(\neq\)\(F_{q}^{recNU}\)(M). Efficiency maps are obtained for each \(M\) value, so as to have an efficiency \(\epsilon_{i}\) for each bin, and the corrected NFM (\(F_{q}^{corr.recNU}\)(M)) are calculated using Eq. 6. It is observed (Fig. 9) that \(F_{q}\)(M) \(\approx\)\(F_{q}^{corr.recNU}\)(M), and the ratio \(F_{q}^{corr.recNU}\)/\(F_{q}\) = 1, as shown with red markers in the lower panel.
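A sketch of the corrected moments of Eq. (6) is given below (numpy, with `factorial_moment` from the earlier sketch repeated for completeness; `eff` is assumed to be the \(M\times M\) efficiency map determined beforehand for this \(M\)):

```python
import numpy as np

def factorial_moment(n, q):
    out = np.ones_like(n, dtype=float)
    for j in range(q):
        out *= (n - j)
    return out

def nfm_corrected(events, M, q, eff):
    # Eq. (6): each bin's f_q is divided by eps_i**q in the numerator and
    # f_1 by eps_i in the denominator; eff holds the bin efficiencies eps_i,
    # e.g. reconstructed/generated bin counts accumulated over the sample.
    num, den, N = 0.0, 0.0, len(events)
    for eta, phi in events:
        counts, _, _ = np.histogram2d(eta, phi, bins=M,
                                      range=[(-0.8, 0.8), (0.0, 2.0 * np.pi)])
        num += (factorial_moment(counts, q) / eff**q).mean()
        den += (counts / eff).mean()
    return (num / N) / (den / N) ** q
```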
The true values of the normalized factorial moments are thus reproduced by applying the NFM formula with efficiency corrections. Hence, in the case of non-binomial (non-uniform) detector efficiencies, Eq. 6 should be used to obtain the true values of the NFM.
Figure 8: \(F_{q=2}\) as a function of \(\ln M^{2}\) for the Toy model events, the reconstructed-uniform (recU) events, and the efficiency-corrected values of the NFM for the recU events. The lower panel shows the ratio graphs.
## IV Summary Normalized factorial moments (NFM), within the framework of the _intermittency_ analysis, are studied for high multiplicity Toy model events simulated using a uniform distribution function. The NFM do not show any dependence on the number of bins. However, the methodology is found to be sensitive to density fluctuations in the multiplicity distributions. The baseline value of the scaling exponent (\(\nu\)) from the Toy model events is \(1.603\pm 0.016\), which is greater than the value predicted for a second-order phase transition in the Ginzburg-Landau formalism. In addition, for the Toy model events \(\nu\) is independent of the multiplicity but depends on the nature of the distributions and hence on the inherent fluctuations. \(F_{q}\) is robust against uniform efficiencies, while for non-uniform efficiencies calculations using the efficiency-corrected NFM formula reproduce the true values of the observable. The observations and results obtained here serve as a baseline behavior of the NFMs for future experimental investigations in this field. ## V Acknowledgement R.G. is grateful to Rudolph C. Hwa and Edward Sarkisyan-Grinbaum for many fruitful discussions. The authors are thankful to Tapan K. Nayak, Igor Altsybeev, and Mesut Arslandok for helpful discussions in completing this study. This work is partially funded by a RUSA 2.0 grant sanctioned in favor of one of the authors by the Ministry of Education, Government of India.
2309.11170
AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration
In the current deep learning paradigm, the amount and quality of training data are as critical as the network architecture and its training details. However, collecting, processing, and annotating real data at scale is difficult, expensive, and time-consuming, particularly for tasks such as 3D object registration. While synthetic datasets can be created, they require expertise to design and include a limited number of categories. In this paper, we introduce a new approach called AutoSynth, which automatically generates 3D training data for point cloud registration. Specifically, AutoSynth automatically curates an optimal dataset by exploring a search space encompassing millions of potential datasets with diverse 3D shapes at a low cost. To achieve this, we generate synthetic 3D datasets by assembling shape primitives, and develop a meta-learning strategy to search for the best training data for 3D registration on real point clouds. For this search to remain tractable, we replace the point cloud registration network with a much smaller surrogate network, leading to a $4056.43$ times speedup. We demonstrate the generality of our approach by implementing it with two different point cloud registration networks, BPNet and IDAM. Our results on TUD-L, LINEMOD and Occluded-LINEMOD evidence that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
Zheng Dang, Mathieu Salzmann
2023-09-20T09:29:44Z
http://arxiv.org/abs/2309.11170v1
# AutoSynth: Learning to Generate 3D Training Data ###### Abstract In the current deep learning paradigm, the amount and quality of training data are as critical as the network architecture and its training details. However, collecting, processing, and annotating real data at scale is difficult, expensive, and time-consuming, particularly for tasks such as 3D object registration. While synthetic datasets can be created, they require expertise to design and include a limited number of categories. In this paper, we introduce a new approach called AutoSynth, which automatically generates 3D training data for point cloud registration. Specifically, AutoSynth automatically curates an optimal dataset by exploring a search space encompassing millions of potential datasets with diverse 3D shapes at a low cost. To achieve this, we generate synthetic 3D datasets by assembling shape primitives, and develop a meta-learning strategy to search for the best training data for 3D registration on real point clouds. For this search to remain tractable, we replace the point cloud registration network with a much smaller surrogate network, leading to a \(4056.43\) times speedup. We demonstrate the generality of our approach by implementing it with two different point cloud registration networks, BPNet [13] and IDAM [34]. Our results on TUD-L [26], LINEMOD [23] and Occluded-LINEMOD [7] evidence that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset [65]. ## 1 Introduction 3D point cloud registration, which aims to estimate the relative transformation between two given point clouds, is a traditional computer vision task. With the advent of deep learning, point cloud registration is nowadays commonly tackled with deep networks, achieving impressive results. The main research direction in this area consists of designing new network architectures to improve performance. Here, by contrast, we argue that the quantity and quality of the training data have as crucial an impact on a network's performance as its architecture and training details, and thus advocate data creation as a research goal in itself. The traditional approach to collecting 3D registration data consists of scanning real objects. This, however, is highly time-consuming and does not scale to the quantity of data commonly expected for deep network training. Generating synthetic data therefore comes as a promising alternative. Nevertheless, it requires access to 3D object models, thus often limiting the number of categories, and human expertise to generate realistic data, typically leading to a domain gap w.r.t. real-world point clouds despite best efforts. In this work, we address this by introducing an approach dubbed AutoSynth, automating the process of curating a 3D dataset. Specifically, we aim for the resulting dataset to act as effective training data for a 3D object registration network that will then be deployed on real-world point clouds. To achieve this, we develop a meta-learning strategy that searches for the optimal dataset over a space encompassing millions of potential datasets, covering a wide diversity of 3D shapes. The search is guided by a target real-world dataset, thus producing data that reduces the domain gap. Our experiments demonstrate that the resulting training dataset yields improved registration performance not only on the target data but also on other real-world point clouds. 
For this to be possible, we design a very large search space based on the assumption that complex shapes can be created by combining simple primitives. Diverse datasets can then be sampled from this space, and we design an evolutionary algorithm to automatically curate the best training dataset to achieve high performance on the target data. Employing a registration network in the search process, however, would be impractical, as even the smallest competitive model would require \(1,875\) GPU days on a single RTX8000 for only \(1,000\) search steps. To make the search tractable, we observe that the true quality function, i.e., the accuracy of the registration network of interest, can be replaced with a proxy one, i.e., the reconstruction accuracy of an autoencoder. Specifically, our experiments evidence that, for the same training and testing data, registration accuracy and reconstruction quality follow the same trend, even when using an autoencoder whose architecture is orders of magnitude smaller than that of any registration network able to produce nontrivial results. As such, our approach yields a \(4056.43\times\) speedup compared to using a registration network. We demonstrate the generality of our approach by implementing it with two different point cloud registration networks, BPNet [13] and IDAM [34]. Our results on TUD-L [26], LINEMOD [23] and Occluded-LINEMOD [7] consistently demonstrate that a neural network trained on our searched dataset achieves better performance than the same one trained on the widely used ModelNet40 dataset [65]. Our main contributions can be summarized as follows:
* We present AutoSynth, a novel meta-learning-based approach to automatically generate large amounts of 3D training data and curate an optimal dataset for point cloud registration.
* We show that the search can be made tractable by leveraging a surrogate network that is \(4056.43\) times more efficient than the point cloud registration one.
* We evidence that using a single scanned real object as the target dataset during the search yields a training set that leads to good generalization ability.
## 2 Related Work **Traditional point cloud registration methods.** Point cloud registration aims to estimate the relative pose between two input point sets. Many algorithms [2, 41, 46, 42, 17, 40, 50, 27, 29, 54, 53, 70, 33, 1, 24, 22, 18, 9, 8, 21, 35] have contributed to achieving this. The best-known one is probably Iterative Closest Point (ICP) [5], which has served as the basis for many variants, such as Generalized-ICP [55] and Sparse ICP [6], aiming to improve robustness to noise and mismatches. We refer the reader to [44, 52] for a review of ICP-based strategies. The main drawback of ICP-based methods is their requirement for a reasonable initialization to converge to a good solution. As a consequence, recent efforts have been made towards global optimization strategies, leading to algorithms such as Go-ICP [72], Super4PCS [41], and Fast Global Registration (FGR) [77]. While effective, these methods still suffer from the presence of noise and outliers in the point sets. This is addressed by post-processing strategies, such as that of [60], TEASER [70], and TEASER++ [71]. **Learning-based object point cloud registration.** Following the current trend in computer vision, much recent point cloud registration research has focused on deep learning-based approaches. A key requirement to achieve this was the design of deep networks acting on unstructured sets. 
Deep sets [75] and PointNet [45] constitute pioneering works in this direction. In particular, PointNetLK [3] combines the PointNet backbone with the traditional, iterative Lucas-Kanade (LK) algorithm [39] so as to form an end-to-end registration network; DCP [62] exploits DGCNN [64] backbones followed by Transformers [59] to establish 3D-3D correspondences. While effective, PointNetLK and DCP cannot tackle the partial-to-partial registration scenario. That is, they assume that both point sets are fully observed, during both training and test time. PRNet [63] and IDAM [34] address this via a deep network designed to extract keypoints from each input set and match these keypoints. By contrast, RPM-Net [73] and RGMNet [19] build on DCP and adopt a different strategy, replacing the softmax layer with an optimal transport one so as to handle outliers. DeepGMR [74] leverages mixtures of Gaussians and formulates registration as the minimization of the KL-divergence between two probability distributions to handle outliers. While the above-mentioned methods were designed to handle point clouds in full 3D, the recent BPNet [13] was shown to successfully tackle registration from 2.5D measurements, including on real scene datasets, such as TUD-L [26], LM [23] and LMO [7]. Here, we follow an orthogonal direction to these works, and address the task of learning to generate synthetic training data to generalize to real scene test-time observations. We will demonstrate our approach using both BPNet [13] and IDAM [34]. **Learning to generate training data.** Data is essential for the success of learning-based methods, including point cloud registration ones. While much effort has been made to obtain real-world 3D ground truth [26, 23, 7, 25, 16, 31, 67, 48, 15, 58, 20], synthetic data generation [65, 66, 37, 61, 11] has emerged as an effective alternative source of supervision. For such synthetic datasets, e.g., ModelNet40 [65], the creation of each mesh model nonetheless requires human supervision to control its size, position, texture, etc. Hence, producing a large number of synthetic objects remains laborious. This raises the question of the feasibility of automatically generating the training data. In this context, most existing works focus on synthesizing images. For example, the work of [51], Meta-Sim [30], Meta-Sim2 [14], and AutoSimulate [4] learn simulator hyperparameters to maximize the performance of a model on semantic segmentation or object detection. This is achieved by treating the entire data generation and network training pipeline as a black box, and using reinforcement learning-based gradient estimation. However, these methods still require manually-designed object and scene models as input to the simulator, thus limiting the generated data to a small number of scenes. By contrast, AutoFlow [57] leverages web images to learn to generate image pairs, thus greatly increasing the data diversity. This, however, does not easily generalize to generating point cloud data. A few works have nonetheless tackled the problem of generating 3D data using shape primitives. In particular, [68] does so to build training data for a shape-from-shading network that reconstructs object shapes from image sequences; [69] generates 3D synthetic training data to estimate the surface normals, depth, albedo, and shading maps from a single RGB image. Importantly, these techniques rely on the main task network to evaluate the effectiveness of the training data. 
With the typical growth of state-of-the-art deep networks for point cloud registration, this would result in an intractable computational cost. Here, we therefore propose to replace the main task network with a lightweight surrogate network in the searching phase, which we demonstrate to maintain the final performance while requiring three orders of magnitude less computation. Note that our approach does not follow the predictor-based strategy commonly used in neural architecture search [36, 43, 12]. Specifically, these methods still require training a thousand target models to then train the predictor, which remains too expensive for the computationally-intensive state-of-the-art point cloud registration networks. Here, instead, we leverage a surrogate network that completely replaces the original one. ## 3 Methodology ### Problem Formulation Our objective is to automatically generate a synthetic 3D dataset \(D_{syn}\) such that the main task model (MTM), i.e., a point cloud registration model \(\Psi\) in our case, achieves maximum accuracy on the test set when trained on \(D_{syn}\) until convergence. The test set is evidently not available during training, and thus we mimic it with a target dataset \(D_{tgt}\). Formally, we express the problem of searching for a synthetic dataset \(D_{syn}\) as that of finding a policy \(P\), encompassing hyperparameters to generate a 3D dataset, such that \(\Psi(w,D_{syn}(P))\) achieves the best performance on \(D_{tgt}\). The set of all policies is referred to as the search space \(O\), and we use an evolutionary algorithm to find the best policy \(\hat{P}\) that minimizes the evaluation loss \[\hat{P}=\operatorname*{argmin}_{P\in O}\ \mathcal{L}_{eval}(\Psi(w,D_{syn}(P)),D_{tgt}), \tag{1}\] where \(w\) denotes the weights of the MTM trained on \(D_{syn}\) until convergence. ### Search Space The search space defines the set of policies that the meta-learning method can explore during training. In other words, it encompasses all possible training datasets, with each policy corresponding to the hyperparameters used to create one 3D dataset. To generate a dataset, we exploit the observation that complex shapes can be obtained by combining simple primitives [10, 68, 56], such as cuboids, cones, cylinders, etc. Following [49, 68], we define each shape primitive as an implicit surface function \(\mathcal{F}:\mathbb{R}^{3}\rightarrow\mathbb{R}\), such that a point \(\mathbf{x}\in\mathbb{R}^{3}\) on the primitive's surface satisfies \(\mathcal{F}(\mathbf{x})=0\), whereas \(\mathcal{F}(\cdot)<0\) for interior points and \(\mathcal{F}(\cdot)>0\) for exterior ones. In other words, \(\mathcal{F}\) encodes a signed distance function. Each primitive can then undergo a set of transformations. Specifically, we focus on affine transformations, such as translation, rotation, scaling, shearing, and stretching. For a 3D point \(\mathbf{x}\), this can be expressed as \[\mathcal{T}(\mathbf{x})=\alpha T_{rot}T_{shear}T_{stretch}\mathbf{x}-\mathbf{t}, \tag{2}\] where \(\alpha\) is a scaling parameter controlling the overall size of the primitive, \(\mathbf{t}\) is a translation vector, \(T_{rot}\) is a rotation matrix, \(T_{shear}=S_{x}S_{y}S_{z}\) is a matrix combining shearing operations along the different axes, and \(T_{stretch}=A_{x}A_{y}A_{z}\) is a matrix controlling the scale of the primitive along the different axes. Given an existing shape primitive \(\mathcal{F}(\mathbf{x})\), the transformed shape can be obtained as \(\mathcal{F}(\mathcal{T}(\mathbf{x}))\). 
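As a minimal illustration of these definitions (a sketch under the stated conventions, not the paper's implementation; a sphere is used as the primitive), the implicit function and the transformation of Eq. (2) can be written as:

```python
import numpy as np

def sphere_sdf(x, radius=1.0):
    # Implicit surface F(x): negative inside, zero on the surface,
    # positive outside, for query points x of shape (n, 3).
    return np.linalg.norm(x, axis=-1) - radius

def make_transform(alpha, t, T_rot, T_shear, T_stretch):
    # T(x) = alpha * T_rot @ T_shear @ T_stretch @ x - t, as in Eq. (2).
    A = alpha * T_rot @ T_shear @ T_stretch
    return lambda x: x @ A.T - t

# The transformed primitive is F(T(x)); after a non-rigid map the values are
# no longer exact distances, but the zero level set still defines the shape.
T = make_transform(alpha=1.5, t=np.array([0.1, 0.0, 0.2]),
                   T_rot=np.eye(3), T_shear=np.eye(3), T_stretch=np.eye(3))
values = sphere_sdf(T(np.random.default_rng().normal(size=(100, 3))))
```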
To composite the individual transformed primitives into a complex shape, we utilize logic operators between shapes, as discussed below. Specifically, to create more distinct shapes from our primitives, we perform truncation with a plane. Let \(\mathcal{F}(\mathbf{x})\) denote a transformed primitive, where we neglect the explicit dependency on the transformation \(\mathcal{T}(\cdot)\) for ease of notation. Furthermore, let \(\mathcal{F}_{plane}(\mathbf{x})\) denote a plane, defined by a point and a surface normal. The truncation operation can then be expressed as \[\mathcal{F}_{truncation}(\mathbf{x})=\max(\mathcal{F}(\mathbf{x}),\ \mathcal{F}_{plane}(\mathbf{x})). \tag{3}\] Given a set of \(m\) transformed and truncated primitives with implicit representations \(\{\mathcal{F}_{1}(\mathbf{x}),\mathcal{F}_{2}(\mathbf{x}),\dots,\mathcal{F}_{m}(\mathbf{x})\}\), we combine the shapes using the union operator, which we implement as the collection \[\mathcal{F}_{union}(\mathbf{x})=\{\mathcal{F}_{1}(\mathbf{x}),\mathcal{F}_{2}(\mathbf{x}),\dots,\mathcal{F}_{m}(\mathbf{x})\}. \tag{4}\] The final object mesh is generated by merging all vertices and faces of each transformed shape primitive. In practice, this operation is substantially faster than the exact mesh union [28], \(\mathcal{F}_{union}(\mathbf{x})=\min(\mathcal{F}_{1}(\mathbf{x}),\mathcal{F}_{2}(\mathbf{x}),\dots,\mathcal{F}_{m}(\mathbf{x}))\). The meshes of the primitives can be obtained via marching cubes [38] or from simply-defined vertices and faces, and shape generation can be sped up by saving and reusing them. In this framework, a policy \(P\) consists of the 11 parameters corresponding to the above-mentioned 11 operations. Specifically, the operations we search over are rotation \(\{1\}\), translation \(\{1\}\), overall scale \(\{1\}\), shearing along each axis \(\{3\}\), and stretching along each axis \(\{3\}\), with additional parameters encoding the number of primitives to consider \(\{1\}\) and the truncation plane \(\{1\}\). Each operation also comes with a default range of magnitude. We discretize each range of magnitude into nine values so that we can use a discrete search algorithm to find them. Ultimately, finding the optimal policy \(P\) becomes a search problem in a space that contains \(9^{11}=31,381,059,609\) possibilities. We refer to this search space as \(O\). Note that the operations described above allow us to form a large search space, which we will show to be effective in practice. However, they are by no means the unique way of defining such a space, and we hope that our work will motivate others to design new search spaces.
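Complementing the sketch above, the truncation (Eq. 3) and union operators can be written in the same implicit-function style (illustrative only; as noted, the paper's pipeline merges mesh vertices and faces rather than evaluating the implicit union):

```python
import numpy as np

def plane_sdf(point, normal):
    # Implicit half-space defined by a point and a surface normal;
    # negative on the kept side of the plane.
    n = np.asarray(normal) / np.linalg.norm(normal)
    return lambda x: (x - point) @ n

def truncate(F, point, normal):
    # Eq. (3): truncation of F with a plane via the pointwise max.
    plane = plane_sdf(point, normal)
    return lambda x: np.maximum(F(x), plane(x))

def sdf_union(Fs):
    # The exact implicit union min(F_1, ..., F_m) that the faster
    # vertex/face merge approximates.
    return lambda x: np.minimum.reduce([F(x) for F in Fs])

# Example: union of a truncated unit sphere and an offset unit sphere.
sphere = lambda c: (lambda x: np.linalg.norm(x - c, axis=-1) - 1.0)
shape = sdf_union([
    truncate(sphere(np.array([-0.5, 0.0, 0.0])),
             point=np.zeros(3), normal=np.array([0.0, 0.0, 1.0])),
    sphere(np.array([0.5, 0.0, 0.0])),
])
values = shape(np.random.default_rng().normal(size=(100, 3)))
```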
### Evolutionary Algorithm To automatically search for the optimal policy \(P\) in the search space so as to minimize \(\mathcal{L}\) in Eq. (1), we employ an evolutionary algorithm with a tournament selection strategy [47]. This algorithm acts as a meta-learner, which iteratively provides policies from which we generate the dataset \(D_{syn}(P)\). The deep network is then trained on the generated dataset \(D_{syn}(P)\) and evaluated on \(D_{tgt}\) to obtain feedback on its effectiveness. The meta-learner then generates a new policy based on this feedback, which causes the dataset to evolve due to the policy changes. This approach allows \(D_{tgt}\) to affect the final policy and, if it consists of scanned real objects, it can help to narrow the domain gap. Specifically, the evolutionary algorithm starts with an initial population of \(k\) policies: \(Q=\{P_{1},P_{2},\ldots,P_{k}|P_{i}\in O\}\). During each evolutionary step, two individuals \(\{P_{i},P_{j}\}\) are chosen from the population \(Q\) and their evaluation losses \(\{\mathcal{L}(P_{i}),\mathcal{L}(P_{j})\}\) are compared, where \(\mathcal{L}(P_{i})=\mathcal{L}_{eval}(\Psi(w,D_{syn}(P_{i})),D_{tgt})\). After each competition, we select the best policy as the parent and generate a new policy, \(P_{child}\), through mutation. By adding the new \(P_{child}\) to the policy pool and removing the worst-performing policy, we ensure that the policy pool remains of the same size and does not shrink. Specifically, the mutation is performed by randomly choosing one of the 11 policy hyperparameters of the duplicated best policy and changing this hyperparameter's label to another discrete label. For example, for rotation, assuming a discretization in steps of \(\frac{\pi}{8}\), the mutation may change the original label \(\frac{\pi}{8}\) to \(\frac{3\pi}{8}\). We then create a new synthetic dataset \(D_{syn}(P_{child})\) from the mutated child, and train the network \(\Psi\) on it until convergence. At the next evolutionary step, the child then has the possibility of becoming a parent. This process is repeated to evolve policies until a maximum number of trials is reached. We then select the policy with the best evaluation result as the optimal policy \(\hat{P}\). The details are given in Algorithm 1.
```
Input : Search space \(O\), population size \(k\), max number of trials \(M\), target dataset \(D_{tgt}\), deep network \(\Psi\).
Output : Policy \(\hat{P}\) to generate the data that achieves the highest validation performance.
1  initialize the population \(Q\) by randomly sampling k policies from \(O\)
2  current_trial_num := 0
3  while current_trial_num \(<M\) do
4      randomly select two individuals \(P_{i}\) and \(P_{j}\) from \(Q\)
5      train the network \(\Psi\) on the two datasets \(D_{syn}(P_{i})\) and \(D_{syn}(P_{j})\) until convergence
6      compare the evaluation losses on the target dataset \(D_{tgt}\), and obtain the best_individual and the worst_individual (tournament selection)
7      delete the worst_individual from \(Q\)
8      mutate the best_individual, add the child to the population \(Q\), and train it
9      current_trial_num += 1
10 end while
```
**Algorithm 1** Evolutionary Policy Search
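A compact Python sketch of this loop follows (illustrative; the toy `fitness` below stands in for the full train-and-evaluate step of Algorithm 1, which trains the network on \(D_{syn}(P)\) and measures its loss on \(D_{tgt}\)):

```python
import random

LEVELS = 9       # nine discrete labels per hyperparameter
N_PARAMS = 11    # the 11 policy hyperparameters of Sec. 3.2

def sample_policy():
    return tuple(random.randrange(LEVELS) for _ in range(N_PARAMS))

def mutate(policy):
    # Duplicate the parent and re-draw one randomly chosen hyperparameter.
    child = list(policy)
    child[random.randrange(N_PARAMS)] = random.randrange(LEVELS)
    return tuple(child)

def evolve(fitness, k=32, trials=1000):
    # Tournament selection following Algorithm 1 (lower fitness is better).
    population = [(fitness(P), P) for P in (sample_policy() for _ in range(k))]
    for _ in range(trials):
        a, b = random.sample(population, 2)
        winner, loser = (a, b) if a[0] <= b[0] else (b, a)
        population.remove(loser)          # drop the worse of the pair
        child = mutate(winner[1])
        population.append((fitness(child), child))
    return min(population, key=lambda e: e[0])[1]

# Toy check with a stand-in fitness: distance to an arbitrary target policy.
target = sample_policy()
best = evolve(lambda P: sum(abs(p - t) for p, t in zip(P, target)))
```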
### Surrogate Task Model The search algorithm described in Sec. 3.3 requires training a target task model to convergence at every evolutionary trial. Unfortunately, state-of-the-art point cloud registration networks tend to involve many parameters and expensive layers, such as transformers, as illustrated in the top portion of Fig. 1 for BPNet. As such, searching for the best training data with our search procedure would become prohibitively expensive. For example, using BPNet, one of the smallest registration models, a single trial in the search process would cost \(1.875\) GPU days on one Nvidia V100. Therefore, a standard search process of \(1,000\) trials would require \(1,875\) GPU days. To address this, we propose to replace the target task model with a model tackling a surrogate task. For this substitution to make sense, the surrogate model should meet the following conditions: (i) it should take as input the same type of data as the target model, i.e., point clouds; (ii) it should not require any extra annotations; (iii) it should be trainable much more quickly than the task model; (iv) its behavior, i.e., evaluation loss, should follow a similar trend to that of the target model as the training data changes.
Figure 1: Comparison of the point cloud registration network (top) and the point cloud reconstruction one (bottom).
These constraints immediately discard any point registration network, even much reduced versions of existing ones, as we have observed that meaningful registration results can only be obtained with architectures that would be too large for our purpose. Instead, we propose to make use of a point cloud reconstruction network. This choice was motivated by the observation that, by definition, such a network also operates on point clouds; it does not require any annotations, only the point clouds themselves; and it can exploit a much more lightweight architecture, as it does not need to compare two point clouds and thus can be designed without transformer layers. This leaves the question of the evaluation loss behavior. This can be answered from the perspective of the multi-task learning literature [76], which has demonstrated that different tasks performed on the same input data often follow similar behavior, i.e., improving one also improves the others. More pragmatically, we will show in our experiments that the behavior of our point cloud reconstruction network follows that of the registration one as we vary the training set. **Surrogate network architecture.** The architecture of our surrogate network is shown in the bottom portion of Fig. 1. In essence, it is an autoencoder, relying on the same DGCNN block as the registration network but without any transformer layers. Instead, to prevent the network from directly copying the input point cloud to the output, we project the outputs of the DGCNN to a low-dimensional latent space, and then force the network to reconstruct the whole point cloud from this compressed representation. Formally, the input to the network is a point set \(\mathcal{X}=\{x_{1},\dots,x_{v}\}\), where \(x_{i}\in\mathbb{R}^{3}\) represents a 3D point position. We obtain the point set \(\mathcal{X}\) by uniform sampling from the mesh model. The output \(\mathcal{Y}\) is a point set of the same size, representing the reconstructed point positions. The encoder projects each input point cloud into a latent space and the decoder reconstructs the point cloud from the latent representation. We then compute the reconstruction error for \(\mathcal{Y}\) using the symmetric Chamfer distance \[\mathcal{L}_{CD}=\frac{1}{2m}(\sum_{x\in\mathcal{X}}\min_{y\in\mathcal{Y}}\|x-y\|_{2}^{2}+\sum_{y\in\mathcal{Y}}\min_{x\in\mathcal{X}}\|y-x\|_{2}^{2}). \tag{5}\] We train the surrogate network parameters \(\theta\) by solving \[\theta^{*}=\operatorname*{argmin}_{\theta}\mathcal{L}_{CD}(\Psi_{surrogate}(\theta,D_{syn})). \tag{6}\] In the searching phase, we therefore also use the symmetric Chamfer distance as the fitness score, giving the loss \[\mathcal{L}(P)=\mathcal{L}_{CD}(\Psi_{surrogate}(\theta^{*},D_{syn}(P)),\hat{D}_{tgt}). \tag{7}\] Our surrogate task model only needs 15 min to converge and only requires \(1.42\) GB of GPU memory. An experiment with \(1,000\) trials only takes \(0.462\) GPU days on an Nvidia V100 GPU, which is \(4056.43\) times more efficient than using the original registration network.
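For reference, the symmetric Chamfer distance of Eq. (5) admits a direct brute-force implementation (a numpy sketch; a practical training pipeline would typically use a GPU or nearest-neighbor-accelerated variant):

```python
import numpy as np

def chamfer_distance(X, Y):
    # Symmetric Chamfer distance of Eq. (5) between two (m, 3) point sets,
    # via the full pairwise squared-distance matrix.
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return (d2.min(axis=1).sum() + d2.min(axis=0).sum()) / (2 * X.shape[0])

# Sanity check: identical point sets give a distance of zero.
P = np.random.default_rng().normal(size=(1024, 3))
assert chamfer_distance(P, P) == 0.0
```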
## 4 Experiments In this section, we evaluate the effectiveness of our AutoSynth training set search strategy. Below, we first provide implementation details. We then present results on real scenes, and finally analyze different aspects of our approach via ablation studies. **Implementation details.** Our complete pipeline consists of two steps: searching for the best policy using AutoSynth, and training the registration network on the training dataset generated using the best policy. To generate complex 3D datasets, we utilize a set of shape primitives that includes the sphere, cuboid, cone, cylinder, torus, tetrahedron, octahedron, icosahedron, and dodecahedron, as these have shown promising results in our analysis. This set of primitives, however, is not exhaustive, and we hope that our results will encourage other researchers to further expand it and propose better alternatives. In the search process, we build our target dataset \(D_{tgt}\) using one scanned real object, i.e., the Stanford bunny. We augment it with random rotations to generate \(100\) samples, which constitute \(D_{tgt}\). We set the population size to \(32\) and the maximum number of trials to \(1,000\), which we observed to be sufficient to obtain a good policy. For the reconstruction network, we set the batch size to \(8\) and use the Adam optimizer [32] with a learning rate of \(0.001\). For each trial in the search phase, we train the reconstruction network for \(20,000\) iterations, after which the network has typically converged. Once the best policy is found, we use it for all the experiments, i.e., we only searched for the policy once. For BPNet [13] and IDAM [34], we use the modified versions of [13] with Match Normalization. We only replace the training data but keep the same parameter settings as in [13] to train them to convergence. For the real-scene datasets, we use the provided training sequence. For ModelNet40 [65], we use the official training split, which consists of \(9,843\) mesh models across \(40\) categories. To obtain a source point cloud, we sample points uniformly from a mesh model. For the target point cloud, we generate a depth map from the mesh with a random camera pose, and sample points from it. For our AutoSynth search process, we only need the source point cloud as input, which also acts as the ground truth for the reconstruction network. Following [13], we report the rotation and translation mAP, the ADD, and the BOP benchmark metrics. ### Results on Real-scene Datasets Here, we compare our AutoSynth searched dataset to ModelNet40 by evaluating the performance of the registration models, i.e., BPNet and IDAM, trained on them. To this end, we evaluate the trained models on three different real-scene datasets, i.e., TUD-L, LM, and LMO. Note that this corresponds to an unseen-object setting, as the training mesh models do not overlap with the test ones. **TUD-L dataset.** The results of all methods on TUD-L are summarized in Tab. 1. In Tab. 1, the '-Real' models were trained and tested on TUD-L's real scene data, which corresponds to the 'seen' object setting. On the other hand, the '-AutoSynth' models, trained on synthetic data and tested on TUD-L's real scene data, represent an 'unseen' object setting. This discrepancy in settings accounts for the observed performance difference. The same principle applies to Tabs. 2 and 3, which report tests on the LM and LMO datasets. Furthermore, we also report the results of the top-performing traditional, learning-free registration methods. BPNet and IDAM trained on our AutoSynth searched dataset yield significantly better performance than their counterparts trained on ModelNet40. This evidences the superiority of our searched dataset, which contains more diverse and complex objects. 
Note that the traditional methods based on FPFH features yield poor results. However, 'Vidal-Sensors18' and 'CVPR10-3D-Edges', two traditional methods corresponding to the top depth-only performers in the BOP leaderboard, remain more effective than any learning-based method, including ours, in the unseen-object setting. Nevertheless, we push the limits of what synthetic data can achieve for deep learning-based methods, thus opening the door to future research on learning to generate training data.
\begin{table} \begin{tabular}{l|c c c|c c c|c|c c c c} \hline \hline & \multicolumn{3}{c|}{Rotation mAP} & \multicolumn{3}{c|}{Translation mAP} & ADD & \multicolumn{4}{c}{BOP Benchmark} \\ Method & \(5^{\circ}\) & \(10^{\circ}\) & \(20^{\circ}\) & \(1cm\) & \(2cm\) & \(5cm\) & \(0.1d\) & VSD & MSSD & MSPD & AR \\ \hline IDAM-Real & 0.56 & 0.58 & 0.61 & 0.55 & 0.66 & 0.81 & 0.58 & 0.580 & 0.604 & 0.618 & 0.601 \\ BPNet-Real & **0.91** & **0.92** & **0.93** & **0.86** & **0.95** & **0.99** & **0.93** & **0.859** & **0.914** & **0.935** & **0.903** \\ \hline ICP & 0.02 & 0.02 & 0.02 & 0.01 & 0.14 & 0.57 & 0.02 & 0.117 & 0.023 & 0.027 & 0.056 \\ FGR(FPFH) & 0.00 & 0.01 & 0.01 & 0.04 & 0.25 & 0.63 & 0.01 & 0.071 & 0.007 & 0.008 & 0.029 \\ TEASER++(FPFH) & 0.13 & 0.17 & 0.19 & 0.03 & 0.22 & 0.56 & 0.17 & 0.175 & 0.196 & 0.193 & 0.188 \\ Super4PCS & 0.30 & 0.50 & 0.56 & 0.05 & 0.40 & 0.92 & 0.54 & 0.265 & 0.500 & 0.488 & 0.418 \\ \(\star\)Vidal-Sensors18 & - & - & - & - & - & - & - & **0.811** & **0.910** & **0.907** & **0.876** \\ \(\star\)Drost & - & - & - & - & - & - & - & 0.809 & 0.875 & 0.872 & 0.852 \\ IDAM-MN40 & 0.30 & 0.32 & 0.36 & 0.31 & 0.41 & 0.73 & 0.34 & 0.373 & 0.362 & 0.364 & 0.366 \\ IDAM-AutoSynth & 0.40 & 0.43 & 0.46 & 0.41 & 0.54 & 0.83 & 0.45 & 0.496 & 0.454 & 0.471 & 0.474 \\ BPNet-MN40 & 0.71 & 0.74 & 0.77 & 0.70 & 0.80 & 0.94 & 0.76 & 0.724 & 0.772 & 0.796 & 0.763 \\ BPNet-AutoSynth & **0.78** & **0.81** & **0.85** & **0.77** & **0.86** & **0.95** & **0.84** & 0.777 & 0.845 & 0.867 & 0.829 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison of registration models trained on AutoSynth and ModelNet40 on the **TUD-L** real scene dataset. Note that BPNet-Real and IDAM-Real were trained with the TUD-L real scene training sequence, i.e., not in the unseen-object setting. BPNet-MN40 was trained on ModelNet40-full. BPNet-AutoSynth was trained on our AutoSynth generated dataset with the Stanford bunny as the target dataset. The results for Vidal-Sensors18 [60] and Drost (Drost-CVPR10-3D-Edges) [17] were directly taken from the BOP leaderboard. 
\begin{table} \begin{tabular}{l|c c c|c c c|c|c c c c} \hline \hline & \multicolumn{3}{c|}{Rotation mAP} & \multicolumn{3}{c|}{Translation mAP} & ADD & \multicolumn{4}{c}{BOP Benchmark} \\ Method & \(5^{\circ}\) & \(10^{\circ}\) & \(20^{\circ}\) & \(1cm\) & \(2cm\) & \(5cm\) & \(0.1d\) & VSD & MSSD & MSPD & AR \\ \hline IDAM-Real & 0.15 & 0.23 & 0.27 & 0.25 & 0.54 & 0.91 & 0.23 & 0.352 & 0.311 & 0.345 & 0.336 \\ BPNet-Real & **0.43** & **0.59** & **0.67** & **0.49** & **0.83** & **0.97** & **0.60** & 0.616 & 0.680 & 0.737 & 0.678 \\ \hline ICP & 0.00 & 0.01 & 0.01 & 0.04 & 0.27 & 0.82 & 0.01 & 0.092 & 0.014 & 0.027 & 0.044 \\ FGR(FPFH) & 0.00 & 0.00 & 0.00 & 0.05 & 0.31 & 0.89 & 0.00 & 0.068 & 0.000 & 0.010 & 0.026 \\ TEASER++(FPFH) & 0.01 & 0.03 & 0.05 & 0.03 & 0.21 & 0.73 & 0.03 & 0.108 & 0.076 & 0.098 & 0.094 \\ Super4PCS & 0.02 & 0.09 & 0.15 & 0.04 & 0.31 & 0.89 & 0.10 & 0.117 & 0.178 & 0.201 & 0.165 \\ \(\star\)PPF\_3D\_ICP & - & - & - & - & - & - & - & **0.719** & **0.856** & **0.866** & **0.814** \\ \(\star\)Drost & - & - & - & - & - & - & - & 0.678 & 0.786 & 0.789 & 0.751 \\ IDAM-MN40 & 0.08 & 0.11 & 0.14 & 0.15 & 0.44 & 0.89 & 0.12 & 0.258 & 0.178 & 0.206 & 0.214 \\ IDAM-AutoSynth & 0.21 & 0.29 & 0.33 & 0.28 & 0.60 & 0.91 & 0.29 & 0.420 & 0.359 & 0.398 & 0.392 \\ BPNet-MN40 & 0.31 & 0.42 & 0.50 & 0.37 & 0.69 & 0.95 & 0.43 & 0.491 & 0.518 & 0.571 & 0.527 \\ BPNet-AutoSynth & **0.36** & **0.49** & **0.58** & **0.41** & **0.74** & **0.94** & **0.50** & 0.538 & 0.579 & 0.641 & 0.586 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparison of registration models trained on AutoSynth and ModelNet40 on the **LINEMOD** real scene dataset. PPF_3D_ICP [17] and Drost (Drost-CVPR10-3D-Only) [17] are traditional methods and represent the best depth-only performers from the BOP leaderboard.
The reason why BPNet-Real and IDAM-Real achieve better performance than these models trained on synthetic data is twofold. First, they work in the easier setting where the test object has been observed during training. Second, there remains a domain gap between real-scene depth maps and synthetic ones. While our results show that our AutoSynth approach bridges part of this gap, further reducing it remains a topic for future research. **LINEMOD dataset.** The LINEMOD dataset is more challenging than TUD-L because of the presence of symmetric objects and minor occlusions at the object boundaries. As shown in Tab. 2, even Super4PCS fails to yield meaningful results on this dataset. Our BPNet-AutoSynth and IDAM-AutoSynth again achieve better performance than BPNet-MN40 and IDAM-MN40. This shows that the dataset searched by our AutoSynth algorithm on the Stanford bunny generalizes well to different real-scene datasets. Note that IDAM-AutoSynth achieves even better performance than IDAM-Real. This is because the LINEMOD dataset does not provide real depth maps for training data, and we thus used the synthetic ones provided by LINEMOD, which also suffer from a domain gap w.r.t. the real test data. This shows that training on data with more diverse shapes can improve evaluation performance when the domain gap is large. **Occluded-LINEMOD dataset.** The Occluded-LINEMOD dataset depicts an even more challenging scenario than LINEMOD by including severe occlusions. As such, as shown in Tab. 3, the results of all the methods deteriorate. 
Nevertheless, BPNet-AutoSynth and IDAM-AutoSynth still significantly outperform BPNet-MN40 and IDAM-MN40, respectively. This further demonstrates that our searched dataset delivers a consistent performance improvement across different real evaluation datasets and different point cloud registration frameworks. ### Analysis Here, we conduct ablation studies to analyze (i) the behavior similarity of the main and surrogate task networks; (ii) the impact of the target dataset; (iii) the effectiveness of the guidance from the surrogate network; and (iv) the impact of pre-training on the searched data. **Behavior of the main and surrogate task networks.** We conduct experiments to compare the performance of models trained on datasets with different numbers of shapes by adjusting the number of ModelNet40 mesh models used for training. Specifically, we randomly sample \(M\in\{1,5,10,50\}\) models per ModelNet40 category. For example, MN40(01_per_cate) was built by taking a single mesh model from each category, and thus contains 40 mesh models. For this set of experiments, we use BPNet as our main task registration network. We summarize the results in Tab. 4, where we also report the reconstruction errors of the surrogate reconstruction network trained on the same data. These results evidence that both tasks, i.e., registration and reconstruction, follow the same trend as the number of training meshes changes. In short, increasing the number of training models improves pose estimation accuracy and lowers the reconstruction error. Importantly, the results obtained with our AutoSynth searched dataset are the best, confirming the effectiveness of our surrogate task network. **Impact of the target dataset \(D_{tgt}\).** To assess the influence of \(D_{tgt}\) on the search, we compare the use of a scanned real object with that of an MN40(01_per_cate) dataset, using BPNet as the backbone. 
Our framework leverages a feedback mechanism to learn from \(D_{tgt}\), which helps to narrow the reality gap when using scanned real objects.
\begin{table} \begin{tabular}{l|c c c|c c c|c|c c c c} \hline \hline & \multicolumn{3}{c|}{Rotation mAP} & \multicolumn{3}{c|}{Translation mAP} & ADD & \multicolumn{4}{c}{BOP Benchmark} \\ Method & \(5^{\circ}\) & \(10^{\circ}\) & \(20^{\circ}\) & \(1cm\) & \(2cm\) & \(5cm\) & \(0.1d\) & VSD & MSSD & MSPD & AR \\ \hline IDAM-Real & 0.15 & 0.22 & 0.32 & 0.23 & 0.58 & 0.88 & 0.25 & 0.349 & 0.320 & 0.374 & 0.348 \\ BPNet-Real & **0.31** & **0.46** & **0.56** & **0.37** & **0.70** & **0.91** & **0.47** & 0.478 & 0.542 & 0.612 & 0.544 \\ \hline ICP & 0.01 & 0.01 & 0.01 & 0.07 & 0.36 & 0.85 & 0.01 & 0.085 & 0.014 & 0.032 & 0.044 \\ FGR(FPFH) & 0.00 & 0.00 & 0.00 & 0.08 & 0.43 & 0.85 & 0.00 & 0.055 & 0.000 & 0.009 & 0.021 \\ TEASER++(FPFH) & 0.01 & 0.02 & 0.05 & 0.04 & 0.26 & 0.77 & 0.02 & 0.096 & 0.060 & 0.093 & 0.083 \\ Super4PCS & 0.01 & 0.03 & 0.06 & 0.06 & 0.31 & 0.83 & 0.03 & 0.054 & 0.072 & 0.113 & 0.080 \\ \(\star\)Vidal-Sensors18 & - & - & - & - & - & - & - & 0.473 & 0.625 & 0.647 & 0.582 \\ \(\star\)PPF\_3D\_ICP & - & - & - & - & - & - & - & **0.523** & **0.669** & **0.716** & **0.636** \\ IDAM-MN40 & 0.04 & 0.08 & 0.11 & 0.12 & 0.47 & 0.88 & 0.07 & 0.205 & 0.112 & 0.153 & 0.157 \\ IDAM-AutoSynth & 0.14 & 0.21 & 0.26 & 0.23 & 0.57 & 0.88 & 0.20 & 0.316 & 0.272 & 0.322 & 0.303 \\ BPNet-MN40 & 0.22 & 0.32 & 0.41 & 0.30 & 0.63 & 0.92 & 0.34 & 0.395 & 0.404 & 0.472 & 0.423 \\ BPNet-AutoSynth & **0.25** & **0.35** & **0.41** & **0.34** & **0.65** & **0.92** & **0.37** & 0.410 & 0.429 & 0.501 & 0.447 \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative comparison of registration models trained on AutoSynth and ModelNet40 on the **Occluded-LINEMOD** real scene dataset. Vidal-Sensors18 [60] and PPF_3D_ICP [17] are traditional methods and represent the best depth-only performers from the BOP leaderboard.
The results presented in Tab. 5 show that AutoSynth(Real) outperforms AutoSynth(MN40(01_per_cate)), which confirms our claim. Note that BPNet trained on the dataset searched using MN40(01_per_cate) as the target, AS(MN40(01)), yields better results than the one trained on MN40(01_per_cate) directly. This is due to the fact that the 3D dataset evolved from MN40(01_per_cate) contains more distinct shapes, resulting in better performance. **Effectiveness of the guidance from the surrogate network.** Here, we evaluate the effectiveness of the surrogate network \(\Psi_{surrogate}\) at guiding the search towards the best policy by comparing it with two alternatives that offer no guidance: (i) a no-feedback strategy corresponding to randomly picking a policy from the search space; (ii) a full-range policy consisting of randomly sampling using the largest possible range of transformations during training. The comparison in Tab. 6, on the TUD-L dataset testing sequence and with BPNet as the registration network, clearly shows the benefits of the surrogate network for the search. **Impact of pre-training on the searched data.** To evaluate the use of our approach as a pre-training strategy, we pre-train the network on the AutoSynth-searched data and fine-tune it on the TUD-L training set. As shown in Tab. 7, this lets us reach a new SOTA performance (**0.94** in R5\({}^{\circ}\) mAP), showing the effectiveness of our AutoSynth dataset. ## 5 Conclusion We have introduced a novel algorithm to automatically generate large amounts of 3D training data and curate the optimal dataset from millions of options. 
To this end, we have proposed to use a surrogate reconstruction network while searching for a data generation policy, thus accelerating the search by \(4056.43\) times. We have evidenced the generality of our approach by evaluating it with two different point cloud registration methods, BPNet and IDAM. Our experiments on real-scene datasets have shown that a network trained on our searched dataset consistently outperforms the same model trained on the widely used ModelNet40 dataset. As shown by our results, however, there remains a gap between our searched dataset and real scans. In the future, we will study how to further bridge this gap by improving the realism of the synthesized data.

\begin{table} \begin{tabular}{l|c c c|c c c|c} \hline \hline & \multicolumn{3}{c|}{Rotation mAP} & \multicolumn{3}{c|}{Translation mAP} & ADD \\ Method & \(5^{\circ}\) & \(10^{\circ}\) & \(20^{\circ}\) & \(1cm\) & \(2cm\) & \(5cm\) & \(0.1d\) \\ \hline full-range & 0.66 & 0.68 & 0.70 & 0.64 & 0.75 & 0.90 & 0.69 \\ no-feedback & 0.61 & 0.65 & 0.69 & 0.62 & 0.74 & 0.89 & 0.68 \\ Surrogate net & **0.78** & **0.81** & **0.85** & **0.77** & **0.86** & **0.95** & **0.84** \\ \hline \hline \end{tabular} \end{table} Table 6: Effectiveness of the feedback mechanism. Using our surrogate reconstruction network to guide the search clearly outperforms both selecting a random policy and using the full-range policy.

\begin{table} \begin{tabular}{l|c c c|c c c|c} \hline \hline & \multicolumn{3}{c|}{Rotation mAP} & \multicolumn{3}{c|}{Translation mAP} & ADD \\ Method & \(5^{\circ}\) & \(10^{\circ}\) & \(20^{\circ}\) & \(1cm\) & \(2cm\) & \(5cm\) & \(0.1d\) \\ \hline Real & 0.91 & 0.92 & 0.93 & 0.86 & 0.95 & 0.99 & 0.93 \\ AutoSynth & 0.78 & 0.81 & 0.85 & 0.77 & 0.86 & 0.95 & 0.84 \\ Pretrain & **0.94** & **0.95** & **0.96** & **0.90** & **0.97** & **1.00** & **0.96** \\ \hline \hline \end{tabular} \end{table} Table 7: BPNet trained on TUD-L vs AutoSynth vs AutoSynth pre-training followed by TUD-L fine-tuning.

\begin{table} \begin{tabular}{l|c c c|c c c|c} \hline \hline & \multicolumn{3}{c|}{Rotation mAP} & \multicolumn{3}{c|}{Translation mAP} & ADD \\ Method & \(5^{\circ}\) & \(10^{\circ}\) & \(20^{\circ}\) & \(1cm\) & \(2cm\) & \(5cm\) & \(0.1d\) \\ \hline MN40(01) & 0.59 & 0.62 & 0.68 & 0.59 & 0.71 & 0.92 & 0.65 \\ MN40 & 0.71 & 0.74 & 0.77 & 0.70 & 0.80 & 0.94 & 0.76 \\ AS(MN40(01)) & 0.76 & 0.80 & 0.84 & 0.77 & 0.86 & 0.93 & 0.82 \\ AS(Real) & **0.78** & **0.81** & **0.85** & **0.77** & **0.86** & **0.95** & **0.84** \\ \hline \hline \end{tabular} \end{table} Table 5: Results of employing different datasets as the target dataset, with BPNet as backbone. MN40(01) stands for MN40(01_per_cate); AS stands for AutoSynth.

## 6 Acknowledgements

Zheng Dang would like to thank H.
Chen for the highly valuable discussions and for her encouragement. This work was funded in part by the Swiss Innovation Agency (Innosuisse).
2305.19821
LMCap: Few-shot Multilingual Image Captioning by Retrieval Augmented Language Model Prompting
Multilingual image captioning has recently been tackled by training with large-scale machine translated data, which is an expensive, noisy, and time-consuming process. Without requiring any multilingual caption data, we propose LMCap, an image-blind few-shot multilingual captioning model that works by prompting a language model with retrieved captions. Specifically, instead of following the standard encoder-decoder paradigm, given an image, LMCap first retrieves the captions of similar images using a multilingual CLIP encoder. These captions are then combined into a prompt for an XGLM decoder, in order to generate captions in the desired language. In other words, the generation model does not directly process the image, instead processing retrieved captions. Experiments on the XM3600 dataset of geographically diverse images show that our model is competitive with fully-supervised multilingual captioning models, without requiring any supervised training on any captioning data.
Rita Ramos, Bruno Martins, Desmond Elliott
2023-05-31T13:03:17Z
http://arxiv.org/abs/2305.19821v1
# LMCap: Few-shot Multilingual Image Captioning by Retrieval Augmented Language Model Prompting

###### Abstract

Multilingual image captioning has recently been tackled by training with large-scale machine translated data, which is an expensive, noisy, and time-consuming process. Without requiring any multilingual caption data, we propose LMCap, an _image-blind_ few-shot multilingual captioning model that works by prompting a language model with retrieved captions. Specifically, instead of following the standard encoder-decoder paradigm, given an image, LMCap first retrieves the captions of similar images using a multilingual CLIP encoder. These captions are then combined into a prompt for an XGLM decoder, in order to generate captions in the desired language. In other words, the generation model does not directly process the image, instead processing retrieved captions. Experiments on the XM3600 dataset of geographically diverse images show that our model is competitive with fully-supervised multilingual captioning models, without requiring any supervised training on any captioning data.

## 1 Introduction

The task of image captioning has witnessed impressive performance gains with the trend of large-scale encoder-decoder models and vision-and-language pre-training (Li et al., 2022; Wang et al., 2021; Hu et al., 2022; Wang et al., 2022). Despite all of this progress, existing models are mostly available in English or are specialised for other high-resource languages. This limits access to the technology for the broader range of languages that exist in the world. Moreover, the current mainstream trend results in design decisions and methods that may only work well for English-centric datasets or the few languages for which captioning data is available (Ruder, 2020). There is a need to develop multilingual image captioning models that can serve speakers of different languages. Still, scaling captioning models to a wide variety of languages involves different challenges. One major limitation is the lack of multilingual image-caption pairs of clean labelled data for training the models. One possible solution is to automatically translate the existing English datasets (Thapliyal et al., 2022). While effective, this approach can result in models that learn translation artefacts and perpetuate an English-centric perspective, instead of encouraging the use of geographically diverse concepts that are not overly specific to Western culture (Liu et al., 2021). Moreover, with or without automatic translations, training captioning models with multilingual data can be expensive, given the amount of data and number of parameters needed to mitigate the _curse of multilinguality_ (Conneau et al., 2019; Goyal et al., 2021). This paper presents LMCap, an _image-blind_ multilingual image captioning model that does not require any training specific to image captioning. We propose an efficient method that reuses a pre-trained multilingual language model and adapts it to the vision-and-language captioning setting. Our work is motivated by the recent "Socratic Models" framework (Zeng et al., 2022), in which different models can be combined through text prompting (e.g., image captioning can be achieved by prompting a language model with a set of visual concepts extracted from the predictions of a vision model). Different from the original Socratic Models, our approach is inspired by retrieval-augmented generation (Lewis et al., 2020; Izacard et al., 2022).
Specifically, a multilingual language model generates captions given a prompt consisting of the captions retrieved from similar images, together with a demonstration of how to produce a caption in the desired language. We note here that this is an _image-blind_ approach, i.e. the language model producing the caption does not actually process the image. Our main contributions are as follows: (1) We propose a few-shot multilingual image captioning approach named LMCap, which re-uses pre-trained models without requiring any training specific to image captioning; (2) To the best of our knowledge, LMCap is the first captioning model to use retrieval-augmented generation in a multilingual setting, and in a few-shot captioning setting; (3) We report on experiments with the XM3600 benchmark (Thapliyal et al., 2022) of human-authored captions and geographically diverse images, demonstrating that LMCap exhibits strong few-shot performance on a wide variety of languages; (4) We further show that LMCap performs substantially better than the original Socratic Models. Moreover, instead of only achieving competitive performance against other zero-shot models, LMCap can also compete with a large-scale supervised state-of-the-art captioning model.

## 2 Background and Related Work

Image Captioning: The task of automatically generating textual descriptions for input images has been largely explored in English, while multilingual image captioning has only been addressed in a couple of studies (Gu et al., 2018; Thapliyal et al., 2022; Chen et al., 2022). Like in most recent work on image captioning (Li et al., 2022; Wang et al., 2021, 2022), studies addressing multilingual setups have also focused on scaling the size of encoder-decoder models and the amount of training data, resorting to machine translated versions of multimodal data to accommodate multiple languages (Thapliyal et al., 2022). Differently from training a large-scale encoder-decoder model, we follow a few-shot setting with an _image-blind_ approach based on prompting.

Few-Shot and Zero-Shot Approaches: Performing few-shot learning by prompting a language model with examples and demonstrations of a task (Brown et al., 2020; Radford et al., 2019; Schick and Schütze, 2020) is an efficient and effective alternative to updating model parameters. Similarly to other NLP tasks, recent work in the vision-and-language domain has used prompt-based learning by building on top of pre-trained language and vision models, although usually also involving extra multimodal training (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Jin et al., 2021). In our work, we follow a few-shot prompting approach similar to the recent Socratic Models (Zeng et al., 2022), which does not involve any multimodal training, as described next. In image captioning, there have also been zero-shot methods that, similarly to our approach, do not involve any training, relying instead on prompts or on adaptations of the decoding algorithm, such as ZeroCap (Tewel et al., 2021) and ConZic (Zeng et al., 2023). However, these models work for English and not for the multilingual captioning setting.

Socratic Models: Zeng et al. (2022) proposed the Socratic Models (SMs) framework, where different multimodal pre-trained models communicate via zero-shot or few-shot prompting. For the task of image captioning, SMs generate captions by prompting a language model (i.e., GPT-3 (Brown et al., 2020)) with information about the input image obtained with another pre-trained model (i.e., CLIP (Radford et al., 2021)).
The visual information is in this way represented in a language-based prompt, containing the number of people present in the image, the places, the objects, and the type of image. We explore a similar approach in the multilingual setting, by reusing multilingual models and through a retrieval-based prompt.

Retrieval-augmentation: The knowledge from language models can be adapted and expanded by combining it with non-parametric knowledge from datastores (i.e., external memories) (Khandelwal et al., 2019; Lewis et al., 2020; Izacard et al., 2022; Ram et al., 2023). The success of conditioning generation on retrieved information in several different NLP tasks has inspired some recent studies in image captioning (Ramos et al., 2023; Fei, 2021; Sarto et al., 2022; Ramos et al., 2023). The study that is most closely related to our captioning model is SmallCap (Ramos et al., 2023), an encoder-decoder model that is also prompted with retrieved captions. However, in image captioning, retrieval-augmentation has mostly been explored with supervised learning rather than few-shot learning. Moreover, retrieval-augmentation remains unexplored in the multilingual scenario.

## 3 Model

Language Model Prompt-based Captioning (LMCap) is a few-shot multilingual captioning model augmented with retrieval. It involves prompting a Language Model (LM) with captions retrieved from a datastore by a Vision-and-Language Model (VLM). Captions are generated in an _image-blind_ manner, without actually processing the visual contents of the input image, instead using a prompt containing the retrieved captions. The method works as follows: first, given an input image, the VLM is used to find relevant captions in the datastore. Second, the retrieved captions are converted into a language prompt, which is encoded by the multilingual LM to generate captions in a desired language, conditioning the generation on the prompt. Finally, the set of generated captions can be scored by the VLM against the input image, to select the best caption. The main aspects of our approach are shown in Figure 1 and fully detailed next.

Image-Text Retrieval: The input image and a datastore of captions are encoded by a multilingual CLIP (M-CLIP; Carlsson et al., 2022), i.e. a VLM that can be used to calculate image-text similarity. In this way, given the encoded data, M-CLIP is used to retrieve the \(K\) most similar captions from the datastore. The datastore contains captions associated with diverse images, which can be in English or another language. The retrieved captions serve to guide the language model, via the prompt, as examples of what the predicted caption should resemble, as described next.

Retrieval-augmented Prompting: The retrieved captions, which represent the visual information about the image, are formatted into a prompt for the language model. The prompt starts with fixed \(N\)-shot examples and ends with the retrieved information about the input image, to guide the language model. Each shot is a demonstration of how to generate a caption in a desired language for an image, given a set of retrieved captions. After these \(N\) examples, the prompt terminates with the retrieved information about the actual input image. An example of the format of the prompt can be seen in Figure 1 and in more detail in Appendix D. We note that the retrieved captions, either from the fixed \(N\)-shot examples or those corresponding to the input image, can be presented in any language or in multiple languages.
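To make the prompt structure concrete, here is a minimal sketch of how such a retrieval-augmented prompt could be assembled; the template wording below is only a stand-in for the paper's actual template (given in its Appendix D), and the function itself is hypothetical.

```python
def build_prompt(demos, retrieved_caps, lang):
    """Assemble an LMCap-style prompt: N fixed demonstrations, each pairing
    retrieved captions with a target caption in some language, followed by
    the K captions retrieved for the query image and an open-ended cue."""
    blocks = []
    for caps, demo_lang, caption in demos:  # demos: the N-shot examples
        blocks.append(
            "Similar images show: " + " ".join(caps)
            + f"\nA short caption for this image in {demo_lang} is: {caption}"
        )
    blocks.append(
        "Similar images show: " + " ".join(retrieved_caps)
        + f"\nA short caption for this image in {lang} is:"
    )
    return "\n\n".join(blocks)

demo = (["a dog runs on the grass.", "a brown dog playing outside."],
        "Spanish", "un perro corriendo sobre la hierba.")
print(build_prompt([demo], ["a man riding a wave on a surfboard."], "Spanish"))
```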
Prompting Multilingual Text Generation: The aforementioned prompt is used as input to XGLM (Lin et al., 2021), a pre-trained multilingual autoregressive LM, to generate captions in a given language. XGLM is applied in a few-shot setting, which means that LMCap does not require any training (i.e., the captions are generated by providing the prompt at inference time to XGLM). Captions are generated in the desired language by including an example in the \(N\) demonstrations in the prompt, as shown in Figure 1.

Multilingual Reranking: After the LM generates a set of captions, the multilingual VLM performs a final image-text similarity step to find the caption that best describes the input image. This is based on the same M-CLIP model used for the initial image-text retrieval.

## 4 Evaluation

In this section, we describe the evaluation of LMCap. We describe the experimental setup and results, and we also present ablation studies and further discussions of our approach.

### Experimental Setup

Model: LMCap uses two pre-trained multilingual models, namely the autoregressive XGLM language model facebook/xglm-2.9B, and the multilingual M-CLIP vision-and-language model xlm-roberta-large-ViT-H-14, respectively available on HuggingFace (Wolf et al., 2020) and OpenCLIP1. Our approach does not require any training, generating captions at inference time using a single NVIDIA V100S 32GB GPU. Footnote 1: [https://github.com/mlfoundations/open_clip](https://github.com/mlfoundations/open_clip) To generate a caption in a desired language, XGLM is prompted with retrieved captions extracted by the M-CLIP model. For caption retrieval, the input image and a set of captions from a datastore are both encoded by M-CLIP to perform direct image-text search. The datastore contains English captions from the COCO training set and is indexed offline with the nearest-neighbour search library FAISS (Johnson et al., 2017), using the index IndexFlatIP, which does not involve training. A set of \(K=4\) retrieved captions is used in the prompt for the input image, along with a fixed set of \(N=3\)-shot examples, as described in Appendix D. Conditioned on the prompt, XGLM generates captions using beam-search decoding with a beam of 3. A set of \(c=3\) candidate captions is re-ranked using M-CLIP, to select the final generated caption in the desired language. The code for LMCap is made freely available2. Footnote 2: [https://github.com/RitaRamo/lmcap](https://github.com/RitaRamo/lmcap)

Datasets: We mainly evaluate our approach on XM3600, a multilingual image captioning dataset (Thapliyal et al., 2022) featuring geographically-diverse images, collected from Open Images based on the regions of 36 languages. For each language, 100 images were selected and annotated with human-generated captions, resulting in a total of 3600 images and 261375 captions across the 36 languages. XM3600 does not contain training or validation splits. For validation and hyperparameter tuning, we relied on the COCO (Chen et al., 2015) validation split (COCO-DEV) from the standard Karpathy splits (Karpathy and Fei-Fei, 2015). For "reference captions", we machine translate the English captions into Spanish, Hindi, and Chinese, using the M2M-100 model (Fan et al., 2021), similarly in spirit to Thapliyal et al. (2022), who used the Google Translate API3. We make this development set available to the community at [https://github.com/RitaRamo/lmcap](https://github.com/RitaRamo/lmcap).
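As a rough illustration of the retrieval and reranking steps described above, the sketch below indexes pre-computed caption embeddings with FAISS's IndexFlatIP and scores candidates by cosine similarity. The embedding width and file name are assumptions for illustration, and the M-CLIP encoding step itself is omitted.

```python
import faiss
import numpy as np

d = 1024  # assumed M-CLIP embedding width; check the model card

# Offline: index the datastore of caption embeddings. IndexFlatIP performs
# exact inner-product search and needs no training; L2-normalising both
# sides makes the inner product equal to the cosine similarity.
caption_embs = np.load("coco_caption_embs.npy").astype("float32")  # hypothetical file
faiss.normalize_L2(caption_embs)
index = faiss.IndexFlatIP(d)
index.add(caption_embs)

def retrieve(image_emb, k=4):
    """Return ids and scores of the k captions most similar to the image."""
    q = np.ascontiguousarray(image_emb, dtype="float32").reshape(1, -1)
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)
    return ids[0], scores[0]

def rerank(image_emb, candidate_embs):
    """Final reranking: keep the generated caption whose text embedding
    best matches the image embedding."""
    sims = candidate_embs @ image_emb / (
        np.linalg.norm(candidate_embs, axis=1) * np.linalg.norm(image_emb))
    return int(np.argmax(sims))
```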
As previously mentioned, we also use the captions from the COCO training set to build the datastore. The datastore simply contains the original English captions from COCO, without incurring an expensive and noisy machine translation process, unlike in the study by Thapliyal et al. (2022). Footnote 3: [https://cloud.google.com/translate](https://cloud.google.com/translate)

Model Assessment and Comparison: We compare LMCap with the four multilingual models proposed by Thapliyal et al. (2022). These models combine different mT5 (Xue et al., 2020) and ViT (Zhai et al., 2022) versions and are trained in a fully-supervised fashion on COCO-35L and CC3M-35L, i.e., Google's machine translation API versions of the original COCO and CC3M datasets (Chen et al., 2015; Sharma et al., 2018). Specifically, BB+CC combines mT5-base and ViT-B/16, pretrained on CC3M-35L and finetuned on COCO-35L; BB is trained on COCO-35L; Bg switches to the ViT-g/14 model; and Lg uses mT5-large and ViT-g/14, also trained on COCO-35L. For reference, Thapliyal et al. (2022) spent 5000 TPU hours to train their models, while our method can be used out-of-the-box for inference, i.e., 45 minutes for the XM3600 benchmark per language.

Figure 1: Illustration of the key aspects of LMCap, a few-shot multilingual image captioning approach that re-uses pre-trained unimodal models without requiring any training. In our _image-blind_ approach, a multilingual language model (XGLM) is prompted with information retrieved with a multilingual CLIP model. The prompt contains a set of \(N\)-shot examples and \(K\) retrieved captions, to guide caption generation in a desired language.

Following Thapliyal et al. (2022), results are reported with the CIDEr (Vedantam et al., 2015) metric for English, Spanish, Hindi, and Chinese, with other languages covered in Section 4.4. CIDEr is a standard captioning metric that computes how well the generated caption matches the consensus of the reference captions, based on Term Frequency-Inverse Document Frequency (TF-IDF). In Appendix A, we include more generation metrics for a holistic evaluation. To compute the metrics, we used the COCO evaluation package4, and the SacreBLEU tokenization (Post, 2018). Footnote 4: Available at [https://github.com/tylin/coco-caption](https://github.com/tylin/coco-caption)

### Results

XM3600: Following Thapliyal et al. (2022), we report results on XM3600 for English, Spanish, Hindi, and Chinese, in Table 1. We can see that LMCap outperforms all supervised approaches on Chinese, and achieves competitive performance on the other languages, despite being _image-blind_ and not being trained on any image captioning data. For English, Spanish, and Hindi, we note that LMCap is only outperformed by the large-scale supervised variant BB+CC, pre-trained on CC3M and fine-tuned on COCO, jointly on English and the other 35 languages for the two datasets, i.e., with 123M captions. Compared to the other variants, which are only trained on COCO-35L, our model has substantially better performance on the CIDEr metric across all four languages. We also show that our model can further benefit from increasing the datastore (LMCap\({}_{+}\)), as described in more detail in Section 4.3.

### Ablation Studies

Prompt: Given that our prompt combines \(K\) retrieved captions and \(N\)-shot examples, we study the effect of our prompt when varying \(K\) and \(N\). Table 3 shows the importance of not depending on a single retrieved caption, across the 4 languages. This is similar to previous findings in retrieval-augmented captioning studies focusing on English Sarto et al. (2022); Ramos et al.
(2023), which showed that a large \(K\) makes the model more robust to mismatched captions. We further see that English and Spanish benefit from encoding a larger set of retrieved captions, while Hindi and Chinese work better with a smaller \(K\). We select \(K=4\) since it has close-to-optimal performance for each of the languages. We then explore varying the number of \(N\)-shot examples, and find \(N=3\) to be the optimal value for all four languages. We thus use \(K=4\) and \(N=3\) in the prompt of LMCap.

\begin{table} \begin{tabular}{l l l l l} \hline \hline Setup & en & es & hi & zh \\ \hline \multicolumn{5}{c}{Varying K-Captions} \\ \hline K=1, N=1 & 0.622 & 0.380 & 0.240 & 0.522 \\ K=2, N=1 & 0.654 & 0.400 & **0.269** & 0.562 \\ K=3, N=1 & 0.695 & 0.414 & 0.211 & **0.565** \\ K=4, N=1 & 0.711 & 0.415 & 0.229 & 0.554 \\ K=5, N=1 & **0.734** & **0.424** & 0.205 & 0.529 \\ \hline \multicolumn{5}{c}{Varying N-Shot} \\ \hline K=4, N=1 & 0.711 & 0.415 & 0.229 & 0.554 \\ K=4, N=2 & 0.735 & 0.440 & 0.247 & 0.583 \\ K=4, N=3 & **0.767** & **0.454** & **0.334** & **0.584** \\ K=4, N=4 & 0.757 & 0.424 & 0.318 & 0.580 \\ \hline \hline \end{tabular} \end{table} Table 3: The effect of using different numbers of \(K\) retrieved captions and \(N\) few-shot examples. Results reported on COCO-DEV with best results in bold.

Datastore: We also studied different contents for the datastore, beyond the English captions from the COCO training set, as shown in Table 4. Given that our model reaches much better performance on English, we hypothesise that our model can better generate captions in a desired language when the retrieved captions are in that same language. This could be validated using translations of COCO into the other languages, but since those are not available, we instead used a machine translated version of the Conceptual Captions dataset (CC3M) from Qiu et al. (2022). We used the English, Spanish, and Chinese versions of the CC3M training set, respectively for each of the corresponding languages (CC3M-L). We found that performance deteriorates on the COCO-DEV dataset, which might be explained by the difference between the COCO and CC3M-L datasets. Even combining the two datasets (COCO + CC3M-L) is worse than using only the COCO dataset. In an attempt to cover more diverse concepts, we augmented COCO with three large web datasets (Conceptual Captions Sharma et al. (2018), Conceptual 12M Changpinyo et al. (2021), and SBU captions Ordonez et al. (2011)), using their noise-free versions Li et al. (2022). We refer to this dataset as CCS; it contains synthetic model-generated texts for the web images. Using CCS leads to an improvement compared to just using COCO, except for Hindi. In Table 1, we also report results on XM3600 with this best datastore configuration, for which the performance again decreases for Hindi, but improves substantially on English and Chinese. The benefits of including a more diverse collection of captions are further shown in Appendix E with some qualitative examples (e.g., LMCap was now able to generate the French concept _macarons_ in English). Notice that the retrieved captions from CCS are still in English. Thus, although there is a lack of multilingual image-caption pairs with clean labelled data, it would be interesting to pursue further work on incorporating retrieved information from other languages, in order to improve performance to levels similar to those for English.

\begin{table} \begin{tabular}{l l l l l} \hline \hline Datasets & en & es & hi & zh \\ \hline COCO & 0.711 & 0.415 & 0.229 & 0.554 \\ CC3M-L & 0.387 & 0.309 & - & 0.337 \\ COCO + CC3M-L & 0.601 & 0.359 & - & 0.481 \\ COCO + CCS & **0.713** & **0.431** & 0.212 & **0.563** \\ \hline \hline \end{tabular} \end{table} Table 4: Datastore ablations on COCO-DEV, where captions are retrieved from different sources of data. CC3M-L corresponds to the machine translated version of Conceptual Captions proposed in Qiu et al. (2022) (Hindi not available), while CCS refers to the Conceptual Captions, Conceptual 12M, and SBU datasets Li et al. (2022).

Model Size: In Table 5, we show the importance of using a language model that has a sufficiently large number of parameters. Both XGLM-562M and XGLM-1.7B are unable to generate captions beyond English. On the other hand, the 7.5B variant can lead to stronger performance, but large-scale LMs require more GPU memory, which limits the size of the prompt that can be encoded with modest hardware5. LMCap uses the more efficient XGLM-2.9B version. These results are in line with previous findings, which suggest that stronger few-shot performance is achieved when the prompt is encoded by large LMs (Brown et al., 2020). Footnote 5: We had to run the largest model in half precision (float16).
### Additional Discussion

We now discuss the performance of LMCap across the 36 languages, taking into consideration the data that was used for pre-training the LM. We also compare our approach with SMs and a simple baseline of retrieval plus translation. To support the quantitative evaluation, we show some qualitative examples.

Multilingual Pre-training: In Table 6, we report the results of LMCap on XM3600 for all the 36 languages considered in the dataset, ordered by the percentage of pre-training data used in XGLM for each language. LMCap shows strong few-shot performance on the diverse set of languages on which XGLM was pre-trained. Similarly to the BB+CC and Lg models, which are limited to the 36 languages they were trained on, our model is also dependent on the LM pre-training data, although there is potential to replace XGLM by another large LM, in order to generalize to other languages.

Baseline of Retrieval with Translation: We also compared our approach against a baseline that retrieves the nearest caption in English and translates it into other languages, in Table 8, using the M2M-100 model (a minimal sketch of this baseline is given below). This is to quantify the impact of prompting the language model compared to performing direct translation of retrieved captions. On COCO-DEV, we see that LMCap only outperforms these results on English. Notice, however, that the COCO-DEV references for the other languages were themselves produced with M2M-100, i.e., the same model used by the baseline, which makes the CIDEr comparison inequitable. When evaluating on human-labeled data, as is the case with the XM3600 dataset, we see the benefits of prompting with retrieved information. Notice also that both LMCap and the retrieval baseline outperform the BB model (the latter also being competitive with the other three SOTA variants), despite BB being trained on large-scale machine translated multimodal data for thousands of TPU hours. This shows the clear benefits of using retrieval-augmentation in multilingual image captioning, not just in result quality but also in avoiding high computational costs.
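As a minimal sketch of this retrieval-plus-translation baseline, the snippet below translates a retrieved English caption with the public facebook/m2m100_418M checkpoint from HuggingFace; the paper does not state which M2M-100 variant was used, so the checkpoint choice is an assumption.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

def translate_caption(caption_en, tgt_lang):
    """Baseline: translate the nearest retrieved English caption into the
    target language, instead of prompting a language model."""
    tokenizer.src_lang = "en"
    encoded = tokenizer(caption_en, return_tensors="pt")
    generated = model.generate(
        **encoded, forced_bos_token_id=tokenizer.get_lang_id(tgt_lang))
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# e.g. Spanish ("es"), Hindi ("hi") or Chinese ("zh")
print(translate_caption("a man riding a wave on top of a surfboard", "es"))
```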
Qualitative Results: Figure 2 shows examples of captions generated in different languages by LMCap, together with the retrieved captions that are provided in the prompt for each _blind-input_ image. Qualitative examples tend to show diversity in the generation across the languages, with the retrieved information being itself diverse. For instance, in the first example, for English and Spanish, LMCap focuses on describing that a man is in front of microphones (i.e., based on the first two retrieved captions). In turn, for Hindi and Chinese, the man is in front of a laptop (i.e., from the first retrieved caption), and, in Chinese, the caption can also mention that he is ready to give a speech (i.e., given the last two retrieved captions). In the second image, we can see that LMCap can simply copy a retrieved caption to generate in English, while for the other languages the model may come up with terms not directly present in the retrieved captions (e.g., "snow slope" in Spanish). The last image is a negative example, where incorrect retrieved captions led the model into errors in English and Chinese, showing that there are also limitations in our _image-blind_ approach. For more examples, see Appendix C.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Model & en & es & hi & zh \\ \hline \hline \multicolumn{5}{c}{COCO-DEV} \\ \hline LMCap & **0.767** & 0.454 & 0.334 & 0.584 \\ Baseline M2M-100 & 0.590 & 0.563 & 0.548 & 0.714 \\ \hline \multicolumn{5}{c}{XM3600} \\ \hline LMCap & **0.452** & **0.329** & **0.132** & **0.221** \\ Baseline M2M-100 & 0.333 & 0.205 & 0.120 & 0.170 \\ _BB_: COCO-35L & 0.297 & 0.194 & 0.098 & 0.087 \\ \hline \hline \end{tabular} \end{table} Table 8: Comparison to direct translation of retrieved captions (Baseline), on COCO-DEV and XM3600.

Table 7: Comparison to Socratic Models (SMs) on the COCO-DEV dataset. LMCap clearly outperforms SMs.

Figure 2: Examples of captions generated by LMCap for English, Spanish, Hindi, and Chinese, on XM3600 images, based on retrieved captions for each _blind-input_ image.

## 5 Conclusions

This paper proposes LMCap, an _image-blind_ few-shot multilingual image captioning model. LMCap is based on prompting a multilingual language model with \(N\)-shot examples and retrieved captions extracted by a vision-and-language model, to condition caption generation in a desired language. On XM3600, a human-labelled massively multilingual multimodal benchmark, LMCap performs competitively against the state-of-the-art without involving expensive training with large-scale translated multimodal data, or indeed with any captioning data. Experimental results further demonstrate that LMCap largely outperforms Socratic Models (Zeng et al., 2022), showing that retrieval augmentation plays a crucial role in our prompting approach. As future work, we plan to further assess the use of multilingual data in the datastore, as well as the impact of directly promoting diversity (Ye et al., 2022; Levy et al., 2022) in the captions used in the prompt.

## Acknowledgements

This research was supported by the Portuguese Recovery and Resilience Plan through project C645008882-00000055, through Fundação para a Ciência e a Tecnologia (FCT) with the Ph.D.
scholarship 2020.06106.BD, and through the INESC-ID multi-annual funding from the PIDDAC programme (UIDB/50021/2020).

## Limitations

Image captioning and multilingual image captioning studies tend to focus on the COCO dataset, which was shown to contain gender imbalance. Previous research has also shown that models trained on COCO tend to amplify this bias (Hendricks et al., 2018; Zhao et al., 2017). While our model is not trained on COCO or on any captioning data, it relies on a pre-trained language model, which is known to suffer from different sources of bias and fairness issues (Bommasani et al., 2021; Sheng et al., 2021; Schramowski et al., 2022). Our model also involves retrieval-augmentation with captions extracted by a vision-and-language model, also pre-trained in an unsupervised manner. Like other retrieval-augmented generative models (Lewis et al., 2020), LMCap is inherently biased towards the retrieved information. Notwithstanding, by conditioning on information from a datastore with clean and curated text, LMCap has the potential to ameliorate some of the generation issues of the language model (e.g., avoiding hateful or violent language). To gain insight into the biases present in LMCap, we recommend analysing the retrieved captions used by the model, since they provide cues for the predictions, as shown in Figure 2. We argue that it can be much harder to obtain a direct interpretation for captioning models that are not retrieval-augmented. Another limitation of our model relates to its fully _image-blind_ approach, which heavily depends on information from similar captions instead of the visual content of the actual input image. To address this limitation, future work could additionally include concepts extracted from the image in the prompt, as proposed in Socratic Models, combined with the retrieved information.

## Ethics Statement

The datasets supporting the evaluation of LMCap are publicly available for academic purposes. We also plan to release our code, and the additional resources that were built to support the experiments. We emphasise that LMCap challenges the efficiency of most current captioning approaches, in terms of resource usage and development/deployment effort, while at the same time promoting more equitability and inclusion, exemplified here by attempting to balance language representation at low computational costs. We further note that while our model attempts to advance research beyond English-centric captioning, by considering captioning for a wide variety of languages, it is important to address and pay more attention to low-resource languages as well (i.e., languages beyond those covered in our tests). Evaluating LMCap with additional datasets, covering an even larger set of languages and concepts, would be desirable.
2303.00026
On weak ergodicity breaking in mean-field spin glasses
The weak ergodicity breaking hypothesis postulates that out-of-equilibrium glassy systems lose memory of their initial state despite being unable to reach an equilibrium stationary state. It is a milestone of glass physics, and has provided a lot of insight on the physical properties of glass aging. Despite its undoubted usefulness as a guiding principle, its general validity remains a subject of debate. Here, we present evidence that this hypothesis does not hold for a class of mean-field spin glass models. While most of the qualitative physical picture of aging remains unaffected, our results suggest that some important technical aspects should be revisited.
Giampaolo Folena, Francesco Zamponi
2023-02-28T19:02:47Z
http://arxiv.org/abs/2303.00026v3
# On weak ergodicity breaking in mean-field spin glasses

###### Abstract

The weak ergodicity breaking hypothesis postulates that out-of-equilibrium glassy systems lose memory of their initial state despite being unable to reach an equilibrium stationary state. It is a milestone of glass physics, and has provided a lot of insight into the physical properties of glass aging. Despite its undoubted usefulness as a guiding principle, its general validity remains a subject of debate. Here, we present evidence that this hypothesis does not hold for a class of mean-field spin glass models. While most of the qualitative physical picture of aging remains unaffected, our results suggest that some important technical aspects should be revisited.

## I Introduction

Mean-field spin glasses are prototypes of complex materials. Their rough Hamiltonian function (or "energy landscape") features a multitude of local minima. At finite temperature, this results in a rough free energy landscape, with a multitude of metastable states that can trap the dynamics for times diverging exponentially with system size [1]. In equilibrium, mean-field spin glasses have been solved by several techniques, including the replica and cavity methods, and the structure of thermodynamic states is now well understood [2; 3]. However, such thermodynamic states are by construction inaccessible, because equilibrium cannot be achieved at temperatures below the glass transition. In fact, glassy materials (e.g., window glasses) are usually prepared by cooling from high temperature [4], and the same cooling protocol can also be used to solve optimization problems under the name of simulated annealing [5]. A particular case of such cooling is an instantaneous quench from infinite temperature to zero temperature, i.e. gradient descent dynamics starting from a random initial state. Such dynamics has recently attracted a lot of interest because it is routinely used to train modern deep neural networks [6]. Hence, understanding where, in the rough landscape of a disordered system, a cooling or quench dynamics would end is a problem of primary importance across a broad range of fields, from materials science to artificial intelligence [7]. A milestone in this line of research is the exact solution by Cugliandolo and Kurchan of the out-of-equilibrium quench dynamics of the so-called pure spherical \(p\)-spin-glass model [8]. Here, \(p\) refers to the number of interacting spins in the Hamiltonian, and spherical refers to the fact that spins are continuous variables constrained on the \(N\)-dimensional sphere. These authors were able to solve numerically the exact dynamical mean field theory (DMFT) equations that describe such dynamics when the thermodynamic limit \(N\to\infty\) is taken first (at fixed time after the quench). Moreover, they could analytically construct an exact asymptotic solution when time goes to infinity (after the thermodynamic limit) [8; 9]. This solution gave, for the first time, a coherent picture of the low-temperature out-of-equilibrium evolution of disordered systems towards the bottom of their energy landscape, and revealed a series of highly non-trivial physical properties of the dynamics.
It shows that (i) the system never becomes stationary but instead _ages_ indefinitely, reaching lower and lower regions of the energy landscape; (ii) it asymptotically gets stuck at a "threshold" value of energy that sharply separates high-energy saddle-rich and low-energy minima-rich regions of the landscape [9; 10]; (iii) the threshold level is characterized by "marginal stability", i.e. the spectrum of eigenvalues of the Hessian matrix touches zero, resulting in the presence of arbitrarily soft excitation modes that make the system extremely sensitive to small perturbations; (iv) at long times, any non-linear transformation of time leads to the same result, i.e. the system possesses an internal "clock" that is independent of the actual parametrization of time, the so-called "reparametrization invariance" symmetry [11]; (v) the threshold level is asymptotically sampled uniformly, hence giving rise to a notion of "effective equilibrium" and an associated "effective temperature" [8; 12]; (vi) correspondingly, memory of the initial state is completely lost. This phenomenon of persistent and memoryless aging has been dubbed "weak ergodicity breaking" [9; 13] and has become a central concept in glass physics. In fact, many numerical and experimental studies suggest that structural glasses undergo a similar kind of aging when quenched to low temperatures [4; 14]. Similar results have been obtained for deep neural networks in the under-parametrized regime [15]. The weak ergodicity breaking scenario is particularly attractive because the manifold on which the system evolves asymptotically is independent of the initial condition and can thus be characterized by entirely geometrical methods, without the need for an explicit solution of the dynamics that is difficult to obtain in more complex models [16; 17; 18; 19; 20; 21]. Motivated by these considerations, recent work has investigated whether the weak ergodicity breaking scenario, and its asymptotic aging structure, holds more generally in spin glass models. The Ising \(p\)-spin-glass has been investigated numerically by Rizzo [22] (for \(p=3\)) and by Bernaschi et al. [23] (for \(p=2\), corresponding to the Sherrington-Kirkpatrick model). The results of both works suggest either strong ergodicity breaking, i.e. non-vanishing correlation between the initial configuration and that at asymptotically divergent times, or a long-time crossover to a much slower time decay (e.g. logarithmic), indicating that a different asymptotic solution than the Cugliandolo-Kurchan one [24] might apply to these models. Folena et al. [25] studied the mixed spherical \((p+s)\)-spin-glass, i.e. a mixture of two pure \(p\)-spin-glasses, with different values of the number of interacting spins, chosen to be \(p=3\) and \(s=4\). For this \((3+4)\)-spin-glass, Folena et al. identified what they called an "onset" temperature \(T_{\rm onset}\), such that for initial configurations prepared in equilibrium at temperature \(T>T_{\rm onset}\), weak ergodicity breaking seemingly applies to gradient descent dynamics, while for \(T<T_{\rm onset}\) one has strong ergodicity breaking [25]. Because of these results, the general validity of the weak ergodicity breaking hypothesis beyond the case of the pure spherical \(p\)-spin-glass remains undecided. In this work, we revisit the situation by considering mixed spherical \((p+s)\)-spin-glass models with fixed \(p=2\) or \(p=3\), and varying \(s\) over a wide range of values.
We restrict ourselves to the simplest case of gradient descent (i.e., zero-temperature) dynamics starting from an initial random configuration (i.e., infinite initial temperature). We solve the DMFT equations for these models (hence taking the thermodynamic limit first, at fixed times), both via numerical integration [8] and using series expansions [26]. Our results from both methods consistently suggest that either strong ergodicity breaking holds at any \(s>p\), or that weak ergodicity breaking is only restored at very large (unobservable) times via some poorly understood crossover. The phenomenon is most visible at large \(s\), but it seems to remain present (although very weakly) even for the \(3+4\) model investigated by Folena et al. [25]. We show that most of the physical ingredients of the Cugliandolo-Kurchan solution listed above also apply to the mixed \((p+s)\)-spin model, namely (i) the system ages indefinitely, (iii) the dynamics approaches a marginally stable manifold, and (v) a modified fluctuation-dissipation relation suggests the emergence of an effective thermal regime. Yet, although our results are not fully conclusive, they strongly suggest that the weak ergodicity breaking hypothesis does not apply, at least on observable time scales, and as a result the system ends up surfing on a non-universal manifold that depends on the initial condition, and whose properties cannot be computed from a simple geometrical scheme. Whether these manifolds can be described by a proper generalization of the Cugliandolo-Kurchan asymptotic solution of DMFT remains an open problem [25]. We note that numerical results on finite-dimensional models of structural glasses seem to agree with the weak ergodicity breaking hypothesis, in the sense that correlation with the initial state is lost at large times, see e.g. [27; 28; 29]. While it has been established that in the infinite-dimensional limit such models are described by a DMFT [30; 31; 32; 33], the type of ergodicity breaking in this setting has not been fully investigated, but preliminary results [21] suggest strong ergodicity breaking in infinite dimensions similar to what we report here for the mixed \((p+s)\)-spin glass model. Hence, either weak ergodicity breaking is restored by non-mean-field effects in finite-dimensional glasses, or strong ergodicity breaking is present in these systems but too small to be detected numerically. It is important to note that in finite dimensions, under general hypotheses, the asymptotic aging dynamics is connected to the structure of the equilibrium Boltzmann distribution, see Ref. [34] for a review. Progress along these lines would give additional insight into the aging dynamics of glasses and other similarly complex systems. The rest of this paper is organized as follows. In Sec. II we introduce the models we study and review some known properties of their energy landscape and the DMFT equations. In Sec. III we present our main results. We conclude with a brief discussion in Sec. IV. A few more technical details are discussed in the Appendix.

## II Definitions

### Models

The Hamiltonian of the pure \(p\)-spin model is \[H_{p}=\sum_{i_{1}<i_{2}<\ldots<i_{p}}J_{i_{1}i_{2}\ldots i_{p}}\sigma_{i_{1}}\sigma_{i_{2}}\cdots\sigma_{i_{p}}\, \tag{1}\] where the \(\sigma_{i}\) are \(N\) real variables satisfying the spherical constraint \(\sum_{i}\sigma_{i}^{2}=N\). The couplings \(J_{i_{1}i_{2}\ldots i_{p}}\) are i.i.d. Gaussian variables with zero mean and variance \(p!/(2N^{p-1})\).
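As a quick numerical sanity check of this convention, here is a minimal sketch (the system sizes and sample counts are illustrative) verifying that \(N^{-1}\overline{H_{p}(\sigma)^{2}}\) approaches \(1/2\) as \(N\) grows, consistent with the characteristic polynomial \(f(q)\) defined below evaluated at \(q=1\).

```python
import math
from itertools import combinations

import numpy as np

def sample_pure_p_spin(N, p, rng):
    """Couplings for the ordered tuples i1<...<ip of Eq. (1), with
    variance p!/(2 N^(p-1))."""
    idx = np.array(list(combinations(range(N), p)))
    std = math.sqrt(math.factorial(p) / (2.0 * N ** (p - 1)))
    return idx, rng.normal(0.0, std, size=len(idx))

def energy(idx, J, sigma):
    """H_p(sigma) as in Eq. (1)."""
    return float(J @ np.prod(sigma[idx], axis=1))

p, rng = 3, np.random.default_rng(0)
for N in (10, 20, 40):
    sigma = rng.normal(size=N)
    sigma *= math.sqrt(N) / np.linalg.norm(sigma)   # spherical constraint
    var = np.mean([energy(i, J, sigma) ** 2
                   for i, J in (sample_pure_p_spin(N, p, rng) for _ in range(2000))])
    print(N, var / N)   # approaches f(1) = 1/2 as N grows, with O(1/N) corrections
```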
We consider two classes of mixed \((p+s)\)-spin spherical models whose Hamiltonian is a linear combination of two pure \(p\)-spin Hamiltonians, each with independent random couplings. The \((2+s)\) model (with \(s>2\)) has Hamiltonian \[H_{2+s}^{\lambda}=\sqrt{\lambda}H_{2}+\sqrt{1-\lambda}H_{s}\, \tag{2}\] where the parameter \(0\leq\lambda\leq 1\) interpolates between the two pure models \(H_{2}\) and \(H_{s}\), while the \((3+s)\) model (with \(s>3\)) has Hamiltonian \[H_{3+s}^{\lambda}=\sqrt{\lambda}H_{3}+\sqrt{1-\lambda}H_{s}\, \tag{3}\] again with the interpolating parameter \(\lambda\). Following Ref. [35] we define the characteristic polynomial as the covariance between Hamiltonians at different configurations \(\sigma,\sigma^{\prime}\) on the hypersphere, \[f_{p+s}^{\lambda}\Big{(}\frac{\sigma\cdot\sigma^{\prime}}{N}\Big{)}\equiv N^{-1}\overline{H_{p+s}^{\lambda}(\sigma)H_{p+s}^{\lambda}(\sigma^{\prime})}\, \tag{4}\] where the overline \(\overline{\bullet}\) stands for an average over the disorder of the couplings \(J\). This is equal to \[f_{p+s}^{\lambda}(q)=\frac{\lambda q^{p}+(1-\lambda)q^{s}}{2}\qquad\text{with}\quad q=\frac{\sigma\cdot\sigma^{\prime}}{N}\, \tag{5}\] where \(p=2\) or \(p=3\) depending on the considered model, and \(q\) is the overlap between two configurations on the hypersphere. The definition and use of the characteristic polynomial \(f(q)\) for multi-particle interactions in spin glasses was first introduced in Ref. [36]. For each class of models \(H_{2+s}^{\lambda}\) and \(H_{3+s}^{\lambda}\), we have selected a value of \(\lambda\) for each \(s\) so as to maximize \[\Delta E_{s}^{\lambda}=\frac{f(1)\sqrt{f^{\prime\prime}(1)}}{f^{\prime}(1)}+\frac{f^{\prime}(1)-f(1)}{\sqrt{f^{\prime\prime}(1)}}-2\sqrt{\frac{f(1)\left(f^{\prime}(1)-f(1)\right)}{f^{\prime}(1)}}. \tag{6}\] Notice that here and in the following, \(f^{\prime}(q)=\partial_{q}f(q)\). The rationale for this choice of \(\lambda\) will be discussed in the following; the selected \(\lambda\)s are reported in Table 1.

### Energy Landscape

The ground state of the selected models displays different types of replica symmetry breaking (RSB), as shown in Fig. 1. The selected \((2+s)\)-spin models present a 1-step RSB (1-RSB) ground state for \(s=3\) and a 1-step+full RSB (1-FRSB) ground state for \(s>3\), with the fullRSB part being located at small values of the overlap [38; 39]. The pure 2-spin (\(s=2\)) is peculiar because it presents a trivial landscape with only two minima, and its out-of-equilibrium dynamics is exactly integrable [40].

Figure 1: Ground-state of the models studied in this paper. **(a)** Phase diagram for the ground-state of \((2+s)\)-spin models for different values of \(s\) and \(\lambda\). **(b)** Same phase diagram for \((3+s)\)-spin models. The plots are derived following Ref. [37].

The selected \((3+s)\)-spin models present a 1-RSB ground state for all \(s<13\), while \(s=13\) presents a 2-RSB ground state [37]. Above the ground state, each of the selected models presents a rough energy landscape, with an exponential (in the system size) number of local minima. In order to characterize this complex landscape we define three different energies: * \(E_{gs}\), the ground-state energy, is the lowest energy of the landscape [39; 41]. * \(E_{th}\), the threshold energy, is the energy above which typical stationary points are saddles, and below which they are minima [42; 43; 25; 8; 44]. * \(E_{alg}\), the algorithmic energy, is the lowest energy reachable by an optimization algorithm which computes the gradient of the Hamiltonian a finite number of times [45; 46; 47; 48].
Each of these energies can be exactly evaluated for arbitrary mixed models. Below we only report calculations for both \(E_{gs}\) and \(E_{th}\) in the simplest case of a 1-RSB landscape, but the expressions can be generalized to any level of RSB. We define the complexity as the average over the disorder of the logarithm of the number \(\mathcal{N}\) of stationary points with given energy (per spin) \(E_{\rm IS}\). This is evaluated by the Kac-Rice formula \[\Sigma(E_{\rm IS})\equiv N^{-1}\overline{\log\left(\mathcal{N}(E_{\rm IS})\right)}\qquad\mbox{with}\quad\mathcal{N}(E_{\rm IS})=\int_{\sigma\in\mathcal{S}^{N}}d\sigma\;\delta(H-NE_{\rm IS})\;\delta(\nabla H)\;|\det(\nabla^{2}H)|\, \tag{7}\] with \(\int_{\sigma\in\mathcal{S}^{N}}d\sigma\) the integral over the space of configurations (the hypersphere). Assuming a 1-RSB ansatz, the complexity of the energy landscape in mixed \(p\)-spin models is [49; 50; 25] \[\Sigma^{(1)}(\chi)=\frac{1}{2}\left(\chi^{2}f^{\prime}(1)+\log\left(\frac{1}{\chi^{2}f^{\prime}(1)}\right)-f(1)\left(\frac{1}{\chi f^{\prime}(1)}-\chi\right)^{2}-1\right)\, \tag{8}\] where \(\chi\) is the linear susceptibility associated to typical local minima with energy \[E_{\rm IS}^{(1)}(\chi)=-f(1)\left(\frac{1}{\chi f^{\prime}(1)}-\chi\right)-\chi f^{\prime}(1). \tag{9}\] The superscript \({}^{(1)}\) indicates that these expressions hold under the 1-RSB ansatz, and the subscript \({}_{\rm IS}\) stands for "inherent structure", the name usually given to local minima in the glass literature. A parametric plot of \(\Sigma\) versus \(E\), eliminating \(\chi\), gives the complexity of local minima as a function of the energy level in the landscape. If the ground state, or some of the higher-energy states, present a more complex RSB structure, then Eqs. (8) and (9) are not exact, and we must resort to more involved calculations, see Ref. [39] for 1-FRSB and Ref. [44] for 2-RSB. The ground-state energy corresponds to the energy at which the complexity vanishes, therefore within the 1-RSB ansatz \[E_{gs}^{(1)}=E_{\rm IS}^{(1)}(\chi_{gs}^{(1)})\qquad\mbox{with}\quad\chi_{gs}^{(1)}\quad\mbox{s.t.}\quad\Sigma(\chi_{gs}^{(1)})=0. \tag{10}\] The energy \(E_{th}\) is defined as the energy at which dominant minima become saddles, i.e. at which the vibrational spectrum is marginal. The vibrational spectrum (see e.g. Ref. [35]) follows a semicircular law \(\rho_{\mu}(\lambda)\) of radius \(R=2\sqrt{f^{\prime\prime}(1)}\) centered at \(\mu\), where \(\mu\) is given in terms of the susceptibility by inverting \[\chi(\mu)=\int d\lambda\frac{\rho_{\mu}(\lambda)}{\lambda}=\frac{\mu-\sqrt{\mu^{2}-\mu_{mg}^{2}}}{2f^{\prime\prime}(1)}\, \tag{11}\] with the marginal value thus corresponding to \(\mu_{mg}=R=2\sqrt{f^{\prime\prime}(1)}\). Note that the results on the spectrum hold for any level of RSB. The corresponding energy (at the 1-RSB level) and susceptibility are \[E_{th}^{(1)}=E_{IS}^{(1)}(\chi_{mg})\qquad\mbox{with}\quad\chi_{mg}=\frac{1}{\sqrt{f^{\prime\prime}(1)}}. \tag{12}\] The algorithmic energy \(E_{alg}\) is the minimal energy reachable by a search algorithm running in time polynomial in the system size. This energy can be reached by moving in the Thouless-Anderson-Palmer (TAP) free energy landscape from magnetization \(\langle\sigma_{i}\rangle=m_{i}=0\) (the center of the sphere), in \(N\) orthogonal unit steps, until reaching the surface of the sphere defined by \(\sum_{i}m_{i}^{2}=N\). In physical terms, this is a sort of annealing in temperature with a re-weighting of the Hamiltonian, see Refs. [45; 48].
The algorithmic energy reads \[E_{alg}=-\int_{0}^{1}dq\,f^{\prime\prime}(q)^{1/2}\, \tag{13}\] and the corresponding ansatz is intrinsically of the continuous fullRSB kind. We notice that in the case of a continuous fullRSB ground state, one has \(E_{gs}=E_{th}=E_{alg}\) [45], i.e. the ground state energy (up to subleading corrections in \(1/N\)) can be reached in polynomial time. Finally, we can define a value of energy at which the 1-RSB complexity has a maximum, \[E_{max}^{(1)}=E_{IS}^{(1)}(\chi_{max}^{(1)})\quad\text{where}\quad\chi_{max}^{(1)}=\frac{\sqrt{f(1)}}{\sqrt{f^{\prime}(1)^{2}-f(1)f^{\prime}(1)}}\quad\text{s.t.}\quad\partial_{\chi}\Sigma^{(1)}(\chi)=0. \tag{14}\] This value of energy is located above \(E_{th}^{(1)}\) and has no particular physical meaning, because the complexity is ill-defined and unstable towards further RSB in that region [51; 52]. Yet, we take the quantity \(\Delta E_{s}^{\lambda}=E_{max}^{(1)}-E_{th}^{(1)}\) reported in Eq. (6) as an estimate of the energy range in which "non-trivial" effects may take place in the landscape, and this is why we choose \(\lambda\) so as to maximize \(\Delta E_{s}^{\lambda}\). We stress once again that this choice has no obvious physical meaning and is just one among several possible ways to choose a value of \(\lambda\) for each \(s\), which is helpful to reduce the parameter space of the model.

### Gradient Descent Dynamics

In this work we consider the simplest form of local greedy dynamics, the gradient descent (GD) dynamics defined by \[\partial_{t}\sigma_{i}=-\mu\sigma_{i}-\nabla_{i}H\,\qquad\mu=-\nabla H(t)\cdot\sigma(t)/N\, \tag{15}\] where the term \(\mu(t)\), also called the "radial reaction", is added in order to enforce the spherical constraint. The system is prepared in an initial random configuration (on the sphere), and the negative gradient of the Hamiltonian is then followed until a local minimum is reached. In the \(N\to\infty\) limit, for any mixed model of covariance \(f(q)\), the GD dynamics can be rewritten in terms of the correlation \(C_{tt^{\prime}}=\langle\sum_{i=1}^{N}\sigma_{i}(t)\sigma_{i}(t^{\prime})\rangle/N\) and the response \(R_{tt^{\prime}}=\langle\sum_{i=1}^{N}\delta\sigma_{i}(t)/\delta h_{i}(t^{\prime})\rangle/N\), where \(\langle\cdot\rangle\) denotes the average over random initial conditions and over the quenched disorder of the couplings \(J\). An external field \(h_{i}\) is added to the GD equations to compute the linear response [8]. The resulting DMFT equations for the GD dynamics read [8] \[\partial_{t}C_{tt^{\prime}}= -\mu_{t}C_{tt^{\prime}}+\int_{0}^{t}ds\,f^{\prime\prime}(C_{ts})R_{ts}C_{st^{\prime}}+\int_{0}^{t^{\prime}}ds\,f^{\prime}(C_{ts})R_{t^{\prime}s}\, \tag{16}\] \[\partial_{t}R_{tt^{\prime}}= \delta_{tt^{\prime}}-\mu_{t}R_{tt^{\prime}}+\int_{t^{\prime}}^{t}ds\,f^{\prime\prime}(C_{ts})R_{ts}R_{st^{\prime}}\,\] \[\mu_{t}= -\langle\nabla H(t)\cdot\sigma(t)\rangle/N=\int_{0}^{t}\big{(}f^{\prime\prime}(C_{ts})R_{ts}C_{ts}+f^{\prime}(C_{ts})R_{ts}\big{)}ds\,\] where \(\mu_{t}\) is the average Lagrange multiplier enforcing the spherical constraint. The energy per spin is given by \[E_{t}=\langle H(t)\rangle/N=-\int_{0}^{t}ds\,f^{\prime}(C_{ts})R_{ts}. \tag{17}\] We notice that these equations do not present any explicit dependence on the starting configuration, while a term proportional to \(C_{t0}\) is found if the dynamics is initialized in equilibrium at finite temperature [25]. For a broader and pedagogical introduction to GD in mean-field spin-glass models, see e.g. Refs. [7; 53; 54].
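To make Eq. (15) concrete, below is a minimal finite-\(N\) sketch of GD for the pure 3-spin model, with the spherical constraint enforced by renormalizing after each Euler step (which plays the role of the radial reaction). The system size, time step and dense-tensor construction are illustrative choices, not the paper's numerical setup; at such small \(N\), the final energy only fluctuates around the \(N\to\infty\) threshold value computed from Eqs. (9) and (12).

```python
import math
import numpy as np

N, dt, steps = 64, 0.05, 4000
rng = np.random.default_rng(1)
c = math.sqrt(1.0 / (2.0 * N**2))   # makes E[H(s)H(s')]/N = (s.s'/N)^3 / 2
G = rng.normal(size=(N, N, N))      # dense (non-symmetrized) random couplings

def energy_per_spin(s):
    return c * np.einsum("ijk,i,j,k->", G, s, s, s) / N

def gradient(s):
    return c * (np.einsum("ijk,j,k->i", G, s, s)
                + np.einsum("jik,j,k->i", G, s, s)
                + np.einsum("jki,j,k->i", G, s, s))

s = rng.normal(size=N)
s *= math.sqrt(N) / np.linalg.norm(s)       # random start on the sphere
for _ in range(steps):
    s -= dt * gradient(s)                   # Euler step of Eq. (15)
    s *= math.sqrt(N) / np.linalg.norm(s)   # projection = radial reaction

# 1-RSB threshold energy for f(q) = q^3/2: E_th = -2/sqrt(3) ~ -1.1547
f1, fp1, fpp1 = 0.5, 1.5, 3.0
chi = 1.0 / math.sqrt(fpp1)                          # Eq. (12)
E_th = -f1 * (1.0 / (chi * fp1) - chi) - chi * fp1   # Eq. (9)
print(energy_per_spin(s), E_th)
```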
These equations were first studied in the out-of-equilibrium setting in Ref. [8] (for an arbitrary temperature of the thermal bath). There, an ansatz for the long-time dynamics was proposed, resulting in an asymptotic energy that coincides with the 1-RSB threshold energy \(E_{th}^{(1)}\) separating minima from saddles. These studies were carried out on the pure \(p\)-spin model, which has the special property that all the marginal minima have energy exactly \(E_{th}^{(1)}\). In Refs. [25; 35], the situation was shown to be different for the more general mixed \((3+4)\)-spin model, for which marginal minima can be found in a broad range of energies. Yet, Refs. [25; 35] concluded that for this \((3+4)\)-spin model, GD initialized in random configurations would converge to the asymptotic solution of Ref. [8] and thus to the energy \(E_{th}^{(1)}\). In more general models, it is known that \(E_{th}^{(1)}\) can go below the ground-state energy. Because the correctness of Eqs. (16) has been mathematically proven [55], the ansatz of Ref. [8] cannot apply in such cases and must be generalized. In this work, we thus revisited the results of Refs. [25; 35] by numerically integrating Eqs. (16) in a similar way as before, but considering a broader range of values of \(p\) and \(s\) in the mixed \((p+s)\)-spin model. Furthermore, we also considered a series expansion of the equations as in Ref. [26]; the details of this method can be found in appendix A.

## III Results

Starting from a random configuration and performing GD dynamics, we observe that the energy asymptotically reaches values above the threshold energy \(E_{th}^{(1)}\) predicted by the asymptotic solution derived in Ref. [8] (Fig. 2). Yet, it remains true that the asymptotically reached local minima are marginal, i.e. their spectrum has almost flat directions. This is not in contradiction with the structure of the energy landscape, which presents a large number of marginal minima (exponential in \(N\)) even above the threshold energy [25]. The results presented in this section are obtained by numerically integrating the DMFT Eqs. (16) as detailed in Ref. [25]. The main claim of Ref. [25] is that a system prepared in equilibrium at a finite temperature (below \(T_{\rm onset}\)) and then evolved with GD dynamics asymptotically reaches marginal minima with energies below \(E_{th}^{(1)}\), aging in a confined space, while a system starting from random initial conditions (or prepared above the onset temperature \(T_{\rm onset}\)) reaches the threshold energy \(E_{th}^{(1)}\). Here we claim instead that, when the system is prepared in a random configuration, the GD dynamics reaches energies above \(E_{th}^{(1)}\). The fact that such behaviour was not observed in Ref. [25] is because in mixed \((p+s)\)-models with close \(p\) and \(s\) the effect is very small, as can be seen in Fig. 2.

Figure 2: Asymptotic energy reached by the gradient descent (GD) dynamics from a random initial condition, compared to the ground-state energy \(E_{gs}\), the threshold energy \(E_{th}^{(1)}\) and the algorithmic energy \(E_{alg}\). Black dots and crosses indicate two different extrapolations of the gradient descent dynamics, with the radial reaction \(\mu\) and with time as abscissa, respectively. **(a)** Asymptotic energies reached by \((2+s)\)-spin models.
For \(E_{gs}\) the continuous line is the 1-RSB solution while the dash-dotted line is the 1-FRSB solution (barely distinguishable from \(E_{alg}\)). **(b)** Asymptotic energies reached by \((3+s)\)-spin models. Note that the \((3+13)\) model has a 2-RSB ground state, but here we report the 1-RSB result (see Ref. [44] for further details). While for \(s=p\) (and, within numerical precision, for \(s\gtrsim p\)) gradient descent reaches \(E_{th}\), the gap between the two energies increases quickly as \(s-p\) grows.

Furthermore, contrary to what was proposed in Ref. [25], we propose that the long-time limit of the correlation \(C_{t0}\) between the initial and final configurations does not vanish; again, for close \(p\) and \(s\) the effect is so small that it went undetected in previous work. In the scenario we propose here, initializing the GD dynamics in equilibrium at temperature \(T\) would always result in a \(T\)-dependent asymptotic energy, with \(E_{th}^{(1)}\) playing no special role, and no sharp \(T_{\rm onset}\) would then be defined. In order to support our numerical findings (obtained via integration of the DMFT Eqs. (16) with a fixed time step), we have solved the same equations by using a series expansion in the two times \(t,t^{\prime}\), as was first suggested in Ref. [26]. The obtained results confirm the findings from the numerical integration. The two methods, the "integration" and the "series expansion", give different insights into the problem. The first allows for a longer time span (up to 1500 time units, with time step 0.05), while the second (which can span up to 100 time units) allows one to evaluate derivatives precisely on the spanned region, which is needed to accurately estimate power-law decays. We would like to stress, however, that both methods give access to a limited time interval, and in the absence of an analytic solution, the infinite-time limit remains inaccessible. The possibility that the scenario we propose is only a pre-asymptotic regime that would cross over to a weak ergodicity breaking regime thus remains open.

### Power-law decay with series expansion

Focusing on the GD dynamics starting from a random initial configuration, three independent observables are considered: the energy \(E(t)\), the radial reaction \(\mu(t)\) and the overlap with the initial condition \(C(t,0)=C_{t0}\). In the long-time limit (in the aging regime), according to the asymptotic analysis of Ref. [8], we expect them to follow three independent power laws: \[\Delta E(t)\sim t^{-\alpha_{E}}\,\qquad\Delta\mu(t)\sim t^{-\alpha_{\mu}}\,\qquad\Delta C(t,0)\sim t^{-\alpha_{C}}\, \tag{18}\] where \(\Delta O(t)=O(t)-\lim_{t\to\infty}O(t)\) for each observable. In the case of a quench to the critical temperature (for a fullRSB transition), exact relations between the \(\alpha_{E}\) and \(\alpha_{C}\) exponents were found in Ref. [56]. In order to numerically study the power laws describing the asymptotic dynamics, we adopt the idea, first introduced in Ref. [26], of expanding the integro-differential DMFT equations describing the out-of-equilibrium dynamics in powers of the two times \(t,t^{\prime}\). The obtained series has a small radius of convergence (of order one). A Padé approximation is thus performed in order to extract useful information on the long-time dynamics; it consists in rewriting a polynomial of degree \(L\) as a ratio of polynomials of degree \(L/2\), such that the ratio has the same Taylor expansion. The attempt in Ref.
[26] was hampered by two important limitations: the available computing power, and probable floating-point errors in the evaluated coefficients, which biased the Padé approximations and therefore the asymptotic results. In order to overcome this second difficulty we use a multiple-precision floating-point library (GNU MPFR) that allows one to keep an arbitrarily large number of digits for every coefficient in the expansion, thus avoiding numerical errors in the resummation. The only limitation is then the number of terms that can be computed (between 1000 and 2000 coefficients), which, when resummed with the Padé approximation, gives access to times \(\sim 100\), to be compared to the times \(\sim 1000\) reached by a simple integration algorithm. The advantage of the series expansion is that we can also evaluate derivatives without suffering from the discretization errors of the integration. In appendix A we explain in detail the procedure we used to extract the exponents \(\alpha_{E},\alpha_{\mu},\alpha_{C}\) from the series expansion. In a nutshell, we extract the exponents from the asymptotic limit of the ratio \(t\partial_{t}^{2}O(t)/\partial_{t}O(t)\), \(O(t)\) being either \(E(t)\), \(\mu(t)\) or \(C(t,0)\). The results are shown in Fig. 3 and the numerical values are given in table 2. The error bars are based on a fitting procedure over a Padé series (as reported in appendix A), and should therefore be regarded as tentative estimates of the errors. Only in pure models is \(\alpha_{E}=\alpha_{\mu}\), because the energy is proportional to the radial reaction [35]. For the special case of the pure 2-spin, the fitted power laws agree with the analytically known ones [40], i.e. \(\alpha_{E}=\alpha_{\mu}=1\) and \(\alpha_{C}=3/4\). For the pure 3-spin we conjecture that the exact values are \(\alpha_{E}=\alpha_{\mu}=2/3\) and \(\alpha_{C}=3/8\). These coefficients are not universal, and it remains an open question whether there exist equations relating them, in the spirit of Ref. [56].

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \(s\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\ \hline \hline & \(\alpha_{E}\) & 0.999 (1) & 0.681 & 0.604 & 0.547 & 0.508 & 0.481 & 0.462 & 0.447 & - & - & - & - \\ \cline{2-14} \(2+s\) & \(\alpha_{\mu}\) & 0.999 (1) & 0.702 & 0.678 & 0.663 & 0.652 & 0.636 & 0.647 & 0.632 & - & - & - & - \\ \cline{2-14} & \(\alpha_{C}\) & 0.748 (3/4) & 0.506 & 0.464 & 0.424 & 0.407 & 0.401 & 0.380 & 0.369 & - & - & - & - \\ \hline \hline & \(\alpha_{E}\) & - & 0.666 & 0.657 & 0.617 & 0.566 & 0.538 & 0.503 & 0.474 & 0.453 & 0.436 & 0.423 & 0.411 \\ \cline{2-14} \(3+s\) & \(\alpha_{\mu}\) & - & 0.666 & 0.670 & 0.663 & 0.655 & 0.650 & 0.643 & 0.637 & 0.631 & 0.624 & 0.618 & 0.615 \\ \cline{2-14} & \(\alpha_{C}\) & - & 0.375 & 0.338 & 0.313 & 0.299 & 0.285 & 0.283 & 0.277 & 0.284 & 0.266 & 0.271 & 0.266 \\ \hline \end{tabular} \end{table} Table 2: Power-law exponents in the \((2+s)\) and \((3+s)\) models. The exact results for the 2-spin model are shown in parentheses (see Ref. [40]). Values have errors between 0.001 and 0.05, as shown by the error bars of Fig. 3, estimated by comparing different fitting ranges.

### Is the asymptotic dynamics marginal?

In Fig. 4 we show the time evolution of the radial reaction \(\mu(t)\). We confirm that the gradient descent dynamics from random configurations asymptotically approaches configurations that have a marginal spectrum [8], i.e.
the radial reaction asymptotically approaches its marginal value \(\mu_{mg}\equiv 2\sqrt{f^{\prime\prime}(1)}\) [25; 35; 49]. The convergence of \(\mu(t)\) towards \(\mu_{mg}\) is controlled by different power-law decays depending on the model. In the inset of Fig. 4 we show that the exponents \(\alpha_{\mu}\) derived from the series expansion can be used to confirm the convergence towards the marginal value: indeed, the curves appear perfectly linear when plotted as a function of \(t^{-\alpha_{\mu}}\), and the linear extrapolation to infinite times coincides with \(\mu_{mg}\). We notice that the series-expansion evaluation of \(\alpha_{\mu}\) (for each model) does not assume \(\mu=\mu_{mg}\); therefore, the linear convergence towards zero (in the inset of Fig. 4) is a strong confirmation that the system is asymptotically marginal.

Figure 3: Exponents \(\alpha_{E},\alpha_{\mu},\alpha_{C}\) extrapolated from the series expansion of the DMFT equations (see appendix A and table 2). These describe the decay of the energy \(E(t)\), the radial reaction \(\mu(t)\) and the correlation with the initial condition. **(a)** Power-law exponents in \((2+s)\)-spin models. **(b)** Power-law exponents in \((3+s)\)-spin models.

Figure 4: Reduced radial reaction \((\mu_{mg}-\mu(t))/\mu_{mg}\) vs time \(t\). It is asymptotically expected to reach zero if the final configuration is marginal. In the inset, the time axis is rescaled as \(t^{-\alpha_{\mu}}\), to show a linear decay. Notice that \(\alpha_{\mu}\) is evaluated without any assumption on the marginality, i.e. without assuming \(\lim_{t\to\infty}\mu(t)=\mu_{mg}\), as explained in appendix A. **(a)** Time dependence of the rescaled radial reaction in \((2+s)\)-spin models. **(b)** Time dependence of the rescaled radial reaction in \((3+s)\)-spin models.

### What is the asymptotic energy?

Having established that the gradient descent dynamics is asymptotically marginal for every model considered, we can employ the reduced radial reaction \(\tilde{\mu}(t)=(\mu_{mg}-\mu(t))/\mu_{mg}\) as a measure of time. It equals \(\tilde{\mu}(0)=1\) at the initial time and reaches \(\tilde{\mu}(\infty)=0\) asymptotically. We can then study the energy decay as a function of \(\tilde{\mu}\), as shown in Fig. 5. Increasing \(s\) for both \((2+s)\) and \((3+s)\) models has the effect of increasing the disagreement with the hypothesis of convergence to the threshold \(E_{th}\), both in the case of a 1-RSB and a 1-FRSB ansatz, the latter being used for \(s>3\) in the \((2+s)\) model. The insets of Fig. 5 show the same data as a function of \(\tilde{\mu}\) rescaled with the power-law exponents deduced from the series expansion. The resulting linear behavior is fitted to obtain the asymptotic estimates shown in Fig. 2 (black dots). A similar analysis has been done using the rescaled time \(t^{-\alpha_{E}}\) to obtain a slightly different fit, also shown in Fig. 2 (black crosses). Despite not being exactly coincident, the estimates of the asymptotic energy from the two fits suggest that the energy settles at values well above the threshold.

### Is there a strong ergodicity breaking?

One very important question is whether the aging dynamics can decorrelate from any previously reached configuration over a long enough time. This is referred to as the weak ergodicity breaking ansatz, and it is one of the main ingredients that allowed the derivation of the asymptotic solution of the DMFT equations of the pure \(p\)-spin model [8; 9]. Recently, in Refs.
[22; 23; 25], it has been suggested that weak ergodicity breaking could be non-universal, and that some systems could age in a confined part of phase space, thus showing strong ergodicity breaking, i.e. a persistent correlation with the initial condition. Our results for the GD dynamics of mixed \((p+s)\)-spin models starting from a random configuration seem to confirm strong ergodicity breaking. In Fig. 6 we show the correlation with the initial configuration \(C(t,0)\) as a function of the reduced \(\tilde{\mu}\). The inset shows the same data rescaled with the power-law exponent \(\alpha_{C}/\alpha_{\mu}\) derived from the series expansion. The linearized curves suggest a non-zero asymptotic value for the correlation, i.e. \(\lim_{t\to\infty}C(t,0)\neq 0\), at least if we assume that the time of integration (\(t=1500\)) is long enough to observe the asymptotic behavior. In order to further support this observation we have looked at the two-time correlation \(C(t,t_{w})=C_{tt_{w}}\) for different waiting times \(t_{w}\). The results are shown in Fig. 7, with the radial-reaction "time difference" \((1/\tilde{\mu}(t)-1/\tilde{\mu}(t_{w}))^{-1}\) as abscissa. As observed in Ref. [23], for increasing waiting time \(t_{w}\) the system decorrelates less and less, which once again suggests strong ergodicity breaking.

Figure 5: Reduced energy vs reduced radial reaction. The inset shows the same data rescaled with the power-law exponent \(\alpha_{E}/\alpha_{\mu}\) in order to obtain a linear behavior, which once fitted gives the estimates of the asymptotic energy shown in Fig. 2 (black dots). Dotted lines show the results for the same system with a different local dynamics, as described in section III.5. **(a)** Results for \((2+s)\)-spin models. For \(s>3\) the threshold is evaluated with the 1-FRSB ansatz [39]. **(b)** Results for \((3+s)\)-spin models; all the thresholds are evaluated with the 1-RSB ansatz.

Figure 6: Correlation with the initial condition vs reduced radial reaction. The insets show the same data rescaled with the power-law exponent \(\alpha_{C}/\alpha_{\mu}\) in order to display a linear behaviour. The dotted lines correspond to the same system with a persistent short-time dynamics, as described in section III.5. For \(s>p\), the results suggest an asymptotic memory of the initial configuration, consistent with a strong ergodicity breaking scenario. **(a)** Results for \((2+s)\)-spin models. **(b)** Results for \((3+s)\)-spin models.

Figure 7: Two-time correlations \(C(t,t_{w})\) vs \((1/\tilde{\mu}(t)-1/\tilde{\mu}(t_{w}))^{-1}\) for different waiting times \(t_{w}=0,0.8,12.8,204.8\). For \(s>p\), the data suggest a strong ergodicity breaking scenario, i.e. \(\lim_{t\to\infty}C(t,t_{w})\neq 0\), for all \(t_{w}\). **(a)** Results for \((2+s)\)-spin models. **(b)** Results for \((3+s)\)-spin models.

### Are the results robust against a change of the short-time dynamics?

In this section we investigate to what extent the asymptotic dynamics is influenced by changes in the short-time dynamics. We modify the thermal bath by adding an exponential persistence, i.e. the time-derivative operator is changed into \[\partial_{t}C\longrightarrow\Big{(}\partial_{t}+\int_{-\infty}^{t}dsK_{ts}\partial_{s}\Big{)}C\,\qquad\text{where}\qquad K_{tt^{\prime}}=\gamma\exp(-|t-t^{\prime}|/\tau). \tag{19}\] At finite temperature, in order to preserve the detailed balance condition, the thermal noise is changed accordingly as \[\langle\xi(t)\xi(t^{\prime})\rangle=2T\delta(t-t^{\prime})\longrightarrow 2T\Big{(}\delta(t-t^{\prime})+K_{tt^{\prime}}\Big{)}\, \tag{20}\]
but since we work here at \(T=0\), this change is irrelevant. The parameters \(\tau\) and \(\gamma\) can be tuned to define the strength of the persistence. The resulting equations for correlation and response are \[\partial_{t}C_{tt^{\prime}}= -\mu_{t}C_{tt^{\prime}}+\int_{0}^{t}ds\Big{(}f^{\prime\prime}(C_{ts})R_{ts}-K_{ts}\partial_{s}\Big{)}C_{st^{\prime}}+\int_{0}^{t^{\prime}}ds\Big{(}f^{\prime}(C_{ts})-TK_{ts}\Big{)}R_{t^{\prime}s}\, \tag{21}\] \[\partial_{t}R_{tt^{\prime}}= \delta_{tt^{\prime}}-\mu_{t}R_{tt^{\prime}}+\int_{t^{\prime}}^{t}ds\Big{(}f^{\prime\prime}(C_{ts})R_{ts}-K_{ts}\partial_{s}\Big{)}R_{st^{\prime}}\,\] to be compared with Eqs. (69) and (71) of Ref. [57]. We have integrated and expanded in series Eqs. (21) for \(\gamma=1\) and \(\tau=1\). We found that the exponential persistence gives just a linear rescaling of the characteristic time (for \(t>\tau\)), i.e. given a one-time observable \(O(t)\) of the original GD dynamics, its analog \(O^{pers}(t)\) in the persistent dynamics has the asymptotic behaviour \(O^{pers}(t)=O(t/t_{pers})\) with \(t_{pers}>1\) (which depends on the specific model). This behavior is confirmed by looking at parametric plots where the time has been replaced by the reduced radial reaction \(\tilde{\mu}\). In Fig. 5 and Fig. 6 the dotted lines correspond to the persistent GD dynamics and asymptotically match the original GD dynamics (continuous lines). We conclude that a modification of the short-time dynamics does not seem to have any impact on the asymptotic behavior, suggesting that a closure of the asymptotic DMFT equations may still be possible, despite the lack of weak ergodicity breaking. In other words, while the dynamics remains confined in a region of space that depends on the initial condition, this region is asymptotically explored in a way that does not depend on short-time details, suggesting some kind of effective thermal behavior. In this regard, in order to completely overcome the short-time dynamics, a possible solution would be to study the quasi-equilibrium dynamics introduced in Refs. [58; 59; 60; 61].

### Is there a well defined effective temperature?

A standard way to analyze the aging dynamics [8] is to look at the parametric plot of the integrated response \(\chi(t,t_{w})=\int_{t_{w}}^{t}dsR(t,s)\) versus the correlation \(C(t,t_{w})\). These are shown in Fig. 8 for different models. One can introduce an effective temperature that quantifies the violation of the equilibrium fluctuation-dissipation relation (FDR), given by \[x^{t}[C(t,t_{w})]=-\frac{d\chi(t,t_{w})}{dC(t,t_{w})}. \tag{22}\] In pure \(p\)-spin models the 1-RSB ansatz for the GD dynamics gives, at long times, a unique effective temperature independent of the correlation, \(x=\frac{1}{\chi_{mg}f^{\prime}(1)}-\chi_{mg}\), which is also equal to the derivative of the complexity at the threshold energy \(E_{th}\) [8].

Figure 8: Parametric plot over the waiting time \(t_{w}\) of the rescaled integrated response \(\chi(t,t_{w})/\chi_{mg}\) vs the correlation \(C(t,t_{w})\), for several fixed values of the time \(t=25.6,204.8,1500\). **(a)** Results for \((2+s)\)-spin models. **(b)** Results for \((3+s)\)-spin models. For both classes of models the linear 1-RSB ansatz is not able to describe the asymptotic behaviour (see Fig. 9 for a detailed comparison). The dynamics seems to asymptotically approach a different, unknown ansatz.
In Fig. 8b we see that general mixed \((3+s)\)-spin models do not exhibit a unique effective temperature (as given by a 1-RSB ansatz) but rather a varying temperature \(x^{t}[q]\) that depends on the correlation \(q=C(t,t_{w})\). The same can be said for the FDR of \((2+s)\) models. Moreover, as shown in Appendix B, even more refined "thermodynamic solutions" for the asymptotic behavior (such as the 1-FRSB dynamical ansatz [39]) do not agree with the dynamical observations. In other words, the GD dynamical overlap probability \(P_{\rm GD}^{t}(q)=x_{\rm GD}^{\prime}=-\chi_{\rm GD}^{\prime\prime}\) does not converge to any expected replica ansatz. This is not only because the support of \(P_{\rm GD}^{t}(q)\) does not reach \(q=0\) (due to strong ergodicity breaking), but also because of the actual shape of the solution. This is shown in more detail in Fig. 9, where \(\chi_{\rm GD}(q)\) for different times \(t=25.6,204.8,1500\) is compared with the static ground state (\(\chi_{gs}\)), threshold (\(\chi_{th}\)) and optimal ansatz (\(\chi_{alg}\)). The clearest evidence that \(P_{\rm GD}^{t}(q)\) is not converging to the known \(\chi_{th}\) ansatz is given by \(\chi_{\rm GD}(q)\) for \(0.8<q<1\), for which the curves at different times \(t=25.6,204.8,1500\) seem to have converged to a convex (neither 1-RSB nor 1-FRSB) solution. This gives further evidence that our actual understanding of the asymptotic dynamics is far from being complete. We note that similar results are obtained by looking at the fluctuation-dissipation ratio for the persistent GD dynamics (red dotted line) in Fig. 9, which supports the claim that the asymptotic limit of \(P_{\rm GD}^{t}(q)\) is also independent of the short-time details of the dynamics.

Figure 9: Parametric plot over the waiting time \(t_{w}\) of the rescaled integrated response \(\chi(t,t_{w})/\chi_{mg}\) vs the correlation \(C(t,t_{w})\), for several fixed values of the time \(t=25.6,204.8,1500\), compared with several known asymptotic references: the static (\(\chi_{gs}\)), dynamical 1-RSB (\(\chi_{th}^{(1)}\)) and algorithmic (\(\chi_{alg}\)) ansatz. **(a)** \((2+9)\)-spin model. **(b)** \((3+12)\)-spin model. The GD dynamics with persistence (red dotted line) shows the same asymptotic FDR shape. For \(q\) close to 1 the FDR is convex, which is incompatible with any known thermodynamic calculation (see Appendix B).

Figure 10: Correlation vs rescaled time \((t-t_{w})/t_{w}\) for different waiting times \(t_{w}=25.6,51.2,102.4,204.8,409.6,819.2\), shown with different dashed and dotted lines. The inset shows the same data with the time rescaled with \(\alpha_{C}\). For comparison, the continuous lines show the correlation with the initial configuration \(C(t,0)\). **(a)** \((2+s)\)-spin models. **(b)** \((3+s)\)-spin models.

### What kind of aging do we observe?

As conjectured in Ref. [8] and shown in Ref. [62] (see footnote 2), the dynamics of the pure 3-spin model presents simple aging in the asymptotic regime, i.e. two-time observables such as \(C(t,t_{w})\) depend only on the ratio \((t-t_{w})/t_{w}\). An intermediate sub-aging crossover appears at finite times, only for quenches near the critical temperature \(T\lesssim T_{\rm MCT}\). (The relative scaling can be exactly studied in the case of continuous fullRSB transitions [56].) Because we are considering quenches to \(T=0\), we assume that this crossover behavior is suppressed. We observe (Fig. 10) that both \((2+s)\) and \((3+s)\) models seem to present simple aging in the asymptotic dynamics, at least over the available time scales.
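For concreteness, the FDR of Eq. (22) can be estimated directly from discretized dynamical data. The sketch below is a minimal finite-difference estimate of \(x^{t}[q]\) from parametric arrays \((C(t,t_{w}),\chi(t,t_{w}))\) sampled over \(t_{w}\) at fixed \(t\); the function and array names are placeholders, not the code used for this paper.

```python
import numpy as np

def fdr_from_parametric(C_tw, chi_tw):
    """Finite-difference estimate of x^t[q] = -d(chi)/d(C) along the
    parametric curve (C(t, t_w), chi(t, t_w)) at fixed t, sampled over t_w.
    Inputs are 1D arrays ordered by increasing t_w (so C increases towards 1)."""
    C_tw, chi_tw = np.asarray(C_tw), np.asarray(chi_tw)
    q_mid = 0.5 * (C_tw[1:] + C_tw[:-1])       # midpoint correlations
    x_of_q = -np.diff(chi_tw) / np.diff(C_tw)  # slope of the parametric plot
    return q_mid, x_of_q

# toy check: in equilibrium chi = beta * (1 - C), so x[q] = beta for all q
beta = 2.0
C = np.linspace(0.2, 1.0, 50)
q, x = fdr_from_parametric(C, beta * (1.0 - C))
assert np.allclose(x, beta)
```

A \(q\)-dependent output of this estimator, as found for the mixed models above, is precisely the signature that a single effective temperature does not describe the aging regime.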
Footnote 2: Notice that Ref. [62] presents a numerical integration of the DMFT equations reaching times of order \(10^{7}\). However, their adaptive algorithm is not robust and, in particular, is unstable when used for the zero-temperature GD dynamics, as discussed in Ref. [25].

## IV Conclusions

In this paper we presented a detailed analysis of the gradient descent dynamics starting from a random initial condition, revisiting and extending previous work [8; 9; 25] to a general class of mixed \((p+s)\)-spin glass models. Our main results are the following (Fig. 11). * We confirm that, in all cases, the dynamics converges asymptotically to a marginally stable minimum, such that the support of its density of vibrational modes touches zero. Correspondingly, all quantities converge to their asymptotic limits as power laws, with exponents that we estimate from a series expansion [26]. * We confirm that in pure \(p\)-spin models (\(p=s\)) the energy converges to the threshold energy \(E_{th}\) that separates minima (\(E<E_{th}\)) from saddles (\(E>E_{th}\)). (This is a consequence of the fact that in pure models, marginal states only exist at the threshold level [8; 9].) The threshold level is asymptotically uniformly sampled by the dynamics, leading to weak ergodicity breaking, loss of memory of the initial condition, the emergence of a single effective temperature associated with the slow degrees of freedom, and time-reparametrization invariance, as predicted by the asymptotic solution of Cugliandolo and Kurchan [8]. * For mixed \((p+s)\)-spin models with \(s\gg p\), we obtain quite strong numerical indications against weak ergodicity breaking. The correlation between the initial and final states of the dynamics seems to remain finite, indicating that the dynamics remains confined in a restricted manifold that depends on the initial condition. One should of course keep in mind that our results are limited to finite times, and it is impossible to completely exclude that the dynamics could cross over to a weak ergodicity breaking regime at much larger times. We note, however, that such a crossover would already be a quite non-trivial phenomenon, not captured by current theories of the asymptotic aging dynamics. Furthermore, even if weak ergodicity breaking is restored at very large times, the asymptotic manifold cannot be the 1-RSB threshold associated with the Cugliandolo-Kurchan solution, because this energy goes below the ground-state energy for large enough \(s\). * Moreover, as far as we can integrate, the extrapolated asymptotic energy lies above the energy at which typical minima become marginal. The convergence to this marginal manifold follows a power-law decay \(t^{-\alpha}\), with an exponent \(\alpha\leq 2/3\) that depends on the specific model. * Yet, our results suggest that the large-time asymptotic dynamics is largely independent of the short-time details, suggesting that despite strong ergodicity breaking, the asymptotic manifold is sampled in some effectively thermal way.

Figure 11: Scheme of the gradient descent dynamics from a random initial condition in mixed \((p+s)\)-spin spherical models. The system reaches marginal energies above the energy of dominant marginal minima \(E_{th}\), and it does not lose memory of the initial condition.
* As in the Cugliandolo-Kurchan solution, the effective temperature function \(\chi(q)\) converges to a finite limit at large times, but the asymptotic function does not seem to be described by any known ansatz for the long-time dynamics. * For models with \(s\) close to \(p\), such as the \((3+4)\) model studied in Ref. [25], we find that the strong ergodicity breaking, if present, is very weak. It is difficult to decide, but our feeling is that there is strong ergodicity breaking at any \(s>p\), which means that the claim made in Ref. [25] of the existence of a finite \(T_{\rm onset}\) separating weak and strong ergodicity breaking might be incorrect, i.e. \(T_{\rm onset}=\infty\). In fact, the semi-phenomenological approximation adopted in Ref. [25] to extract the onset temperature makes use of a 1-RSB structure of the aging dynamics (one effective temperature), which we have shown not to be valid for large \(s-p\).

The coherence of our results suggests that the regime of times we can access is close to the asymptotic one, which calls for a rethinking of the asymptotic dynamics in mean-field models. We believe that our results are compatible with several scenarios: * The most likely scenario, in our opinion, is that of strong ergodicity breaking. In that case, one should look for an asymptotic solution with a finite \(\tilde{q}=\lim_{t\to\infty}C(t,0)\). This asymptotic solution would also be characterized, as in the Cugliandolo-Kurchan scheme [24], by a hierarchy of time scales with time-reparametrization invariance [11] and a non-trivial effective function \(x[q]\). First steps in this direction have been taken in Refs. [63; 25], but the analysis is far from complete. * The other option is that weak ergodicity breaking (\(\tilde{q}=0\)) is restored at large times. As pointed out before, this requires a non-trivial crossover, e.g. to a logarithmic time decay [22; 23]. The corresponding asymptotic solution cannot be that of Cugliandolo and Kurchan with a single slow time scale (1-RSB) [8], because the asymptotic energy corresponding to that solution is the 1-RSB threshold, which goes below the ground state for large \(s\). Hence, a different solution should be constructed, probably with multiple slow time scales and, again, a non-trivial \(x[q]\) [63; 24]. We believe that constructing such a solution is an important problem for future work, because it would shed light on problems such as the mean-field dynamics near jamming, and the corrections to mean-field aging in finite-dimensional structural glasses [34].

###### Acknowledgements.

We thank L. Cugliandolo for bringing Ref. [26] to our attention. We also thank A. Altieri, L. Berthier, G. Biroli, P. Charbonneau, L. Cugliandolo, S. Franz, J. Kent-Dobias, J. Kurchan, and F. Ricci-Tersenghi for many useful discussions related to this work. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n. 723955 - GlassUniversality).

## Appendix A Series expansion

In this appendix we explain how to evaluate the solution of the DMFT integro-differential equations by finding a recursive relation for the polynomial coefficients of a short-time series expansion, as first suggested in Ref. [26].

### Equilibrium

We start with the simpler one-dimensional example corresponding to the equilibrium dynamics, \[\partial_{t}C(t)=-C(t)-\beta^{2}\int_{0}^{t}dsf^{\prime}(C(s))\partial_{s}C(t-s)\, \tag{10}\] with \(f(q)=\sum_{p}\alpha_{p}^{2}q^{p}/2\).
Given a Taylor expansion around \(t=0\) of the form \(C(t)=\sum_{k=0}^{\infty}C_{k}t^{k}\), this equation reads \[\sum_{k=0}^{\infty}(k+1)C_{k+1}t^{k}=-\sum_{k=0}^{\infty}C_{k}t^{k}-\beta^{2}\int_{0}^{t}ds\big{(}\sum_{m=0}^{\infty}C_{m}s^{m}\big{)}^{p-1}\big{(}\sum_{l=0}^{\infty}lC_{l}(t-s)^{l-1}\big{)}\, \tag{11}\] where for simplicity we have considered the pure model of degree \(p\). The last term can be rewritten as \[\int_{0}^{t}dsf^{\prime}(C(s))\partial_{s}C(t-s)=\sum_{p}\alpha_{p}^{2}\sum_{k}F_{k}^{p}t^{k}\,\] \[F_{k}^{p} =\frac{p}{2}\sum_{m_{1}+m_{2}+...+m_{p-1}+l=k}C_{m_{1}}C_{m_{2}}...C_{m_{p-1}}lC_{l}\Big{(}\frac{\sum_{q=0}^{l-1}\binom{l-1}{q}(-1)^{q}}{m_{1}+m_{2}+...+m_{p-1}+1}\Big{)} \tag{12}\] \[=\frac{p}{2}\sum_{m}\frac{C_{k-m}}{\binom{k}{m}}\sum_{m_{1}+m_{2}+...+m_{p-1}=m}C_{m_{1}}C_{m_{2}}...C_{m_{p-1}}\,\] where we used the binomial expansion \((t-s)^{l-1}=\sum_{q=0}^{l-1}\binom{l-1}{q}t^{l-1-q}(-s)^{q}\) and the power expansion \[\left(\sum_{m=0}^{\infty}C_{m}s^{m}\right)^{p-1}=\sum_{m_{1}}\sum_{m_{2}}...\sum_{m_{p-1}}C_{m_{1}}C_{m_{2}}...C_{m_{p-1}}s^{m_{1}+m_{2}+...+m_{p-1}}. \tag{13}\] The last sum can also be rewritten in an encapsulated form, which in the \(p=5\) case reads \[S_{m}^{5}\equiv\sum_{m_{123}=0}^{m}C_{m-m_{123}}\sum_{m_{12}=0}^{m_{123}}C_{m_{123}-m_{12}}\sum_{m_{1}=0}^{m_{12}}C_{m_{12}-m_{1}}\, \tag{14}\] where \(m_{12}=m_{1}+m_{2}\), \(m_{123}=m_{12}+m_{3}\), \(m=m_{123}+m_{4}\). Thus the original equation becomes a recursion for the polynomial coefficients, \[C_{k+1}=-\frac{(C_{k}+\beta^{2}\sum_{p}\alpha_{p}^{2}F_{k}^{p})}{k+1}. \tag{15}\] The obtained series has a small radius of convergence in \(t\); thus, to retain as much information as possible, we Padé-approximate it with a rational function of half the degree, as explained in Sec. A.3.

### Out-of-Equilibrium

We now treat the out-of-equilibrium case, in which correlations depend on two times. We first introduce a useful identity. If we need to evaluate the Taylor series of the product \(C(t)=A(t)B(t)\), we have \[C_{k}=\sum_{i=0}^{k}A_{i}B_{k-i}\, \tag{16}\] where \(C(t)=\sum_{i}C_{i}t^{i}\) and similarly for \(A(t)\) and \(B(t)\). Therefore the power expansion of any power \(C^{p}(t)\) of a function can be iteratively deduced from the power expansions of its lower degrees, \(C_{k}\longrightarrow C_{k}^{2}\longrightarrow\cdots\longrightarrow C_{k}^{p}\), where each iteration has a computational cost \(\propto k\). This rule makes it possible to evaluate Taylor series for large values \(p\) of the \(p\)-spin interaction. The same construction can be applied to the product of two-time functions \(C(t,t^{\prime})=A(t,t^{\prime})B(t,t^{\prime})\), giving \[C_{k,l}=\sum_{i=0}^{k}\sum_{j=0}^{l}A_{i,j}B_{k-i,l-j}. \tag{10}\] In the following we will refer to the sum of the single-index degrees as the total degree \(w=k+l\). We now proceed to derive the iterative equations that allow one to obtain the two-time Taylor expansion of \(C(t,t^{\prime})\) and \(R(t,t^{\prime})\) from the out-of-equilibrium Eqs. (16).
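Before expanding the two-time equations, the one-time recursion of Eq. (15) can be illustrated with a short sketch. It is a minimal float-precision version (the actual computation in this paper keeps thousands of digits with GNU MPFR) for a pure \(p\)-spin, using the convolution identity \(\int_{0}^{t}s^{m}(t-s)^{l}ds=t^{m+l+1}\,m!\,l!/(m+l+1)!\) and noting that \(\partial_{s}C(t-s)=-C^{\prime}(t-s)\).

```python
import numpy as np
from math import factorial

def poly_power(c, n):
    """Truncated Taylor coefficients of C(s)^n, by repeated convolution."""
    out = np.zeros_like(c); out[0] = 1.0
    for _ in range(n):
        out = np.convolve(out, c)[:len(c)]
    return out

def equilibrium_series(p=3, beta=1.0, K=60):
    """Taylor coefficients C_k of the equilibrium correlation for a pure
    p-spin with f(q) = q^p / 2, so that f'(q) = (p/2) q^(p-1)."""
    C = np.zeros(K + 1); C[0] = 1.0  # equilibrium initial condition C(0) = 1
    for k in range(K):
        a = 0.5 * p * poly_power(C[:k + 1], p - 1)  # coefficients of f'(C(s))
        b = np.arange(1, k + 1) * C[1:k + 1]        # coefficients of C'(u)
        # coefficient of t^k of Int_0^t f'(C(s)) C'(t-s) ds
        mem = sum(a[m] * b[k - 1 - m] * factorial(m) * factorial(k - 1 - m)
                  / factorial(k) for m in range(k))
        # d_t C = -C + beta^2 * Int f'(C(s)) C'(t-s) ds; the sign flip comes
        # from d_s C(t-s) = -C'(t-s) in the memory kernel of the equation
        C[k + 1] = (-C[k] + beta**2 * mem) / (k + 1)
    return C

coeffs = equilibrium_series()
print(coeffs[:6])  # the coefficient growth reflects the finite radius of convergence
```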
When we expand the integrals in series, two classes of integrals appear: \[\begin{split}\int_{0}^{t^{\prime}}dsf^{\prime}(C_{ts})R_{t^{\prime}s}\,&\int_{0}^{t^{\prime}}dsf^{\prime\prime}(C_{ts})R_{ts}C_{t^{\prime}s}&\longrightarrow&\int_{0}^{t^{\prime}}dsA(t,s)B(t^{\prime},s)\,\\ \int_{t^{\prime}}^{t}dsf^{\prime\prime}(C_{ts})R_{ts}C_{st^{\prime}}\,&\int_{t^{\prime}}^{t}dsf^{\prime\prime}(C_{ts})R_{ts}R_{st^{\prime}}&\longrightarrow&\int_{t^{\prime}}^{t}dsA(t,s)B(s,t^{\prime})\,\end{split} \tag{11}\] where in each two-time function \(F(t_{1},t_{2})\) we always have \(t_{1}>t_{2}\). The first class of integrals can be expanded in series as \[I^{1,AB}(t,t^{\prime})\equiv\int_{0}^{t^{\prime}}dsA(t,s)B(t^{\prime},s)=\int_{0}^{t^{\prime}}ds\sum_{i,j}\sum_{k,l}A_{i,j}t^{i}s^{j}B_{k,l}t^{\prime k}s^{l}=\sum_{i,j,k,l}A_{i,j}B_{k,l}t^{i}t^{\prime k}\,\frac{t^{\prime j+l+1}}{j+l+1}\, \tag{12}\] which gives the coefficients \[I^{1,AB}_{p,q}=\sum_{i=p}\sum_{k+j+l+1=q}\frac{A_{i,j}B_{k,l}}{j+l+1}. \tag{13}\] The second class of integrals is expanded as \[I^{2,AB}(t,t^{\prime})\equiv\int_{t^{\prime}}^{t}dsA(t,s)B(s,t^{\prime})=\int_{t^{\prime}}^{t}ds\sum_{i,j}\sum_{k,l}A_{i,j}t^{i}s^{j}B_{k,l}s^{k}t^{\prime l}=\sum_{i,j,k,l}A_{i,j}B_{k,l}t^{i}t^{\prime l}\frac{\left(t^{j+k+1}-t^{\prime j+k+1}\right)}{j+k+1}\, \tag{14}\] which gives the coefficients \[I^{2,AB}_{p,q}=\sum_{i+j+k+1=p}\sum_{l=q}\frac{A_{i,j}B_{k,l}}{j+k+1}-\sum_{i=p}\sum_{l+j+k+1=q}\frac{A_{i,j}B_{k,l}}{j+k+1}. \tag{15}\] Given these two expansions, we are able to express all the integrals in Eqs. (16) as power series. In the calculation of our series expansion, following Ref. [26], we proceed by increasing the total degree \(w\) in unit steps, as depicted in Fig. 12. We notice that the computational cost of the coefficient of order \(w\) of each integral scales as \(w^{2}\). Finally, for the radial reaction \(\mu(t)=T+I^{1,f^{\prime}R}(t,t)+I^{1,(f^{\prime\prime}R)C}(t,t)\), we obtain the series \[\mu_{q}=\sum_{k+l=q}I^{1,f^{\prime}R}_{kl}+\sum_{k+l=q}I^{1,(f^{\prime\prime}R)C}_{kl}\,\qquad\forall q>0\, \tag{16}\] and \(\mu_{0}=T\). The final coefficient equations are \[(k+1)C_{(k+1)l}=-\sum_{k_{1}+k_{2}=k}\mu_{k_{1}}C_{k_{2}l}+I^{1,f^{\prime}R}_{kl}+I^{1,(f^{\prime\prime}R)C}_{kl}+I^{2,(f^{\prime\prime}R)C}_{kl}\, \tag{17}\] and \[(k+1)R_{(k+1)l}=-\sum_{k_{1}+k_{2}=k}\mu_{k_{1}}R_{k_{2}l}+I^{2,(f^{\prime\prime}R)R}_{kl}. \tag{18}\]

Figure 12: Iteration scheme for the progressive evaluation of the coefficients. Different colors refer to different degrees \(w\). A new diagonal is obtained for \(C\) and \(R\) from Eqs. (17) and (18) (black arrows), and the constraints in Eq. (101) are then implemented (red arrow).

Moreover, we have the constraints \(C(t,t)=1\) and \(R(t^{+},t)=1\) for all \(t\), which in terms of Taylor coefficients give \[\sum_{k+l=w}C_{kl}=0\quad\text{and}\quad\sum_{k+l=w}R_{kl}=0\,\quad\forall w>0. \tag{101}\] These are used to evaluate the terms \(C_{0l}\) and \(R_{0l}\) not given by Eqs. (17) and (18). The algorithm runs in increasing order of the degree \(w=k+l\), as shown in Fig. 12.

### Padé approximation

Once we have the power series in the two times \(t,t^{\prime}\), we wish to study its long-time behavior. However, the series has a small radius of convergence (less than one in both times). This usually happens because the true function (the exact solution of the dynamics) has poles at that distance from the origin in the complex plane (see footnote 3).
In order to get useful results, as suggested in Ref. [26], we proceed using the Padé approximation, a powerful method to extract the information hidden in the series. It consists in rewriting the original Taylor polynomial of degree \(2w\) as a ratio of polynomials of degree \(w\) (or of similar degree), in such a way that the Taylor expansion of this ratio equals the original one. The underlying idea is that the Padé rational function can absorb the poles of the original function, avoiding the divergence that occurs in the original Taylor series.

Footnote 3: It is important to notice that having a Taylor expansion with a given radius of convergence and doing a naive numerical analytic continuation does not bring any benefit, in the sense that it is not possible to circumnavigate a pole with a finite series expansion, i.e. a closure for the series is needed.

We have analysed three main one-time observables4: the energy \(E(t)\), the radial reaction \(\mu(t)\) and the correlation with the initial configuration \(C(t,0)\). For every chosen model in the \((2+s)\) and \((3+s)\) classes, we have evaluated the first \(2w=1200\) orders, and thus \(w=600\) Padé terms in the numerator and denominator,

Footnote 4: We have only considered the one-dimensional Padé computation, while in principle it would be possible to extend Padé computations to the multidimensional case, the so-called Canterbury approximations.

\[E(t)\sim\sum_{k=0}^{2w}e_{k}t^{k}\sim\frac{\sum_{k=0}^{w}e_{k}^{\rm n}t^{k}}{\sum_{k=0}^{w}e_{k}^{\rm d}t^{k}}\, \tag{102}\] where \(\sim\) means that they have the same Taylor expansion at \(t=0\). The coefficients \(e_{k}^{\rm n},e_{k}^{\rm d}\) are evaluated from the \(e_{k}\) by solving a linear system (one matrix inversion) [64]. The Padé approximation is very effective in substantially suppressing the influence of the closest poles, allowing one to reach times \(\tau\) (see Fig. 13) that grow linearly with the number of terms \(2w\) of the original series, roughly as \(\tau\approx w/10\). Yet, the values of \(\tau\) reached in this way are at least one decade smaller than those reached by numerical integration of the equations with a fixed time step \(dt=0.05\). The main advantage of the series analysis is that it allows us to evaluate the power-law exponents of the decay with much greater precision than the integration, as we show in the next subsection. In our analysis we have thus employed a hybrid method, using the series to evaluate the exponents and the integration to evaluate the corresponding asymptotic values. We notice that in order to explore large times, the Padé series needs very large powers of \(t\), up to \(t^{600}\). To obtain meaningful results, the coefficients \(e_{k}\) of the series must thus be evaluated with very high precision. It is not enough to use long-double precision, as the number of digits should scale with the order \(w\). Therefore, when numerically evaluating the series (as described in the previous paragraph) we have kept a precision of 2000 digits for each coefficient. This can be achieved by using dedicated multi-precision floating-point libraries; for this work, we have used the C++ library GNU MPFR.

### Power Law Evaluation

Although the series expansion (even when Padé-resummed) does not allow one to reach long times, it can be used to precisely evaluate the power-law decay of different observables to their asymptotic values.
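As a toy illustration of the Padé resummation just described, and of the exponent estimator introduced just below, consider a function with a known power-law approach to its asymptote. The sketch uses mpmath's `pade` and `diff`; the test function, truncation order, and precision are arbitrary demo choices, not the parameters used in the paper.

```python
from mpmath import mp, mpf, binomial, pade, polyval, diff

mp.dps = 200  # high working precision, in the spirit of the MPFR setup

# toy observable with known decay: O(t) = O_inf + (1+t)^(-alpha), radius 1
alpha, O_inf, N = mpf(3) / 4, mpf(-1), 80
a = [O_inf + 1 if k == 0 else binomial(-alpha, k) for k in range(N + 1)]

p, q = pade(a, N // 2, N // 2)                 # [40/40] Padé resummation
O = lambda t: polyval(p[::-1], t) / polyval(q[::-1], t)

# exponent estimator -(1 + t O''(t)/O'(t)), as defined in the next subsection
alpha_t = lambda t: -(1 + t * diff(O, t, 2) / diff(O, t, 1))
for t in [5, 20, 50]:
    print(t, alpha_t(mpf(t)))  # slowly approaches the true exponent 3/4
```

The evaluation points lie far beyond the unit radius of convergence of the raw Taylor series, which is exactly what the Padé transform buys.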
Following Refs. [26; 65] we define \[\alpha_{E}(t)=-\left(1+\frac{t\partial_{t}^{2}E(t)}{\partial_{t}E(t)}\right)\, \tag{103}\] and by definition \(\lim_{t\to\infty}\alpha_{E}(t)=\alpha_{E}\) as defined in Eq. (18). Analogous definitions hold for \(\alpha_{\mu}(t)\) and \(\alpha_{C}(t)\). Given the Taylor series of \(E(t)\), we evaluate \(\alpha_{E}(t)\) and Padé-transform it. The results for the energy of \((3+s)\)-models are shown in Fig. 13. Note that for a precise analysis of the exponents, it is not convenient to directly analyze the Padé transform of the original series, since by construction the Padé approximant is a rational function that can only asymptotically behave as an integer power law, while we want to study non-integer power-law decays. Finally, we fit each \(\alpha(t)\) with an exponential function (dashed lines in Fig. 13) and extract the asymptotic value, which corresponds to the power-law exponent introduced in Eq. (18). In table 2 we report the values of the exponents in the \((2+s)\) and \((3+s)\) models for the values of \(\lambda\) given in table 1. We can check the accuracy of the extrapolation on the pure 2-spin model, for which exact analytical results are available [40]. The energy exponent is known to be \(\alpha_{E}=\alpha_{\mu}=1\), and we obtain \(0.999\pm 0.001\), while the correlation is known to have exponent \(\alpha_{C}=3/4\) and we obtain \(0.748\pm 0.001\). For the pure 3-spin model the exponents seem to agree with the values \(\alpha_{E}=\alpha_{\mu}=2/3\) and \(\alpha_{C}=3/8\). Mixed models have variable exponents, smaller than the pure ones, as could be expected because the interplay between different interactions slows down the dynamics. Since our analysis of the exponents is carried out at not very large times (smaller than \(100\)), we cannot exclude that other regimes could emerge at later times. Yet, the monotonic behaviour of the \(\alpha(t)\) functions for \(t>10\), for any \(s\), seems to support the non-universality of the exponents in the relaxation of mixed \(p\)-spin models.

## Appendix B Thermodynamic solution for the dynamics

In order to understand the dynamics in terms of a static (thermodynamic) calculation, we start from the free energy of the mixed \(p\)-spin model evaluated on a generic Parisi hierarchical ansatz for the replica overlap matrix \(Q_{ab}\). This is conveniently written in functional form in terms of the susceptibility \(\tilde{\chi}(q)=\chi(q)/\beta\), with the additional boundary conditions \(\tilde{\chi}(1)=0\) and \(\tilde{\chi}^{\prime}(1)=-1\) [39; 49], \[F[\chi] =E[\chi]-TS[\chi]=\frac{1}{2}\int_{0}^{1}dq\;\big{(}f^{\prime\prime}(q)\beta\tilde{\chi}(q)+(\beta\tilde{\chi}(q))^{-1}\big{)}=\frac{1}{2}\int_{0}^{1}dq\;f^{\prime\prime}(q)^{1/2}\big{(}\hat{\chi}(q)+\hat{\chi}(q)^{-1}\big{)}\,\] \[E[\chi] =\frac{\partial(\beta F)}{\partial\beta}=\int_{0}^{1}dq\;f^{\prime\prime}(q)\beta\tilde{\chi}(q)=\int_{0}^{1}dq\;f^{\prime\prime}(q)^{1/2}\hat{\chi}(q)\, \tag{110}\] \[S[\chi] =-\frac{\partial F}{\partial T}=\frac{1}{2}\int_{0}^{1}dq\;\big{(}f^{\prime\prime}(q)\beta^{2}\tilde{\chi}(q)-(\tilde{\chi}(q))^{-1}\big{)}\.\] Because the boundary conditions on \(\tilde{\chi}(q)\) do not depend on \(\beta\), the thermal derivatives can be taken only over the explicit temperature dependence, while the implicit dependence on \(\tilde{\chi}\) does not contribute because \(\partial F/\partial\tilde{\chi}=0\).

Figure 13: **(a)** Padé approximation of the energy \(E(t)\) in the \((3+11)\)-spin model.
The inferred asymptotic energy \(E_{\rm fit\mu}\) is subtracted to show the power-law decay. Different orders \(w\) (from \(7\) to \(598\), \(\approx 1200/2^{k}\) with \(k\) from \(1\) to \(7\)) of the Padé approximation are shown in different colors. These are compared with the solution obtained by integrating the equations with a finite step size \(dt=0.05\) (red points). The Taylor series has a radius of convergence \(<0.3\). We observe that the order of the Padé approximation is directly proportional to the time \(\tau\) below which convergence is observed (equispaced colored lines on the log scale). Because the computational complexity of the series and of its Padé approximation scales as \(w^{3}\), the computational complexities of the integration and of the series both scale as \(\tau^{3}\). **(b)** Time dependence of \(\alpha_{E}(t)\) in \((3+s)\)-spin models, for different \(s\). The dashed lines represent exponential fits in the range \((30,50)\). From these fits the asymptotic values corresponding to the power-law decays are extracted.

Then, we can change variables to \(\chi(q)=\beta\tilde{\chi}(q)\), with boundary conditions \(\chi(1)=0\) and \(\chi^{\prime}(1)=-\beta\), which is an implicit statement of FDT at equilibrium. We have also introduced another rescaled function \(\hat{\chi}(q)=\chi(q)f^{\prime\prime}(q)^{1/2}\), which is sometimes more convenient. The function \(\chi(q)\) is in one-to-one correspondence with the Parisi matrix \(Q_{ab}\) (corresponding to its eigenvalues). For example, considering the piecewise linear function \[\chi(q)=\chi_{0},\;q\in[0,q_{0}];\quad\chi_{1}+m(q_{1}-q),\;q\in[q_{0},q_{1}];\quad(1-q),\;q\in[q_{1},1]\, \tag{111}\] gives back the 1-RSB free energy, Eq. (25) of Ref. [39]. The first term in Eq. (110) is the energy \(E[\chi]=\int_{0}^{1}dqf^{\prime\prime}(q)\chi(q)\), and it coincides with the dynamical definition in Eq. (17) if we interpret \(\chi(q)\) as the integrated response. The second term \(S[\chi]\) corresponds instead to the entropic contribution, roughly given by the logarithm of the volume of a sphere of radius corresponding to the overlap \(q\). This second term does not have a clear counterpart in the dynamics, and we believe it to be responsible for the discrepancy between the static calculation and the asymptotic limit of the dynamics. Moreover, \(x(q)=-\partial_{q}\chi(q)\) is the fluctuation-dissipation ratio and \(P(q)\propto\partial_{q}x(q)=-\partial_{q}^{2}\chi(q)\) is the probability of finding two states at overlap \(q\). We see that in order to have a positive probability, the second derivative of \(\chi(q)\) must be negative (convex) and its first derivative negative (monotonic). Thus, in the functional space of all regular \(\chi(q)\), only convex and monotonic ones must be considered when minimizing Eq. (110). In the zero-temperature limit (\(\beta\to\infty\)) the boundary conditions are trivially satisfied by a jump from \(\chi(1)=0\) to a finite value \(\chi(1^{-})\). This jump does not contribute to Eq. (110) and can thus be discarded. The ground state corresponds to the optimum among all convex solutions. There exist, however, other possible solutions, corresponding to non-optimal states, that can be metastable, i.e. have a higher free energy while being locally stable. An important non-thermodynamic solution is obtained by optimizing Eq.
(110) without any constraint on the convexity of \(\chi(q)\), \[2\frac{\delta F[\chi(q)]}{\delta\chi(q)}=\int_{0}^{1}dq\;\big{(}f^{\prime\prime}(q)-\chi(q)^{-2}\big{)}=\int_{0}^{1}dq\;f^{\prime\prime}(q)\big{(}1-\hat{\chi}(q)^{-2}\big{)}=0\, \tag{112}\] which is identically satisfied by the so-called (non-strictly-convex) algorithmic solution \(\chi_{alg}(q)=f^{\prime\prime}(q)^{-1/2}\), or equivalently \(\hat{\chi}_{alg}(q)=1\). Note that \(\partial_{q}\chi_{alg}(q)<0\) for \(q\in[0,1]\), because \(f(q)\) belongs to the class of polynomials with positive coefficients. Its corresponding energy is exactly the algorithmic energy \(E_{alg}\) defined in Eq. (13). All the other solutions (convex or not) have a higher free energy, but they can have a smaller energy, as is the case for the ground-state solution. We now wish to find a convex solution that corresponds to what is observed in the GD dynamics. The first observation is that the GD dynamics is asymptotically marginal, i.e. \(\chi(1^{-})=\chi_{mg}=f^{\prime\prime}(1)^{-1/2}\). This is our additional constraint in the search for a convex solution. The classical solution is the so-called 1-RSB dynamical solution derived by Cugliandolo and Kurchan [8]. This solution can be found in the static convex-minimization scheme by following the Monasson construction [66], which consists in (i) postulating a linear solution (thus quasi-convex and monotonic) \(\chi^{(1)}(q)=\chi+x(1-q)\), (ii) inserting it in Eq. (110), which gives \[2F[\chi^{(1)}]=f^{\prime}(1)\chi+xf(1)+\frac{1}{x}\log\left(\frac{\chi+x}{\chi}\right)\, \tag{113}\] (iii) extremizing it with respect to \(\chi\) at fixed \(x\) (the effective temperature), obtaining \(x^{*}(\chi)=\frac{1}{\chi f^{\prime}(1)}-\chi\), and (iv) imposing the marginal condition \(\chi=\chi_{mg}\), thus obtaining the energy of the marginal state \(E[\chi^{(1)}]=f^{\prime}(1)\chi_{mg}+x^{*}(\chi_{mg})f(1)\), which is indeed the same as Eq. (12). In this solution, \(x^{*}(\chi)=\partial\Sigma(\chi)/\partial E(\chi)\) is the temperature associated with the complexity of metastable states of given linear susceptibility \(\chi\). Notice that minimizing Eq. (113) with respect to both \(\chi\) and \(x\) gives the 1-RSB ground-state solution. In the case of a pure \(p\)-spin model with \(p>2\), this is the only possible solution, since the corresponding \(\chi_{alg}(q)\) is non-convex everywhere. Hence (as argued in Ref. [41]) there cannot be any (meta)stable solution with more than 1-RSB. The only possibility of having more complicated solutions comes from the presence of convex sectors in the algorithmic solution \(\chi_{alg}(q)\), i.e. the existence of some \(q\in[0,1]\) such that \(\partial_{q}^{2}\chi_{alg}(q)<0\), or equivalently \[3f^{\prime\prime\prime}(q)^{2}-2f^{\prime\prime\prime\prime}(q)f^{\prime\prime}(q)<0. \tag{114}\] In the case of the \((2+s)\) models there always exists such a convex sector near \(q=0\). Instead, in the selected \((3+s)\) models a convex sector develops in the middle of the \(q\)-interval \([0,1]\) only for large enough \(s\). Whenever there is a convex sector, it may be possible (also considering the monotonicity constraint) to build alternative solutions to the marginal 1-RSB one. Let us now consider another possible marginal ansatz, namely the 1F "dynamical" solution [39], which is a possible solution in \((2+s)\) models.
This is defined as a collage of a 1-RSB and a fullRSB solution, \(\chi^{(1F)}(q)=\chi^{alg}_{[0,\bar{q}]}(q)+\chi^{(1)}_{[\bar{q},1]}(q)\), where the subscripts indicate the interval of \(q\) on which each piece is defined. Inserting this ansatz in Eq. (110), we obtain \[2F[\chi^{(1F)}]=\int_{0}^{\tilde{q}}dq\ f^{\prime\prime}(q)^{1/2}-\tilde{\chi}f^{\prime}(\tilde{q})+\chi f^{\prime}(1)+x(f(1)-f(\tilde{q}))+\int_{0}^{\tilde{q}}dq\ f^{\prime\prime}(q)^{1/2}+\frac{1}{x}\log\left(\frac{\tilde{\chi}}{\chi}\right)\, \tag{16}\] where \(\tilde{\chi}=f^{\prime\prime}(\tilde{q})^{-1/2}\) and \(\chi=f^{\prime\prime}(\tilde{q})^{-1/2}+x(\tilde{q}-1)\), hence the linear part is given by \(\chi^{(1)}_{[\tilde{q},1]}(q)=\tilde{\chi}-x(q-\tilde{q})\). Therefore \(F[\chi^{(1F)}]\) is a function of the two parameters \(\tilde{q}\) and \(x\). Minimizing over both of them gives the 1F ground state. Instead, in order to build the metastable "dynamical" 1F solution, we follow the Monasson construction described above. We minimize with respect to \(\tilde{q}\) while keeping the effective temperature \(x\) fixed, thus obtaining5

Footnote 5: A second solution \(x^{*}(\tilde{q})=\frac{f^{(3)}(\tilde{q})}{2f^{\prime\prime}(\tilde{q})^{3/2}}\equiv\chi^{\prime}_{alg}(\tilde{q})\), which corresponds to \(\chi^{(1)}\) being tangent to the algorithmic solution \(\chi_{alg}\), also appears, but it is locally unstable.

\[x^{*}(\tilde{q})=\frac{\frac{1}{1-\tilde{q}}-\frac{f^{\prime\prime}(\tilde{q})}{f^{\prime}(1)-f^{\prime}(\tilde{q})}}{\sqrt{f^{\prime\prime}(\tilde{q})}}. \tag{17}\]

Figure 14: Marginal 1F solution in the (2+9) model. **(a)** Free energy \(F[\chi^{(1F)}]\) of the 1F solution vs the point of contact \(\tilde{q}\). The ground state corresponds to the global minimum \(\tilde{q}_{gs}\). The inset shows the relative complexity \(\Sigma=(1/x^{2})\partial_{x}F[\chi^{(1F)}]\). **(b)** \(\chi/\chi_{mg}\) as a function of \(q\) for the marginal \(\tilde{q}_{th}\) (red line) and the ground state \(\tilde{q}_{gs}\) (blue line). The black line shows the algorithmic solution \(\chi_{alg}\), with the non-convex sector shown as a dashed line (more visible in panel c). **(c)** \(\hat{\chi}=\chi/\chi_{alg}\) as a function of \(q\). Same curves as in panel b.

Figure 15: **(a)** Asymptotic energy reached by the gradient descent (GD) dynamics from random initial conditions, compared to the ground-state energy \(E_{gs}\), the threshold energies \(E_{th}^{(1)}\), \(E_{th}^{(1F)}\) and the algorithmic energy \(E_{alg}\). Same plot as Fig. 2a, but with the threshold energy evaluated with the 1F marginal solution for \(s>3\). **(b)** FDR for the \((2+9)\) model as in Fig. 9a, but compared with the 1F marginal solution, i.e. parametric plot over the waiting time \(t_{w}\) of the rescaled integrated response \(\chi(t,t_{w})/\chi_{mg}\) vs the correlation \(C(t,t_{w})\), for several fixed values of the time \(t=25.6,204.8,1500\), compared with the dynamical ansatz (\(\chi_{th}^{(1F)}\)).

Substituting it back in Eq. (16), we obtain \(F[\chi^{(1F)}]\) as a function of \(\tilde{q}\), as shown in Fig. 14a.
Finally, imposing the marginality condition \(\chi=\chi_{mg}=f^{\prime\prime}(1)^{-1/2}\) gives the equation \(f^{\prime\prime}(\tilde{q})^{-1/2}+x^{*}(\tilde{q})(\tilde{q}-1)=f^{\prime\prime}(1)^{-1/2}\), which fixes the threshold overlap \[\tilde{q}_{th}\quad\text{s.t.}\quad\frac{(\tilde{q}-1)\sqrt{f^{\prime\prime}(\tilde{q})}}{f^{\prime}(\tilde{q})-f^{\prime}(1)}=\frac{1}{\sqrt{f^{\prime\prime}(1)}}\, \tag{101}\] where the term threshold is used in analogy with the Cugliandolo-Kurchan picture. In fact, at \(\tilde{q}_{th}\) we have the maximal complexity for typical minima. The corresponding threshold energy is \[E_{th}^{(1F)}\equiv E[\chi^{(1F)}]=\int_{0}^{\tilde{q}_{th}}dq\;f^{\prime\prime}(q)^{1/2}-f^{\prime\prime}(\tilde{q}_{th})^{-1/2}f^{\prime}(\tilde{q}_{th})+f^{\prime\prime}(1)^{-1/2}f^{\prime}(1)+x^{*}(\tilde{q}_{th})(f(1)-f(\tilde{q}_{th})). \tag{102}\] In Fig. 14a, the free energy of the dynamical solution \(F[\chi^{(1F)}]\) is plotted as a function of \(\tilde{q}\) for the \((2+9)\) model. It has a local minimum at the ground-state solution \(\tilde{q}_{gs}\) (blue point), which corresponds to a vanishing complexity (see inset). Instead, at \(\tilde{q}_{th}\) the solution has maximal complexity (higher-energy solutions are unstable). The corresponding shape of \(\chi(q)\) for both \(\tilde{q}_{gs}\) (blue) and \(\tilde{q}_{th}\) (red) is shown in Fig. 14b, and with the rescaled \(\hat{\chi}(q)\) in Fig. 14c. Looking at Fig. 14c, we note that (meta)stable solutions must lie both above and below \(\chi_{alg}\), and the regions must "compensate", as in the usual Maxwell construction for first-order transitions. In Fig. 15a the 1F energy is compared with the asymptotic GD energy for all \((2+s)\) models, while in Fig. 15b we compare the shape of the 1F solution with the GD results. It is evident that this thermodynamic solution does not agree with the GD one. If we now consider \((3+s)\) models, we find that \(\chi_{alg}(q)\) has two non-convex sectors, one near \(q=0\) and the other near \(q=1\), which must be replaced by linear regions to satisfy the convexity requirement. We would then like to consider a solution of the 1F1 kind as an alternative to the standard 1-RSB solution, i.e. a collage of linear+full+linear pieces. We found that such a solution is not locally stable (see Fig. 16); the full part vanishes and we are left with either a standard 1-RSB solution or at most a 2-RSB one. We will not reproduce here all the steps of the calculation, but additional comments on the convex minimization of Eq. (110) can be found in section 2.1.7 of Ref. [49]. We conclude that minimizing the free energy in Eq. (110) in the space of monotonic and convex \(\chi(q)\), with the additional constraint of marginality, is not consistent with the solution we found from the GD dynamics (see Fig. 15); in particular, the \(\chi_{\rm GD}(q)\) obtained from the GD dynamics has a shape near \(q=1\) that is not linear but convex, see Fig. 9 and Fig. 15b. Such a convex shape is not achievable by minimizing the functional in Eq. (110). If we still want to find a static (or geometric) description of the asymptotic non-equilibrium dynamics, one possibility would be to modify the entropic part of Eq. (110). How to do that remains, however, an open problem. A possible suggestion could come from exploring the quasi-equilibrium dynamics [58; 61].

Figure 16: Attempt to build a marginal 1F1 solution in the (3+12) model.
**(a)** Free energy \(F[\chi^{(1F1)}]\) of the 1F1 solution vs the point of contact \(\tilde{q}\) of the left linear branch. \(\tilde{q}_{th}\) indicates the point of contact of the right linear branch. Any point of contact of the left linear branch turns out to be unstable, so the fullRSB continuous region shrinks and eventually vanishes. The orange point indicates the solution plotted in panels b and c. **(b)** \(\chi/\chi_{mg}\) as a function of \(q\) for a 1F1 unstable solution. The right branch (red line) is stable, while the left branch (orange line) is unstable. The green line shows the stable marginal 1-RSB solution. **(c)** \(\hat{\chi}=\chi/\chi^{alg}\) as a function of \(q\). Same curves as in panel b.
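Continuing the numerical sketch above, the threshold overlap \(\tilde{q}_{th}\) can be located from the marginality condition, Eq. (18), with a bracketed root search, and the threshold energy then follows from Eq. (19). The same assumed \(f(q)\) is used, and the bracket below was chosen by inspecting the sign of the residual.

```python
# Sketch: solve Eq. (18) for q_th, then evaluate E_th^(1F) from Eq. (19).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

s = 9
f  = lambda q: 0.5 * (q**2 + q**s)          # same assumed mixing function
f1 = lambda q: q + 0.5 * s * q**(s - 1)
f2 = lambda q: 1.0 + 0.5 * s * (s - 1) * q**(s - 2)
x_star = lambda qt: (1/(1-qt) - f2(qt)/(f1(1)-f1(qt))) / np.sqrt(f2(qt))

def marginality(qt):
    # Eq. (18), written as residual g(qt) = 0
    return (qt - 1) * np.sqrt(f2(qt)) / (f1(qt) - f1(1)) - f2(1) ** -0.5

q_th = brentq(marginality, 0.01, 0.9)        # bracket chosen by sign inspection
E_th = (quad(lambda q: np.sqrt(f2(q)), 0, q_th)[0]
        - f2(q_th) ** -0.5 * f1(q_th) + f2(1) ** -0.5 * f1(1)
        + x_star(q_th) * (f(1) - f(q_th)))   # Eq. (19)
print(q_th, E_th)
```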
2309.12479
Human Following in Mobile Platforms with Person Re-Identification
Human following is a crucial feature of human-robot interaction, yet it poses numerous challenges to mobile agents in real-world scenarios. Some major hurdles are that the target person may be in a crowd, obstructed by others, or facing away from the agent. To tackle these challenges, we present a novel person re-identification module composed of three parts: a 360-degree visual registration, a neural-based person re-identification using human faces and torsos, and a motion tracker that records and predicts the target person's future position. Our human-following system also addresses other challenges, including identifying fast-moving targets with low latency, searching for targets that move out of the camera's sight, collision avoidance, and adaptively choosing different following mechanisms based on the distance between the target person and the mobile agent. Extensive experiments show that our proposed person re-identification module significantly enhances the human-following feature compared to other baseline variants.
Mario Srouji, Yao-Hung Hubert Tsai, Hugues Thomas, Jian Zhang
2023-09-21T20:50:55Z
http://arxiv.org/abs/2309.12479v1
# Human Following in Mobile Platforms with Person Re-Identification ###### Abstract Human following is a crucial feature of human-robot interaction, yet it poses numerous challenges to mobile agents in real-world scenarios. Some major hurdles are that the target person may be in a crowd, obstructed by others, or facing away from the agent. To tackle these challenges, we present a novel person re-identification module composed of three parts: a 360-degree visual registration, a neural-based person re-identification using human faces and torsos, and a motion tracker that records and predicts the target person's future position. Our human-following system also addresses other challenges, including identifying fast-moving targets with low latency, searching for targets that move out of the camera's sight, collision avoidance, and adaptively choosing different following mechanisms based on the distance between the target person and the mobile agent. Extensive experiments show that our proposed person re-identification module significantly enhances the human-following feature compared to other baseline variants. ## I Introduction Recent advancements in artificial intelligence have made it possible for humans to cooperate with mobile agents. This cooperation includes autonomous delivery platforms, service-based autonomous agents taking care of elders in hospitals, household agents cleaning floors, and telepresence agents creating new communication methods. These robots cooperate with humans through various types of interactions, and we argue that the human following feature is one of the most crucial. The objective of this paper is to design a human following system that meets the following requirements: 1) the agent can follow a targeted person, even if they are moving quickly or away from the agent, 2) the agent can search for the person if they move out of the camera's sight, 3) the agent can avoid obstacles while following the target, and 4) the agent can still identify the target even when they are in a crowd. Our human following approach is able to address all of the above requirements, and the core is a novel person re-identification module. Our person re-identification module consists of three main components. First, we introduce a 360-degree registration process to capture different angles of the target, which is an improvement over standard registration processes in prior literature [5, 4, 1] that only take a targeted person's front and back appearance into account. This process greatly helps us re-identify the target even when the target is side-facing the camera. Second, our method adopts different identification models for different body parts (face and torso) to re-identify the targeted person, unlike prior literature [17, 7, 8] that use the entire body's appearance. This approach combines the best of both worlds: primarily using faces for re-identification, since faces carry the most distinctive feature among body parts [28]; secondarily using torso for re-identification when faces are occluded or not present in the camera's sight. Third, we consider a motion tracker (e.g., Kalman filter) [3] to predict the target's future position, which helps to improve tracking when the target is occluded or far away. In addition, we show that the motion tracker can help us reduce the latency of running the person re-identification module, allowing our mobile agent to still track the target when they are moving quickly. 
Our human-following system also consists of several other modules, including collision avoidance with local planning and path planning, a searching algorithm that locates the targeted person when out of the camera's sight, and a following mechanism based on an RGBD camera and a fish-eye camera. By integrating all of these modules, our system controls the mobile agent to follow the targeted person and keep them in the camera's line of sight, while also preventing collisions with obstacles. It is important to note that our system runs directly on the agent's mobile device for on-device mobile computation. This is different from prior works [22, 25, 6, 26, 10, 5, 8, 4, 1] that run human following systems on a server and send control commands from the server to the mobile agent. In summary, we present a novel human-following system that incorporates a person re-identification module. We conducted extensive experiments and user studies to demonstrate that our system outperforms other baseline approaches in two different conditions: when a targeted person is walking to random markers in an environment and when a targeted person is following a course. The environments contain obstacles, and there are always two people in the environment: one is the target person, and the other is an interferer. Our system demonstrates strong performance in various metrics, including the average speed of the agent following the target person, the average following distance between the agent and the target person, the average distance to obstacles, the number of times the agent loses the target person, and the number of times the agent follows the wrong person. ## II Related Work Earlier human following techniques [22, 25, 6, 26, 30, 10] can be understood as performing detection and tracking for humans. Recent approaches [5, 8, 4, 1] consider person re-identification as an additional module on top of detection and tracking. In the following, we first discuss the approaches without person re-identification, and then we discuss the approaches with person re-identification. As some of the earliest attempts, Nagumo and Ohya [20] asked the targeted person to carry LED lights for the mobile agent to detect and track. Schlegal _et al._[21] proposed to use the human's contour and color histogram as the human following signal. Hirai and Mizoguchi [10] used the human back and shoulder as the human following signal, and Hu _et al._[16] used leg appearance as the human following signal. Instead of using only cameras on the mobile agent, Marioka _et al._[19] considered external cameras to improve detection and tracking. Nonetheless, relying on external cameras lowers the practical applicability of the human following feature. Other than visual features, Han and Jin [9] explored the usage of the audio signal for human following. Since these approaches do not consider person re-identification, their human following feature can easily fail when the targeted person 1) is occluded or 2) intersects with another individual. For the approaches with person re-identification ability, Gupta _et al._[8] and Gross _et al._[7] presented the use of template matching to identify the targeted person. Koide _et al._[17] used the edges, color, and texture of the targeted person's clothes as the identification features. Nonetheless, these approaches use pre-deep-learning vision features such as SURF [2], which have since been shown to be less effective [18].
To address this concern, the works [5, 4, 1] presented the use of deep-learning features as identification features. Our approach is most similar to these works, yet we consider 1) a different registration process (360-degree registration), 2) person identification models by multiple parts (faces and torsos), and 3) the integration with a motion tracker. ## III Proposed System In this section, we will first describe the person re-identification module, and then we will discuss our human following system with the remaining modules and algorithms. ### _Person Tracking and Re-Identification Pipeline_ We propose a person tracking and re-identification pipeline that includes three components: a 360-degree registration process, a body detection and tracking pipeline, and a person identification model that uses multiple body parts (face and torso). We depict this pipeline in Figure 1. **Body detection model.** For every frame, our system first runs a body and pose detector that provides the body bounding boxes and the poses of all humans in the image. We leverage the open source software provided by [33]1, which provides good detection performance.
Footnote 1: https://github.com/xingyizhou/CenterNet
**Motion Tracker.** We use a standard open-source Kalman filter motion tracker [3][27]2, which tracks the position of the detected body bounding boxes. Each tracked bounding box is assigned a consistent ID across frames, allowing tracking of the target even with noisy detections or when the target is occasionally not detected for a few frames.
Footnote 2: https://github.com/algewley/sort
**Face and Torso Identification Models.** To distinguish the targeted individual from others, we utilize three additional neural-based machine learning models: a face detector that provides face bounding boxes in the current frame; a face embedding model that provides a face embedding on a cropped face bounding box; and a torso embedding model that provides a torso embedding on a cropped torso bounding box. We demonstrate the functionality of the face and torso identification models in Figure 2. First, given the initially detected body poses, we estimate the person's face and torso bounding boxes. We draw a square bounding box on the face with its width and height being the distance between the left and right ears. Then, we draw a rectangular bounding box on the torso with its width and height covering the left and right shoulders and hips. However, we have observed that the pose-induced face bounding boxes may not always be accurate, especially when the person is not facing the camera. In such cases, the pose-induced face bounding boxes may be falsely detected. On the other hand, the pose-induced torso bounding boxes are often accurate. Then, we obtain better face bounding boxes for all individuals using the unofficial open-source implementation3 of CenterFace [29]. Note that if an individual is not facing the camera, their face may not be detected, resulting in a lower number of face bounding boxes than body bounding boxes. These detected face bounding boxes are matched with the pose-induced face bounding boxes based on their intersection over union (IoU) values. If the corresponding IoU value is greater than \(0.75\), we assign the detected face bounding boxes (from the face detection model) to a person.
Footnote 3: https://github.com/chenjun2hao/CenterFace_pytorch
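To make the matching step concrete, here is a minimal sketch of the IoU computation and the 0.75-threshold assignment described above. The box format and function names are illustrative assumptions, not taken from the released code.

```python
# Boxes are (x1, y1, x2, y2) in pixel coordinates.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def assign_faces(pose_face_boxes, detected_face_boxes, thr=0.75):
    """Map person index -> best detected face box, or None if no match > thr."""
    assigned = {}
    for i, pose_box in enumerate(pose_face_boxes):
        best = max(detected_face_boxes, key=lambda d: iou(pose_box, d),
                   default=None)
        assigned[i] = best if best is not None and iou(pose_box, best) > thr else None
    return assigned
```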
Eventually, we produce the face and torso embeddings by passing the face and torso bounding boxes to two different models: FaceNet [23] and Object-reID [32]. Again, we use open-source repositories for reproducibility.4 5 Note that a person may not have a face embedding if their face is not present in the image. In summary, our models for identifying faces and torsos will provide the bounding box for each person, as well as the torso embedding. Additionally, the face embedding may be provided as an optional feature.
Footnote 4: https://github.com/timesler/facenet-pytorch
Footnote 5: https://github.com/layumi/Person_reID_baseline_pytorch
**360 Registration Process.** Initially, we choose a target to follow, and collect its face and torso embeddings using a 360-degree registration process as shown in Figure 3. The targeted person is asked to turn around in front of our mobile agent. The process usually takes about 20 seconds to complete, during which our models process hundreds of images. We then randomly select 100 face embeddings and 100 torso embeddings to form the feature bank. **Re-identification module.** At the initial steps, the tracker is not yet aware of which ID corresponds to the target individual. The tracker might also lose the target due to occlusions or the person going out of sight. In those cases, the re-identification (re-id) module is called. It compares the target's embeddings stored in the feature bank with the face and torso embeddings of all the detected individuals in the scene. To perform the comparison, we initially compute the average of the embeddings within the bank. Then, we measure the cosine similarity between the individual's face and torso embeddings and the calculated average bank embeddings. A similarity score is computed as the maximum value between the face and the torso cosine similarities. If the highest score is above a fixed threshold (\(sim>0.8\)), we establish the corresponding person as the target. Otherwise, our following system starts its searching behavior, as described in the next section.
Fig. 1: We register a target person using our 360-degree registration module. Our tracking and re-identification pipeline is then able to find the targeted person and track them among other individuals. Whenever the target is lost, our re-id module comes into action. It detects the torso and face (if available) of each detected person and computes corresponding embeddings. The target is identified by comparing these embeddings to the saved feature bank. If the target cannot be found, the robot enters search mode.
Fig. 2: Left: when the human is facing the camera. Right: when the human is not facing the camera. When a face is detected, our face and torso identification model returns 1) the human bounding box, 2) the face embedding, and 3) the torso embedding. When no face is detected, it returns only 1) the human bounding box and 2) the torso embedding.
Fig. 3: 360-degree and standard registration process for registering a targeted person's face and torso embeddings. The 360-degree registration process captures the embeddings from different angles of the target, while the standard registration process captures only the front-facing and back-facing angles.
On our mobile platform, we can run the person re-id module at 8 fps.
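A minimal sketch of this re-identification decision follows, assuming embeddings are NumPy vectors. The averaging, max-of-face/torso scoring, and the 0.8 threshold follow the description above; names and data shapes are illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def reidentify(people, face_bank, torso_bank, thr=0.8):
    """people: list of dicts with a 'torso' embedding and an optional 'face'.
    Returns the index of the target person, or None (-> search mode)."""
    face_avg = np.mean(face_bank, axis=0)    # bank of ~100 registered embeddings
    torso_avg = np.mean(torso_bank, axis=0)
    scores = []
    for p in people:
        s_torso = cosine(p['torso'], torso_avg)
        s_face = cosine(p['face'], face_avg) if p.get('face') is not None else -1.0
        scores.append(max(s_face, s_torso))  # best of face/torso similarity
    best = int(np.argmax(scores)) if scores else None
    return best if best is not None and scores[best] > thr else None
```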
Running the re-id module at only 8 fps makes it hard to follow fast-moving targets. This is why we rely on the motion tracker most of the time, and make calls to the re-id module only when necessary. By doing so, we can match the camera frame rate up to 30 fps, and significantly reduce the reaction time of our following pipeline for the majority of the time. ### _Proposed Human Following System_ Our system for human following, illustrated in Figure 4, consists of our motion tracking and re-id modules, in addition to other modules including a dual camera setup, a local planner and collision avoidance module, and a search behavior to retrieve missing targets. The goal of the system is to follow a targeted person by controlling the mobile agent's movements (which include moving forward, moving back, rotating left, rotating right, etc.). **Dual camera setup.** The challenges for a following system are different depending on the distance to its target. If the target is close to the system, it may go out of sight easily by walking to the side of the mobile agent. If it is far from the system, obstacles might be in the way, introducing occlusions or blocked paths. Therefore we use a short-range fish-eye RGB camera with a large field of view and a long-range RGBD camera with a more narrow field of view, but providing depth information. As suggested by prior research [19, 1, 5, 4, 30], depth information is very valuable for following humans as it allows for path planning. The switch between the two cameras is triggered by the measured depth of the person when using the RGBD camera or by the size of the bounding box when using the fish-eye camera. If the bounding box height is smaller than \(45\%\) of the image, we switch to the RGBD camera. If the person is closer than \(1.5\) meters to the camera, we switch to the fish-eye camera. **Navigation.** When using the fish-eye camera, we use a very simple visual servoing method, which produces a control command based on the bounding box position in the image, aiming at centering this bounding box, and maximizing the height of the bounding box to a certain extent. When using the RGBD camera, we are able to estimate the depth of the target and thus its \((x,y)\) position relative to the system. Therefore the control command can be produced directly with the local planner described below, using this local goal. **Local Planner and Collision Avoidance.** To ensure the mobile agent's safety, we take into account any obstacles that may be present between the agent and its target. This is done through the use of the SAFER local planner and collision avoidance algorithm [24]. This algorithm can either take a control command or a local goal as input, and it outputs a safe control command that avoids obstacles. **Search behavior.** Thanks to the fish-eye camera and visual servoing, it is rare to lose the person when they walk to the side of the mobile agent. Nevertheless, this still might happen, and occlusions might also lead to the system losing its target. In this case, the system enters into search mode. When the system is using the RGBD camera, the last known position of the target is kept as the local goal for a short period of time (e.g. 2 seconds). After that, or when using the fish-eye camera, our mobile agent will stop moving forward and rotate in the direction where the target was last seen. This allows our mobile agent to have a better chance at re-finding the target.
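The switching and search logic above can be summarized in a small state-update sketch. The thresholds (45% box height, 1.5 m, 2 s hold) come from the text, while the function shapes and field names are illustrative assumptions.

```python
def select_camera(current, bbox_height_frac=None, depth_m=None):
    """Dual-camera switching rule from the Dual camera setup paragraph."""
    if current == 'fisheye' and bbox_height_frac is not None and bbox_height_frac < 0.45:
        return 'rgbd'      # target is getting far -> long-range depth camera
    if current == 'rgbd' and depth_m is not None and depth_m < 1.5:
        return 'fisheye'   # target is close -> wide field of view
    return current

def control_step(target, camera, last_seen_side, t_since_lost, hold_s=2.0):
    """One control decision; 'target' is None when tracking is lost."""
    if target is None:                            # search behavior
        if camera == 'rgbd' and t_since_lost < hold_s:
            return ('goto_last_known_position',)  # keep last position as goal
        return ('rotate', last_seen_side)         # 'left' or 'right'
    if camera == 'fisheye':                       # simple visual servoing
        err = target['cx_frac'] - 0.5             # center the bounding box
        return ('servo', -err)
    return ('plan_to', target['x'], target['y'])  # local goal for SAFER planner
```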
## IV Experiments Our system, designed to track humans, is installed in a mobile agent that includes a base from Diablo Robotics [12], an Nvidia Jetson Orin [14] compute module, a J5 Create JVCU360 camera [11], a Zed camera [15], and a Livox Mid-360 lidar sensor [13]. We conducted experiments in indoor office environments where the mobile agent followed a target person. We tested two situations: the target person walking to random markers and the target person following a course. The environments included obstacles, and always had two people present, one being the target person and the other being the interferer. ### _Comparison Approaches_ For our experiments, we began by considering human-following systems without person re-identification. We first looked at the variant that only tracks the human, which is similar to the prior method [31]. Next, we looked at variants that use face-only or torso-only identification models. It's worth noting that the torso-only variant is similar to the prior method [4], which uses a full-body identification module (e.g., clothing color, height, gait) for human tracking. We then considered our method without using a motion tracker, and finally, we looked at our method that uses only visual servo or only path planning to drive the mobile agent. Our goal in designing these baseline variants was to show the importance of 1) person re-identification in the human-following feature and 2) the individual parts of our person re-identification system (e.g., face-torso identification models, motion tracker, mobile agent driving by visual servo or path planning). See Table I for our summary.
Fig. 4: Our mobile agent uses a dual-camera setup: a fish-eye RGB camera for short-range following using simple visual servoing, and an RGBD camera for longer-range following using a local navigation goal. We leverage SAFER, a local planner and collision avoidance module [24], to handle safe navigation towards the target. In case the robot is in search mode, it will turn in the direction where the target was last seen.
### _Metrics_ To determine how to successfully follow a human, we conducted user studies to establish metrics. Our experiments involved 5 participants walking to 15 random markers (75 trials in total) and 5 participants following 3 different courses (15 trials in total). These trials were conducted in various office environments, each containing obstacles such as chairs, desks, cabinets, or walls, as well as an interferer who randomly walked in the same environment to interrupt the agent following the target person. To measure success, we report 1) the average speed of the agent when following the target person, 2) the average distance between the agent and the followed person, 3) the average distance to obstacles according to the Livox Mid-360 lidar [13], 4) the number of times the agent loses the target person, and 5) the number of times the agent follows the wrong person. Additionally, we created a questionnaire and asked participants to rate their experience on a scale from 1 to 10, reporting the average score for each question. The questions are:
1. [Safety] How safe did you feel while the robot was following you? (With 1 being not safe at all and 10 being very safe)
2. [Collision to Participants] How often did you feel the robot might collide with you? (With 1 being never and 10 being every time)
3. [Collision to other Objects] How often did you feel the robot might collide with another object?
(With 1 being never and 10 being every time)
4. [Following Accuracy] How accurately did the robot follow you? (With 1 being not accurate at all and 10 being very accurate)
5. [Navigation Smoothness] How smoothly did the robot navigate to follow you? (With 1 being not smooth at all and 10 being very smooth)
### _Quantitative evaluation_ The quantitative results of our experiments are compiled in Table II. First, we compared methods with (Ours) and without re-identification (re-id) models (Ours_w/o_reid). We found that using re-id models can improve the speed of the agent, reduce the following distance to the person being followed, and significantly decrease the number of lost targets and incorrect followings. This observation suggests that using re-id models can greatly improve the human following feature. Similarly, when comparing methods with (Ours) and without the motion tracker (Ours_w/o_motion), we observe that the speed of the agent when following a person is greatly improved after using the motion tracker. We argue that this is because, without the motion tracker, the agent must perform re-identification with re-id models most of the time, resulting in high latency. Then, we discuss methods using different re-id models. Comparing Ours_w/o_torso and Ours_w/o_face, we find that using face re-id models leads to a higher average speed of the agent, a lower average following distance, and far less following of the wrong person. This can be seen with Ours_w/o_torso, which results in a higher number of lost targets but a lower number of incorrect followings when the agent is walking to a random marker. Ours_w/o_face has fewer lost targets overall, but all of its lost targets resulted in an incorrect following. We argue that this is because the face is a stronger feature for person identification; however, since the person is often facing away from the agent, the torso may be the only identification feature available to the agent. Combining the best of both worlds, Ours improves over Ours_w/o_torso and Ours_w/o_face, suggesting that both torso and face re-id models are crucial for human following. Lastly, we discuss methods using different approaches (visual servo, path planning, or both) for the agent to take actions. Comparing Ours_w/o_visualservo and Ours_w/o_pathplanning, the main difference is that the average distance to obstacles is much larger when using path planning only, and the average speed is lower. In other words, path planning is a more conservative approach for collision avoidance than visual servo. On the other hand, Ours determines when to use visual servo or path planning depending on the distance of the target person to the agent. We find this to be a good combination of both approaches. ### _User study_ In addition to raw measurements, we designed a user study to evaluate how users would feel around our following system. The questions listed in Section IV-B aim at evaluating different aspects of the following experience. Questions 1, 2, and 3 are overall focused on the feeling of safety that participants got from the system. Overall, the scores reflect a pretty good feeling of safety for all of the ablated variants of our method except for Ours_w/o_pathplanning. This observation suggests that using path planning for the agent's actions is crucial for avoiding collisions. The best safety scores are obtained without the motion tracker, which is not surprising as it is the slowest variant.
Question 4 tells us how well participants believe the following system performed. Our full system has the highest score of all, showing that every component of our system is crucial for the following accuracy. In particular, we notice how crucial our re-id module is as the lowest score is obtained by Ours_w/o_reid. We also observe the importance of using path planning and the benefits of combining both face and torso re-id models. Finally, the users rated the smoothness of our system navigation in question 5. The low score obtained by Ours_w/o_pathplanning suggests that using path planning helps the system to navigate more smoothly. ## V Conclusion This paper addresses the issue of human following for mobile agents. We have observed that person re-identification is crucial for human following, particularly when the targeted individual is in a crowd or interacting with other people. To tackle this problem, we have developed a new person re-identification module that consists of three core components: 360-registration, identification models that use both face and torso, and motion tracker. We conducted a series of experiments and found that our approach outperforms previous methods, highlighting the effectiveness of each component in our person re-identification module. We believe that our work can help to improve the development of human following mobile agents and contribute to the advancement of artificial intelligence.
2309.03574
Caveat (IoT) Emptor: Towards Transparency of IoT Device Presence (Full Version)
As many types of IoT devices worm their way into numerous settings and many aspects of our daily lives, awareness of their presence and functionality becomes a source of major concern. Hidden IoT devices can snoop (via sensing) on nearby unsuspecting users, and impact the environment where unaware users are present, via actuation. This prompts, respectively, privacy and security/safety issues. The dangers of hidden IoT devices have been recognized and prior research suggested some means of mitigation, mostly based on traffic analysis or using specialized hardware to uncover devices. While such approaches are partially effective, there is currently no comprehensive approach to IoT device transparency. Prompted in part by recent privacy regulations (GDPR and CCPA), this paper motivates and constructs a privacy-agile Root-of-Trust architecture for IoT devices, called PAISA: Privacy-Agile IoT Sensing and Actuation. It guarantees timely and secure announcements about IoT devices' presence and their capabilities. PAISA has two components: one on the IoT device that guarantees periodic announcements of its presence even if all device software is compromised, and the other that runs on the user device, which captures and processes announcements. Notably, PAISA requires no hardware modifications; it uses a popular off-the-shelf Trusted Execution Environment (TEE) -- ARM TrustZone. This work also comprises a fully functional (open-sourced) prototype implementation of PAISA, which includes: an IoT device that makes announcements via IEEE 802.11 WiFi beacons and an Android smartphone-based app that captures and processes announcements. Both security and performance of PAISA design and prototype are discussed.
Sashidhar Jakkamsetti, Youngil Kim, Gene Tsudik
2023-09-07T09:08:31Z
http://arxiv.org/abs/2309.03574v2
# Caveat (IoT) Emptor: Towards Transparency of IoT Device Presence (Full Version) ###### Abstract. As many types of IoT devices worm their way into numerous settings and many aspects of our daily lives, awareness of their presence and functionality becomes a source of major concern. Hidden IoT devices can snoop (via sensing) on nearby unsuspecting users, and impact the environment where unaware users are present, via actuation. This prompts, respectively, privacy and security/safety issues. The dangers of hidden IoT devices have been recognized and prior research suggested some means of mitigation, mostly based on traffic analysis or using specialized hardware to uncover devices. While such approaches are partially effective, there is currently no comprehensive approach to IoT device transparency. Prompted in part by recent privacy regulations (GDPR and CCPA), this paper motivates and constructs a _privacy-agile_ Root-of-Trust architecture for IoT devices, called PAISA: Privacy-agile IoT Sensing and Actuation. It guarantees timely and secure announcements about IoT devices' presence and their capabilities. PAISA has two components: one on the IoT device that guarantees periodic announcements of its presence even if all device software is compromised, and the other that runs on the user device, which captures and processes announcements. Notably, PAISA requires no hardware modifications; it uses a popular off-the-shelf Trusted Execution Environment (TEE) - ARM TrustZone. This work also comprises a fully functional (open-sourced) prototype implementation of PAISA, which includes: an IoT device that makes announcements via IEEE 802.11 WiFi beacons and an Android smartphone-based app that captures and processes announcements. Both security and performance of PAISA design and prototype are discussed.
ecosystem where all impacted users are made aware of nearby IoT devices, which empowers them to make informed decisions. Another inspiration stems from recent data protection regulations, such as the European General Data Protection Regulation (GDPR) (Kumar et al., 2017) and California Consumer Privacy Act (CCPA) (Kumar et al., 2018). These regulations aim to protect user privacy by stipulating that service providers must be accountable and ask for user consent before collecting, processing, storing, and sharing user data. We want to apply the same principle to IoT devices. Note that these regulations are clearly focused on privacy, meaning that, in the IoT context, they naturally apply to devices that sense the environment. Our scope, however, is broader: it includes actuation-capable devices that can directly impact nearby users' security and even safety. For example, consider a situation where a hotel guest with epilepsy is unaware of a "smart" fire/smoke alarm in the room which turns on a strobe light when it detects smoke or fire. Unexpected light strobing can easily cause an epileptic seizure or worse.2 Another example is an Airbnb renter who is unaware of a smart door-lock that can be (un)locked remotely, which presents a risk of the door being closed or opened without the renter's knowledge. If forewarned, however, the renter could disable it for the period of stay. To this point, a 2017 incident with an Austrian hotel where all smart locks were hacked illustrates the danger.3 Footnote 2: Ideally, the guest who is warned about the alarm could switch it to another mode, without dire consequences. Addressing privacy concerns in the IoT context poses two challenges: 1. How to make users aware of the presence of nearby devices? 2. How to ask for consent to: collect information (in case of sensing), or control the environment (in case of actuation)? In this paper, we take the first step by focusing on (1), while viewing (2) as its natural follow-up. Current means of achieving (2) mostly focus on obtaining user consent (Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018). For example, studies on Privacy Assistants(Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018) focus on automating the process of acquiring user preferences/consent efficiently. Another research direction(Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018) provides design (and implementation) guidelines for user privacy choices that address regulatory considerations. Regarding (1), there are several approaches for informing users about ambient devices. One approach involves manually scanning the environment using specialized hardware(Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018). Another way is by monitoring wireless traffic, i.e., WiFi and/or Bluetooth (Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018). Though somewhat effective, such techniques are cumbersome and error-prone, since it is not always possible to thoroughly scan the entire ambient space. Also, these approaches can be evaded if a device is mis-configured or compromised. Nevertheless, they represent the only option for discovering hidden and non-compliant devices.
Instead of putting the burden on the users to monitor and analyze wireless traffic, we want to construct a technique that guarantees that all compliant IoT devices reliably announce their presence, which includes their types and capabilities. Consequently, a user entering an unfamiliar space can be quickly warned about nearby IoT activity. We believe that this is an important initial step towards making future IoT devices privacy-compliant. We imagine later integrating the proposed technique with other consent-seeking platforms. ### Overview & Contributions We construct a technique called PAISA: Privacy-Agile IoT Sensing and Actuation, which guarantees timely and secure announcements about IoT device presence and capabilities. We use the term _privacy-agile_ to denote PAISA's service - explicit user awareness of all nearby PAISA-compliant IoT devices. Each PAISA-compliant device reliably broadcasts secure announcements at regular intervals, ensuring continuous awareness, unless it is compromised via physical attacks or is powered off. PAISA has two main components: (1) one on the IoT device that guarantees periodic announcements of its presence, and (2) the other that runs on the user device (smartphone); it captures and processes announcements. To guarantee secure periodic announcements on the IoT device, PAISA relies on the presence of a Trusted Execution Environment (TEE) or some other active Root-of-Trust (RoT) component. The TEE ensures guaranteed and isolated execution of the PAISA Trusted Computing Base (TCB). On the user device, PAISA imposes no special requirements to capture and process announcements: it simply uses standard network drivers to read announcement packets and validate them in an application. Anticipated contributions are: * Motivation for, and comprehensive treatment of, a _privacy-agile_ architecture for IoT devices. To the best of our (current) knowledge, no prior work systematically approached privacy compliance in the IoT ecosystem, given that relevant attempts (Kumar et al., 2018; Kumar et al., 2018; Kumar et al., 2018) are either ad-hoc or not applicable to a wide range of devices. * Design and construction of PAISA, a secure and _privacy-agile_ TEE-based architecture that reliably informs nearby users about IoT devices. Notably, PAISA does not require any custom hardware, unlike some prior work, e.g., (Kumar et al., 2018; Kumar et al., 2018). It uses a popular _off-the-shelf_ TEE, e.g., ARM TrustZone (Kumar et al., 2018). * A fully functional prototype implementation of PAISA, which includes: (a) a prototype IoT device based on ARM Cortex-M33 featuring announcements via IEEE 802.11 WiFi beacons, and (b) an Android application running on a Google Pixel 6, which extracts and displays the announcements to the user. All source code is publicly available at (Kumar et al., 2018). ### Scope, Limitations, & Caveats As with most new designs, PAISA has certain limitations: * With regard to scope, it applies to a class of devices equipped with some basic security features, e.g., ARM TrustZone. Thus, it is unsuitable for simple "bare-metal" devices or even slightly higher-end ones that lack a secure hardware element. * In terms of the security level, it offers protection against hacked (directly re-programmed) or malware-infected devices. However, it does not defend against non-compliant devices. This includes devices that are home-made, jerry-rigged, or produced by non-compliant manufacturers.
* Furthermore, PAISA does not defend against local jamming or _wormhole_ attacks (Pedersen, 2015; Pedersen, 2015).4 The latter is nearly impossible to thwart. However, we propose a method to partially handle these attacks in Sections 4.3 and 5.2. Footnote 4: A wormhole attack occurs when an announcement from one device is tunneled into a remote network and re-announced there, making it appear that the device is present. * Finally, we do not explore policy issues and implications, i.e., the focus is on reliably informing users about adjacent devices. What users do with that information is left to future work. While we acknowledge that a practical system must include this component, space limitations make it hard to treat this topic with the attention it deserves. ## 2. Background ### Targeted IoT Devices This work focuses on resource-limited IoT devices that have strict cost and energy constraints. Such devices tend to be deployed on a large scale and are meant to perform simple tasks, e.g., thermostats, security cameras, and smoke detectors. Due to the constraints, they are often equipped with micro-controller units (MCU), such as the ARM Cortex-M series (Grover et al., 2016). Nonetheless, our work is also applicable to higher-end computing devices (e.g., smartwatches, drones, and infotainment units) that are equipped with a TEE. Recall that very simple devices that have no security features are out of scope. Figure 1 shows a general architecture of a device with an MCU and multiple peripherals. An MCU is a low-power computing unit that integrates a core processor, main memory, and memory bus on a single System-on-a-Chip (SoC). Its main memory is usually divided between program memory (or flash), where the software resides, and data memory (or RAM), which the software uses for its stack, heap, and peripheral memory access. A typical MCU also contains several internal peripherals such as a timer, General-Purpose Input/Output (GPIO), Universal Asynchronous Receiver/Transmitter (UART), Inter-Integrated Circuit (I2C), and Serial Peripheral Interface (SPI). **Sensors & Actuators:** Multiple purpose-specific sensors and actuators are connected to the MCU via internal peripherals. While sensors collect information from the environment, actuators control it. Examples of sensors are microphones, GPS units, cameras, as well as smoke and motion detectors. Examples of actuators are speakers, light switches, door locks, alarms, and sprinklers. **Network Interfaces:** IoT devices are often connected to the Internet and other devices, either directly or via a controller hub or a router. Thus, they are typically equipped with at least one network interface (such as WiFi, Bluetooth, Cellular, Ethernet, or Zigbee) attached to the MCU via internal network peripherals, e.g., UART, I2C, or SPI. WiFi and Cellular are used for wireless Internet connectivity at relatively high speeds. Bluetooth and Zigbee are used for relatively low-speed short-range communication with other devices, e.g., a smartphone for Bluetooth, or a controller hub for Zigbee. Since WiFi is currently the most common interface available for IoT devices(Han et al., 2017), PAISA uses it for broadcasting device announcements. However, any other broadcast media (wired or wireless) can be supported; see Section 8 for more details. Table 1 shows some examples of (low-end) commodity IoT devices with sensors, actuators, and their network interfaces.
### Trusted Execution Environments (TEEs) A TEE is a hardware-enforced primitive that protects the confidentiality and integrity of sensitive software and data from untrusted software, including user programs and the OS. Similar to some prior work (Han et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), we use ARM TrustZone-M as the TEE for the PAISA prototype. TrustZone-M is available on ARM Cortex-M23/M33/M55 MCUs(Wang et al., 2018). However, any TEE that offers trusted peripheral interfaces can be used instead. **ARM TrustZone-M** ARM TrustZone partitions the hardware and software within the MCU into two separate isolated regions: Secure and Normal. The former contains trusted security-critical code and data, while the latter houses user programs (or the device software). The MCU switches between secure and non-secure modes when accessing Secure and Normal regions, respectively. TrustZone hardware controllers prevent the MCU from accessing memory assigned to the Secure region when it is running in non-secure mode, resulting in a secure execution environment. Moreover, at boot time, TrustZone verifies the integrity of trusted code via secure boot and always begins executing from the Secure region before jumping into the Normal region. TrustZone for ARMv8-M MCUs is called TrustZone-M (TZ-M). TZ-M features non-secure callable functions (NSC) for Normal region software to invoke trusted code. Also, TZ-M can lock internal peripherals into the Secure region, making them inaccessible to the Normal region, via the TrustZone Security Controller (TZSC) that, when configured at boot, maps desired peripherals into the Secure region. This mapping configuration is controlled by TZSC and is checked by the secure-boot process at boot time. Furthermore, interrupts attached to secure peripherals are always directed to the corresponding Interrupt Service Routines (ISR) in the Secure region. Also, the TrustZone Illegal Access Controller (TZAC) raises a SecureFault exception to the Nested Vectored Interrupt Controller (NVIC) when a security violation is observed, which is then securely processed by exception handlers.
Figure 1. Architecture of an IoT Device. This example shows the peripherals of a security camera.
PAISA relies on TZ-M for enabling a secure execution environment for its TCB and for implementing secure peripherals. For a comprehensive overview of TrustZone, see (Tran et al., 2018). **Other Active Roots-of-Trust (RoTs)** Active RoTs prevent security violations, unlike their passive counterparts that detect them (Tran et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). TEEs are considered active RoTs since they prevent violations by raising hardware faults/exceptions, which are handled in the secure mode. Besides TEEs, some active RoTs have been proposed in the research literature, e.g.,(Tran et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Notably, GAROTA(Tran et al., 2018) and AWDT(Tran et al., 2018) offer guaranteed execution of secure ISRs when a configured peripheral is triggered. Although the current focus is on off-the-shelf devices, we believe that PAISA can be applied to either GAROTA or AWDT devices. Section 8 discusses the applicability of PAISA to other architectures.
### Remote Attestation (\(\mathcal{RA}\)) \(\mathcal{RA}\) is a security service that enables the detection of malware presence on a remote device (\(\mathcal{P}\mathsf{rv}\)) by allowing a trusted verifier (\(\mathcal{V}\mathsf{rf}\)) to remotely measure software running on \(\mathcal{P}\mathsf{rv}\). \(\mathcal{RA}\) is a challenge-response protocol, usually realized as follows:
1. \(\mathcal{V}\mathsf{rf}\) sends an \(\mathcal{RA}\) request with a challenge (\(\mathsf{Chal}\)) to \(\mathcal{P}\mathsf{rv}\).
2. \(\mathcal{P}\mathsf{rv}\) receives the attestation request, computes an authenticated integrity check over its software memory region (in program memory) and \(\mathsf{Chal}\), and returns the result to \(\mathcal{V}\mathsf{rf}\).
3. \(\mathcal{V}\mathsf{rf}\) verifies the result and decides if \(\mathcal{P}\mathsf{rv}\) is in a valid state.
The integrity check is performed by computing either a Message Authentication Code (e.g., HMAC) or a digital signature (e.g., ECDSA) over \(\mathcal{P}\mathsf{rv}\)'s program memory. Computing a MAC requires \(\mathcal{P}\mathsf{rv}\) to share a symmetric key with \(\mathcal{V}\mathsf{rf}\), while computing a signature requires \(\mathcal{P}\mathsf{rv}\) to have a private key with the corresponding public key known to \(\mathcal{V}\mathsf{rf}\). Both approaches require secure key storage on \(\mathcal{P}\mathsf{rv}\). \(\mathcal{RA}\) architectures for low-end MCUs(Tran et al., 2018; Wang et al., 2018) use MACs, whereas higher-end TEEs (e.g., Intel SGX(Tran et al., 2018) and AMD SEV(Tran et al., 2018)) use signatures. PAISA uses \(\mathcal{RA}\) to ensure integrity of normal device operation, i.e. the device software controlling sensors and actuators. However, PAISA relies on TZ-M on the MCU to perform attestation locally, instead of via an interactive protocol. Also, it uses signatures to report the attestation result, similar to (Tran et al., 2018; Wang et al., 2018).
## 3. Design Overview PAISA primarily involves two parties: an IoT device (\(I_{\mathit{dev}}\)) and a user device (\(U_{\mathit{dev}}\)), e.g., a smartphone or a smartwatch. PAISA is composed of two modules: _announcement_ on \(I_{\mathit{dev}}\) and _reception_ on \(U_{\mathit{dev}}\). _Announcement_: On \(I_{\mathit{dev}}\), the _announcement_ module is trusted and housed inside a TEE. It ensures that, at periodic intervals, \(I_{\mathit{dev}}\) broadcasts an announcement to other devices within its immediate network reach. Such "reach", i.e. distance, is specified by the network interface, e.g., 802.11 WiFi beacons go up to 100 meters (Dong et al., 2015). Importantly, PAISA guarantees that announcement packets are broadcast in a timely manner, even if all device software is compromised. This is achieved via a secure timer and a secure network interface, available on TZ-M. An announcement packet consists of a fresh timestamp, a device description (sensors, actuators, and their purpose), and a signature that authenticates the origin of the packet as a legitimate \(I_{\mathit{dev}}\). _Reception_: On \(U_{\mathit{dev}}\), the _reception_ module captures the announcement packet via its network interface (of the same type as on \(I_{\mathit{dev}}\)). The module then parses the packet, validates its timestamp and signature, and conveys the presence and functionality of \(I_{\mathit{dev}}\) to the user. The proposed design presents some challenges: **Device State & Attestation:** Merely broadcasting static information, such as a device description, is not enough.
If \(I_{dev}\) software is compromised, the information disseminated via announcement packets is invalid, since \(I_{dev}\) software no longer matches the device description. For example, consider a user who enters an Airbnb rental and learns about a motion detector/tracker from PAISA announcements. Suppose that this motion detector is compromised and the malware notifies the adversary about the user's presence and movements. To handle such cases, the user needs authentic real-time information about the software running on \(I_{dev}\) at announcement time. Therefore, PAISA attests \(I_{dev}\) software and includes the timestamped attestation report in the announcement. The reception module on \(U_{dev}\) must check the attestation report as part of validating the announcement. If the attestation check fails, \(I_{dev}\) must be compromised and cannot be trusted, regardless of the description in the announcement.

**Replay Attacks & Freshness:** To protect against replay attacks and establish freshness of announcements (via timestamps), \(I_{dev}\) needs a reliable source of time. However, a real-time clock is generally not viable for resource-constrained devices (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018). To this end, PAISA includes a _time synchronization_ technique: at boot time, \(I_{dev}\) synchronizes with a trusted server managed by the device manufacturer. See Sections 4.2 and 5.2 for details.

To summarize, PAISA comprises all of the aforementioned components. Figure 2 presents a high-level overview of the PAISA workflow. As soon as \(I_{dev}\) boots, it synchronizes its time with the manufacturer server. Next, it attests its software and composes an announcement packet containing the current timestamp, the attestation result, the device description, and a signature. Then, \(I_{dev}\) broadcasts the packet via WiFi. This is repeated on every timer interrupt, which is scheduled (likely configured by the manufacturer; see Footnote 5) according to the desired use-case. Each announcement is received by the PAISA app on every user device within range. After validating the announcement, the app alerts the user to \(I_{dev}\)'s presence.

Footnote 5: It is debatable whether any other party should be allowed to set the announcement schedule.

Table 1. Various Types of IoT Devices with Different Sensors, Actuators, and Network Interfaces.

| IoT device | Sensor | Actuator | Network I/F |
|---|---|---|---|
| X-Sense smart smoke detector (Tran et al., 2018) | smoke, carbon monoxide detector | alarm | WiFi |
| Amazon smart plug (Amazon et al., 2018) | - | switch | WiFi |
| Blink Mini Security Camera (Ballall et al., 2018) | microphone, motion, camera | speaker | WiFi |
| Google Nest thermostat (Ball et al., 2018) | light, motion, temperature, humidity | heating, cooling | WiFi |
| iRobot Roomba 694 (Ball et al., 2018) | cliff, dirt, optical | brush/vacuum motor, drive motor | WiFi |
| Fitbit - fitness tracker (Ball et al., 2018) | accelerometer, heart rate monitor, GPS, altimeter | vibrating motor, speaker | Bluetooth |
| Wyze Lock Bolt - smart lock (Ball et al., 2018) | fingerprint | lock, speaker | Bluetooth |

Figure 2. Overview of PAISA workflow.
## 4. System & Adversary Models

### Entities Involved

PAISA considers three entities: \(I_{dev}\), \(U_{dev}\), and the manufacturer server (\(M_{svr}\)), which is responsible for provisioning \(I_{dev}\) at production time.

\(I_{dev}\) is a resource-constrained IoT device installed either (1) in a public space, e.g., airports, restaurants, concert/sports venues, or stores, or (2) in a semi-private space, e.g., hotel rooms or Airbnb rentals. \(I_{dev}\) is assumed to be equipped with a TEE that protects the PAISA TCB from untrusted software (including the OS). \(U_{dev}\) is the personal and trusted device of the user. It is assumed to be within network transmission range of \(I_{dev}\). \(U_{dev}\) runs an app that receives and verifies PAISA announcements. \(M_{svr}\) is a back-end (and sufficiently powerful) trusted server hosted by the \(I_{dev}\) manufacturer.

PAISA assumes multiple \(I_{dev}\)-s and multiple \(U_{dev}\)-s in the same IoT-instrumented space, i.e., within network transmission range. \(U_{dev}\) receives announcements from multiple \(I_{dev}\)-s. \(I_{dev}\)-s are unaware of \(U_{dev}\)-s in their vicinity.

PAISA uses public key signatures to authenticate and verify announcements. We assume a public-private key-pair (\(pk_{I_{dev}}\), \(sk_{I_{dev}}\)) for each \(I_{dev}\) and another key-pair (\(pk_{M_{svr}}\), \(sk_{M_{svr}}\)) for each \(M_{svr}\). \(pk_{M_{svr}}\) is used to authenticate \(I_{dev}\) as part of announcement verification.

### PAISA Protocol Overview

The PAISA protocol has three phases: _Registration_, _BootTime_, and _Runtime_. Figure 3 shows an overview.

_Registration_ takes place when \(I_{dev}\) is manufactured and provisioned. At registration time, besides installing software, \(M_{svr}\) installs the PAISA TCB on \(I_{dev}\) and provisions it with a device ID, a description, and a key-pair (\(pk_{I_{dev}}\), \(sk_{I_{dev}}\)) via a **Provision** request. Further details about the device description are in Section 5.2. A provisioned \(I_{dev}\) is eventually sold and deployed by its owner/operator.

_BootTime_ is executed at \(I_{dev}\) boot, after a reset or a power-on. Before going into normal operation, \(I_{dev}\) synchronizes its time with \(M_{svr}\) using the 3-way **TimeSync** protocol. At the end of this phase, the initial announcement is generated.

_Runtime_ corresponds to \(I_{dev}\)'s normal operation. In this phase, \(I_{dev}\) announces its presence based on a preset timer interval. Announcement periodicity is set by \(M_{svr}\). (We are not advocating allowing owners to set this.) Whenever triggered by the timer, the **Announcement** procedure is invoked. It attests \(I_{dev}\) software and broadcasts an announcement (\(\mathsf{Msg_{anno}}\)). A nearby \(U_{dev}\) receives \(\mathsf{Msg_{anno}}\) using its **Reception** app, which parses and verifies \(\mathsf{Msg_{anno}}\). If verification succeeds, \(\mathsf{Msg_{anno}}\) is displayed to the user. For the complete protocol description, see Section 5.2.
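To illustrate how the two key-pairs chain together on the receiving side, the following sketch verifies first the manifest (under \(pk_{M_{svr}}\)) and then the announcement (under \(pk_{I_{dev}}\) taken from that manifest). It is shown in C for uniformity with our other sketches (the prototype app is Java on Android); `mbedtls_pk_verify()` and `mbedtls_sha256()` are stock Mbed TLS 3.x calls, while the function and buffer names are ours.

```c
#include <stdint.h>
#include <stddef.h>
#include "mbedtls/pk.h"
#include "mbedtls/sha256.h"

int verify_announcement(mbedtls_pk_context *pk_Msvr,   /* manufacturer key, pre-installed */
                        const uint8_t *manifest, size_t man_len,
                        const uint8_t *man_sig, size_t man_sig_len,
                        mbedtls_pk_context *pk_Idev,    /* parsed from the verified manifest */
                        const uint8_t *msg_anno, size_t anno_len,
                        const uint8_t *anno_sig, size_t anno_sig_len)
{
    uint8_t h[32];

    /* 1. Manifest must carry a valid signature by sk_Msvr. */
    (void)mbedtls_sha256(manifest, man_len, h, 0);
    if (mbedtls_pk_verify(pk_Msvr, MBEDTLS_MD_SHA256, h, sizeof h,
                          man_sig, man_sig_len) != 0)
        return -1;

    /* 2. Announcement must carry a valid signature by the sk_Idev whose
     *    public key is certified in that manifest. Freshness of the
     *    embedded timestamp is checked separately. */
    (void)mbedtls_sha256(msg_anno, anno_len, h, 0);
    if (mbedtls_pk_verify(pk_Idev, MBEDTLS_MD_SHA256, h, sizeof h,
                          anno_sig, anno_sig_len) != 0)
        return -2;

    return 0; /* authentic announcement from a provisioned I_dev */
}
```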
### Adversary Model

We consider an adversary \(\mathcal{A}\mathsf{dv}\) with full control over \(I_{dev}\) memory, including flash and RAM, except for the TCB and its data inside the TEE. \(\mathcal{A}\mathsf{dv}\) can attempt to tamper with any \(I_{dev}\) components and peripherals, including sensors, actuators, network interfaces, and debug ports, unless they are configured as secure by the TEE. All messages exchanged among \(I_{dev}\), \(U_{dev}\), and \(M_{svr}\) are subject to eavesdropping and manipulation by \(\mathcal{A}\mathsf{dv}\), following the well-known Dolev-Yao model (Dolev-Yao, 2017). Furthermore, the _Registration_ phase is considered secure: \(M_{svr}\) is trusted to correctly provision \(I_{dev}\) and keep the latter's secrets. The **Reception** app on \(U_{dev}\) is also considered trusted.

Figure 3. PAISA Protocol Overview.

**DoS Attacks:** \(\mathcal{A}\mathsf{dv}\) can essentially incapacitate ("brick") \(I_{dev}\) by having malware consume all of its resources. It can also keep all peripherals busy in an attempt to prevent the PAISA TCB from broadcasting \(\mathsf{Msg_{anno}}\) packets. It can ignore or drop outgoing packets, or flood \(I_{dev}\) with incoming malicious packets. We also consider DoS attacks whereby a malware-controlled \(I_{dev}\) reboots continuously and floods \(M_{svr}\) with frivolous **TimeSync** requests. However, we do not consider an \(\mathcal{A}\mathsf{dv}\) that uses signal jammers to block \(U_{dev}\) from receiving \(\mathsf{Msg_{anno}}\). Such attacks are out of scope, and there are techniques (95; 96; 105) to prevent them.

**Replay Attacks:** We consider replay attacks whereby \(\mathcal{A}\mathsf{dv}\) replays old/stale \(\mathsf{Msg_{anno}}\)-s from any PAISA-compliant \(I_{dev}\)-s. We also consider DoS attacks on \(U_{dev}\), e.g., \(\mathcal{A}\mathsf{dv}\) replays old \(\mathsf{Msg_{anno}}\)-s to swamp the \(U_{dev}\) network interface.

**Wormhole Attacks:** PAISA does not consider so-called wormhole attacks (71; 78), whereby \(\mathcal{A}\mathsf{dv}\) records \(\mathsf{Msg_{anno}}\)-s and tunnels them in from remote locations (outside \(U_{dev}\)'s communication range); see Footnote 6. There are well-known techniques (21; 37; 81) to tackle such attacks. However, PAISA does provide \(U_{dev}\) with coarse-grained location information, i.e., where \(I_{dev}\) was manufactured and where it was deployed at _Registration_ time.

Footnote 6: Replayed and wormholed \(\mathsf{Msg_{anno}}\)-s overlap, e.g., a replayed \(\mathsf{Msg_{anno}}\) from a non-local \(I_{dev}\) is both a replay and a wormhole attack.

**Physical Attacks:** PAISA does not protect against physically invasive attacks on \(I_{dev}\), e.g., via hardware faults, modifying code in ROM, or extracting secrets via side-channels. We refer to (106) for protection against such attacks. However, PAISA protects against non-invasive physical attacks, i.e., attempts by \(\mathcal{A}\mathsf{dv}\) to physically reprogram the device via wired debug interfaces such as JTAG. Such attacks are prevented by the secure boot feature of the TEE on \(I_{dev}\).

**Non-Compliant Devices:** We do not consider attacks where \(\mathcal{A}\mathsf{dv}\) physically infiltrates and deploys malicious (non-compliant) hidden devices in an IoT-instrumented space. As mentioned earlier, there are "spyware-type" techniques, such as (12; 89; 114), and other prior work, such as (112; 113), that scan the area for hidden devices.
However, even these techniques are error-prone, potentially computationally expensive and time-consuming for users, and/or require additional equipment.

**Runtime Attacks:** Another limitation of PAISA is that it does not handle runtime control-flow attacks, such as those based on buffer overflows, or non-control-flow (data-only) attacks. PAISA can only detect software modifications via attestation. To mitigate runtime attacks, there are techniques such as Control Flow Attestation (CFA) and Control Flow Integrity (CFI) (20; 43; 49; 52; 93; 116). Dealing with such attacks and deploying countermeasures is worthwhile, though out of scope for this paper. Furthermore, many CFA/CFI techniques are resource-intensive, making their use challenging in IoT settings.

### Security & Performance Requirements

Recall that the main objective of PAISA is to make \(I_{dev}\) _privacy-agile_, i.e., to guarantee periodic announcements from \(I_{dev}\) about its activity to adjacent \(U_{dev}\)-s, in the presence of the \(\mathcal{A}\mathsf{dv}\) defined in Section 4.3. To that end, PAISA must provide the following properties:

* _Unforgeability:_ Announcements must be authenticated. \(U_{dev}\) should be able to verify that \(\mathsf{Msg_{anno}}\) originates from a legitimate \(I_{dev}\), i.e., \(\mathcal{A}\mathsf{dv}\) should not be able to forge \(\mathsf{Msg_{anno}}\).
* _Timeliness:_ Announcements must be released at fixed time intervals. \(\mathcal{A}\mathsf{dv}\) should not be able to prevent \(\mathsf{Msg_{anno}}\)-s from being sent out.
* _Freshness:_ Announcements must be fresh and must reflect the current (software) health of \(I_{dev}\). \(\mathcal{A}\mathsf{dv}\) should not be able to launch replay attacks.

With respect to performance, PAISA must achieve the following:

* _Low latency of **Announcement**:_ Announcements must be fast, with minimal impact on \(I_{dev}\)'s normal utility.
* _Low bandwidth of **Announcement**:_ Announcements must be short, consuming minimal network bandwidth on \(I_{dev}\) and \(U_{dev}\).

## 5. PAISA Design

This section elaborates on the design and protocol overview presented in Sections 3 and 4.

### Design Challenges

There are a few design challenges (besides those mentioned in Section 3) that must be addressed to achieve the security and performance requirements of PAISA.

**DoS Attack Prevention on \(I_{dev}\):** \(\mathcal{A}\mathsf{dv}\) can launch DoS attacks by keeping either the MCU or the network peripherals busy, as mentioned in Section 4.3. To prevent such attacks, PAISA configures both the timer and the network peripheral as _secure peripherals_ controlled by the TEE. By doing so, PAISA ensures that the MCU jumps into the TCB whenever the secure timer raises an interrupt according to the scheduled periodicity. Moreover, the timer interrupt is assigned the highest priority so that no other interrupt can preempt it. This configuration (which determines the trusted timer and network peripheral, and their interrupt priorities) is securely stored within the TEE; hence, \(\mathcal{A}\mathsf{dv}\) cannot tamper with it. This also prevents DoS attacks that attempt to keep \(I_{dev}\) from executing the PAISA TCB, which provides guaranteed periodic broadcast of \(\mathsf{Msg_{anno}}\)-s. A typical target \(I_{dev}\) has 2-6 timers and multiple network peripherals, such as UART, SPI, and I2C, on an MCU. PAISA reserves one timer and one network peripheral for TCB use; the timer reservation is sketched below.
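A minimal sketch of the boot-time timer reservation follows. `NVIC_ClearTargetState()`, `NVIC_SetPriority()`, and `NVIC_EnableIRQ()` are standard CMSIS calls for ARMv8-M; `CTIMER2_IRQn` is the NXP LPC55-style interrupt name used by our prototype (defined by the device header), and the period-programming step is elided because it is vendor-specific.

```c
#include "core_cm33.h"   /* CMSIS for Cortex-M33; IRQ names come from the device header */

/* Run from the Secure region at boot, before the Normal region starts. */
void paisa_timer_setup(void)
{
    /* Keep the timer IRQ targeted at the Secure world (clear its ITNS bit),
     * so its ISR always lands inside the TCB. */
    NVIC_ClearTargetState(CTIMER2_IRQn);

    /* Priority 0 is the highest: no Normal-region interrupt can preempt
     * the announcement schedule. */
    NVIC_SetPriority(CTIMER2_IRQn, 0);
    NVIC_EnableIRQ(CTIMER2_IRQn);

    /* Program the timer period to T_Announce (vendor-specific registers
     * omitted here). */
}
```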
Reserving a network peripheral means that the network interface (e.g., WiFi or Bluetooth) connected to it is marked as exclusive. We admit that reserving a network interface exclusively for TCB use might be expensive for \(I_{dev}\), since at least one other interface (for regular use) would then be needed. To address this issue, we implement a secure stub, akin to the ideas from (65; 87; 125), to share the reserved network interface between secure and non-secure applications, as detailed in Section 6.3. For further discussion of this issue, see Section 8.

**Bandwidth of \(\mathsf{Msg_{anno}}\):** Broadcast messages are subject to size constraints that impact network efficiency and transmission capacity, regardless of the network type. Since the device description can be of arbitrary size, PAISA minimizes the size of \(\mathsf{Msg_{anno}}\) by using a fixed-size broadcast message and placing all pertinent \(I_{dev}\) information in a manifest file (\(\mathsf{Manifest_{I_{dev}}}\)). \(I_{dev}\)-generated \(\mathsf{Msg_{anno}}\)-s carry only: (1) a URL that points to \(\mathsf{Manifest_{I_{dev}}}\), and (2) some metadata: a timestamp and a signature over \(\mathsf{Msg_{anno}}\). For the sake of simplicity, we assume that \(\mathsf{Manifest_{I_{dev}}}\) is hosted on \(M_{svr}\). \(U_{dev}\) receives \(\mathsf{Msg_{anno}}\), verifies it, extracts the URL, and fetches \(\mathsf{Manifest_{I_{dev}}}\) from \(M_{svr}\). Note that \(\mathsf{Manifest_{I_{dev}}}\) can also be hosted by other third parties or on a blockchain; its authenticity rests on \(M_{svr}\)'s signature applied at provisioning time.

### PAISA Protocol

Recall that PAISA includes three phases: _Registration_, _BootTime_, and _Runtime_. Below, we describe each phase in detail.

#### 5.2.1. Registration

In this phase, \(M_{svr}\) interacts with \(I_{dev}\) to provision it with the secrets and information needed to enable PAISA. Figure 5 depicts this phase.

**Device Manifest:** \(M_{svr}\) creates \(\mathsf{Manifest_{I_{dev}}}\) for \(I_{dev}\), including a device ID (\(ID_{dev}\)) and a description which includes (see Footnote 7): device type/model, manufacturer, date/location of manufacture, types of sensors/actuators, deployment purpose, network interfaces, owner ID, and location of deployment.

Footnote 7: This is just a sample list; some attributes might be optional and others might be needed.

Figure 4 shows \(\mathsf{Manifest_{I_{dev}}}\) examples. \(\mathsf{Manifest_{I_{dev}}}\) can also contain a link to \(I_{dev}\) developer documentation, as mentioned in (Vaswani et al., 2017). Note that, whenever the owner changes \(I_{dev}\)'s location, the corresponding manifest must be updated accordingly; the granularity of this location information influences the ability to mitigate wormhole attacks. We believe that the contents of \(\mathsf{Manifest_{I_{dev}}}\) suffice to make a user aware of \(I_{dev}\)'s capabilities. However, the exact contents of \(\mathsf{Manifest_{I_{dev}}}\) are left up to the manufacturer. \(M_{svr}\) stores each \(\mathsf{Manifest_{I_{dev}}}\) in its database and generates a publicly accessible link \(\mathsf{URL_{Man}}\).
Since \(\mathsf{URL_{Man}}\) can be long, we recommend a URL-shortening service (such as Bitly (Becker et al., 2017) or TinyURL (Becker et al., 2017)) to keep \(\mathsf{URL_{Man}}\) short and of fixed size. Hereafter, we use \(\mathsf{URL_{Man}}\) to denote the short URL and \(\mathsf{URL_{Man_{val}}}\) the original URL. (If no shortening service is used, \(\mathsf{URL_{Man}}\) is identical to \(\mathsf{URL_{Man_{val}}}\).)

For simplicity's sake, we assume that, besides manufacturing \(I_{dev}\), \(M_{svr}\) is responsible for deploying and maintaining the software (\(SW_{dev}\)) on \(I_{dev}\). In practice, other entities, such as software vendors, can be involved in managing individual applications on \(I_{dev}\). In such cases, vendors must be integrated into the trust chain by including their information and certificates in \(\mathsf{Manifest_{I_{dev}}}\). Whenever a vendor-imposed software update occurs, \(\mathsf{Manifest_{I_{dev}}}\) must be updated and re-signed by \(M_{svr}\). We discuss this update process further in Section 8.

**Provision:** \(M_{svr}\) installs \(SW_{dev}\) and the PAISA TCB (\(SW_{\mathsf{PAISA}}\)) into the Normal and Secure regions of \(I_{dev}\), respectively. \(M_{svr}\) ensures that the timer and the network peripheral are configured as secure and exclusively accessible to \(SW_{\mathsf{PAISA}}\). Also, \(M_{svr}\) sends \(ID_{dev}\) and a hash of \(SW_{dev}\) to \(I_{dev}\), to be stored in \(SW_{\mathsf{PAISA}}\). Next, \(SW_{\mathsf{PAISA}}\) picks a new public/private key-pair (\(pk_{I_{dev}}\), \(sk_{I_{dev}}\)) and sends \(pk_{I_{dev}}\) to \(M_{svr}\) for certification. \(M_{svr}\) also gives the current timestamp to \(SW_{\mathsf{PAISA}}\), to be used for implementing a clock on \(I_{dev}\) (see Section 5.2.2). \(M_{svr}\) appends \(pk_{I_{dev}}\) and the hash of \(SW_{dev}\) to \(\mathsf{Manifest_{I_{dev}}}\). Finally, to authenticate \(\mathsf{Manifest_{I_{dev}}}\), \(M_{svr}\) signs it using \(sk_{M_{svr}}\) and appends the signature and its own certificate to \(\mathsf{Manifest_{I_{dev}}}\). Alternatively, \(M_{svr}\) could directly register \(pk_{I_{dev}}\) with a Certificate Authority (CA), if a suitable public key infrastructure (PKI) is deployed, and include \(I_{dev}\)'s certificate in \(\mathsf{Manifest_{I_{dev}}}\). Also, \(\mathsf{URL_{Man_{val}}}\) is included in \(\mathsf{Manifest_{I_{dev}}}\) so that \(U_{dev}\), when it later uses \(\mathsf{URL_{Man}}\), can detect an incorrect redirection. Finally, for sanity purposes, \(M_{svr}\) can include a "status" flag in \(\mathsf{Manifest_{I_{dev}}}\) to indicate whether \(I_{dev}\) has been revoked, e.g., reported stolen.

Figure 4. Examples of \(\mathsf{Manifest_{I_{dev}}}\). The left one is for the Google Thermostat (Cheng et al., 2017) and the right one is for the Blink Security Camera (Becker et al., 2017).
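For concreteness, an in-memory view of \(\mathsf{Manifest_{I_{dev}}}\) could look like the struct below. The field names and sizes are our illustration of the sample list above; real manifests are manufacturer-defined documents (Figure 4), not a fixed C layout.

```c
#include <stdint.h>

/* Illustrative only: mirrors the sample field list from Section 5.2.1. */
typedef struct {
    uint32_t id_dev;              /* device ID */
    char     model[32];           /* device type/model */
    char     manufacturer[32];
    char     mfg_date_loc[48];    /* date/location of manufacture */
    char     sensors[96];         /* e.g., "microphone, motion, camera" */
    char     actuators[96];
    char     purpose[96];         /* deployment purpose */
    char     net_ifaces[32];      /* e.g., "WiFi" */
    uint32_t owner_id;
    char     deploy_loc[64];      /* location of deployment */
    char     url_man_val[128];    /* full (unshortened) manifest URL */
    uint8_t  pk_Idev[65];         /* device public key (uncompressed P-256 point) */
    uint8_t  sw_hash[32];         /* SHA-256 of SW_dev, used by attestation */
    uint8_t  status;              /* e.g., 0 = active, 1 = revoked/stolen */
    uint8_t  sig_Msvr[64];        /* M_svr signature over all fields above */
} manifest_idev_t;
```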
Figure 5. Registration Phase of PAISA.

#### 5.2.2. BootTime

As mentioned earlier, \(\mathsf{Msg_{anno}}\) must contain a timestamp from \(M_{svr}\) to prevent replay attacks. Some IoT devices feature a reliable real-time clock (RTC) (13), powered by a separate power source, ensuring that \(I_{dev}\) time is always accurate. However, most resource-constrained IoT devices lack such an RTC. To this end, PAISA includes a secure time synchronization (**TimeSync**) protocol between \(I_{dev}\) and \(M_{svr}\). It assumes that \(M_{svr}\) is both reachable and available at all times. The main idea of **TimeSync** is to obtain the latest timestamp from \(M_{svr}\) whenever \(I_{dev}\) (re)boots, or (optionally) at regular intervals. Figure 6 shows the _BootTime_ protocol.

**TimeSync:** After completing the boot-up sequence, \(I_{dev}\) sends a time synchronization request \(\mathsf{SyncReq}\) to \(M_{svr}\), which includes \(ID_{dev}\) and the previous timestamp \(\mathsf{time_{prev}}\), given by \(M_{svr}\) at **Provision** or at the **TimeSync** of the last boot. \(\mathsf{SyncReq}\) also contains a signature to authenticate its origin as a legitimate \(I_{dev}\) and to prevent DoS attacks on \(M_{svr}\) via flooding of fake requests (see Footnote 8). Upon receiving \(\mathsf{SyncReq}\), \(M_{svr}\) verifies the signature using \(pk_{I_{dev}}\) and responds with \(\mathsf{SyncResp}\), which includes the current timestamp \(\mathsf{time_{cur}}\). Upon receipt of \(\mathsf{SyncResp}\), \(I_{dev}\) verifies the signature using \(pk_{M_{svr}}\) obtained at **Provision**. If verification succeeds, \(I_{dev}\) updates its local timestamp and sends an authenticated acknowledgment \(\mathsf{SyncAck}\) to \(M_{svr}\). Finally, \(M_{svr}\) verifies \(\mathsf{SyncAck}\) and updates its registered-time database entry for \(ID_{dev}\). The next time \(I_{dev}\) requests a **TimeSync**, \(M_{svr}\) will know whether the signature is based on the same \(\mathsf{time_{prev}}\) it previously sent. At the end of the protocol, \(I_{dev}\) and \(M_{svr}\) share the same \(\mathsf{time_{cur}}\). Given unavoidable network transmission latency, we suggest keeping a window of acceptance \(\epsilon\) when verifying timestamps, as sketched below.

Footnote 8: We acknowledge that the signature itself might be a DoS attack vector, since verifying it consumes \(M_{svr}\)'s resources.

Subsequently, \(I_{dev}\) stays synchronized with \(M_{svr}\) by restarting the secure timer after receiving and updating \(\mathsf{time_{prev}}\). Thereafter, \(I_{dev}\) computes the latest time by adding the secure timer value to \(\mathsf{time_{prev}}\); we denote this time as \(\mathsf{time_{dev}}\). However, since the secure timer value might still drift due to hardware inconsistencies, repeating **TimeSync** at regular intervals is recommended.

Figure 6. BootTime Phase of PAISA.
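The clock arithmetic just described is simple enough to show in full. In the sketch below, `paisa_now()` derives \(\mathsf{time_{dev}}\) from \(\mathsf{time_{prev}}\) plus the secure timer, and `timestamp_fresh()` applies the acceptance window \(\epsilon\); the helper names and the 5-second window are our choices, not fixed by the design.

```c
#include <stdint.h>
#include <stdbool.h>

#define EPSILON_SECS 5u   /* illustrative acceptance window (epsilon) */

static uint32_t time_prev;                    /* last timestamp from M_svr (UNIX epoch) */
extern uint32_t secure_timer_seconds(void);   /* assumed: seconds since last TimeSync */

/* time_dev = time_prev + elapsed secure-timer time */
uint32_t paisa_now(void) {
    return time_prev + secure_timer_seconds();
}

/* Accept a received timestamp only if it is within epsilon of our clock,
 * tolerating network latency while rejecting stale (replayed) values. */
bool timestamp_fresh(uint32_t ts) {
    uint32_t now  = paisa_now();
    uint32_t diff = (ts > now) ? (ts - now) : (now - ts);
    return diff <= EPSILON_SECS;
}
```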
#### 5.2.3. Runtime

The current PAISA design uses a _push_ model, whereby \(I_{dev}\) periodically transmits \(\mathsf{Msg_{anno}}\)-s at fixed intervals. An intuitive alternative is a _pull_ model, in which \(U_{dev}\) announces its presence first and thereby solicits information from all nearby \(I_{dev}\)-s. This is similar to the Access Point (AP) discovery process in WiFi: \(U_{dev}\) emits a "Probe Request", to which an AP responds with a "Probe Response" containing the network parameters needed to establish a connection. In the same fashion, an \(I_{dev}\) that receives a "Probe Request" could include \(\mathsf{Msg_{anno}}\) in the "Probe Response" sent to \(U_{dev}\). One advantage of the pull model is that \(\mathsf{Msg_{anno}}\)-s are sent only when needed, reducing the burden on individual \(I_{dev}\)-s and easing network traffic congestion. On the other hand, it becomes more challenging to deal with "sleeping" or intermittently powered-off \(I_{dev}\)-s, which raises energy consumption issues. We intend to explore the pull model as part of near-future work.

The PAISA runtime, shown in Figure 7, involves two procedures: (1) **Announcement** on \(I_{dev}\), part of \(SW_{\mathsf{PAISA}}\) installed at **Provision** time, and (2) **Reception**, an app on \(U_{dev}\) installed by the user.

**Announcement:** PAISA implements two time intervals using the secure timer on \(I_{dev}\), \(\mathsf{T_{Attest}}\) and \(\mathsf{T_{Announce}}\), which govern when \(\mathsf{Attest}\) and \(\mathsf{Announce}\) must be executed, respectively, triggered by the timer interrupt. During \(\mathsf{Attest}\), i.e., when \(\mathsf{time_{dev}}\) matches \(\mathsf{T_{Attest}}\), PAISA measures the \(I_{dev}\) memory containing \(SW_{dev}\) and compares the measurement with the hash of \(SW_{dev}\) stored at **Provision** time. If the measurements match, \(I_{dev}\) sets \(\mathsf{Att_{result}} = true\) and \(\mathsf{Att_{report}} = (\mathsf{Att_{result}}, \mathsf{time_{dev}})\), and stores the latter in secure RAM. During \(\mathsf{Announce}\), i.e., when \(\mathsf{time_{dev}}\) matches \(\mathsf{T_{Announce}}\), \(I_{dev}\) generates a new \(\mathsf{Msg_{anno}}\) composed of: a nonce, the current timestamp \(\mathsf{time_{dev}}\), the \(\mathsf{URL_{Man}}\) given at **Provision** time, the \(\mathsf{Att_{report}}\) from the latest attestation (as per \(\mathsf{T_{Attest}}\)), and a signature over its content. The size of \(\mathsf{Msg_{anno}}\) depends on the signature algorithm used. Also, whenever \(\mathsf{Manifest_{I_{dev}}}\) or \(\mathsf{URL_{Man}}\) is updated (e.g., due to a software update, a maintenance shutdown, or a change of the shortened URL), \(M_{svr}\) sends the updated \(\mathsf{URL_{Man}}\) to \(I_{dev}\) at **TimeSync** time.

**\(\mathsf{Attest}\) and \(\mathsf{Announce}\) periodicity:** If \(\mathsf{T_{Attest}}\) is the same as \(\mathsf{T_{Announce}}\), attestation and announcement are performed sequentially. This is recommended, so that \(U_{dev}\) always receives the latest information about \(I_{dev}\). However, periodicity can be adjusted based on device capabilities and desired use-cases. If \(I_{dev}\) is a weak low-end device and/or must prioritize its normal applications, \(\mathsf{T_{Attest}}\) can be longer than \(\mathsf{T_{Announce}}\) (see Footnote 9). In our experiments, \(\mathsf{Attest}\) time is much smaller than \(\mathsf{Announce}\) time, because signing takes more time than hashing a small amount of memory; a sketch of the attestation routine appears below.
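The sketch below hashes \(SW_{dev}\) in 4 KB chunks (the chunking used by our TCB, per Section 6.3) and builds \(\mathsf{Att_{report}}\). It uses Mbed TLS 3.x-style SHA-256 calls; the surrounding variable and function names are ours, and return-code checks are elided for brevity.

```c
#include <stdint.h>
#include <string.h>
#include "mbedtls/sha256.h"

#define CHUNK 4096u   /* program memory is hashed in 4 KB chunks */

extern const uint8_t *sw_dev_base;   /* start of the attested SW_dev region */
extern size_t         sw_dev_len;
static uint8_t provisioned_hash[32]; /* hash of SW_dev stored at Provision */
static struct { uint8_t ok; uint32_t time_dev; } att_report;  /* Att_report */

extern uint32_t paisa_now(void);

/* Attest: hash SW_dev chunk by chunk and compare with the provisioned hash. */
void paisa_attest(void) {
    mbedtls_sha256_context c;
    uint8_t h[32];

    mbedtls_sha256_init(&c);
    mbedtls_sha256_starts(&c, 0);                    /* 0 selects SHA-256 */
    for (size_t off = 0; off < sw_dev_len; off += CHUNK) {
        size_t n = (sw_dev_len - off < CHUNK) ? (sw_dev_len - off) : CHUNK;
        mbedtls_sha256_update(&c, sw_dev_base + off, n);
    }
    mbedtls_sha256_finish(&c, h);
    mbedtls_sha256_free(&c);

    att_report.ok       = (memcmp(h, provisioned_hash, 32) == 0); /* Att_result */
    att_report.time_dev = paisa_now();                            /* timestamped */
}
```

The \(\mathsf{Announce}\) step then copies this report into a fresh \(\mathsf{Msg_{anno}}\) and signs it with \(sk_{I_{dev}}\).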
Footnote 9: For example, \(\mathsf{T_{Attest}}\) can be set to one day while \(\mathsf{T_{Announce}}\) is set to 10 seconds, implying that \(U_{dev}\) can confirm that \(I_{dev}\) exists in the locality and was not compromised for, at least, the last 24 hours, provided that verification succeeds.

**Reception:** After receiving \(\mathsf{Msg_{anno}}\), \(U_{dev}\) parses it and runs \(\mathsf{Verify}\): it checks the freshness of the timestamp, fetches \(\mathsf{Manifest_{I_{dev}}}\) via \(\mathsf{URL_{Man}}\), and verifies the signatures of both \(\mathsf{Manifest_{I_{dev}}}\) (using \(pk_{M_{svr}}\)) and \(\mathsf{Msg_{anno}}\) (using \(pk_{I_{dev}}\) from the manifest), along with the enclosed \(\mathsf{Att_{report}}\). If all checks succeed, \(U_{dev}\) displays the device description and attestation status to the user (see Section 6.4).

Figure 7. Runtime Phase of PAISA.
## 6. Implementation

PAISA's timeliness guarantee rests on TZ-M enforcement: untrusted software cannot alter the secure peripheral configuration or keep the TCB from running. This assurance is provided by TZ-M hardware, which raises a SecureFault (i.e., a hardware fault) whenever a non-secure application attempts to modify the configuration or access the secure peripherals directly. When a SecureFault is issued, the MCU enters the SecureFault handler within the TCB, where PAISA resets the MCU. Therefore, even if \(\mathcal{A}\mathsf{dv}\) attempts a DoS attack by raising SecureFaults, PAISA still issues announcements, transmitting a new \(\mathsf{Msg_{anno}}\) as soon as the device wakes, before any normal activity. Also, the secure timer is configured, with the highest priority, to interrupt the MCU via the NVIC every \(\mathsf{T_{Announce}}\). Hence, no other user-level interrupt can preempt the announcement schedule; a skeleton of both handlers follows.
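The skeleton below shows the two Secure-region handlers just described. `SecureFault_Handler` follows the CMSIS vector-table naming for ARMv8-M, and `CTIMER2_IRQHandler` the NXP SDK naming for our board; the helper functions and deadline variables are hypothetical, and `NVIC_SystemReset()` is normally provided inline by the CMSIS device header.

```c
#include <stdint.h>

extern void paisa_attest(void);            /* from the sketch in Section 5.2.3 */
extern void paisa_announce(void);          /* builds, signs, and sends Msg_anno */
extern void NVIC_SystemReset(void);        /* CMSIS core function */
extern uint32_t paisa_now(void);
extern uint32_t T_Attest_next, T_Announce_next;  /* next deadlines (epoch secs) */

/* Raised by TZ-M when non-secure code touches secure memory/peripherals. */
void SecureFault_Handler(void) {
    /* Simplest safe response: reset. On reboot, PAISA re-announces before
     * any Normal-region code runs, so Adv gains nothing by faulting. */
    NVIC_SystemReset();
}

/* Secure timer ISR: highest priority, so Normal-region software can neither
 * preempt nor mask it. */
void CTIMER2_IRQHandler(void) {
    uint32_t now = paisa_now();
    if (now >= T_Attest_next)   { paisa_attest();   /* then advance T_Attest_next   */ }
    if (now >= T_Announce_next) { paisa_announce(); /* then advance T_Announce_next */ }
    /* finally, clear the timer interrupt flag (vendor-specific) */
}
```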
### Implementation Challenges

**How to announce?** An interesting challenge is how to broadcast \(\mathsf{Msg_{anno}}\) when \(U_{dev}\) has no connection with \(I_{dev}\). A naive option is to broadcast \(\mathsf{Msg_{anno}}\) via UDP packets. However, this is not a robust model, since the local WiFi router in the subnet must be trusted to relay packets to \(U_{dev}\)-s. Moreover, it requires \(U_{dev}\)-s to be connected to the router to receive \(\mathsf{Msg_{anno}}\)-s, which is not a fair assumption. To mitigate this issue, we use IEEE 802.11 standard WiFi Beacon Frames (Hanan et al., 2015). Beacon frames are typically used by routers or APs to advertise their presence. PAISA uses such beacon frames to broadcast \(\mathsf{Msg_{anno}}\), letting other devices know of \(I_{dev}\)'s presence, akin to a router. More specifically, PAISA populates the vendor-specific elements of the beacon frame with \(\mathsf{Msg_{anno}}\).

**\(\mathsf{Msg_{anno}}\) size limitation:** \(\mathsf{Msg_{anno}}\) is limited to 255 bytes, the maximum length of a vendor-specific element in a beacon frame. Hence, we minimized all fields of \(\mathsf{Msg_{anno}}\) to fit this size, as summarized in the struct sketch below. Using Bitly, \(\mathsf{URL_{Man}}\) is reduced to 11 bytes. Using ECDSA with the Prime256v1 curve, \(\mathsf{Sig_{anno}}\) is reduced to 64 bytes. Using the UNIX epoch format, \(\mathsf{time_{dev}}\) requires only 4 bytes. Only 5 bytes are needed for the attestation report: one byte for the attestation result (a boolean) and 4 bytes for the attestation timestamp. In total, \(\mathsf{Msg_{anno}}\) is about 116 bytes, including a 32-byte nonce. A typical WiFi router beacon frame observed in our experiments is between 200 and 450 bytes; the beacon frame generated for a PAISA \(\mathsf{Msg_{anno}}\) is 240 bytes. It is relatively small because it contains only one vendor-specific element and no optional tags (besides required fields), in contrast with a typical beacon frame that carries multiple proprietary optional tags.

**Signing overhead:** Computing a signature is performance-intensive. Some very low-end devices cannot afford the heavy cryptographic computation at all, and others take several seconds per signature. Fortunately, TEEs such as TrustZone are usually (although optionally) equipped with cryptographic hardware support. In our implementation, we use CASPER, the cryptographic accelerator on the NXP board, to perform Elliptic Curve Cryptography (ECC) and thereby reduce signing overhead.
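Putting the field sizes together, the on-air payload can be described by the packed struct below. The field order is our illustration; the text above fixes only the sizes, which sum to 116 bytes.

```c
#include <stdint.h>

/* 116-byte Msg_anno payload carried in one vendor-specific beacon element. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  nonce[32];     /* fresh per announcement */
    uint32_t time_dev;      /* UNIX epoch timestamp, 4 bytes */
    char     url_man[11];   /* Bitly-shortened manifest URL */
    uint8_t  att_result;    /* 1 byte: attestation pass/fail */
    uint32_t att_time;      /* 4 bytes: attestation timestamp */
    uint8_t  sig[64];       /* ECDSA P-256 (Prime256v1) signature, raw r||s */
} msg_anno_t;               /* total: 32 + 4 + 11 + 1 + 4 + 64 = 116 bytes */
#pragma pack(pop)

_Static_assert(sizeof(msg_anno_t) == 116,
               "Msg_anno must fit within the 255-byte vendor-specific element");
```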
### Trusted Software in \(I_{dev}\)

Figure 8 shows that \(I_{dev}\) contains three components: a non-secure application in the Normal region, the PAISA TCB in the Secure region, and the network stack connected over the secure UART4 interface.

Figure 8. PAISA Implementation Setup.

**Non-secure application:** We implemented sample thermal-sensor software as a non-secure application in the Normal region. The software reads temperature data from the sensor (on the NXP board) every second and sends it to an external server via the network interface. Since the network interface is exclusive to the Secure world, we implemented a secure stub, invokable through an NSC function, that allows non-secure applications to access the network interface. This stub always prioritizes PAISA announcements over other requests.

For cryptographic operations, we use the Mbed TLS library (Klele et al., 2017) on both \(I_{dev}\) and \(M_{svr}\). At **Provision**, \(I_{dev}\) and \(M_{svr}\) both sample new ECC key-pairs based on the Prime256v1 curve.

**PAISA TCB** contains three main modules: Secure Timer ISR, Attestation, and Announcement. The Secure Timer ISR, attached to CTIMER2, executes when the announcement interval \(\mathsf{T_{Announce}}\) triggers via the NVIC. This ISR first calls the Attestation module, if \(\mathsf{T_{Attest}}\) is met, and then invokes the Announcement module. The Attestation module computes SHA256 over the application program memory, in 4 KB chunks, and generates \(\mathsf{Att_{report}}\), as shown in Figure 7. Next, the Announcement module creates \(\mathsf{Msg_{anno}}\) and sends it to the WiFi interface using USART_WriteBlocking().

**Network Stack:** The ESP32-C3-DevKitC-02 board houses WiFi and Bluetooth, running on a 32-bit single-core RISC-V processor at 160 MHz. The board complies with the IEEE 802.11b/g/n protocol and supports Station mode, SoftAP mode, and SoftAP + Station mode. The PAISA TCB uses Station mode for **TimeSync** with \(M_{svr}\) and SoftAP mode for **Announcement** to \(U_{dev}\). After receiving \(\mathsf{Msg_{anno}}\) via uart_read_bytes(), the WiFi module generates a beacon frame using the esp_wifi_80211_tx() API and sets SSID="PAISA" (a construction sketch appears at the end of this section). Figure 9 shows an example of the produced beacon frame. It includes \(\mathsf{Msg_{anno}}\) in the vendor-specific element: the first byte (0xdd) is the Element ID, the second byte (0x83) is the tag length, the next three bytes (0x00, 0x14, 0x6c) are the Organizationally Unique Identifier (OUI) for Netgear, and the remaining bytes carry the \(\mathsf{Msg_{anno}}\) contents. The beacon frame is transmitted according to the standard WiFi beacon schedule.

Figure 9. Example of \(\mathsf{Msg_{anno}}\).

### Reception App in \(U_{dev}\)

We implemented **Reception** as an Android app on \(U_{dev}\), a Google Pixel 6, developed using Android Studio. To scan for beacon frames, **Reception** requires location and WiFi access permissions, enabled by setting ACCESS_FINE_LOCATION and CHANGE_WIFI_STATE in the app configuration. **Reception** uses the getScanResults() API in the wifi.ScanResult library to scan for and identify WiFi beacon frames with SSID="PAISA". It then uses the marshall() API from the os.Parcel library to extract the list of vendor-specific elements from the frame. Next, the app parses \(\mathsf{Msg_{anno}}\) and fetches \(\mathsf{Manifest_{I_{dev}}}\) from \(\mathsf{URL_{Man}}\) using the getInputStream API in the net.HttpURLConnection library. After receiving \(\mathsf{Manifest_{I_{dev}}}\), it verifies the signatures in \(\mathsf{Manifest_{I_{dev}}}\) and \(\mathsf{Msg_{anno}}\) using the corresponding public keys via the java.security library. Finally, it displays the device description and the attestation report on the \(U_{dev}\) screen, as shown in Figure 10. The **Reception** app also has a "SCAN PAISA DEVICE" button (shown in the figure) to explicitly scan for PAISA announcements.

Figure 10. PAISA Proof-of-Concept. The phone screenshot on the right shows the Reception app with device details of \(I_{dev}\) (emulated on the NXP board beside it).
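As promised above, here is a hedged sketch of how the network board could append the vendor-specific element and transmit the beacon. `esp_wifi_80211_tx()` is the stock ESP-IDF raw-transmit API; the helper names are ours, and the 802.11 beacon header/SSID construction is elided since it matches a normal SoftAP beacon.

```c
#include <stdint.h>
#include <string.h>
#include "esp_wifi.h"   /* ESP-IDF */

/* Append one vendor-specific IE (Element ID 0xdd) carrying Msg_anno to a
 * beacon body that already holds the 802.11 header, fixed fields, and the
 * SSID="PAISA" element. Returns the new frame length. */
static int append_msg_anno_ie(uint8_t *frame, int len,
                              const uint8_t *msg_anno, uint8_t anno_len)
{
    static const uint8_t oui[3] = {0x00, 0x14, 0x6c};  /* OUI seen in Figure 9 */
    frame[len++] = 0xdd;                      /* Element ID: vendor-specific  */
    frame[len++] = (uint8_t)(3 + anno_len);   /* length: OUI + payload        */
    memcpy(&frame[len], oui, 3);              len += 3;
    memcpy(&frame[len], msg_anno, anno_len);  len += anno_len;
    return len;
}

void broadcast_announcement(const uint8_t *msg_anno, uint8_t anno_len)
{
    uint8_t frame[256];
    int len = 0;
    /* ... build beacon header + SSID element into frame, updating len ... */
    len = append_msg_anno_ie(frame, len, msg_anno, anno_len);
    esp_wifi_80211_tx(WIFI_IF_AP, frame, len, false);  /* raw TX in SoftAP mode */
}
```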
## 7. Evaluation

This section presents the security and performance analysis of PAISA.

### Security Analysis

We argue the security of \(I_{dev}\) by considering an \(\mathcal{A}\mathsf{dv}\) (defined in Section 4.3) that attempts to attack the **TimeSync** and **Announcement** modules, and by showing how PAISA defends against it. An \(\mathcal{A}\mathsf{dv}\) who controls the Normal region of \(I_{dev}\) can attack PAISA in the following ways: (a) attempt to modify the code, data, and configuration of the secure modules, or try to read \(sk_{I_{dev}}\); (b) attempt to keep the normal application busy (e.g., by running an infinite loop); (c) attempt to continuously raise interrupts to escalate into the privileged execution mode of the Normal region; (d) attempt to broadcast fake, or replay old, \(\mathsf{Msg_{anno}}\)-s; (e) tamper with or drop **TimeSync** messages; and (f) attempt to violate the privacy of \(U_{dev}\).

First, the TZSC in TZ-M hardware protects all memory within the Secure region, including the secure peripheral configuration. It raises a SecureFault when (a) occurs and returns control to the Secure-region handler. Second, the NVIC configuration of the MCU ensures that the secure timer has the highest priority (i.e., is not preemptible) and that, when the timer interrupt occurs, the secure timer ISR within the Secure region is invoked. Hence, despite \(\mathcal{A}\mathsf{dv}\)'s attempts to block announcements via (b) or (c), **Announcement** executes in a timely manner. Moreover, the network module is under the control of the secure UART; thus it, too, cannot be blocked by malicious applications. Additionally, since announcements reach \(U_{dev}\) within one hop, a remote (Internet-based) \(\mathcal{A}\mathsf{dv}\) cannot interfere with them. Third, the unforgeability guarantee of the signature scheme ensures that \(\mathcal{A}\mathsf{dv}\) cannot generate a valid \(\mathsf{Msg_{anno}}\) without knowing \(sk_{I_{dev}}\). This means that \(\mathcal{A}\mathsf{dv}\) cannot modify the \(\mathsf{Attest}\) report to hide compromised applications, modify the timestamp of an old \(\mathsf{Msg_{anno}}\) to create a fake new one, or make \(\mathsf{Msg_{anno}}\) point to a wrong \(\mathsf{Manifest_{I_{dev}}}\): \(U_{dev}\) catches all of these during \(\mathsf{Verify}\). Similarly, \(\mathcal{A}\mathsf{dv}\) cannot get away with replaying an old \(\mathsf{Msg_{anno}}\) with a valid \(\mathsf{Attest}\) report, because \(U_{dev}\) detects obsolete messages from the enclosed timestamp. Hence, (d) is not possible. Fourth, messages exchanged in **TimeSync** are all authenticated with signatures, so tampering is not viable. Since the network module on \(I_{dev}\) is secure, \(\mathcal{A}\mathsf{dv}\) cannot drop packets leaving \(I_{dev}\); however, an \(\mathcal{A}\mathsf{dv}\) on the Internet can intercept and drop messages in transit between \(I_{dev}\) and \(M_{svr}\). For that, PAISA retransmits when necessary, as mentioned in Section 5.2. Additionally, \(\mathcal{A}\mathsf{dv}\) can launch network DoS attacks by flooding \(M_{svr}\) or \(I_{dev}\) during **TimeSync**. Nonetheless, this does not defeat the purpose of PAISA: in that case, \(I_{dev}\) has not yet booted into normal activity, so there is nothing to announce via \(\mathsf{Msg_{anno}}\) anyway.
Lastly, an \(\mathcal{A}\mathsf{dv}\) compromising one or more \(I_{dev}\)-s can attempt to trace \(U_{dev}\)'s location. However, by virtue of public key cryptography, \(U_{dev}\) need not connect to any \(I_{dev}\) to learn about IoT activity in its vicinity. Therefore, there is no user privacy leakage at all. The above five points conclude the security argument of PAISA, showing that it meets all security requirements stated in Section 4.4.

### Performance Analysis

Note that we measure the mean and standard deviation of each performance value over 50 iterations; the measurement approach is sketched below.

**Performance of \(I_{dev}\):** PAISA overhead on \(I_{dev}\) is measured in two phases: _BootTime_ and _Runtime_. _BootTime_ comprises the time taken for device initialization (**InitDevice**), **TimeSync**, and **Announcement**. During **InitDevice**, \(I_{dev}\) initializes the MCU itself and peripherals, including timers, sensors, actuators, and network interfaces. Next, during **TimeSync**, \(I_{dev}\) starts its WiFi module in Station mode to connect to \(M_{svr}\) over UDP. After a successful connection, \(I_{dev}\) and \(M_{svr}\) communicate to synchronize the former's clock. Then, \(I_{dev}\) executes **Announcement** to issue its first \(\mathsf{Msg_{anno}}\). As shown in Table 2, **InitDevice** takes 9.66 ms with negligible standard deviation, whereas the average latency of **TimeSync** is 1,076 ms with a significant deviation of 187 ms. This is because **TimeSync** includes network delay and all messages exchanged between the parties. Another reason for the high mean latency of **TimeSync** is: (a) two signing operations, for \(\mathsf{SyncReq}\) and \(\mathsf{SyncAck}\), and (b) one verification operation, for \(\mathsf{SyncResp}\). Each ECDSA signing/verification operation takes ≈230 ms at 150 MHz. Finally, **Announcement** takes 236 ms, which includes one signing operation and a beacon frame transmission. Adding these up, the total boot time is about 1.3 s, mostly due to **TimeSync** and **Announcement**. However, since booting happens infrequently, we consider this reasonable.

Table 2. PAISA Overhead on \(I_{dev}\) at _BootTime_.

| PAISA Procedure | Cycles: Mean | Cycles: Std | Time (ms): Mean | Time (ms): Std |
|---|---|---|---|---|
| InitDevice | 1,449,664 | 121 | 9.66 | 0.62 |
| TimeSync | 161,418,680 | 28,123,473 | 1,076.31 | 187.3 |
| Announcement | 35,434,781 | 87,119 | 236.21 | 0.58 |

_Runtime_ overhead stems from the PAISA **Announcement** module. Figure 11 shows the performance of **Announcement** with variable attested-region size. The latency of generating and signing \(\mathsf{Msg_{anno}}\) is constant, since the signature is computed over a fixed-size value. Attestation latency grows linearly with the attested memory size, since it requires hashing. However, signing takes significantly longer, about 230 ms, than attestation, which requires only 1 ms for 64 KB. This is because public-key operations naturally take more time than hashing. Therefore, **Announcement** latency almost equals that of one signature operation. Also, the software size of mid-to-low-tier devices is typically under 100 KB. Even if it reached 1 MB, attestation would take only ≈16 ms, which is 14 times less than one signature. Furthermore, during **Announcement**, the runtime overhead of the network interface is negligible, amounting to ≈135 μs, with minimal impact on overall latency.

Figure 11. PAISA Announcement Overhead on \(I_{dev}\) at _Runtime_.
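Per-operation cycle counts of the kind reported in Table 2 can be gathered with the Cortex-M Data Watchpoint and Trace (DWT) cycle counter. The sketch below uses real CMSIS register names, but note that DWT availability on Cortex-M33 is implementation-defined, and the measured function is a placeholder.

```c
#include "core_cm33.h"   /* CMSIS: DWT and CoreDebug definitions */

extern void paisa_announce(void);

/* Measure one Announcement in CPU cycles; divide by the core clock
 * (150 MHz here) for milliseconds. Repeat 50x for mean/std as in Table 2. */
uint32_t measure_announce_cycles(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable trace unit   */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start cycle counter */

    paisa_announce();

    return DWT->CYCCNT;  /* e.g., ~35.4 M cycles ≈ 236 ms at 150 MHz */
}
```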
**Performance of \(U_{dev}\):** The latency of the **Reception** app is shown in Table 3. It takes 1,070 ms, with a deviation of 247 ms, to receive one \(\mathsf{Msg_{anno}}\). This large deviation is due to two factors: the time to fetch \(\mathsf{Manifest_{I_{dev}}}\), which depends on network delay, and context-switching time on the smartphone. Note that the Google Pixel 6 has heterogeneous cores (2 cores @ 2.8 GHz, 2 cores @ 2.25 GHz, and 4 cores @ 1.8 GHz); thus its overall frequency is represented as [1.8-2.8] GHz in Table 3. Although one message takes about 1 s, multiple \(I_{dev}\)-s have little additional impact, because \(\mathsf{Msg_{anno}}\) processing can proceed concurrently via threading (AsyncTask). Therefore, upon launching the **Reception** app, most announcements are expected to be received within a few seconds.

**Performance of \(M_{svr}\):** **TimeSync** involves one signing and two verification operations, which take about 1 ms each at 2.6 GHz. The average latency of **TimeSync** on \(M_{svr}\) is 5.6 ms, with a deviation of 2.77 ms, mostly due to network delay. This latency is reasonable: even though \(M_{svr}\) handles multiple devices, they can be served in parallel. Moreover, **TimeSync** occurs only at reboot, which is quite infrequent for each \(I_{dev}\).

**\(\mathsf{Manifest_{I_{dev}}}\) size:** Many factors, such as the device description, cryptographic algorithm, key size, type of certificates, and certificate encoding, influence the size of \(\mathsf{Manifest_{I_{dev}}}\). Thus, \(\mathsf{Manifest_{I_{dev}}}\) can vary from a few to a few hundred KB. The \(\mathsf{Manifest_{I_{dev}}}\) used in our evaluation is 2,857 bytes.

**TCB size:** As mentioned in Section 6.3, the PAISA TCB consists of the software in TZ-M on the main NXP board and the driver on the ESP32 network board. On the main board, the TCB is 184 KB (including Mbed TLS); it is 682 KB on the network board (including the network stack).

## 8. Discussion & Limitations

We now discuss some limitations of PAISA and potential mitigations.

**Run-time Overhead:** To measure run-time overhead on \(I_{dev}\), we define CPU utilization (\(U\)) as the percentage of CPU cycles available to the normal application amidst announcements: \(U=\frac{t_{normal}}{t_{normal}+t_{ann}}\), where \(t_{normal}\) is the CPU time available to the normal application between two announcements, which equals \(\mathsf{T_{Announce}}\), and \(t_{ann}\) is the time taken by one announcement, which is nearly 250 ms (from Section 7.2). If \(\mathsf{T_{Announce}}=1\) s, then \(U=80\%\) of normal utility, which is unacceptable for general applications. If \(\mathsf{T_{Announce}}=100\) s, then \(U=99.7\%\), but users could then remain unaware of \(I_{dev}\) for up to 100 s. Therefore, depending on the application, there is a balance to strike between normal utility and the announcement interval.

There are other ways to reduce the overhead of PAISA. If the normal application binary is large, \(\mathsf{T_{Attest}}\) can be increased to lower the overhead of each \(\mathsf{T_{Announce}}\). However, this does not yield much of a reduction since, as Figure 11 shows, signing incurs a much higher overhead than attestation. Therefore, we consider the following option: if the activity schedule of \(I_{dev}\) is known, it can pre-compute multiple \(\mathsf{Msg_{anno}}\)-s during idle time and later release them one at a time (see the sketch below). In this case, the amortized (real-time) overhead is significantly lower, since it is due only to broadcasting \(\mathsf{Msg_{anno}}\). For example, a smart speaker can precompute a day's worth of announcements at midnight and gradually release them.
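A minimal sketch of this precompute-and-release idea follows. All names are ours; the key point is that future timestamps must be baked into the signatures, which is why the schedule must be known in advance (and why the embedded attestation report reflects the state at signing time).

```c
#include <stdint.h>

#define PERIOD   10u    /* T_Announce = 10 s, illustrative */
#define N_SLOTS  360u   /* one hour of announcements; a full day (8,640 slots
                         * of ~120 B each, i.e., ~1 MB) would need flash storage */

typedef struct { uint8_t bytes[116]; uint32_t release_at; } signed_anno_t;
static signed_anno_t queue[N_SLOTS];

extern void build_and_sign_msg_anno(uint8_t out[116], uint32_t future_ts);
extern void send_beacon(const uint8_t *buf, uint32_t len);
extern uint32_t paisa_now(void);

/* Run during idle time: sign a batch of announcements with their future
 * timestamps baked in. */
void precompute_batch(void) {
    uint32_t t0 = paisa_now();
    for (uint32_t i = 0; i < N_SLOTS; i++) {
        queue[i].release_at = t0 + i * PERIOD;
        build_and_sign_msg_anno(queue[i].bytes, queue[i].release_at);
    }
}

/* Called from the secure timer ISR: broadcasting is now the only real-time cost. */
void release_due(void) {
    static uint32_t next = 0;
    if (next < N_SLOTS && paisa_now() >= queue[next].release_at)
        send_beacon(queue[next++].bytes, 116);
}
```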
However, this approach is only applicable to devices that are not real-time and/or safety-critical. Also, in settings where a group of very low-end devices (e.g., smart bulbs) is connected to a local hub or controller, the latter can act as a PAISA proxy, i.e., it can broadcast a collective announcement on behalf of the entire group of its constituent devices.

**Compatibility with other RoTs:** PAISA can be applied to any architecture that offers a secure timer and a secure network interface. ARM TrustZone-A (TZ-A) is widely available in higher-end IoT devices that rely on ARM Cortex-A microprocessors (e.g., Raspberry Pi and Rock Pi). Since TZ-A offers guarantees similar to TZ-M's, PAISA can be realized directly on it. For lowest-end MCUs, such as TI MSP430 (Tang et al., 2018) and AVR ATmega8 (Tang et al., 2018), an active RoT called GAROTA (Garar et al., 2018) offers secure timer, GPIO, and UART peripheral support based on additional custom hardware. PAISA can be applied to GAROTA by extending GAROTA's secure-timer TCB to include periodic announcements. Furthermore, there is a software-based MultiZone TEE (Tang et al., 2018) for RISC-V MCUs. Relying on Physical Memory Protection (PMP), MultiZone divides memory and peripherals into well-isolated regions, called Zones, which are configured at compile time. PAISA can be implemented as one of the Zones, with a timer peripheral and a network peripheral assigned to it.

**Compatibility with Other Network Interfaces:** We believe that PAISA is compatible with network interfaces other than WiFi, such as Bluetooth Low Energy and cellular. For example, with Bluetooth 5.0 and above, devices scan for nearby devices by broadcasting packets that contain the sender address and an advertising payload of up to 255 bytes. A PAISA announcement (116 bytes) easily fits into this payload.

**Secure Update on \(I_{dev}\):** To support secure software updates on \(I_{dev}\), \(M_{svr}\) or software vendors can initiate an update request by sending the new software along with its authorization token. This token is generated using a private key whose corresponding public key is known to \(I_{dev}\). Implementing this process requires extending the PAISA TCB to include token verification and update installation, as sketched below.
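A hedged sketch of the token check follows: the token is taken to be a signature, by the updater's private key, over the hash of the new software image (this interpretation and all names are ours; `mbedtls_pk_verify()` and `mbedtls_sha256()` are stock Mbed TLS calls).

```c
#include <stdint.h>
#include <stddef.h>
#include "mbedtls/pk.h"
#include "mbedtls/sha256.h"

/* Verify an update request inside the TCB before installing it. */
int verify_update(mbedtls_pk_context *pk_updater,  /* public key known to I_dev */
                  const uint8_t *new_sw, size_t sw_len,
                  const uint8_t *token, size_t token_len)
{
    uint8_t h[32];
    (void)mbedtls_sha256(new_sw, sw_len, h, 0);     /* hash the new image */
    if (mbedtls_pk_verify(pk_updater, MBEDTLS_MD_SHA256,
                          h, sizeof h, token, token_len) != 0)
        return -1;                                   /* reject unauthorized update */

    /* Install new_sw into the Normal region, update the stored SW_dev hash
     * used by Attest, and have M_svr re-sign Manifest_Idev (Section 5.2.1). */
    return 0;
}
```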
We expect that this update procedure can be implemented in a manner similar to existing frameworks, such as (Tang et al., 2018; Wang et al., 2018; Wang et al., 2018).

**User Linkage:** There are both practical and conceptual techniques for anonymous retrieval that can be used to fetch \(\mathsf{Manifest_{I_{dev}}}\)-s. The former include Tor, mix networks (e.g., JonDo and Nym), and peer-to-peer networks (e.g., I2P and Freenet). They all facilitate anonymous communication; however, their use might be illegal in some jurisdictions, while in others it might be impractical due to additional requirements, such as a Virtual Private Network (VPN). Conceptual techniques include privacy-preserving cryptographic constructs, such as Private Information Retrieval (PIR) (Tang et al., 2018; Wang et al., 2018) and Oblivious RAM (ORAM) (Wang et al., 2018; Wang et al., 2018). Using these techniques would require building customized "wrappers" for PAISA.

**PAISA TCB:** As discussed in Section 7.2, though the TCB size on the main device is small, the total size (including the network driver) increases the attack surface. Unfortunately, this is unavoidable, because PAISA's main objective is guaranteed announcements, which necessitates reliance on a trusted network interface. To alleviate this problem, we suggest pruning the network module down to what is absolutely necessary. For example, PAISA only requires the driver to establish a UDP connection with \(M_{svr}\) and broadcast WiFi beacon frames; the rest of the driver (including TCP, HTTP, etc.) can be removed, significantly reducing the binary size. However, if normal applications want to use these protocols (via the secure stub mentioned earlier), the driver has to retain them.

**Exclusive Network Module:** To ensure protection from DoS attacks, PAISA requires exclusive access to a network peripheral on \(I_{dev}\), because a shared network interface can easily be exploited by \(\mathcal{A}\mathsf{dv}\): it can keep the interface busy and prevent \(\mathsf{Msg_{anno}}\) packets from being sent out. However, reserving a network interface exclusively for TCB use is expensive, since the \(I_{dev}\) budget might not permit an additional interface (in terms of cost and/or energy) for normal use. To address this concern, we suggest techniques such as (Tang et al., 2018; Wang et al., 2018; Wang et al., 2018) that employ a secure stub to share peripherals between secure and non-secure programs. The main idea is to lock the network interface as a trusted peripheral controllable only by TZ-M, and to implement a stub in the Secure region that carefully parses inputs and relays them to the trusted interface. This stub is made available to normal applications by exposing an NSC function callable from the Normal region, as sketched below. Furthermore, the stub must implement a scheduling queue to handle requests from both secure and non-secure applications. This way, there is no need to equip \(I_{dev}\) with an additional interface. We implement basic functionality of this approach as a proof-of-concept; it is available as part of (Wang et al., 2018). Nonetheless, we emphasize that, to preserve PAISA's "timeliness" property, the **Announcement** module is always given higher priority in accessing the network interface.
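A minimal sketch of such an NSC stub is shown below, assuming compilation with `-mcmse`. The `cmse_nonsecure_entry` attribute and `cmse_check_address_range()` are standard ARMv8-M CMSE features; the driver call, pending-announcement flag, and single-slot buffering (a real stub would keep a scheduling queue) are simplifications of ours.

```c
#include <arm_cmse.h>   /* ARMv8-M CMSE support */
#include <stdint.h>
#include <string.h>

static uint8_t ns_buf[256];   /* secure-side copy of the non-secure payload */

extern void secure_uart_send(const uint8_t *buf, uint32_t len); /* assumed driver */
extern int  announcement_pending(void);                         /* assumed flag   */

/* NSC entry point: Normal-region code calls this to transmit via the
 * TCB-owned network interface. The attribute generates the SG veneer. */
int __attribute__((cmse_nonsecure_entry))
ns_net_send(const uint8_t *buf, uint32_t len)
{
    /* Validate the non-secure pointer: it must reference readable NS memory,
     * so malicious callers cannot make the TCB read Secure data. */
    if (len > sizeof ns_buf ||
        cmse_check_address_range((void *)buf, len,
                                 CMSE_NONSECURE | CMSE_MPU_READ) == NULL)
        return -1;

    memcpy(ns_buf, buf, len);        /* copy before use (TOCTOU hygiene) */

    if (announcement_pending())      /* Msg_anno always goes first */
        return 0;                    /* deferred until after the announcement */

    secure_uart_send(ns_buf, len);
    return (int)len;
}
```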
If the number of \(I_{dev}\)-s provisioned by \(M_{sor}\) is high and \(M_{sor}\) is consistently overloaded with requests, we suggest using helper third-party servers in the local area of deployment. Of course, such servers must be certified by \(M_{sor}\) to prove their authenticity when responding to **TimeSync** and \(\mathsf{Manifest}_{I_{dev}}\) retrieval requests.

## 9. Related Work

Related work can be classified into six categories: **Active RoTs** proactively monitor activity on MCUs to prevent (or minimize the extent of) compromises. For example, (Garar et al., 2018; Wang et al., 2018; Wang et al., 2018) are co-design (hardware/software) architectures that guarantee the execution of critical software even when all device software is compromised. (Wang et al., 2018) guarantees sensor data privacy by letting only authorized software access sensor data via secure GPIO peripherals. On the other hand, (Wang et al., 2018) prevents code injection attacks by allowing only authorized software to run on the MCU while preventing any other software from modifying it except via secure authorized updates. Meanwhile, (Wang et al., 2018; Wang et al., 2018) rely on ARM TrustZone or a similar class of MCUs to protect devices from being "bricked" by resetting and updating the device whenever it does not respond to a watchdog timer. **Remote Attestation:** There is a large body of research proposing remote attestation architectures for a wide range of devices. (Garar et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) propose attestation architectures for MCUs. There are also other architectures, such as (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), that discuss runtime attestation techniques, including control-flow and data-flow attestation, for low-end MCUs. All the aforementioned attestation architectures can be integrated with the active RoTs mentioned earlier to enable PAISA. For servers and high-end IoT, there are TEE architectures such as Intel SGX [77], AMD SEV [24], Sanctum [41] and Keystone [85] that provide attestation APIs for attesting in-enclave applications. However, these are not applicable to PAISA because PAISA attests and reports the normal region instead of the secure region. **ARM TrustZone:** Much prior work has leveraged TrustZone to improve the security of systems from various perspectives. [35; 73; 92] use TZ-A as an authorization tool for non-secure applications. [35] proposes an authorization architecture to regulate smaller user devices connected to IoT hubs, enabled by TZ-A. [73] implements a user authentication scheme based on TZ-A on smartphones. Besides these, TZ-M is also used to enhance security in several constrained settings, e.g., to optimize secure interrupt latencies [102], improve real-time systems [126], mitigate control-flow attacks [20; 90], and add support for virtualization [104]. Similarly, in PAISA, we use TZ-M to trigger announcements at regular intervals. **Hidden IoT Device Detection:** To detect hidden IoT devices in unfamiliar environments, a few approaches have been proposed in recent years. "Spyware" solutions such as [12; 114] are popular detectors; however, the detector must be in close proximity to the IoT device. [89] designs specialized hardware - a portable millimeter-wave probe - to detect electronic devices.
[107] leverages the time-of-flight sensor on commodity smartphones to find hidden cameras. However, these approaches either take significant time or require specialized hardware to detect the devices. Moreover, they can only detect IoT devices, but cannot identify them. On the other hand, [68; 101; 112; 113] observe WiFi traffic to identify hidden devices. In particular, [112] monitors coarse attributes in the WiFi 802.11 layer to classify IoT devices. [113] establishes causality between WiFi traffic patterns to identify and localize an IoT device. [101] uses autoencoders to automatically learn features from IoT network traffic and classify them. However, all the aforementioned techniques rely upon probabilistic models; hence, they can be error-prone, especially when there are newer devices or when the adversary is strong enough to bypass the detection logic; moreover, they are computationally intensive. Conversely, PAISA takes a systematic approach to make users aware of the devices with minimal computation on their end. Furthermore, PAISA announcements convey more information regarding the device, such as its revocation status, software validity, and complete device description, which is not possible with other approaches. **Broadcasting Beacon Frames:** [38] proposes a technique, Beacon-stuffing, that allows WiFi stations to communicate with APs without associating with any network. Subsequently, many applications of Beacon-stuffing have been introduced over the past decade. [23] uses beacon frames to determine whether a given device is physically located near a user device while the user is using the former for Two-Factor Authentication. [118] achieves two-way encrypted data transmission by injecting custom data into the probe request frame. [54] proposes a smartphone-based Car2X communication system to alert users about imminent collisions by replacing the SSID field in the beacon frame with the alert message. Following the 802.11 standard, [66] shows that custom information can be embedded in a beacon frame by modifying vendor-specific fields. **IoT Privacy:** Some prior work focused on enhancing user privacy in the context of IoT via Privacy Assistants (PA-s), user notices, and consent. PA-s [79; 58; 70] provide users with an automated platform to configure their privacy preferences on nearby IoT resources. For example, a recent study [40] interviews 17 participants to learn user perceptions of several existing PA-s and identifies issues with them. It then suggests ideas to improve PA-s in terms of automated consent and helping users opt out of public data collections. [62] explores a comprehensive design space for privacy choices based on a user-centered analysis, organizing it around five dimensions (e.g., type, functionality, and timing). It also devises a concrete use case and demonstrates an IoT privacy choice platform in real-world systems. Furthermore, some research efforts have explored privacy and security labels (akin to food nutrition labels) for IoT devices. For example, [59] suggests a set of IoT privacy and security labels based on interviews and surveys. It identifies 47 crucial factors and proposes a layered label approach to convey them. [60] conducts a survey with 1,371 online participants to evaluate the privacy factors proposed in prior research along two key dimensions: the ability to convey risk to consumers and the impact on their willingness to purchase an IoT device.
Also, the study yields actionable insights on optimizing existing privacy and security attributes of IoT labels. Similarly, [61] conducts a survey with 180 online participants in order to evaluate the impact of five security and privacy factors (e.g., access control) on participants' purchase behaviors when presented individually or collectively on an IoT label. The study underscores participants' willingness to pay a substantial premium for devices with better security and privacy practices. These prior results are valuable and relevant to this paper since they provide guidelines for which privacy-related factors should be reflected in \(\mathsf{Manifest}_{I_{dev}}\) and how to utilize them in order to attain an acceptable user experience with effective privacy configurations.

## 10. Conclusions

This paper suggests taking a systematic approach to making IoT devices _privacy-agile_ by advocating that devices periodically inform nearby users about their presence and activity. As a concrete example of this approach, we presented the design and construction of PAISA: a secure and _privacy-agile_ TEE-based architecture that guarantees secure periodic announcements of device presence via secure timer and network peripherals. We implemented PAISA as an end-to-end open-source prototype [28] on: (1) an ARM Cortex-M33 device equipped with TrustZone-M that broadcasts announcements using IEEE 802.11 WiFi beacons, and (2) an Android-based app that captures and processes them. The evaluation shows that \(I_{dev}\) takes 236 ms to transmit an announcement and that the app takes only 1 sec to process it. **Acknowledgements:** We thank ACM CCS'23 reviewers for constructive feedback. This work was supported in part by funding from NSF Awards SATC-1956393, SATC-2245531, and CICI-1840197, NSA Awards H98230-20-1-0345 and H98230-22-1-0308, as well as a subcontract from Peraton Labs.
2303.00032
Decentralized Model Dissemination Empowered Federated Learning in mmWave Aerial-Terrestrial Integrated Networks
It is anticipated that aerial-terrestrial integrated networks incorporating unmanned aerial vehicles (UAVs) mounted relays will offer improved coverage and connectivity in the beyond 5G era. Meanwhile, federated learning (FL) is a promising distributed machine learning technique for building inference models over wireless networks due to its ability to maintain user privacy and reduce communication overhead. However, off-the-shelf FL models aggregate global parameters at a central parameter server (CPS), increasing energy consumption and latency, as well as inefficiently utilizing radio resource blocks (RRBs) for distributed user devices (UDs). This paper presents a resource-efficient FL framework, called FedMoD (\textbf{fed}erated learning with \textbf{mo}del \textbf{d}issemination), for millimeter-wave (mmWave) aerial-terrestrial integrated networks with the following two unique characteristics. Firstly, FedMoD presents a novel decentralized model dissemination algorithm that makes use of UAVs as local model aggregators through UAV-to-UAV and device-to-device (D2D) communications. As a result, FedMoD (i) increases the number of participant UDs in developing FL model and (ii) achieves global model aggregation without involving CPS. Secondly, FedMoD reduces the energy consumption of FL using radio resource management (RRM) under the constraints of over-the-air learning latency. In order to achieve this, by leveraging graph theory, FedMoD optimizes the scheduling of line-of-sight (LOS) UDs to suitable UAVs/RRBs over mmWave links and non-LOS UDs to available LOS UDs via overlay D2D communications. Extensive simulations reveal that decentralized FedMoD offers same convergence rate performance as compared to conventional FL frameworks.
Mohammed S. Al-Abiad, Md. Zoheb Hassan, Md. Jahangir Hossain
2023-02-28T19:15:19Z
http://arxiv.org/abs/2303.00032v1
Decentralized Model Dissemination Empowered Federated Learning in mmWave Aerial-Terrestrial Integrated Networks

###### Abstract

It is anticipated that aerial-terrestrial integrated networks incorporating unmanned aerial vehicles (UAVs) mounted relays will offer improved coverage and connectivity in the beyond 5G era. Meanwhile, federated learning (FL) is a promising distributed machine learning technique for building inference models over wireless networks due to its ability to maintain user privacy and reduce communication overhead. However, off-the-shelf FL models aggregate global parameters at a central parameter server (CPS), increasing energy consumption and latency, as well as inefficiently utilizing radio resource blocks (RRBs) for distributed user devices (UDs). This paper presents a resource-efficient FL framework, called FedMoD (federated learning with model dissemination), for millimeter-wave (mmWave) aerial-terrestrial integrated networks with the following two unique characteristics. Firstly, FedMoD presents a novel decentralized model dissemination algorithm that makes use of UAVs as local model aggregators through UAV-to-UAV and device-to-device (D2D) communications. As a result, FedMoD (i) increases the number of participant UDs in developing FL model and (ii) achieves global model aggregation without involving CPS. Secondly, FedMoD reduces the energy consumption of FL using radio resource management (RRM) under the constraints of over-the-air learning latency. In order to achieve this, by leveraging graph theory, FedMoD optimizes the scheduling of line-of-sight (LOS) UDs to suitable UAVs/RRBs over mmWave links and non-LOS UDs to available LOS UDs via overlay D2D communications. Extensive simulations reveal that decentralized FedMoD offers same convergence rate performance as compared to conventional FL frameworks.

Decentralized FL model dissemination, energy consumption, UAV communications.

## I Introduction

Unmanned aerial vehicles (UAVs) are expected to have a significant impact on the economy by 2026, with a projected global market value of US$59.2 billion, making the inclusion of UAVs critical in beyond-5G cellular networks [1]. There are several unique features of UAV-mounted communication platforms, including the high likelihood of establishing line-of-sight (LOS) connections with ground nodes, rapid deployment, and adjustable mobility [2]. With such attributes, UAVs can serve as aerial base stations (BSs) or relays in conjunction with terrestrial base stations, resulting in aerial-terrestrial integrated networks (ATINs). By connecting cell-edge user devices (UDs) to terrestrial cellular networks via aerial BSs or relays, ATINs improve coverage and connectivity significantly [3]. The 3GPP standard also incorporates the use of UAVs as a communication infrastructure to complement terrestrial cellular networks [4]. Current 5G deployment efforts have shown that the millimeter-wave (mmWave) band at 28 GHz offers significantly more bandwidth and capacity than the sub-6 GHz band. At the same time, air-to-ground communications have the advantage of avoiding blockages and maintaining LOS connectivity as a result of the UAV's high altitude and flexibility [5]. Therefore, the mmWave band is suitable for deploying high-capacity ATINs in next-generation cellular networks.
A data-driven decision-making process enables wireless networks to manage radio resources more efficiently by predicting and analyzing several dynamic factors, such as users' behavior, mobility patterns, traffic congestion, and quality-of-service expectations. Data-driven radio resource management (RRM) has gained increasing popularity, thanks to the expansion of wireless sensing applications, the availability of enormous data, and the increasing computing capabilities of devices. To train machine learning (ML) models, raw data collected from individual UDs is aggregated in a central parameter server (CPS). As a result, such centralized ML approaches require enormous amounts of network resources to collect raw data from UDs. In addition, centralized ML also impairs users' privacy since the CPS can easily extract sensitive information from the raw data gathered from UDs. Recently, Google proposed federated learning (FL) to allow UDs to collaboratively learn a model without sharing their private data [6]. In FL, UDs update parameters according to their local datasets, and only the most recent parameters are shared with the CPS. Using the local models from all participating UDs, the CPS updates the global model parameters and shares them with the UDs. The local and global models are adjusted iteratively until convergence. Unlike centralized ML approaches, FL not only protects UD privacy but also improves wireless resource utilization significantly. Nevertheless, the convergence performance of FL in wireless networks depends significantly on the appropriate selection of the participating UDs, based on both channel and data quality, and on the bandwidth allocation among the selected UDs [7]. The FL framework provides a powerful computation tool for ATINs to make decentralized decisions [8]. UAVs are frequently used as aerial sensors or aerial data collectors in several practical scenarios, and the convergence and accuracy of FL in such use cases can be improved by appropriately exploiting the unique attributes of air-to-ground communication links. For instance, a FL framework was developed for hazardous zone detection and air quality prediction by utilizing UAVs as local learners and deploying a swarm of them to collect local air quality index (AQI) data [9]. A UAV-supported FL framework was also proposed in which a drone visited learning sites sequentially, aggregated model parameters locally, and relayed them to the CPS for global aggregation [9]. Meanwhile, ATINs can also deploy UAVs as aerial BSs with edge computing capabilities. In this context, a UAV can provide global model aggregation capability for a large number of ground UDs, thanks to its large coverage and high probability of establishing LOS communications [10]. In the aforesaid works, all the local model parameters were aggregated into a single CPS using the conventional star-based FL framework. Although such star-based FL is convenient, it poses several challenges in the context of ATINs. Firstly, star-based FL requires a longer convergence time due to the presence of straggling local learners. Recall that the durations of transmission and hovering of a UAV influence its energy consumption; consequently, increasing the convergence time of FL directly increases the energy consumption of UAVs. This presents a significant challenge for implementing FL in ATINs since UAVs usually have limited battery capacity.
In addition, as a result of increased distance and other channel impairments, a number of local learners with excellent datasets may, in practice, be out of coverage of the central server. The overall learning accuracy of FL models can be severely impacted if these local learners are excluded from the model. The use of star-based FL frameworks in ATINs is also confronted by the uncertainty of air-to-ground communication links resulting from random blocking and the mobility of UAVs. This work seeks to address these challenges by proposing a resource-efficient FL framework for mmWave ATINs that incorporates decentralized model dissemination and energy-efficient UD scheduling.

### _Summary of the Related Works_

In the current literature, communication-efficient FL design problems are explored. In [11], the authors suggested a stochastic alternating direction method of multipliers to update the local model parameters while reducing communications between local learners and the CPS. In [12], a joint client scheduling and RRB allocation scheme was developed to minimize accuracy loss. To minimize the loss function of FL training, UD selection, RRB scheduling, and transmit power allocation were optimized simultaneously [13]. The number of global iterations and the duration of each global iteration were minimized by jointly optimizing the UD selection and RRB allocation [14]. Besides, since the UDs participating in FL are energy-constrained, an increasing number of studies focused on designing energy-efficient FL frameworks. As demonstrated in [15], the energy consumption of FL can be reduced by uploading only quantized or compressed model parameters from UDs to the CPS. Furthermore, RRM enhances the energy efficiency of FL in large-scale networks. Several aspects of RRM, such as client scheduling, RRB allocation, and transmit power control, were extensively studied to minimize both the communication and computation energy of FL frameworks [16, 17]. An energy-efficient FL framework based on relay-assisted two-hop transmission and a non-orthogonal multiple access scheme was recently proposed for energy- and resource-constrained Internet of Things (IoT) networks [18]. In the aforesaid studies, conventional star-based FL frameworks were considered. Due to its requirement to aggregate all local model parameters on a single server, star-based FL is inefficient for energy- and resource-constrained wireless networks. In hierarchical FL (HFL) frameworks, network edge devices upload model parameters to mobile edge computing (MEC) servers for local aggregation, and the MEC servers periodically upload the aggregated local model parameters to the CPS. The HFL framework increases the number of connected UDs and reduces energy consumption [19]. To facilitate the HFL framework, a client-edge-cloud collaboration framework was explored [20]. HFL was investigated in heterogeneous wireless networks through the introduction of fog access points and multiple-layer model aggregation [21]. The dynamic wireless channels in the UD-to-MEC and MEC-to-CPS hops and the data distribution play a crucial role in HFL learning accuracy and convergence. Thus, efficient RRM is imperative for the implementation of HFL. As a result, the existing literature evaluated several RRM tasks, including UD association, RRB allocation, and edge association, to reduce the cost, latency, and learning error of HFL schemes [22, 23].
While HFL increases the number of participating UDs, its latency and energy consumption are still burdened by the dual-hop communication required for uploading and broadcasting local and global model parameters. Server-less FL is a promising alternative to reduce latency and energy consumption. This FL framework allows UDs to exchange locally aggregated models without involving central servers, thereby achieving model consensus. The authors in [24] proposed a FL scheme that relies on device-to-device (D2D) communications to achieve model consensus. However, due to the requirement of global model aggregation with two-time-scale FL over both D2D and user-to-CPS wireless transmission, this FL scheme offers limited latency improvement. In [25, 26], the authors developed FL model dissemination schemes by leveraging connected edge servers (ESs), which aggregate local models from their UD clusters and exchange them with all the other ESs in the network for global aggregation. However, a fully connected ES network is prohibitively expensive in practice, especially when the ESs are connected by wireless links. In addition, each global iteration of the FL framework takes significantly longer because the ESs continue to transmit the locally aggregated models until all other ESs receive them successfully [25, 26]. The authors in [27] addressed this issue by introducing conflicting UDs, which are the UDs covering multiple clusters, and allowing parameter exchanges between them and the local model aggregators. In spite of recent advances in resource-efficient, hierarchical, and decentralized FL frameworks, existing studies have several limitations in utilizing UAVs as local model aggregators in mmWave ATINs. In particular, the state-of-the-art HFL schemes of [20, 21, 22] can prohibitively increase the communication and propulsion energy consumption of UAVs because they involve two-hop communications and increased latency. Additionally, the mmWave band requires LOS links between UDs and UAVs for local model aggregation, as well as LOS UAV-to-UAV links for model dissemination. Accordingly, the FL model dissemination frameworks proposed in [25, 26, 27] are not applicable to mmWave ATINs. We emphasize that, in order to maintain the convergence speed and reduce the energy consumption, the interaction among UD-to-UAV associations, RRB scheduling, and UAV-to-UAV link selection, in addition to the inherent properties of the mmWave band, must be appropriately characterized. This motivates us to develop computationally efficient model dissemination and RRM schemes for mmWave ATINs implementing decentralized FL.

### _Contributions_

This work proposes a resource-efficient and fast-convergent FL framework for mmWave ATINs, referred to as Federated Learning with Model Dissemination (FedMoD). The specific contributions of this work are summarized as follows.

* A UAV-based distributed FL model aggregation method is proposed by leveraging UAV-to-UAV communications. Through the proposed method, each UAV is able to collect local model parameters only from the UDs in its coverage area and share those parameters over LOS mmWave links with its neighboring UAVs. The notion of physical-layer network coding is primarily used for disseminating model parameters among UAVs. This allows each UAV to collect all of the model parameters as well as aggregate them globally without the involvement of the CPS.
With the potential to place UAVs near cell-edge UDs, the proposed UAV-based model parameter collection and aggregation significantly increases the number of participating UDs in the FL model construction process. Based on the channel capacity of the UAV-to-UAV links, a conflict graph is formed to facilitate distributed model dissemination among the UAVs, and a maximum weight independent set (MWIS) search method is proposed to solve the conflict graph problem. In light of the derived solutions, a decentralized FedMoD is developed and its convergence is rigorously proved.

* Additionally, a novel RRM scheme is investigated to reduce the overall energy consumption of the developed decentralized FL framework under the constraint of learning latency. The proposed RRM optimizes both (i) the scheduling of LOS UDs to suitable UAVs and radio resource blocks (RRBs) over mmWave links and (ii) the scheduling of non-LOS UDs to LOS UDs over side-link D2D communications, such that non-LOS UDs can transmit their model parameters to the UAVs with the help of the available LOS UDs. As both scheduling problems are provably NP-hard, their optimal solutions require prohibitively complex computational resources. Two graph theory solutions are therefore proposed for the aforementioned scheduling problems to strike a suitable balance between optimality and computational complexity.
* To verify FedMoD's effectiveness over contemporary star-based FL and HFL schemes, extensive numerical simulations are conducted. Simulation results reveal that FedMoD achieves a good convergence rate and lower energy consumption compared to the benchmark schemes.

The rest of this paper is organized as follows. In Section II, the system model is described in detail. In Section III, the proposed FedMoD algorithm is explained thoroughly along with its convergence analysis. Section IV presents the RRM scheme for improving the energy efficiency of the proposed FedMoD framework. Section V presents various simulation results on the performance of the proposed FedMoD scheme. Finally, the concluding remarks are provided in Section VI.

## II System Model

### _System Overview_

The envisioned mmWave aerial-terrestrial integrated network (ATIN) model is illustrated in Fig. 1; it consists of a single CPS, multiple UAVs that are connected to each other through mmWave air-to-air (A2A) links, and multiple UDs that are under the serving region of each UAV. The UDs are connected with the UAVs via mmWave links. The set of all the considered UDs is denoted by \(\mathcal{U}=\{1,2,\cdots,U\}\) and the set of UAVs is denoted by \(\mathcal{K}=\{1,2,\cdots,K\}\). The federated learning (FL) process is organized in iterations, indexed by \(\mathcal{T}=\{1,2,\cdots,T\}\). Similar to [28, 29], each UAV \(k\) has a set of orthogonal RRBs that is denoted by \(\mathcal{B}=\{1,2,\cdots,B\}\), and the UDs are scheduled on these RRBs to offload their local parameters to the UAVs. The set of UDs in the serving region of the \(k\)-th UAV is denoted by \(\mathcal{U}_{k}=\{1,2,\cdots,U_{k}\}\). In addition, for the \(u\)-th UD, the available UAVs are denoted by a set \(\mathcal{K}_{u}\). Therefore, some UDs are able to access multiple UAVs simultaneously. We assume that (i) the \(u\)-th UD is associated to only the \(k\)-th UAV during the \(t\)-th FL iteration and (ii) neighboring UAVs transmit FL models to the scheduled UDs over orthogonal RRBs. In this work, we offload the task of global aggregation from the CPS.
However, the CPS is required to coordinate the clustering optimization of UAVs and their associated UDs through reliable control channels. Suppose that UAV \(k\) flies and hovers at a fixed altitude \(H_{k}\); it is assumed that all the UAVs have the same altitude. Let \(\mathbf{x}_{k}=(x_{k},y_{k},H_{k})\) be the 3D location of the \(k\)-th UAV and \((x_{u},y_{u})\) be the 2D location of the \(u\)-th UD. In accordance with [30], for the mmWave UD-UAV communications to be successful, one needs to ensure LOS connectivity between UAVs and UDs. However, some of the UDs may not have LOS communications to the UAVs; thus, they cannot transmit their trained local parameters directly to the UAVs. Let \(\mathcal{U}_{los}\) be the set of UDs that have LOS links to the UAVs, and let \(\mathcal{U}_{non}\) be the set of UDs that do not have LOS links to the UAVs. Given an access link between the \(u\)-th UD, i.e., \(u\in\mathcal{U}_{los}\), and the \(k\)-th UAV, the path loss of the channel (in dB) between the \(u\)-th UD and the \(k\)-th UAV is expressed as \(PL(u,k)=20\log_{10}(\frac{4\pi f_{c}d_{u,k}}{c})\), where \(f_{c}\) is the carrier frequency, \(c\) is the speed of light, and \(d_{u,k}\) is the distance between the \(u\)-th UD and the \(k\)-th UAV [30]. The wireless channel gain between the \(u\)-th UD and the \(k\)-th UAV on the \(b\)-th RRB is \(h_{k,b}^{u}=10^{-PL(u,k)/10}\). Let \(p\) be the transmission power of the UDs, which is kept fixed, and let \(N_{0}\) be the AWGN noise power. Therefore, the achievable capacity at which the \(u\)-th UD can transmit its local model parameters to the \(k\)-th UAV on the \(b\)-th RRB at the \(t\)-th global iteration is given by Shannon's formula \(R_{k,b}^{u}=W\log_{2}(1+\frac{p|h_{k,b}^{u}|^{2}}{N_{0}}),\forall u\in\mathcal{U}_{k},k\in\mathcal{K}_{u}\), where \(\mathcal{U}_{k}\subset\mathcal{U}_{los}\) and \(W\) is the RRB's bandwidth. Note that the transmission rate between the \(u\)-th UD and the \(k\)-th UAV on the \(b\)-th RRB determines whether the \(u\)-th UD is covered by the corresponding \(k\)-th UAV. In other words, the \(u\)-th UD is within the coverage of the \(k\)-th UAV if \(R_{k,b}^{u}\) meets the rate threshold \(R_{0}\), i.e., \(R_{k,b}^{u}\geq R_{0}\), and it has a LOS link to the \(k\)-th UAV. Each UAV \(k\) aggregates the local models of its scheduled UDs only. For disseminating the local aggregated models among the UAVs to reach global model consensus, the UAVs communicate through A2A links. The A2A links between the UAVs are assumed to be in LOS condition [31]. We also assume that the UAVs employ directive beamforming to improve the rate. As a result, the gain of the antenna of the UAV located at \(x_{k}\), denoted by \(G^{A}\), at the receiving UAV is given by [32]

\[G^{A}(d_{A,x_{k}})=\left\{\begin{array}{rl}&G_{m}^{A},\ \mathrm{if}\ -\frac{\theta_{b}^{a}}{2}\leq\Phi\leq\frac{\theta_{b}^{a}}{2}\\ &G_{s}^{A},\ \mathrm{otherwise},\end{array}\right. \tag{1}\]

where \(d_{A,x_{k}}\) is the distance between the typical receiving UAV and the \(k\)-th UAV at \(x_{k}\), \(G_{m}^{A}\) and \(G_{s}^{A}\) are the gains of the main lobe and side lobe, respectively, \(\Phi\) is the sector angle, and \(\theta_{b}^{a}\in[0,180]\) is the beamwidth in degrees [33].
Accordingly, the received power at the typical receiving UAV from UAV \(k\) at \(x_{k}\) is given by

\[P_{r,k}^{A}=PG^{A}(d_{A,x_{k}})\zeta_{A}H_{A}^{x_{k}}d_{A,x_{k}}^{-\alpha_{A}}, \tag{2}\]

where \(\zeta_{A}\) represents the excess losses, \(H_{A}^{x_{k}}\) is the Gamma-distributed channel power gain, i.e., \(H_{A}^{x_{k}}\sim\Gamma(m_{A},\frac{1}{m_{A}})\), with a fading parameter \(m_{A}\), and \(\alpha_{A}\) is the path-loss exponent. As a result, the SINR at the typical receiving UAV is given by

\[\gamma=\frac{\mu_{A}H_{A}^{x_{k}}d_{A,x_{k}}^{-\alpha_{A}}}{I+\sigma^{2}}, \tag{3}\]

where \(\mu_{A}=PG_{m}^{A}\zeta_{A}\) and \(I\) is the interference power. Such interference can be expressed as

\[I=\sum_{j=1,j\neq k}^{K}PG^{A}(d_{A,x_{j}})\zeta_{A}H_{A}^{x_{j}}d_{A,x_{j}}^{-\alpha_{A}}, \tag{4}\]

where \(G^{A}(d_{A,x_{j}})=G_{m}^{A}\) with a probability of \(q_{A}\) and \(G^{A}(d_{A,x_{j}})=G_{s}^{A}\) with a probability of \(1-q_{A}\). Once the local aggregated model dissemination among the UAVs is completed, the \(k\)-th UAV adopts a common transmission rate \(R_{k}\) that is equal to the minimum of the achievable rates of all its scheduled UDs \(\mathcal{U}_{k}\). This adopted transmission rate is \(R_{k}=\min_{u\in\mathcal{U}_{k}}R_{u}^{k}\), which is used to transmit the global model to the UDs to start the next global iteration.

### _Transmission Time Structure_

The UAVs start the local model aggregations after receiving the locally trained models of the scheduled UDs across all the RRBs. Since different UDs in \(\mathcal{U}_{los}\) will have different transmission rates, they will have different transmission durations for uploading their trained parameters to the UAVs/RRBs. Let \(s\) be the size of the UD's local parameter vector (which is the same for the global model), expressed in bits. Note that the analysis in this subsection is for the transmission duration of one global iteration \(t\). For simplicity, we let \(X\) represent the number of elements in the set \(\mathcal{X}\). The time required by the \(u\)-th UD, \(u\in\mathcal{U}_{los}\), to reliably transmit its model update to the \(k\)-th selected UAV over the \(b\)-th RRB is then given by \(T_{u}=\frac{s}{R_{k,b}^{u}}\). With this consideration, we can see that, given the number of participating UDs \(U_{los}\), the transmission duration is \(\max_{u\in\mathcal{U}_{los}}\{T_{u}\}=\max_{u\in\mathcal{U}_{los}}\frac{s}{R_{k,b}^{u}}\). When \(U_{los}\) is large, \(\max_{u\in\mathcal{U}_{los}}\{T_{u}\}\) can grow dramatically. The transmission duration is therefore constrained by the minimum rate of the scheduled UDs \(\mathcal{U}_{los}\), i.e., \(\min_{u\in\mathcal{U}_{los}}R_{k,b}^{u}\). Without loss of generality, let us assume that UD \(u\in\mathcal{U}_{los}\) has the minimum rate, denoted by \(R_{min}^{u}\). The corresponding transmission duration is \(\frac{s}{R_{min}^{u}}\). The design of \(R_{min}^{u}\) dominates the transmission duration of the local models from the UDs to the UAVs; thus, it dominates the time duration of one FL global iteration. This is because the FL time consists of the local model transmission time and the learning computation time. Since the computation times of the UDs for local learning do not differ much, the FL time of one global iteration is dominated by \(R_{min}^{u}\). Thus, \(R_{min}^{u}\) can be adapted to include fewer or more UDs in the training process.
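As a numerical illustration of the channel and timing model above, the following minimal sketch computes per-UD upload durations from the free-space path-loss and rate expressions of Section II-A. The carrier frequency, bandwidth, power, noise level, and distances are placeholder values of our own choosing, not parameters from the paper's evaluation.

```python
# Sketch: per-UD upload time T_u = s / R_{k,b}^u under the path-loss model
# PL(u,k) = 20*log10(4*pi*f_c*d/c). All parameter values are illustrative.
import math

C = 3e8            # speed of light (m/s)
F_C = 28e9         # mmWave carrier frequency (Hz)
W = 1e6            # RRB bandwidth (Hz)
P_TX = 0.1         # UD transmit power (W)
N0 = 1e-13         # AWGN noise power (W)
S_BITS = 9098      # model size s (bits), the value used later in Section III

def rate_bps(d_uk: float) -> float:
    """Shannon rate of a LOS UD at distance d_uk (m) from its UAV."""
    pl_db = 20 * math.log10(4 * math.pi * F_C * d_uk / C)
    h = 10 ** (-pl_db / 10)            # channel power gain 10^(-PL/10)
    return W * math.log2(1 + P_TX * h / N0)

distances = [60.0, 90.0, 140.0]        # three LOS UDs at different ranges
uploads = [S_BITS / rate_bps(d) for d in distances]
# The slowest (minimum-rate) UD dictates the per-iteration upload duration,
# while the faster UDs accumulate idle time that relays can exploit.
print([f"{t*1e3:.3f} ms" for t in uploads], "bottleneck:", max(uploads))
```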
Given the different transmission durations of the UDs in \(\mathcal{U}_{los}\), some UDs will finish transmitting their local models before other UDs. Thus, high transmission rate UDs in \(\mathcal{U}_{los}\) will have to wait in order to start a new iteration simultaneously with the relatively slow UDs. We propose to efficiently exploit such waiting times to assist the UDs that have non-LOS channels to the UAVs. The portion of the time that is not being used by the \(\bar{u}\)-th UD (i.e., \(\bar{u}\neq u,\bar{u}\in\mathcal{U}_{los}\)) at the \(t\)-th iteration is referred to as the idle time of the \(\bar{u}\)-th UD and denoted by \(T_{idle}^{\bar{u}}\). This idle time can be expressed as \(T_{idle}^{\bar{u}}=(\frac{s}{R_{min}^{u}}-\frac{s}{R_{k,b}^{\bar{u}}})\) seconds. Such idle times can be exploited by UDs \(\bar{u}\in\mathcal{U}_{los}\) via D2D links if they ensure the complete transmission of the local parameters of the non-LOS UDs to the UAVs. More specifically, the idle time of the \(\bar{u}\)-th UD should be greater than or equal to the transmission duration of sending the local parameters from the \(\hat{u}\)-th non-LOS UD to the \(\bar{u}\)-th UD plus the time duration of forwarding the local parameters from the \(\bar{u}\)-th UD to the \(k\)-th UAV. Mathematically, it must satisfy \(T_{idle}^{\bar{u}}\geq(\frac{s}{R_{\hat{u}}^{\bar{u}}}+\frac{s}{R_{k,b}^{\bar{u}}})\).

Fig. 1: ATIN network with one CPS, 3 UAVs, 9 UDs, and a set of RRBs per UAV. For instance, UDs \(1\), \(3\), \(5\), and \(7\) do not have direct LOS links to the UAVs. Thus, they transmit the trained local models to the UAVs via LOS UDs, e.g., UDs \(2\), \(4\), and \(6\). UDs \(8\) and \(9\) can transmit their models directly to the UAVs via LOS mmWave links.

From now on, we will use the term relay for UD \(\bar{u}\neq u,\bar{u}\in\mathcal{U}_{los}\). In relay mode, each communication period is divided into two intervals corresponding to the non-LOS UD-relay phase (D2D communication) and the relay-UAV phase (mmWave communication). The aforementioned transmission duration components of the UDs and relays for one global iteration are shown in Fig. 2. Note that UDs can re-use the same frequency band and transmit simultaneously via D2D links. When the \(\hat{u}\)-th UD does not have a LOS link to any of the UAVs, it may choose the \(\bar{u}\)-th UD as its relay if the \(\bar{u}\)-th relay is located in the coverage zone of the \(\hat{u}\)-th UD. Let \(\mathcal{U}_{\hat{u}}\) be the set of relays in the coverage zone of UD \(\hat{u}\), and let \(h_{\hat{u}}^{\bar{u}}\) denote the channel gain of the D2D link between the \(\hat{u}\)-th UD and the \(\bar{u}\)-th relay. Then, the achievable rate of the D2D pair \((\hat{u},\bar{u})\) is given by \(R_{\hat{u}}^{\bar{u}}=W\log_{2}(1+\frac{p|h_{\hat{u}}^{\bar{u}}|^{2}}{N_{0}}),\forall\bar{u}\in\mathcal{U}_{los},\hat{u}\in\mathcal{U}_{non}\). In relay mode, the transmission duration for sending the local parameters of the \(\hat{u}\)-th UD to the \(k\)-th UAV through relay \(\bar{u}\) is \(\Upsilon_{\bar{u}}=\frac{s}{R_{\hat{u}}^{\bar{u}}}+\frac{s}{R_{k,b}^{\bar{u}}}\), which should satisfy \(\Upsilon_{\bar{u}}\leq T_{idle}^{\bar{u}}\).

## III FedMoD

### _Federated Learning Process_

In FL, each UD \(u\) possesses a set of local training data, denoted as \(\mathcal{D}_{u}\).
The local loss function on the dataset of the \(u\)-th UD can be calculated as

\[F_{u}(\mathbf{w})=\frac{1}{|\mathcal{D}_{u}|}\sum_{(x_{i},y_{i})\in\mathcal{D}_{u}}f_{i}(\mathbf{w}),\forall u\in\mathcal{U}, \tag{5}\]

where \(x_{i}\) is the input of sample \(i\) (e.g., image pixels), \(y_{i}\) is the output of sample \(i\) (e.g., the label of the image), and \(f_{i}(\mathbf{w})\) is the loss function that measures the local training model error of the \(i\)-th data sample. The collection of data samples at the set of UDs that is associated with the \(k\)-th UAV is denoted as \(\widetilde{\mathcal{D}}_{k}\), and the training data at all the UDs involved in learning, denoted as \(\mathcal{U}_{inv}\), is denoted as \(\mathcal{D}\). The ratios of data samples are defined as \(\hat{m}_{u}=\frac{|\mathcal{D}_{u}|}{|\widetilde{\mathcal{D}}_{k}|}\), \(m_{u}=\frac{|\mathcal{D}_{u}|}{|\mathcal{D}|}\), and \(\tilde{m}_{k}=\frac{|\widetilde{\mathcal{D}}_{k}|}{|\mathcal{D}|}\), respectively. We define the loss function for the \(k\)-th UAV as the average local loss across the \(k\)-th cluster, \(\hat{F}(\mathbf{w})=\sum_{u=1}^{|\mathcal{U}_{k}|}\frac{|\mathcal{D}_{u}|}{|\widetilde{\mathcal{D}}_{k}|}F_{u}(\mathbf{w})\). The global loss function \(F(\mathbf{w})\) is then defined as the average loss across all the clusters, \(F(\mathbf{w})=\sum_{u=1}^{|\mathcal{U}_{inv}|}\frac{|\mathcal{D}_{u}|}{|\mathcal{D}|}F_{u}(\mathbf{w})\). The objective of the FL model training is to find the optimal model parameters \(\mathbf{w}^{*}\) for \(F(\mathbf{w})\), expressed as \(\mathbf{w}^{*}=\arg\min_{\mathbf{w}}F(\mathbf{w})\). In this work, we propose FedMoD, which involves three main procedures: 1) local model update at the UDs, 2) local model aggregation at the UAVs, and 3) model dissemination between the UAVs.

#### III-B1 Local Model Update

Denote the model of the \(u\)-th UD at the \(t\)-th global iteration as \(\mathbf{w}_{u}(t)\). This UD performs model updating based on its local dataset by using the stochastic gradient descent (SGD) algorithm, which is expressed as

\[\mathbf{w}_{u}(t)=\mathbf{w}_{u}(t-1)-\lambda g(\mathbf{w}_{u}(t-1)), \tag{6}\]

where \(\lambda\) is the learning rate and \(g(\mathbf{w}_{u}(t-1))\) is the stochastic gradient computed on the dataset of the \(u\)-th UD.

#### III-B2 Local Model Aggregation

After all the selected UDs complete their local model updates, they offload their model parameters over the available RRBs to the associated UAVs. A typical UAV \(k\) aggregates the received models by computing a weighted sum as follows:

\[\mathbf{\tilde{w}}_{k}(t)=\sum_{u\in\mathcal{U}_{k}}\hat{m}_{u}\mathbf{w}_{u}(t),\forall k\in\mathcal{K}. \tag{7}\]

#### III-B3 Model Dissemination

Each UAV disseminates its local aggregated model to its one-hop neighboring UAVs. The model dissemination includes \(l=1,2,\cdots,\alpha\) rounds of model dissemination until at least one UAV receives the local aggregated models of all other UAVs, where \(\alpha\) is the number of dissemination rounds. Specifically, at the \(t\)-th iteration, the \(k\)-th UAV aggregates the local models of its associated UDs as in (7). At the beginning of the model dissemination step, the \(k\)-th UAV knows only \(\mathbf{\tilde{w}}_{k}(t)\) and does not know the other UAVs' models \(\mathbf{\tilde{w}}_{j}(t),j\neq k,\forall j\in\mathcal{K}\).
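As a concrete companion to steps (6) and (7), the following is a minimal NumPy sketch of one local SGD step and one per-UAV weighted aggregation. The cluster sizes, the toy quadratic gradient, and the model dimension are our own illustrative stand-ins, not quantities from the paper.

```python
# Sketch: local SGD step (6) and per-UAV weighted aggregation (7) with NumPy.
# Gradients and datasets are stubbed; shapes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
lam = 0.01                                   # learning rate lambda

def local_update(w, grad_fn):
    """One SGD step of Eq. (6): w(t) = w(t-1) - lambda * g(w(t-1))."""
    return w - lam * grad_fn(w)

# Three UDs in UAV k's cluster with |D_u| = 500, 800, 300 samples.
sizes = np.array([500.0, 800.0, 300.0])
m_hat = sizes / sizes.sum()                  # weights m^_u = |D_u| / |D~_k|
models = [rng.normal(size=4) for _ in sizes] # stand-in local models w_u(t-1)
models = [local_update(w, lambda v: 2 * v) for w in models]  # toy gradient 2w

# Eq. (7): UAV k's local aggregate is the data-weighted sum of its UDs' models.
w_k = sum(m * w for m, w in zip(m_hat, models))
print(w_k)  # at this point UAV k holds only w~_k(t), not w~_j(t) for j != k
```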
Consequently, at the \(t\)-th global iteration and \(l\)-th round, the \(k\)-th UAV has the following two sets:

* The _Known_ local aggregated model: Represented by \(\mathcal{H}_{k}^{l}(t)=\{\mathbf{\tilde{w}}_{k}(t)\}\).
* The _Unknown_ local aggregated models: Represented by \(\mathcal{W}_{k}^{l}(t)=\{\mathbf{\tilde{w}}_{j}(t),\forall j\in\mathcal{K},j\neq k\}\) and defined as the set of the local aggregated models of the other UAVs.

These two sets are referred to as the side information of the UAVs. For instance, at \(l=\alpha\), the side information of the \(k\)-th UAV is \(\mathcal{H}_{k}^{\alpha}(t)=\{\mathbf{\tilde{w}}_{1}(t),\mathbf{\tilde{w}}_{2}(t),\cdots,\mathbf{\tilde{w}}_{K}(t)\}\) and \(\mathcal{W}_{k}^{\alpha}(t)=\emptyset\). To achieve global model consensus, UAV \(k\) needs to know the other UAVs' models, i.e., \(\mathcal{W}_{k}(t)\), so as to aggregate a global model for the whole network. To this end, we propose an efficient model dissemination scheme that enables the UAVs to obtain their _Unknown_ local aggregated models \(\mathcal{W}_{k}(t),\forall k\in\mathcal{K}\), with minimum dissemination latency.

### _Model Dissemination_

To overcome the need for the CPS for global aggregations or UAV coordination, an efficient distributed model dissemination method is developed. Note that all the associations of UAVs \(\mathcal{K}_{k}\) can be computed locally at the \(k\)-th UAV since all the needed information (e.g., complex channel gains and the indices of the local aggregated models) is locally available. In particular, UAV \(k\in\mathcal{K}\) knows the information of its neighboring UAVs only. At each dissemination round, transmitting UAVs use the previously mentioned side information to perform XOR model encoding, while receiving UAVs need the stored models to obtain the _Unknown_ ones. The entire process of receiving the _Unknown_ models takes a small duration of time. According to the reception status fed back by each UAV, the UAVs distributively select the transmitting UAVs and their models to be transmitted to the receiving UAVs at each round \(l\).

Fig. 2: Transmission time structure for LOS UDs and non-LOS UDs for the \(t\)-th global iteration.

The transmitted models can be one of the following two options for each receiving UAV \(i\):

* Non-innovative model (NIM): A coded model is non-innovative for the receiving UAV \(i\) if it does not contain any model that is not known to UAV \(i\).
* Decodable model (DM): A coded model is decodable for the receiving UAV \(i\) if it contains exactly one model that is not known to UAV \(i\).

In order to represent the XOR coding opportunities among the models not known at each UAV, we introduce a FedMoD conflict graph. At round \(l\), the FedMoD conflict graph is denoted by \(\mathcal{G}(\mathcal{V}(l),\mathcal{E}(l))\), where \(\mathcal{V}(l)\) refers to the set of vertices and \(\mathcal{E}(l)\) refers to the set of encoding edges. Let \(\mathcal{K}_{k}\) be the set of UAVs neighboring the \(k\)-th UAV, and let \(\mathcal{K}_{w}\subset\mathcal{K}\) be the set of UAVs that still want some local aggregated models. Hence, the FedMoD graph is designed by generating all vertices for the \(k\)-th possible UAV transmitter that can provide some models to other UAVs, \(\forall k\in\mathcal{K}\). The vertex set \(\mathcal{V}(l)\) of the entire graph is the union of the vertices of all possible transmitting UAVs. Consider, for now, generating the vertices of the \(k\)-th UAV.
Note that the \(k\)-th UAV can exploit its previously received models \(\mathcal{H}_{k}^{l}(t)\) to transmit an encoded/uncoded model to the set of requesting UAVs. Therefore, a vertex is generated for each model \(m\in\mathcal{H}_{k}^{l}(t)\cap\mathcal{W}_{i}^{l}(t)\) that is requested by each UAV \(i\in\mathcal{K}_{w}\cap\mathcal{K}_{k}\) and for each achievable rate of the \(k\)-th UAV \(r\in\mathcal{R}_{k,i}=\{r\in\mathcal{R}_{k}|r\leq r_{k,i}\text{ and }i\in\mathcal{K}_{w}\cap\mathcal{K}_{k}\}\), where \(\mathcal{R}_{k,i}\) is the set of achievable capacities between the \(k\)-th UAV and the \(i\)-th UAV, i.e., \(\mathcal{R}_{k,i}\subset\mathcal{R}_{k}\). Accordingly, the \(i\)-th neighboring UAV in \(\mathcal{K}_{k}\) can receive a model from the \(k\)-th UAV. Therefore, we generate \(|\mathcal{R}_{k,i}|\) vertices for a requested model \(m\in\mathcal{H}_{k}^{l}(t)\cap\mathcal{W}_{i}^{l}(t),\forall i\in\mathcal{K}_{w}\cap\mathcal{K}_{k}\). A vertex \(v_{i,m,r}^{k}\in\mathcal{V}(l)\) indicates that the \(k\)-th UAV can transmit the \(m\)-th model to the \(i\)-th UAV with a rate \(r\). We define the utility of vertex \(v_{i,m,r}^{k}\) as

\[w(v_{i,m,r}^{k})=rN_{k}, \tag{8}\]

where \(N_{k}\) is the number of neighboring UAVs that can be served by the \(k\)-th UAV. This weight metric captures two potential benefits: (i) \(N_{k}\) reflects that the \(k\)-th transmitting UAV is connected to many other UAVs that are requesting models in \(\mathcal{H}_{k}^{l}(t)\); and (ii) \(r\) provides a balance between the transmission rate and the number of scheduled UAVs. Since the UAVs communicate among themselves, their connectivity can be characterized by an undirected graph with sets of vertices and connections. All possible conflict connections between vertices (conflict edges) in the FedMoD conflict graph are provided as follows. Two vertices \(v_{i,m,r}^{k}\) and \(v_{i^{\prime},m^{\prime},r^{\prime}}^{k^{\prime}}\) are adjacent by a conflict edge in \(\mathcal{G}\) if one of the following conflict conditions (CC) is true.

* _CC1. (encoding conflict edge): (\(k=k^{\prime}\)) and (\(m\neq m^{\prime}\)) and \((m,m^{\prime})\notin\mathcal{H}_{k^{\prime}}^{l}(t)\times\mathcal{H}_{k}^{l}(t)\). Vertices in the same local FedMoD conflict graph are connected by a conflict edge as long as their corresponding coded models are not decodable by the set of scheduled UAVs._
* _CC2. (rate conflict edge): (\(r\neq r^{\prime}\)). All adjacent vertices, whether they correspond to the same or different transmitting UAVs, should have the same achievable rate._
* _CC3. (transmission conflict edge): (\(k\neq k^{\prime}\)) and (\(i=i^{\prime}\)). The same receiving UAV cannot be scheduled to two different transmitting UAVs \(k\) and \(k^{\prime}\)._
* _CC4. (half-duplex conflict edge): (\(k=i^{\prime}\)) or (\(k^{\prime}=i\)). The same UAV cannot transmit and receive in the same dissemination round._

To distributively disseminate the local aggregated models among the UAVs, we propose a graph theory method as follows. Let \(\mathcal{S}_{k}\) represent the associations of the neighboring UAVs in the coverage zone of the \(k\)-th UAV, i.e., the associations of UAV \(k\) to the set \(\mathcal{K}_{k}\). Then, let the local FedMoD conflict graph \(\mathcal{G}_{k}(\mathcal{S}_{k})\subset\mathcal{G}\) for an arbitrary UAV \(k\in\mathcal{K}\) represent the set of associations \(\mathcal{S}_{k}\). Our proposed distributed algorithm has two phases: i) the initial phase and ii) the conflict solution phase.
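Both phases described next hinge on repeatedly extracting a maximum weight independent set (MWIS) from the conflict graph. Since exact MWIS is NP-hard, the sketch below uses a simple greedy heuristic of our own as a stand-in; it is not the specific MWIS method of [34, 35], and the conflict check shown covers only CC3/CC4 for brevity.

```python
# Sketch: greedy maximal-weight independent set over a conflict graph.
# Vertices are (k, i, m, r) tuples as above; `conflicts(v, u)` should encode
# CC1-CC4. This greedy rule is an illustrative stand-in for [34, 35].
def greedy_mwis(vertices, weight, conflicts):
    """Pick non-conflicting vertices in decreasing weight order."""
    chosen = []
    for v in sorted(vertices, key=weight, reverse=True):
        if all(not conflicts(v, u) for u in chosen):
            chosen.append(v)
    return chosen

# Example: weight w(v) = r * N_k from Eq. (8), with vertex v = (k, i, m, r).
n_neighbors = {1: 3, 2: 2}                       # hypothetical N_k values
w = lambda v: v[3] * n_neighbors[v[0]]
# CC3/CC4 only, for brevity: same receiver, or a UAV both sending and receiving.
cc = lambda v, u: (v[1] == u[1]) or (v[0] == u[1]) or (u[0] == v[1])
verts = [(1, 2, "w2", 9), (2, 3, "w3", 12), (1, 3, "w1", 8)]
print(greedy_mwis(verts, w, cc))   # UAV 1 serves UAVs 2 and 3; UAV 2 is excluded
```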
In the initial phase, UAV \(k\in\mathcal{K}\) constructs the local FedMoD conflict graph \(\mathcal{G}_{k}(\mathcal{S}_{k})\) and selects its targeted neighboring UAVs using the maximum weight independent set (MWIS) search method [34, 35], which results in the MWIS \(\mathbf{S}_{k}\). Each UAV exchanges its scheduled UAVs with its neighboring UAVs. Then the conflict solution phase starts. A UAV that is associated to multiple UAVs (i.e., a UAV located in the overlapped regions of UAVs) is assigned to the one UAV that offers the highest weight for scheduling it. The UAVs that do not offer the maximum weight cannot schedule that UAV, and therefore remove it from their sets of associated UAVs and vertices. We then re-design the graph. We repeat this process until every conflicting UAV is scheduled to at most a single transmitting UAV. The detailed steps of the algorithm for a single dissemination round are presented in Algorithm 1.

### _Illustration of the Proposed Model Dissemination Method_

For further illustration, we explain the dissemination method that is implemented at the UAVs through an example of the network topology of Fig. 3. Suppose that all the UAVs have already received the local models of their scheduled UDs and performed the local model averaging. Fig. 3 presents the side information status of each UAV at round \(l=0\).

Fig. 3: A simple example of \(5\) UAVs with their arbitrary transmission rates and initial side information at round \(l=0\).

**Round 1:** Since UAV \(2\) has good reachability to many UAVs (\(\mathcal{K}_{2}=\{1,4,3\}\)), it transmits its model \(\widetilde{\mathbf{w}}_{2}\) to UAVs 1, 4, and 3 with a transmission rate of \(r(l=1)=\min\{12,11,9\}=9\) Mbps (**CC2** is satisfied). Note that UAV \(5\) cannot transmit to UAV \(3\) according to **CC3**, i.e., UAV \(3\) is already scheduled to the transmitting UAV \(2\). When UAV \(2\) finishes model transmission, the _Known_ sets of the receiving UAVs are updated to \(\mathcal{H}_{1}^{1}(t)=\{\mathbf{\tilde{w}}_{1},\mathbf{\tilde{w}}_{2}\}\), \(\mathcal{H}_{3}^{1}(t)=\{\mathbf{\tilde{w}}_{3},\mathbf{\tilde{w}}_{2}\}\), and \(\mathcal{H}_{4}^{1}(t)=\{\mathbf{\tilde{w}}_{4},\mathbf{\tilde{w}}_{2}\}\). Accordingly, their _Unknown_ sets are: \(\mathcal{W}_{1}^{1}(t)=\{\mathbf{\tilde{w}}_{3},\mathbf{\tilde{w}}_{4},\mathbf{\tilde{w}}_{5}\}\), \(\mathcal{W}_{3}^{1}(t)=\{\mathbf{\tilde{w}}_{1},\mathbf{\tilde{w}}_{4},\mathbf{\tilde{w}}_{5}\}\), and \(\mathcal{W}_{4}^{1}(t)=\{\mathbf{\tilde{w}}_{1},\mathbf{\tilde{w}}_{3},\mathbf{\tilde{w}}_{5}\}\).

**Round 2:** Although UAV \(2\) has good reachability to many UAVs, it will not be selected as a transmitting UAV at \(l=2\). This is because UAV \(2\) has already disseminated its side information to the neighboring UAVs; thus, UAV \(2\) does not have any vertex in the FedMoD conflict graph. In this case, UAVs \(4\) and \(5\) can simultaneously transmit models \(\mathbf{\tilde{w}}_{4}\) and \(\mathbf{\tilde{w}}_{5}\), respectively, to the receiving UAVs \(\{1,2\}\) and \(\{3\}\). When UAVs \(4\) and \(5\) finish their model transmissions, the _Known_ sets of the receiving UAVs are updated to \(\mathcal{H}_{1}^{2}(t)=\{\mathbf{\tilde{w}}_{1},\mathbf{\tilde{w}}_{2},\mathbf{\tilde{w}}_{4}\}\), \(\mathcal{H}_{2}^{2}(t)=\{\mathbf{\tilde{w}}_{2},\mathbf{\tilde{w}}_{4}\}\), and \(\mathcal{H}_{3}^{2}(t)=\{\mathbf{\tilde{w}}_{3},\mathbf{\tilde{w}}_{2},\mathbf{\tilde{w}}_{5}\}\).
Clearly, UAVs \(4\) and \(5\) transmit their models to the corresponding UAVs with transmission rates of \(r_{4}=\min\{13,15\}=13\) Mbps and \(r_{5}=16\) Mbps, respectively. However, for simultaneous transmission and from **CC2**, all the vertices of the corresponding UAVs \(\{1,2,3\}\) should have the same achievable rate. Thus, UAVs \(4\) and \(5\) adopt one transmission rate, which is \(r(l=2)=\min\{r_{4},r_{5}\}=13\) Mbps.

**Round 3:** UAV \(1\) transmits model \(\mathbf{\tilde{w}}_{1}\) to the receiving UAVs \(\{2,4\}\), and their _Known_ sets are updated to \(\mathcal{H}_{2}^{3}(t)=\{\mathbf{\tilde{w}}_{2},\mathbf{\tilde{w}}_{4},\mathbf{\tilde{w}}_{1}\}\) and \(\mathcal{H}_{4}^{3}(t)=\{\mathbf{\tilde{w}}_{4},\mathbf{\tilde{w}}_{2},\mathbf{\tilde{w}}_{1}\}\). UAV \(1\) transmits its model to the corresponding UAVs with a transmission rate of \(r(l=3)=\min\{10,14\}=10\) Mbps.

**Round 4:** Given the updated side information of the UAVs, UAV \(3\) can encode models \(\mathbf{\tilde{w}}_{5}\) and \(\mathbf{\tilde{w}}_{2}\) into the encoded model \(\mathbf{\tilde{w}}_{5}\oplus\mathbf{\tilde{w}}_{2}\) and broadcast it to UAVs \(2\) and \(5\). Upon receiving this encoded model, UAV \(5\) uses the stored model \(\mathbf{\tilde{w}}_{5}\) to complete the model decoding \((\mathbf{\tilde{w}}_{5}\oplus\mathbf{\tilde{w}}_{2})\oplus\mathbf{\tilde{w}}_{5}=\mathbf{\tilde{w}}_{2}\). Similarly, UAV \(2\) uses the stored model \(\mathbf{\tilde{w}}_{2}\) to complete the model decoding \((\mathbf{\tilde{w}}_{5}\oplus\mathbf{\tilde{w}}_{2})\oplus\mathbf{\tilde{w}}_{2}=\mathbf{\tilde{w}}_{5}\). The broadcasted model is thus decodable for both UAVs \(5\) and \(2\) and has been transmitted with a rate of \(r(l=4)=\min\{11,15\}=11\) Mbps. The _Known_ sets of these receiving UAVs are as follows: \(\mathcal{H}_{2}^{4}(t)=\{\mathbf{\tilde{w}}_{2},\mathbf{\tilde{w}}_{4},\mathbf{\tilde{w}}_{1},\mathbf{\tilde{w}}_{5}\}\) and \(\mathcal{H}_{5}^{4}(t)=\{\mathbf{\tilde{w}}_{5},\mathbf{\tilde{w}}_{2}\}\).

**Round 5:** Given the updated side information of the UAVs at \(l=4\), UAV \(3\) transmits \(\mathbf{\tilde{w}}_{3}\) to UAVs \(2\) and \(5\). Upon receiving this model, UAV \(2\) has obtained all the required models, i.e., \(\mathcal{H}_{2}^{5}(t)=\{\mathbf{\tilde{w}}_{1},\mathbf{\tilde{w}}_{2},\mathbf{\tilde{w}}_{3},\mathbf{\tilde{w}}_{4},\mathbf{\tilde{w}}_{5}\}\) and \(\mathcal{W}_{2}^{5}(t)=\emptyset\). The broadcasted model is transmitted with a rate of \(r(l=5)=\min\{11,15\}=11\) Mbps. Since UAV \(2\) has all the local aggregated models of the other UAVs, it can aggregate them all, which results in the global model at the \(t\)-th iteration:

\[\mathbf{\tilde{w}}(t)=\frac{1}{D}(\mathbf{\tilde{w}}_{1}+\mathbf{\tilde{w}}_{2}+\mathbf{\tilde{w}}_{3}+\mathbf{\tilde{w}}_{4}+\mathbf{\tilde{w}}_{5}). \tag{9}\]

Therefore, the global model \(\mathbf{\tilde{w}}\) is broadcasted from UAV \(2\) to UAVs \(\{1,4,3\}\) with a rate of \(\min\{12,11,9\}=9\) Mbps. Next, UAV \(3\) can send \(\mathbf{\tilde{w}}\) to UAV \(5\) with a rate of \(15\) Mbps. Therefore, all the UAVs obtain the shared global model \(\mathbf{\tilde{w}}\) and broadcast it to their scheduled UDs to initialize the next iteration \(t+1\).
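This example can be checked mechanically. The sketch below replays the five rounds plus the two global-model broadcasts with byte strings standing in for the models, verifies the Round-4 XOR decoding, and recomputes the dissemination time given by Eq. (10) below; interpreting the stated model size \(s=9.098\) Kb as 9,098 bits is our assumption.

```python
# Sketch: replay of the Fig. 3 dissemination example. Models are stand-in byte
# strings so that the Round-4 XOR encoding/decoding can be checked directly.
import os

models = {k: os.urandom(16) for k in range(1, 6)}   # w~_1 ... w~_5
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

known = {k: {k} for k in range(1, 6)}               # side information H_k
known[1] |= {2}; known[3] |= {2}; known[4] |= {2}   # Round 1: UAV 2 -> {1,4,3}
known[1] |= {4}; known[2] |= {4}; known[3] |= {5}   # Round 2: UAVs 4,5 transmit
known[2] |= {1}; known[4] |= {1}                    # Round 3: UAV 1 -> {2,4}

# Round 4: UAV 3 broadcasts w~_5 XOR w~_2; each receiver decodes with its copy.
coded = xor(models[5], models[2])
assert xor(coded, models[5]) == models[2]           # decoding at UAV 5
assert xor(coded, models[2]) == models[5]           # decoding at UAV 2
known[5] |= {2}; known[2] |= {5}
known[2] |= {3}; known[5] |= {3}                    # Round 5: UAV 3 -> {2,5}
assert known[2] == {1, 2, 3, 4, 5}                  # UAV 2 now holds every model

# Eq. (10): per-round rates in Mbps, then the two global-model broadcasts.
s_bits = 9098
rates = [9, 13, 10, 11, 11, 9, 15]
t_diss = sum(s_bits / (r * 1e6) for r in rates)
print(f"T_diss = {t_diss:.4f} s")                   # ~0.0059 s, matching the text
```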
Note that the transmission duration of these dissemination rounds is

\[T_{diss}=\underbrace{\frac{s}{9}}_{l=1}+\underbrace{\frac{s}{13}}_{l=2}+\underbrace{\frac{s}{10}}_{l=3}+\underbrace{\frac{s}{11}}_{l=4}+\underbrace{\frac{s}{11}}_{l=5}+\underbrace{\frac{s}{9}+\frac{s}{15}}_{\mathbf{\tilde{w}}\ \text{broadcasting}}. \tag{10}\]

The size of a typical model is \(s=9.098\) Kb [13, 36, 37]; thus, \(T_{diss}=0.0059\) sec. Thanks to the proposed efficient model dissemination method, which disseminates models from transmitting UAVs to the closest receiving UAVs with good connectivity, the dissemination delay is negligible.

_Remark 1:_ _In the fully connected model, each UAV can receive the local aggregated models of all UAVs in \(K\) dissemination rounds, where each UAV takes one round to broadcast its local aggregated model to the other UAVs._

The steps of FedMoD, which include the local model updates, the local aggregation at the UAVs, and the model dissemination among the UAVs, are summarized in Algorithm 2. The distributed scheduling for a single dissemination round (Algorithm 1) proceeds as follows:

```
Data: \mathcal{K}, \mathbf{\tilde{w}}_{k}, \mathcal{H}_{k}^{0}(t), \mathcal{W}_{k}^{0}(t), \forall k\in\mathcal{K}.
Initial Phase:
  Initialize \mathtt{K}=\emptyset.
  forall k\in\mathcal{K} do
    Construct \mathcal{G}_{k}(\mathcal{K}_{k}) and calculate weight w(v) using (8), \forall v\in\mathcal{G}_{k}.
    Find MWIS \mathbf{S}_{k}.
  endfor
Conflict Solution Phase:
  for i=1,2,\cdots do
    Transmit \mathbf{\hat{S}}_{k}=\{j\in\mathcal{K}_{k}\ |\ j\in\mathbf{S}_{k}\}.
    Set \mathtt{K}=\{j\in\mathcal{K}\ |\ \exists(k,k^{\prime})\in\mathcal{K}^{2},\ j\in\mathbf{\hat{S}}_{k}\cap\mathbf{\hat{S}}_{k^{\prime}}\}.
    forall j\in\mathtt{K} do
      Set \hat{\mathcal{K}}(j)=\{k\in\mathcal{K}\ |\ j\in\mathbf{\hat{S}}_{k}\}.
      forall k\in\hat{\mathcal{K}}(j) do
        Set M_{kj}=\sum_{v\in\mathbf{S}_{k}}w(v) and \mathcal{K}_{k}=\mathcal{K}_{k}\backslash\{j\}.
        Construct \mathcal{G}_{k}(\mathcal{K}_{k}), compute w(v) by (8), and solve the MWIS \mathbf{\tilde{S}}_{k}.
        Set \tilde{M}_{kj}=\sum_{v\in\mathbf{\tilde{S}}_{k}}w(v) and transmit M_{kj} and \tilde{M}_{kj}.
      endfor
      Set k^{*}=\arg\max_{k\in\hat{\mathcal{K}}(j)}\big(M_{kj}+\sum_{k^{\prime}\in\hat{\mathcal{K}}(j),k\neq k^{\prime}}\tilde{M}_{k^{\prime}j}\big).
      Set \mathcal{K}_{k^{*}}=\mathcal{K}_{k^{*}}\cup\{j\}.
      forall k\in\hat{\mathcal{K}}(j)\backslash\{k^{*}\} do
        Set \mathbf{S}_{k}=\mathbf{\tilde{S}}_{k}.
      endfor
    endfor
  endfor
```

### _Convergence Analysis_

To analyze the convergence of FedMoD, we adopt the following standard assumptions on the local loss functions. 1. Each local loss function \(F_{u}(\mathbf{w})\) is \(L\)-smooth. 2. The stochastic gradients computed on the local datasets are unbiased with bounded variance, such that \(\mathbb{E}_{\mathcal{D}_{u}|\tilde{\mathbf{w}}}\big\|\triangledown f(\mathcal{D}_{u};\tilde{\mathbf{w}})-\triangledown F_{u}(\tilde{\mathbf{w}})\big\|_{2}^{2}\leq\sigma^{2}\). 3. For the degree of non-IIDness, we assume that there exists \(\kappa>0\) such that \(\|\triangledown F_{u}(\tilde{\mathbf{w}})-\triangledown F(\tilde{\mathbf{w}})\|_{2}\leq\kappa\), where \(\kappa\) measures the degree of data heterogeneity across all UDs.

In centralized FL, the global model at the CPS at each global iteration evolves according to the following expression [13]:

\[\mathbf{w}(t+1)=\mathbf{w}(t)-\lambda\mathbf{G}(t), \tag{11}\]

where \(\mathbf{w}(t)=[\mathbf{w}_{u}(t)]_{u\in\mathcal{U}_{inv}}\) and \(\mathbf{G}(t)=[g(\mathbf{w}_{u}(t))]_{u\in\mathcal{U}_{inv}}\). However, in FedMoD, the \(k\)-th UAV maintains a model updated based on the trained models of its scheduled UDs only and needs to aggregate the models of the other UAVs using the model dissemination method of Section III-B.
Therefore, each UAV has insufficient model averaging unless the model dissemination method is performed until all UAVs obtain the global model defined in (9), i.e., until \(l=\alpha\). In other words, at \(l=\alpha\), the global model of our proposed decentralized FL should match the one in (11). For convenience, we define \(\mathbf{\tilde{u}}(t)=\sum_{u\in\mathcal{U}_{inv}}m_{u}\mathbf{w}_{u}(t)\), and consequently, \(\mathbf{\tilde{u}}(t)=\mathbf{\tilde{w}}(t)\mathbf{m}\). Multiplying both sides of the evolution expression in (11) by \(\mathbf{m}\) yields \[\mathbf{\tilde{u}}(t+1)=\mathbf{\tilde{u}}(t)-\lambda\mathbf{G}(t)\mathbf{m}. \tag{12}\] Following [25, 26] and leveraging the evolution expression of \(\mathbf{\tilde{u}}(t)\) in (12), we bound the expected change of the global loss function in consecutive iterations as follows. **Lemma 1**.: _The expected change of the global loss function in two consecutive iterations can be bounded as follows_ \[\mathbb{E}[F(\mathbf{\tilde{u}}(t+1))]-\mathbb{E}[F(\mathbf{\tilde{u}}(t))]\leq\frac{-\lambda}{2}\mathbb{E}\|\nabla F(\mathbf{\tilde{u}}(t))\|_{2}^{2}+\frac{\lambda^{2}L}{2}\sum_{u=1}^{U_{inv}}m_{u}\sigma^{2}-\frac{\lambda}{2}(1-\lambda L)\tilde{Q}+\frac{\lambda L^{2}}{2}\mathbb{E}\bigg\|\mathbf{\tilde{w}}(t)(\mathbf{I}-\mathbf{M})\bigg\|_{\mathbf{M}}^{2}, \tag{13}\] _where \(\tilde{Q}=\mathbb{E}\big[\big\|\sum_{u=1}^{U_{inv}}m_{u}\nabla F_{u}(\mathbf{w}_{u}(t))\big\|_{2}^{2}\big]\), \(\mathbf{M}=\mathbf{m}\mathbf{1}^{T}\), and \(\|\mathbf{X}\|_{\mathbf{M}}^{2}=\sum_{i=1}^{M}\sum_{j=1}^{N}m_{i,j}|x_{i,j}|^{2}\) is the squared weighted Frobenius norm of an \(M\times N\) matrix \(\mathbf{X}\)._ For the proof, please refer to Appendix A. Notice that \(\mathbf{\tilde{w}}(t)\) deviates from the desired global model due to the partial connectivity of the UAVs, which gives rise to the last term on the right-hand side (RHS) of (13). However, through the model dissemination method, at \(l=\alpha\), FedMoD ensures that each UAV aggregates the models of the whole network at each global iteration before proceeding to the next iteration; thus, this deviation is eliminated. Owing to the model dissemination among the UAVs, we define the _dissemination gap_ \(\delta_{j,k}(t)\) between the \(j\)-th and \(k\)-th UAVs as the number of dissemination steps needed for the local aggregated model of the \(j\)-th UAV to reach the \(k\)-th UAV. For illustration, consider the example in Fig. 3: the largest dissemination gap is the one between UAVs \(5\) and \(1\), which is \(3\); thus, \(\delta_{5,1}(t)=3\). The maximum dissemination gap of UAV \(k\) is \(\delta_{k}(t)=\max_{j\in\mathcal{K}}\{\delta_{j,k}(t)\}\). Therefore, a larger value of \(\delta_{j,k}(t)\) implies that the model of each UAV needs more dissemination steps to spread across the network. The following remark shows that \(\delta_{k}(t)\) is upper bounded throughout the whole training process. **Remark 2**.: _There exists a constant \(\delta_{max}\) such that \(\delta_{k}(t)\leq\delta_{max}\), \(\forall t\in T,k\in\mathcal{K}\). At any iteration \(t\), the dissemination gap of the farthest UAV (i.e., the UAV at the network edge), \(\delta_{max}=\alpha\), gives a maximal value for the number of steps over which the models of the other UAVs are disseminated to UAV \(k\)._
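The gaps \(\delta_{j,k}(t)\) are simply hop distances on the UAV connectivity graph and can be computed by breadth-first search. The sketch below uses an adjacency that is consistent with the running example (the topology itself is our assumption about Fig. 3, under which \(\delta_{5,1}=3\)):

```python
from collections import deque

# UAV adjacency assumed from the running example: 1-2, 1-4, 2-3, 2-4, 3-5.
adj = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 5}, 4: {1, 2}, 5: {3}}

def hops_from(j):
    """BFS hop counts: delta_{j,k} = rounds for UAV j's model to reach UAV k."""
    dist, q = {j: 0}, deque([j])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

delta = {j: hops_from(j) for j in adj}
print(delta[5][1])                                    # 3, as in the example
delta_max = max(max(d.values()) for d in delta.values())
print(delta_max)                                      # the bound in Remark 2
```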
Given the aforementioned analysis, we are now ready to prove the convergence of FedMoD. **Theorem 1**.: _If the learning rate \(\lambda\) satisfies \(1-\lambda L\geq 0\) and \(1-2\lambda^{2}L^{2}>0\), we have_ \[\mathbb{E}[\|\nabla F(\mathbf{\tilde{u}}(t))\|_{2}^{2}]\leq\frac{2\{\mathbb{E}[F(\mathbf{\tilde{u}}(0))-F(\mathbf{\tilde{u}}(T))]\}}{\lambda}+\lambda L\sum_{u=1}^{U_{inv}}m_{u}\sigma^{2}. \tag{14}\] Proof.: From (13), we have \[\frac{\lambda}{2}\mathbb{E}\|\nabla F(\mathbf{\tilde{u}}(t))\|_{2}^{2}\leq\mathbb{E}[F(\mathbf{\tilde{u}}(t))]-\mathbb{E}[F(\mathbf{\tilde{u}}(t+1))]+\frac{\lambda^{2}L}{2}\sum_{u=1}^{U_{inv}}m_{u}\sigma^{2}-\frac{\lambda}{2}(1-\lambda L)\tilde{Q}. \tag{15}\] \[\mathbb{E}\|\nabla F(\mathbf{\tilde{u}}(t))\|_{2}^{2}\leq\frac{2\{\mathbb{E}[F(\mathbf{\tilde{u}}(t))]-\mathbb{E}[F(\mathbf{\tilde{u}}(t+1))]\}}{\lambda}+\lambda L\sum_{u=1}^{U_{inv}}m_{u}\sigma^{2}-(1-\lambda L)\tilde{Q}. \tag{16}\] Since \(1-\lambda L\geq 0\) by the condition of Theorem 1, the third term on the RHS of (16) can be dropped; thus, we have \[\mathbb{E}[\|\nabla F(\mathbf{\tilde{u}}(t))\|_{2}^{2}]\leq\frac{2\{\mathbb{E}[F(\mathbf{\tilde{u}}(0))-F(\mathbf{\tilde{u}}(T))]\}}{\lambda}+\lambda L\sum_{u=1}^{U_{inv}}m_{u}\sigma^{2}. \tag{17}\]

## IV FedMoD: Modeling and Problem Formulation

### _FL Time and Energy Consumption_

#### Iv-A1 FL Time

The constrained FL time at each global iteration consists of both computation and wireless transmission time, as explained below. The wireless transmission time consists of: (1) the uplink transmission time for transmitting the local updates from the UDs to the associated UAVs \(\mathcal{K}\); this transmission time was already discussed in Section II-C and is denoted by \(T_{u}\); (2) the transmission time for disseminating the local aggregated models among the UAVs, which is \(T_{diss}\) as given in (10); and (3) the downlink transmission time for transmitting the local aggregated models from the UAVs to the scheduled UDs \(\mathcal{U}\), which for UAV \(k\) can be expressed as \(T_{k}^{do}=\frac{s}{R_{k}}\). On the other hand, the computation time for local learning at the \(u\)-th UD is \(T_{u}^{comp}=T_{l}\frac{Q_{u}D_{u}}{f_{u}}\), where \(T_{l}\) is the number of local iterations needed to reach the local accuracy \(\epsilon_{l}\) at the \(u\)-th UD, \(Q_{u}\) is the number of CPU cycles required to process one data sample, and \(f_{u}\) is the computational frequency of the CPU of the \(u\)-th UD (in cycles per second). By combining the aforementioned components, the FL time \(\tau_{k}\) at the \(k\)-th UAV can be calculated as \[\tau_{k}=\max_{u\in\mathcal{U}_{k}}T_{u}^{comp}+\max_{u\in\mathcal{U}_{k}}T_{u}^{com}+T_{k}^{do}=\max_{u\in\mathcal{U}_{k}}\left\{T_{l}\frac{Q_{u}D_{u}}{f_{u}}\right\}+\max_{u\in\mathcal{U}_{k}}\left\{\frac{s}{R_{k,b}^{u}}\right\}+\frac{s}{R_{k}}. \tag{18}\] Therefore, the total FL time over all global iterations \(T\) is \(\tau=T(\max_{k\in\mathcal{K}}(\tau_{k})+T_{diss})\), which should be no more than the maximum FL time threshold \(T_{\text{max}}\). This constraint, over all global iterations \(T\), is expressed as \[\tau=T\bigg(\underbrace{\max_{u\in\mathcal{U}}\left\{T_{l}\frac{Q_{u}D_{u}}{f_{u}}\right\}}_{\text{local learning}}+\underbrace{T_{u}}_{\text{uplink transmission}}+\underbrace{\max_{k\in\mathcal{K}}\left\{\frac{s}{R_{k}}\right\}}_{\text{downlink transmission}}+\underbrace{T_{diss}}_{\text{dissemination duration}}\bigg)\leq T_{\text{max}}.
\tag{19}\]

#### Iv-A2 Energy Consumption

The system's energy is consumed by local model training at the UDs, wireless model transmission, and the UAVs' hovering in the air. #### Iv-A1 Local computation The well-known energy consumption model for local computation is considered, where the energy consumed by the \(u\)-th UD to process a single CPU cycle is \(\alpha f_{u}^{2}\), and \(\alpha\) is a constant related to the switched capacitance [38, 39]. Thus, the energy consumption of the \(u\)-th UD for local computation is \(E_{u}^{comp}=T_{loc}C_{u}D_{u}\alpha f_{u}^{2}\). #### Iv-A2 Wireless model transmission The energy consumed to transmit the local model parameters to the associated UAVs is denoted by \(E_{u}^{com}\) and calculated as \(E_{u}^{com}=P_{u}T_{u}^{com}\). Then, the total energy consumption \(E_{u}\) at the \(u\)-th UD is \(E_{u}=E_{u}^{comp}+E_{u}^{com}\). In a similar manner, the energy consumed for transmitting the local aggregated models back to the associated UDs is denoted by \(E_{k}^{com}\) and calculated as \(E_{k}^{com}=PT_{k}^{do}\). #### Iv-A3 UAV's hovering energy UAVs need to remain stationary in the air; thus, most of a UAV's energy is consumed by hovering. The UAV's hovering power is expressed as [40] \(p^{hov}=\sqrt{\frac{(mg)^{3}}{2\pi r_{p}^{2}n_{p}\rho}}\), where \(m\) is the UAV's weight, \(g\) is the gravitational acceleration of the earth, \(r_{p}\) is the propellers' radius, \(n_{p}\) is the number of propellers, and \(\rho\) is the air density. In general, these parameters are the same for all the UAVs. The hovering time of the \(k\)-th UAV in each global iteration depends on \(\tau\). Hence, the hovering energy of the \(k\)-th UAV can be calculated as \(E_{k}^{hov}=p^{hov}\tau^{(t)}\), where \(\tau^{(t)}\) denotes the duration of the \(t\)-th global iteration. In summary, the overall energy consumption of the \(k\)-th UAV and the \(u\)-th UD, respectively, are \[E_{k}=T\left\{E_{k}^{hov}+E_{k}^{com}\right\},\quad E_{u}=T\left\{E_{u}^{comp}+E_{u}^{com}\right\}. \tag{20}\]

### _Problem Formulation_

Given the ATIN and its FL time and energy components, our next step is to formulate the energy consumption minimization problem, which involves the joint optimization of two sub-problems, namely the UAV-LOS UD clustering and the D2D scheduling sub-problems. To minimize the energy consumption at each global iteration, we need to develop a framework that decides: i) the UAV-UD clustering; ii) the transmission rate adopted by the UDs \(\mathcal{U}_{los}\) to transmit their local models to the set of UAVs/RRBs; and iii) the set of D2D transmitters (relays) helping the non-LOS UDs to deliver their local models to the set of UAVs \(\mathcal{K}\). As such, the local models are delivered to all UAVs within a minimum duration, and thus with minimum energy consumption for UAV hovering and UD wireless transmission.
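A back-of-the-envelope sketch of these energy components is given below; \(\alpha\) and \(s\) follow TABLE I, while the remaining values (local iterations, CPU frequency, link rate, UAV mass, and propeller geometry) are assumptions chosen only for illustration:

```python
import math

# Illustrative values; alpha and s follow TABLE I, the rest are assumptions.
alpha, T_loc = 1e-28, 10          # effective capacitance, local iterations
C_u, D_u, f_u = 500, 200, 5e8     # cycles/sample, samples, CPU frequency (Hz)
P_u = 3.0                         # UD transmit power (W), per TABLE I maximum
T_com = 9.1e3 * 8 / 2e6           # s / R with an assumed 2 Mbps uplink rate

E_comp = T_loc * C_u * D_u * alpha * f_u**2   # local training energy (J)
E_com = P_u * T_com                           # uplink transmission energy (J)

# Hovering power: p_hov = sqrt((m g)^3 / (2 pi r_p^2 n_p rho))
m, g, r_p, n_p, rho = 2.0, 9.81, 0.2, 4, 1.225
p_hov = math.sqrt((m * g)**3 / (2 * math.pi * r_p**2 * n_p * rho))
print(E_comp, E_com, p_hov)   # hovering power (~tens of watts) dominates
```

The computation shows why the hovering term dominates the budget: even a modest UAV draws tens of watts while hovering, so shortening each iteration's duration \(\tau^{(t)}\) is the main lever for saving energy.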
Therefore, the energy consumption minimization problem in the ATIN can be formulated as follows. \[\text{P0:}\min_{R_{min}^{m},\mathcal{U}_{los},\mathcal{U}_{non}}\sum_{k\in\mathcal{K}}E_{k}+\sum_{u\in\mathcal{U}}E_{u}\] \[\text{s.t.}\begin{cases}\text{C1: }\mathcal{U}_{k,los}\cap\mathcal{U}_{k^{\prime},los}=\emptyset,\ \forall(k,k^{\prime})\in\mathcal{K}^{2},k\neq k^{\prime},\\ \text{C2: }\mathcal{U}_{\bar{u},non}\cap\mathcal{U}_{\bar{u}^{\prime},non}=\emptyset,\ \forall(\bar{u},\bar{u}^{\prime})\in\mathcal{U}_{los}^{2},\bar{u}\neq\bar{u}^{\prime},\\ \text{C3: }\mathcal{U}_{\bar{u},non}\cap\mathcal{U}_{k,los}=\emptyset,\ \forall\bar{u}\in\mathcal{U}_{los},k\in\mathcal{K},\\ \text{C4: }R_{k,b}^{u}\geq R_{0},\ (u,k,b)\in(\mathcal{U},\mathcal{K},\mathcal{B}),\\ \text{C5: }\text{T}_{\hat{u}}\leq T_{u},\ u\in\mathcal{U}_{los},\\ \text{C6: }T_{idle}^{\bar{u}}\geq\big(\frac{s}{R_{\bar{u}}^{\hat{u}}}+\frac{s}{R_{k,b}^{\bar{u}}}\big),\ \bar{u}\in\mathcal{U}_{los},\hat{u}\in\mathcal{U}_{non},\\ \text{C7: }\tau\leq T_{\text{max}}.\end{cases}\] The constraints are explained as follows. Constraint C1 states that the sets of UDs scheduled to the UAVs are disjoint, i.e., each UD must be scheduled to only one UAV. Constraints C2 and C3 ensure that each UD can be scheduled to only one relay and that no UD can be scheduled to a relay and a UAV at the same time instant. Constraint C4 is the coverage threshold of each UAV. Constraint C5 ensures that the local parameters of UD \(\hat{u}\) are delivered to UAV \(k\) via relay \(\bar{u}\) within \(\frac{s}{R_{min}^{m}}\), i.e., \(\text{T}_{\hat{u}}=\frac{s}{R_{\bar{u}}^{\hat{u}}}+\frac{s}{R_{k,b}^{\bar{u}}}\leq\frac{s}{R_{min}^{m}}\). Constraint C6 ensures that the idle time of relay \(\bar{u}\) is long enough to receive the local parameters of UD \(\hat{u}\) and forward them to UAV \(k\). Constraint C7 enforces the FL time threshold \(T_{max}\). We can readily show that problem P0 is NP-hard. However, by analyzing the problem, we can decompose it into two sub-problems and solve them individually and efficiently.

### _Problem Decomposition_

First, we focus on minimizing the energy consumption via efficient RRM scheduling of the UDs \(\mathcal{U}_{los}\) to the UAVs/RRBs. In particular, we can obtain the minimum possible transmission duration of UD \(u\in\mathcal{U}_{los}\) by jointly optimizing the UD scheduling and the rate adaptation in \(\mathcal{U}_{los}\). The mathematical formulation for minimizing the energy consumption via minimizing the transmission durations of the UD-UAV/RRB transmissions can be expressed as \[\text{P1:}\min_{R_{min}^{m},\mathcal{U}_{los}}\sum_{k\in\mathcal{K}}E_{k}+\sum_{u\in\mathcal{U}}E_{u}\] \[\mathrm{s.t.}\left\{(\text{C1}),\ \ (\text{C4}),\ \ (\text{C5}),\ \ (\text{C7}).\right.\] Note that this sub-problem contains the UD-UAV/RRB scheduling, and an efficient solution is developed in Section V-A. After obtaining the possible transmission duration of the UD-UAV transmissions, denoted by \(T_{u}\) for the \(u\)-th UD (\(u\in\mathcal{U}_{los}\)), by solving P1, we can now formulate the second sub-problem. In particular, we can minimize the energy consumption of the non-LOS UDs \(\mathcal{U}_{non}\) that have not been scheduled to the UAVs within \(T_{u}\) by using D2D communications via a relaying mode. For this, the UDs scheduled to the UAVs by sub-problem P1 can be exploited to work as relays and serve non-LOS UDs on D2D links within their idle times.
Therefore, the second sub-problem, which minimizes the energy consumption of the unscheduled UDs by scheduling them on D2D links via a relaying mode, can be expressed as P2 as follows \[\text{P2:}\min_{\mathcal{U}_{non}}\sum_{k\in\mathcal{K}}E_{k}+\sum_{u\in\mathcal{U}}E_{u}\] \[\mathrm{s.t.}\left\{(\text{C2}),\ \ (\text{C3}),\ \ (\text{C5}),\ \ (\text{C6}),\ \text{C8: }\mathcal{U}_{non}\in\mathcal{P}(\mathcal{U}\backslash\mathcal{U}_{los}).\right.\] Constraint C8 states that the D2D-scheduled UDs are restricted to the UDs that have not been scheduled to the UAVs. It can easily be observed that P2 is a D2D scheduling problem that considers the selection of the relays and of their non-LOS scheduled UDs.

## V FedMoD: Proposed Solution

### _Solution to Subproblem P1: UAV-UD Clustering_

Let \(\mathcal{A}\) denote the set of all possible associations between UAVs, RRBs, and LOS UDs, i.e., \(\mathcal{A}=\mathcal{K}\times\mathcal{Z}\times\mathcal{U}_{los}\). For instance, one possible association \(a\in\mathcal{A}\) is \((k,z,u)\), which represents UAV \(k\), RRB \(z\), and UD \(u\). Let the conflict clustering graph of the network be denoted by \(\mathcal{G}(\mathcal{V},\mathcal{E})\), wherein \(\mathcal{V}\) and \(\mathcal{E}\) are the sets of vertices and edges of \(\mathcal{G}\), respectively. A typical vertex in \(\mathcal{G}\) represents an association in \(\mathcal{A}\), and each edge between two different vertices represents a conflict between the two corresponding associations according to C1 in P1. Therefore, we construct the conflict clustering graph by generating a vertex \(v\in\mathcal{V}\), associated with \(a\in\mathcal{A}\), for every UD that has enough energy to perform learning and wireless transmission. To select the UD-UAV/RRB scheduling that provides the minimum energy consumption while ensuring C4 and C7 in P1, a weight \(w(v)\) is assigned to each vertex \(v\in\mathcal{V}\). For simplicity, we define the weight of vertex \(v^{z}_{k,u}\) as \(w(v^{z}_{k,u})=E^{comp}_{u}+E^{com}_{u}\). Vertices \(v^{z}_{k,u}\) and \(v^{z^{\prime}}_{k^{\prime},u^{\prime}}\) are conflicting vertices, connected by an edge in \(\mathcal{E}\), if one of the following **connectivity conditions (CC)** is satisfied: * **CC1:** (\(u=u^{\prime}\) and (\(z\neq z^{\prime}\) or \(k\neq k^{\prime}\))). **CC1** states that the same UD \(u\) appears in both vertices \(v^{z}_{k,u}\) and \(v^{z^{\prime}}_{k^{\prime},u^{\prime}}\). * **CC2:** (\(k=k^{\prime}\), \(z=z^{\prime}\), and \(u\neq u^{\prime}\)). **CC2** implies that the same RRB appears in both vertices \(v^{z}_{k,u}\) and \(v^{z}_{k,u^{\prime}}\). Clearly, **CC1** and **CC2** correspond to a violation of constraint C1 of P1, where two vertices are conflicting if: (i) the same UD is associated with two different UAVs and/or two different RRBs; or (ii) the same RRB is associated with more than one UD. With the designed conflict clustering graph, P1 is similar to MWIS problems in several aspects. In MWIS, any two selected vertices must be non-adjacent in the graph (conditions **CC1** and **CC2** above), and similarly, in P1, the same local learning UD cannot be scheduled to two different UAVs or two different RRBs (i.e., C1). Moreover, the objective of problem P1 is to minimize the energy consumption, and similarly, the goal of the minimum-weight independent-set search is to select a set of vertices with small weights.
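A toy sketch of the conflict graph and of the greedy weighting-search formalized by Theorem 2 below is given here; the sets of UAVs, RRBs, and UDs, the vertex weights, and our transcription of CC1/CC2 are illustrative assumptions, not the paper's implementation:

```python
from itertools import product

# Toy instance: 2 UAVs, 2 RRBs per UAV, 3 LOS UDs. A vertex is an
# association (k, z, u); weights stand in for E_u^comp + E_u^com.
K, Z, U = [1, 2], [1, 2], [1, 2, 3]
V = list(product(K, Z, U))
w = {v: 1.0 + 0.1 * i for i, v in enumerate(V)}

def conflict(a, b):
    (k1, z1, u1), (k2, z2, u2) = a, b
    cc1 = u1 == u2                              # CC1: same UD in both vertices
    cc2 = k1 == k2 and z1 == z2 and u1 != u2    # CC2: same UAV/RRB, two UDs
    return cc1 or cc2

def weighting_search(V, w):
    """Greedy minimum-weight independent-set search: repeatedly pick the
    lightest remaining vertex, then discard everything adjacent to it."""
    gamma, cand = [], set(V)
    while cand:
        v = min(cand, key=lambda x: w[x])
        gamma.append(v)
        cand = {x for x in cand if x != v and not conflict(x, v)}
    return gamma

print(weighting_search(V, w))   # a feasible UD-UAV/RRB scheduling
```

By construction, no two selected associations share a UD or a UAV/RRB pair, so the returned set is always a feasible scheduling.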
Therefore, the following theorem characterizes the solution to the energy consumption minimization problem P1 in an ATIN. **Theorem 2**.: _The solution to problem P1 is equivalent to the minimum independent set weighting-search method, in which the weight of each vertex \(v\) corresponding to UD \(u\) is_ \[w(v)=E^{comp}_{u}+E^{com}_{u}. \tag{24}\] Finding the minimum-weight independent set \(\Gamma^{*}\) among all other minimal sets in the graph \(\mathcal{G}\) proceeds as follows. First, we select the vertex \(v_{i}\in\mathcal{V},(i=1,2,\cdots)\) that has the minimum weight \(w(v^{*}_{i})\) and add it to \(\Gamma^{*}\) (at this point, \(\Gamma^{*}=\{v^{*}_{i}\}\)). Then, the subgraph \(\mathcal{G}(\Gamma^{*})\), which consists of the vertices in \(\mathcal{G}\) that are not adjacent to vertex \(v^{*}_{i}\), is extracted and considered for the next vertex selection. Second, we select a new minimum-weight vertex \(v^{*}_{i^{\prime}}\) from the subgraph \(\mathcal{G}(\Gamma^{*})\) (i.e., \(v^{*}_{i^{\prime}}\) must be non-adjacent to \(v^{*}_{i}\)). Now, \(\Gamma^{*}=\{v^{*}_{i},v^{*}_{i^{\prime}}\}\). We repeat this process until no remaining vertex is non-adjacent to all vertices in \(\Gamma^{*}\). The selected set \(\Gamma^{*}\) contains at most \(ZK\) vertices. Essentially, any possible solution \(\Gamma^{*}=\{v^{*}_{1},v^{*}_{2},\cdots,v^{*}_{ZK}\}\) to P1 represents a feasible UD-UAV/RRB scheduling.

### _Solution to Subproblem P2: D2D Graph Construction_

In this subsection, our main focus is to schedule the non-LOS UDs to the LOS UDs (relays) over their idle times, so that the local models of those non-LOS UDs can be forwarded to the UAVs. Since the non-LOS UDs communicate with their respective relays over D2D links, the D2D connectivity can be characterized by an undirected graph \(\mathcal{G}_{\text{d2d}}(\mathcal{V},\mathcal{E})\), with \(\mathcal{V}\) denoting the set of vertices and \(\mathcal{E}\) the set of edges. We construct a new D2D conflict graph that accounts for all possible conflicts in scheduling the non-LOS UDs on D2D links, such as transmission and half-duplex conflicts. This leads to feasible transmissions from the \(|\mathcal{U}_{\text{non,tr}}|\) potential D2D transmitters. Recall that \(\mathcal{U}_{\text{non}}\) is the set of non-LOS UDs, i.e., \(\mathcal{U}_{\text{non}}=\mathcal{U}\backslash\mathcal{U}_{los}\), and let \(\mathcal{U}_{\text{relay}}=\mathcal{U}_{\text{los}}\backslash\{u\}\) denote the set of relays that can use their idle times to help the non-LOS UDs. Hence, the D2D conflict graph is constructed by generating all the vertices of the \(\bar{u}\)-th possible relay, \(\forall\bar{u}\in\mathcal{U}_{\text{relay}}\). The vertex set \(\mathcal{V}\) of the entire graph is the union of the vertices of all the relays. Consider, for now, generating the vertices of the \(\bar{u}\)-th relay. Note that the \(\bar{u}\)-th relay can help one non-LOS UD as long as that UD is in its coverage zone and the relay is capable of delivering the local model to the scheduled UAV within its idle time. Therefore, a vertex is generated for each non-LOS UD that is located in the coverage zone of the \(\bar{u}\)-th relay and satisfies \(\Upsilon_{\bar{u}}\leq T_{idle}^{\bar{u}}\). Accordingly, the \(i\)-th non-LOS UD in the coverage zone \(\mathcal{Z}_{\bar{u}}\) can transmit its model to the \(\bar{u}\)-th relay. Therefore, we generate \(|\mathcal{Z}_{\bar{u}}|\) vertices for the \(\bar{u}\)-th relay.
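A minimal sketch of this vertex-generation step is shown below; the coverage zones, idle times, and delivery durations \(\Upsilon_{\bar{u}}\) are invented values used only to illustrate the filtering rule:

```python
# Toy D2D vertex generation: one vertex per (relay, non-LOS UD) pair that
# satisfies the coverage and idle-time checks. All values are assumptions.
relays = {1: {"zone": {10, 11}, "t_idle": 0.05},
          2: {"zone": {11, 12}, "t_idle": 0.01}}
upsilon = {(1, 10): 0.03, (1, 11): 0.04, (2, 11): 0.02, (2, 12): 0.06}

vertices = [(u_bar, i)
            for u_bar, r in relays.items()
            for i in r["zone"]
            if upsilon[(u_bar, i)] <= r["t_idle"]]
print(vertices)   # e.g., [(1, 10), (1, 11)]: relay 2's idle time is too short
```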
All possible conflict connections between vertices (conflict edges) in the D2D conflict graph are specified as follows. Two vertices \(v_{i}^{\bar{u}}\) and \(v_{i^{\prime}}^{u^{\prime}}\) are adjacent by a conflict edge in \(\mathcal{G}_{\text{d2d}}\) if one of the following conflict conditions is true: (i) (\(\bar{u}\neq u^{\prime}\)) and (\(i=i^{\prime}\)): the same non-LOS UD cannot be scheduled to two different relays \(\bar{u}\) and \(u^{\prime}\); (ii) (\(i\neq i^{\prime}\)) and (\(\bar{u}=u^{\prime}\)): two different non-LOS UDs cannot be scheduled to the same relay. These two conditions represent C2 and C3 in P2, where each non-LOS UD must be assigned to at most one relay and the same relay cannot accommodate more than one non-LOS UD. Given the aforementioned D2D conflict graph, the following theorem reformulates subproblem P2. **Theorem 3**.: _The subproblem of scheduling non-LOS UDs on D2D links in \(P2\) is equivalently represented by the MWIS selection among all the maximal sets in the \(\mathcal{G}_{\text{d2d}}\) graph, where the weight \(\psi(v_{i}^{\bar{u}})\) of each vertex \(v_{i}^{\bar{u}}\) is given by \(\psi(v_{i}^{\bar{u}})=r\)._

## VI Numerical Results

For our simulations, a circular network area with a radius of \(400\) meters (m) is considered. The height of the CPS is \(10\) m [40]. Unless specified otherwise, we divide the considered circular network area into \(5\) target locations. As mentioned in the system model, each target location is assigned to one UAV, where the locations of the UAVs are randomly distributed in the flying plane at an altitude of \(100\) m. The UDs are placed randomly in the area. In addition, \(U\) UDs are connected to the UAVs through orthogonal RRBs for the uplink local model transmissions. The bandwidth of each RRB is \(2\) MHz. Each UAV communicates with the neighboring UAVs via high-speed mmWave communication links [25, 31]. Our proposed FedMoD scheme is evaluated on the MNIST and CIFAR-10 datasets, which are well-known benchmark datasets for image classification tasks. Each image belongs to one of \(10\) categories. We divide the dataset into the UDs' local data \(\mathcal{D}_{u}\) with non-i.i.d. data heterogeneity, where each local dataset contains data points from two of the \(10\) labels. In each case, \(\mathcal{D}_{u}\) is selected randomly from the full dataset of the labels assigned to the \(u\)-th UD. We also assume non-i.i.d. clustering, where the maximum number of classes assigned to each cluster is \(6\). For the ML models, we use a deep neural network with \(3\) convolutional layers and \(1\) fully connected layer. The total number of trainable parameters is \(9{,}098\) for MNIST and \(21{,}840\) for CIFAR-10. We simulate a FedMoD system with \(30\) UDs (for CIFAR-10) or \(20\) UDs (for MNIST) and \(5\) UAVs, each with \(7\) orthogonal RRBs. In our experiments, we consider the network topology illustrated in Fig. 3 unless otherwise specified. The remaining simulation parameters are summarized in TABLE I and selected based on [13, 40, 41, 42, 34]. To showcase the effectiveness of FedMoD in terms of learning accuracy and energy consumption, we consider the _Star-based FL_ and _HFL_ schemes as benchmarks. We show the training accuracy with respect to the number of iterations for both the MNIST and CIFAR-10 datasets with different numbers of model dissemination rounds \(\alpha\) in Fig. 4. Specifically, in Figs. 4(a) and 4(b), we show the accuracy performance of our proposed FedMoD scheme with full dissemination against the centralized FL schemes.
Particularly, in the considered star-based and HFL schemes, the CPS can receive the locally trained models from the UDs, where each scheduled UD transmits its trained model directly to the CPS (in the case of star-based FL) or through the UAVs (in the case of HFL). Thus, the CPS can aggregate all the local models of all the scheduled UDs. In the considered decentralized FedMoD, before the dissemination rounds start, each UAV has aggregated only the trained local models of the UDs scheduled in its own cluster. However, with the novel FedMoD dissemination method, each UAV shares its aggregated models with the neighboring UAVs using one-hop transmissions. Thus, at each dissemination round, the UAVs build up their side information (_Known_ models and _Unknown_ models) until they receive all the _Unknown_ models. The UAVs therefore have full knowledge of the global model of the system at each global iteration. Thanks to the efficient FedMoD dissemination method, the accuracy of the proposed FedMoD scheme is almost the same as that of the centralized FL schemes. Such efficient communication among the UAVs accelerates the learning progress; thereby, the FedMoD model reaches an accuracy of (\(0.945\), \(0.668\) for MNIST and CIFAR-10) within around \(200\) and \(300\) global iterations, respectively, compared to an accuracy of (\(0.955\), \(0.944\) for MNIST) and (\(0.665\), \(0.668\) for CIFAR-10) for the star-based and HFL schemes, respectively. It is important to note that although our proposed scheme overcomes the straggling-UD issue of the star-based scheme and the two-hop transmission of HFL, it needs a few rounds of model dissemination. However, the effective coding scheme for the models minimizes the number of dissemination rounds. In addition, due to the high-rate communication links between the UAVs, the dissemination delay is negligible and does not affect the FL time. In Figs. 5(a) and 5(b), we further study the impact of the number of dissemination rounds \(\alpha\) on the convergence rate of the proposed FedMoD scheme for both the MNIST and CIFAR-10 datasets. For both figures, we consider the following three proposed schemes: (i) FedMoD with full dissemination, where the UAVs perform full model dissemination at each global iteration; (ii) FedMoD with \(\alpha=2\), where partial dissemination is performed and a full dissemination is carried out after every \(2\) complete global iterations; and (iii) FedMoD with \(\alpha=3\), where partial dissemination is performed and a full dissemination is carried out after every \(3\) complete global iterations.

\begin{table} \begin{tabular}{|l|l|} \hline **Parameter** & **Value** \\ \hline Carrier frequency, \(f\) & \(1\) GHz [40] \\ \hline Speed of light, \(c\) & \(3\times 10^{8}\) m/s \\ \hline Propagation parameters, \(a\) and \(b\) & \(9.6\) and \(0.28\) [40] \\ \hline Attenuation factors, \(\psi^{\text{LoS}}\) and \(\psi^{\text{NLoS}}\) & \(1\) dB and \(20\) dB [40] \\ \hline UAV's and UD's maximum transmit power & \(1\) and \(3\) Watt [40] \\ \hline Transmit power of the CPS & \(5\) Watt \\ \hline Noise PSD, \(N_{0}\) & \(-174\) dBm/Hz \\ \hline Local and aggregated parameters size, \(s\) & \(9.1\) KB \\ \hline UD processing density, \(C_{u}\) & \([400-600]\) \\ \hline UD computation frequency, \(f_{u}\) & \([0.0003-1]\) G cycles/s \\ \hline CPU architecture based parameter, \(\alpha\) & \(10^{-28}\) \\ \hline FL time threshold, \(T_{max}\) & \(1\) second \\ \hline Number of data samples, \(S_{u}\) & \(200\) \\ \hline \end{tabular} \end{table} TABLE I: Simulation Parameters

From Figs.
5(a) and 5(b), we observe that partial dissemination with less frequent full dissemination leads to lower training accuracy within a given number of training iterations. Specifically, the accuracy for the full-dissemination, \(\alpha=2\), and \(\alpha=3\) schemes is \(0.966\), \(0.66\), and \(0.75\) for MNIST and \(0.668\), \(0.52\), and \(0.59\) for CIFAR-10, respectively. Infrequent inter-cluster UAV dissemination also leads to unstable convergence, since the UAVs do not frequently aggregate all the locally trained models of the UDs. In Figs. 6(a) and 6(b), we show the test error with respect to the number of iterations for both the MNIST and CIFAR-10 datasets with different numbers of model dissemination rounds \(\alpha\). It can be observed that the test error of the proposed FedMoD model drops rapidly in the early stage of the training process and converges after around \(160\) iterations. On the other hand, the training progress of the FedMoD model with infrequent full model dissemination (i.e., \(\alpha=2\), \(\alpha=3\)) lags far behind due to insufficient model averaging at the UAVs, caused by infrequent full communication among them. As a result, both schemes do not converge and suffer a higher test loss compared to full dissemination, since the communication among the edge servers under full dissemination is more efficient and thus accelerates the learning progress. We also evaluate the learning accuracy of FedMoD on different network topologies of the UAVs, as shown in Fig. 7(a). We consider a fully connected network, where all the UAVs are connected, and a partially connected network, where UAV \(4\) is not connected to UAV \(1\). In this figure, we perform \(4\) different rounds of dissemination. As shown in Fig. 7(b), within a given number of global iterations, a more connected network topology achieves higher test accuracy. This is because more model information is collected from the neighboring UAVs in each round of inter-UAV model aggregation. It is also observed that when \(\alpha\) is greater than \(4\), the test accuracy of the partially connected network approaches that of the fully connected network. Therefore, based on the network topology of the UAVs, we can choose a suitable value of \(\alpha\) to balance the number of inter-cluster UAV aggregations against the learning performance. In Fig. 8, we plot the energy consumption of the proposed and benchmark schemes versus the number of UDs for a network of \(4\) UAVs and \(4\) RRBs per UAV. From the objective function of problem P1, we can observe that an efficient radio resource management scheme leads to lower energy consumption. Hence, from Fig. 8, we observe that FedMoD minimizes the average energy consumption. This observation follows because the proposed schemes judiciously assign the LOS UDs to the UAVs and their available RRBs, as well as to D2D communications. In particular, the random scheme has the largest energy consumption because it randomly schedules the UDs to the UAVs and their available RRBs.

Fig. 4: Performance comparison between FedMoD and baseline schemes for MNIST and CIFAR-10: accuracy vs. number of iterations. Fig. 5: Performance comparison of FedMoD for MNIST and CIFAR-10 with different \(\alpha\). Fig. 6: Test error of FedMoD for MNIST and CIFAR-10 with different \(\alpha\). Fig. 7: Typical network topologies of the UAVs for model dissemination and their FL accuracy. Fig. 8: Average energy consumption vs. number of UDs.
Accordingly, from an energy consumption perspective, it is inefficient to adopt a random radio resource management scheme. From Fig. 8, it is also observed that the proposed centralized-scheduling FedMoD and distributed FedMoD schemes offer the same energy consumption performance for the same number of UDs. This observation can be explained as follows: when we have a large number of UDs, the probability that a UD is scheduled to more than one UAV decreases. As a result, the conflict among UAVs and the likelihood of scheduling UDs to the wrong UAV decrease. As an insight from this figure, a distributed radio resource management scheme is a suitable alternative for scheduling the LOS UDs to the UAVs, especially in large-scale networks.

## VII Conclusion

In this paper, we developed a novel decentralized FL scheme, called FedMoD, which maintains convergence speed and reduces the energy consumption of FL in mmWave ATINs. Specifically, we proposed a FedMoD scheme based on inter-cluster UAV communications and theoretically proved its convergence. A rate-adaptive, D2D-assisted RRM scheme was also developed to minimize the overall energy consumption of the proposed decentralized FL scheme. The presented simulation results revealed that our proposed FedMoD achieves the same accuracy as the baseline FL schemes while substantially reducing the energy consumed for convergence. In addition, the simulation results reveal various insights concerning how the topology of the network impacts the number of inter-cluster UAV aggregations required for the convergence of FedMoD.
2309.15302
STERLING: Self-Supervised Terrain Representation Learning from Unconstrained Robot Experience
Terrain awareness, i.e., the ability to identify and distinguish different types of terrain, is a critical ability that robots must have to succeed at autonomous off-road navigation. Current approaches that provide robots with this awareness either rely on labeled data which is expensive to collect, engineered features and cost functions that may not generalize, or expert human demonstrations which may not be available. Towards endowing robots with terrain awareness without these limitations, we introduce Self-supervised TErrain Representation LearnING (STERLING), a novel approach for learning terrain representations that relies solely on easy-to-collect, unconstrained (e.g., non-expert), and unlabelled robot experience, with no additional constraints on data collection. STERLING employs a novel multi-modal self-supervision objective through non-contrastive representation learning to learn relevant terrain representations for terrain-aware navigation. Through physical robot experiments in off-road environments, we evaluate STERLING features on the task of preference-aligned visual navigation and find that STERLING features perform on par with fully supervised approaches and outperform other state-of-the-art methods with respect to preference alignment. Additionally, we perform a large-scale experiment of autonomously hiking a 3-mile long trail which STERLING completes successfully with only two manual interventions, demonstrating its robustness to real-world off-road conditions.
Haresh Karnan, Elvin Yang, Daniel Farkash, Garrett Warnell, Joydeep Biswas, Peter Stone
2023-09-26T22:55:32Z
http://arxiv.org/abs/2309.15302v2
# Sterling: Self-Supervised Terrain Representation Learning from Unconstrained Robot Experience ###### Abstract _Terrain awareness_, i.e., the ability to _identify_ and _distinguish_ different types of terrain, is a critical ability that robots must have to succeed at autonomous off-road navigation. Current approaches that provide robots with this awareness either rely on labeled data, which is expensive to collect, engineered features and cost functions that may not generalize, or expert human demonstrations, which may not be available. Towards endowing robots with terrain awareness without these limitations, we introduce _Self-supervised TErrain Representation LearnING_ (sterling), a novel approach for learning terrain representations that relies solely on easy-to-collect, unconstrained (e.g., non-expert), and unlabeled robot experience, with no additional constraints on data collection. sterling employs a novel multi-modal self-supervision objective through non-contrastive representation learning to learn relevant terrain representations for terrain-aware navigation. Through physical robot experiments in off-road environments, we evaluate sterling features on the task of preference-aligned visual navigation and find that sterling features perform on par with fully-supervised approaches and outperform other state-of-the-art methods with respect to preference alignment. Additionally, we perform a large-scale experiment of semi-autonomously hiking a 3-mile long trail, which sterling completes successfully with only two manual interventions, demonstrating robustness to real-world off-road conditions. Robot experiment videos and more details can be found in the appendix and the project website [https://hareshkaran.github.io/sterling/](https://hareshkaran.github.io/sterling/) Vision-Based Navigation, Representation Learning.

## 1 Introduction

Off-road navigation is emerging as a crucial capability for autonomous mobile robots envisioned for use in a growing number of outdoor applications such as agricultural operations [1], package delivery [2], and search and rescue [3]. Endowing robots with this capability has, however, proved to be challenging and remains an active area of research. One particularly difficult challenge in off-road autonomous navigation is that of providing the robot with _terrain awareness_, i.e., the ability to identify distinct terrain features that are relevant to a wide variety of downstream tasks (e.g., changing preferences over terrain types) [4, 5, 6, 7, 8, 9, 10, 11]. While a litany of prior work has attempted to address this challenge [12, 13, 14, 15], existing approaches typically rely on difficult-to-collect curated datasets [16, 17, 18, 19, 20] or have been focused on particular tasks
Importantly, sterling works with easy-to-collect unconstrained and unlabeled robot data, thereby providing a scalable pathway to data collection and system improvement for the wide variety of terrain and downstream tasks that off-road robots must face. Footnote 1: A preliminary version of this work was presented at the PT4R workshop at ICRA 2023 [27] To evaluate sterling, we apply it to the problem of preference-aligned off-road navigation and provide a detailed comparison to existing approaches to this problem, including rca[7], ganav[19], se-r[8], and a fully-supervised oracle. We find that sterling enables performance on par with or better than these existing approaches without requiring any expert labels or demonstrations. Additionally, we report the results of a large-scale qualitative experiment in which sterling enabled semi-autonomous robot navigation on a 3-mile long hiking trail. The key contributions of this paper are-- 1) _Self-supervised TErrain Representation LearnING_ (sterling), a novel approach that learns terrain representations from easy-to-collect unconstrained robot experiences, 2) Detailed evaluation of sterling against baseline methods on the task of operator preference-aligned off-road navigation, and 3) A large-scale qualitative experiment of semi-autonomously hiking a 3-mile long trail, demonstrating the effectiveness of sterling-features. ## 2 Related Work In this section, we review related work on terrain-aware visual off-road navigation. We specifically focus on approaches that learn to navigate off-road conditions using supervised and self-supervised learning. ### Supervised Methods Several approaches in the past have proposed using supervised learning from large-scale data to navigate off-road environments. We divide them into two categories as follows. **End-to-End Learning:** The initial success of applying learning-based solutions to off-road terrain-aware navigation was by LeCun et al. [28] who used a convolutional network to learn to drive in off-road conditions. More recently, Bojarski et al. [21] trained a deep neural network end-to-end using several miles of driving data collected on a vehicle in the real world. While both approaches were promising in urban and off-road environments, end-to-end methods require large amounts of data and are well-known to suffer from domain and covariate shifts [29; 30; 31]. **Image Segmentation:** Unlike end-to-end approaches that learn behaviors, segmentation-based approaches seek to characterize terrain using a set of known semantic classes, and the resulting semantic features are consumed by downstream planning and control techniques for navigation [32; 19; 33]. Guan et al. [19] propose ganav, a transformer-based architecture to pixel-wise segment terrains, trained on RELLIS [17] and RUGD [16] datasets, with manually assigned terrain costs. While effective at terrain awareness, segmentation-based methods are fixed to the specific terrain types available in the datasets and require additional labeling effort to generalize to novel terrains. In sterling, we do not require semantically labeled datasets and learn terrain representations from unconstrained experience collected onboard a mobile robot. ### Self-Supervised Learning To alleviate the need for extensive human labeling, self-supervised learning methods have been proposed to either learn terrain representations or costs from data gathered onboard a mobile robot. **Representation Learning:** Brooks et al. 
[34] utilize contact vibrations and visual sensors to classify terrains via self-supervision. Loquercio et al. [35] use proprioceptive supervision to predict extrinsic representations [36] of terrain geometry from vision, which are used as inputs to drive a reinforcement-learning-based locomotion policy. In this work, we do not learn a robot-specific locomotion policy and instead learn representations relevant for off-road terrain awareness. Zurn et al. [8] introduce se-r, which utilizes acoustic and visual sensors on the robot to segment terrains using a self-supervised triplet-contrastive learning framework. Using triplet-based contrastive learning methods requires negative samples, which may not be available when learning from unlabeled data. In sterling, we use recently proposed non-contrastive unsupervised learning approaches such as vicreg [37] that do not require any negative samples and instead rely on correlations between data modalities to learn relevant terrain representations. **Cost Learning:** Several methods have applied self-supervision to assign traversability costs for the downstream off-road navigation task [7; 38; 26; 39; 40; 41; 42]. Specifically, these methods rely on inertial spectral features [7], future predictive models [26], inertial-odometry errors [38], or force-torque values from foothold positions [39; 43] as self-supervision signals to learn a traversability cost map, which is used to evaluate candidate actions. More recently, Frey et al. [44] have proposed an online traversability estimation approach inspired by the above self-supervision schemes. Instead of inferring costs or rewards using self-supervision for a fixed task, in this work, we focus on learning relevant visual features from unconstrained robot experiences that can be reused in downstream tasks. This framework allows a designer to reuse features across tasks without retraining entirely from scratch. **Hybrid Methods:** The approach closest to ours is vrl-pap [6], which requires human-expert teleoperated demonstrations of a particular trajectory pattern to both explicitly learn visual terrain representations and infer terrain preference costs. However, in this work, we focus on learning terrain features from unconstrained robot experiences without requiring human experts in the field for demonstrations, which is a more general problem than the one considered by vrl-pap.

## 3 Approach

In this section, we introduce the self-supervised terrain representation learning approach, sterling, proposed in this work. We first describe the offline pre-processing performed on unconstrained robot data and then summarize the self-supervision objectives. Finally, we describe the problem formulation for preference-aligned off-road navigation and present how features learned using sterling can be utilized within a planner for terrain-aware and preference-aligned navigation. ### Data-Collection and Pre-Processing sterling learns terrain representations from unconstrained, unlabeled robot experiences collected using any navigation policy. This policy may be, for instance, non-expert human teleoperation, curiosity-driven exploration [45], or point-to-point navigation using any underlying planner. Compared to requiring a human expert to provide teleoperated demonstrations and labels, collecting this type of robot experience is cheap and easy, thereby providing a scalable pathway to data collection and system improvement.
We additionally assume that the robot is equipped with _multiple sensors_, e.g., an egocentric RGB camera, odometry sensors, an onboard IMU, and proprioceptive and tactile sensors, that together provide rich multi-modal observations as the robot traverses different terrains while collecting experience. sterling leverages this multi-modal data by using the correlation between different modalities to inform the learned terrain representations. To learn terrain representations using sterling, we begin by pre-processing the visual and non-visual observations, as explained in detail below. **Visual Patch Extraction:** The egocentric camera observations are homography-projected into a virtual bird's eye view (BEV) frame, assuming that the ground is a flat plane, using the intrinsic and extrinsic camera matrices. As shown in Fig. 1, we project the robot's trajectory onto the BEV frame and extract 64-by-64-pixel square visual patches of terrain (equivalent to the robot's 0.5-by-0.5-meter footprint), along with the corresponding inertial, proprioceptive, and tactile observations at the same location along the trajectory. Since the terrain at \(s_{k}\) is unobservable when the robot itself is at \(s_{k}\) (i.e., it is underneath the robot), we extract terrain image patches corresponding to \(s_{k}\) from BEV observations at previous locations \(s_{k-1},s_{k-2},\ldots\) along its trajectory. Fig. 1 illustrates the offline patch extraction process from two previous viewpoints; in practice, we extract patches from up to 20 previous viewpoints within 2 meters. Although just one viewpoint is sufficient to learn the correlation between visual and other sensor observations, when planning, the robot will need to visually evaluate terrain at future locations; therefore, sterling also seeks representations that are invariant to patch differences due to viewpoint, a property known as _viewpoint invariance_. **IPT Pre-Processing:** For the inertial, proprioceptive, and tactile (ipt) observations, we retain up to a 2-second history and convert the time-series signals into a power-spectral density (psd) representation in the frequency domain. This ensures that the ipt representations used as input to sterling are invariant to differences in length and phase of the recorded signals. Additional details are provided in Supplementary Section 9.5. ### Non-Contrastive Terrain Representation Learning It is desirable for learned terrain representations that representations of similar terrains be close together in the embedding space and that representations of different terrains be sufficiently far apart. Although we do not possess privileged information such as semantic labels of terrains for training, the visual and kinodynamic observations experienced by the robot reflect similarities and differences between terrain samples. For instance, traversing a smooth terrain that a human may refer to as a cement sidewalk may lead to relatively smooth motion of the robot's joints, whereas rough terrain such as what might be referred to as marble rocks may correspond to jerkier motion. sterling leverages this multi-modal experience observed by the robot and computes a correlation objective between visual and inertial-proprio-tactile signals to learn the desired terrain representations. Additionally, sterling uses viewpoint invariance as an objective unique to the visual component of the experience to learn viewpoint-invariant terrain representations.
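A minimal sketch of the psd conversion is given below; the sampling rate, the synthetic signal, and the `nperseg` choice are assumed values, not the paper's settings. Welch's method discards phase and produces a fixed-length output, which is what gives the length- and phase-invariance described above:

```python
import numpy as np
from scipy.signal import welch

# A 2-second window of one inertial channel (e.g., a gyro axis), assumed
# to be sampled at 200 Hz purely for illustration.
fs = 200
t = np.arange(0, 2.0, 1 / fs)
imu = np.sin(2 * np.pi * 15 * t) + 0.1 * np.random.randn(t.size)

# Welch PSD: the output length is fixed by nperseg regardless of the
# window's exact duration or phase, yielding a stable frequency feature.
freqs, psd = welch(imu, fs=fs, nperseg=128)
print(freqs.shape, psd.shape)   # fixed-size frequency-domain feature
```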
Fig. 2 provides an overview of the self-supervised representation learning framework adopted in sterling. A parameterized visual encoder (a 4-layer CNN with 0.25 million parameters) encodes terrain image patch observations \(v_{1}\) and \(v_{2}\) of the same location \(s\) into visual representations \(\phi_{v_{1}}\) and \(\phi_{v_{2}}\), respectively, collectively referred to as \(\phi_{v_{1,2}}\) for brevity. Similarly, an inertial-proprio-tactile encoder (a 4-layer MLP with 0.25 million parameters) encodes the frequency-domain ipt observations of the robot at that location into an inertial-proprio-tactile representation \(\phi_{i}\). We follow the framework of prior self-supervised representation learning algorithms from the computer vision community, such as vicreg [37], and utilize a parameterized projector network (a 2-layer MLP with 0.25 million parameters) that maps the encoded visual and non-visual representations independently to a higher-dimensional feature space, \(\psi_{v_{1,2}}\) and \(\psi_{i}\) respectively, over which the self-supervision objectives are computed.

Figure 1: An illustration of the pre-processing performed on unconstrained robot experience. Image patches of traversed terrain at location \(s_{k}\) are extracted from bird's eye view observations at prior locations \(s_{k-1},s_{k-2}\) along the trajectory. The corresponding ipt observations at \(s_{k}\) are transformed from time series to psd signals. Note the visual artifacts caused by noise in the homography transformation from viewpoints farther away from \(s_{k}\).

The sterling objective, composed of the multi-modal correlation \(\mathcal{L}_{MM}(\psi_{v_{1,2}},\psi_{i})\) and viewpoint-invariance \(\mathcal{L}_{VI}(\psi_{v_{1}},\psi_{v_{2}})\) objectives, is defined as: \[\begin{split}\mathcal{L}_{\textsc{sterling}}&=\mathcal{L}_{VI}(\psi_{v_{1}},\psi_{v_{2}})+\mathcal{L}_{MM}(\psi_{v_{1,2}},\psi_{i})\\ \mathcal{L}_{VI}(\psi_{v_{1}},\psi_{v_{2}})&=\mathcal{L}_{\textsc{vicReg}}(\psi_{v_{1}},\psi_{v_{2}})\\ \mathcal{L}_{MM}(\psi_{v_{1,2}},\psi_{i})&=[\mathcal{L}_{\textsc{vicReg}}(\psi_{v_{1}},\psi_{i})+\mathcal{L}_{\textsc{vicReg}}(\psi_{v_{2}},\psi_{i})]/2\end{split} \tag{1}\] \(\mathcal{L}_{\textsc{vicReg}}\) is the vicreg loss, composed of the variance-invariance-covariance representation learning objectives proposed by Bardes et al. [37]. Given two alternate projected representations \(Z\) and \(Z^{\prime}\) of a data sample (in sterling, \(Z\) and \(Z^{\prime}\) are projected representations of the visual and non-visual sensor modalities), the vicreg loss is defined as \(\mathcal{L}_{\textsc{vicReg}}(Z,Z^{\prime})=\lambda s(Z,Z^{\prime})+\mu[v(Z)+v(Z^{\prime})]+\nu[c(Z)+c(Z^{\prime})]\). Note that while Bardes et al. use vicreg to learn representations from visual inputs using artificial image augmentations, in this work, we extend vicreg to multi-modal inputs and use real-world augmentations via multi-viewpoint image patches, as described in Sec. 3.1. \(\lambda\), \(\mu\), and \(\nu\) are hyper-parameters, and the functions \(v\), \(s\), and \(c\) are the variance, invariance, and covariance terms computed on a mini-batch of projected features. We refer the reader to Bardes et al. [37] for additional details on the individual terms and also define them here for completeness. The variance term \(v\) is a hinge function defined as \(v(Z)=\frac{1}{d}\sum_{j=1}^{d}\max(0,\gamma-S(z^{j},\epsilon))\), where \(S\) is the standard deviation and \(d\) is the dimensionality of the projected feature space. \(c\) is the covariance term, defined as \(c(Z)=\frac{1}{d}\sum_{i\neq j}[C(Z)]_{i,j}^{2}\), where \(C(Z)\) is the covariance matrix of \(Z\). \(s\) is the invariance term, defined as \(s(Z,Z^{\prime})=\frac{1}{n}\sum_{i}\|z_{i}-z_{i}^{\prime}\|\). More details on the individual terms in the loss function are provided in Sec. 9.5.
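A numpy transcription of \(\mathcal{L}_{\textsc{vicReg}}\) following the definitions above is sketched below; the coefficient values are the defaults reported by Bardes et al. [37], not necessarily those used in sterling:

```python
import numpy as np

def vicreg_loss(Z, Zp, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    """VICReg loss on a minibatch of paired projections Z, Zp (n x d)."""
    n, d = Z.shape
    # invariance: mean distance between paired projections, as defined above
    s = np.mean(np.linalg.norm(Z - Zp, axis=1))
    def v(X):  # variance hinge: push each dimension's std above gamma
        std = np.sqrt(X.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))
    def c(X):  # covariance: penalize off-diagonal covariance entries
        Xc = X - X.mean(axis=0)
        C = (Xc.T @ Xc) / (n - 1)
        off = C - np.diag(np.diag(C))
        return np.sum(off ** 2) / d
    return lam * s + mu * (v(Z) + v(Zp)) + nu * (c(Z) + c(Zp))

Z1, Z2 = np.random.randn(64, 32), np.random.randn(64, 32)
print(vicreg_loss(Z1, Z2))
```

The covariance term decorrelates feature dimensions while the variance hinge prevents the trivial collapse that purely invariance-based objectives suffer from, which is why no negative samples are needed.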
We apply an \(l^{2}\) norm on the visual and non-visual features to ensure that they lie on a hypersphere, which we found improves the quality of the learned representations. On a mini-batch of data containing paired terrain image patches and ipt observations, we compute the \(\mathcal{L}_{\textsc{sterling}}\) loss and update the parameters of the two encoder networks and the shared projector network together using the Adam optimizer. ### Preference-Aligned Off-Road Navigation In this subsection, we describe the downstream navigation task of preference-aligned visual navigation that we focus on when evaluating sterling. **Preliminaries:** We formulate the task of preference-aligned terrain-aware navigation as a local path-planning problem, where the robot operates within a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), and a deterministic transition function \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\longrightarrow\mathcal{S}\) in the environment. The state space consists of \(s=[x,y,\theta,\phi_{v}]\), where \([x,y,\theta]\) denotes the robot's position in \(SE(2)\) space and \(\phi_{v}\) denotes the visual features of the terrain at this location. Given a goal location \(G\), the preference-aligned navigation task is to reach this goal while adhering to operator preferences over terrains. We assume access to a sampling-based planner, the details of which are provided in Supplementary Sec. 8. **Learning the preference utility:** Following Zucker et al. [46], we learn the utility function \(u:\Phi_{v}\rightarrow\mathbb{R}^{+}\) using human queries. From the terrain features predicted on the data samples in our training set, we cluster the terrain representations using k-means with the silhouette-score elbow criterion and sample candidate terrain patches from each cluster, which are presented to the human operator using a GUI. The human operator then provides a full-order ranking of terrain preferences over the clusters, which is utilized to learn the utility function \(u(\cdot)\), represented by a 2-layer MLP. While recovering absolute cost values from ranked preference orders is an under-constrained problem, we find that this approximation provided by Zucker et al. [46] works well in practice.
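A minimal sketch of this utility-learning pipeline is shown below; the feature array, cluster count, operator ranking, and the linear rank-to-utility mapping are illustrative assumptions (the paper selects the cluster count via the silhouette-score elbow criterion and queries a human through a GUI):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

feats = np.random.randn(500, 64)             # stand-in terrain features phi_v
km = KMeans(n_clusters=4, n_init=10).fit(feats)

# Operator's full-order ranking over clusters (most to least preferred);
# the linear rank-to-utility mapping below is one simple choice, not the
# only one consistent with a ranked preference order.
ranking = [2, 0, 3, 1]
target = {c: 1.0 - r / (len(ranking) - 1) for r, c in enumerate(ranking)}
y = np.array([target[c] for c in km.labels_])  # per-sample utility targets

# A small MLP regressor stands in for the 2-layer utility network u(phi_v).
u = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(feats, y)
print(u.predict(feats[:3]))
```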
## 4 Experiments

In this section, we describe the experiments performed to evaluate sterling. Specifically, the experiments presented in this section are tailored to address the following questions: * **Q1:** How effective are sterling features, in comparison to baseline approaches, at enabling terrain awareness in off-road navigation? * **Q2:** How effective are the proposed sterling objectives at learning discriminative terrain features, in comparison to other representation learning objectives? We investigate \(Q_{1}\) through physical robot experiments on the task of preference-aligned off-road navigation. We perform quantitative evaluations in six different outdoor environments, and then further perform a large-scale qualitative evaluation by semi-autonomously hiking a 3-mile-long off-road trail using preference costs learned from sterling features. To compare the various methods, we use the success rate of preference alignment as a metric. If a trajectory followed by any algorithm fails to reach the goal, or at any time traverses over any terrain that is less preferred than any terrain traversed by the operator-demonstrated trajectory, we classify the trial as a failure. We additionally investigate \(Q_{2}\) by comparing sterling with other unsupervised terrain representation learning methods, and we perform an ablation study on the two sterling objectives. Additional experiments are provided in Supplementary Sec. 9.2. **Baselines:** To perform the quantitative evaluations for \(Q_{1}\), we compare sterling with se-r [8], rca [7], ganav [19], geometric-only planning [47], and a fully-supervised baseline. se-r and rca perform self-supervised learning from unconstrained robot experience to learn terrain representations and traversability costs, respectively, making them relevant baselines for this problem. Since there is no open-source implementation of rca, we replicate it to the best of our abilities. The geometric-only approach ignores terrain costs (\(\mathcal{L}_{terrain}\)) and plans with the geometric cost (\(\mathcal{L}_{geom}\)) only, making it a relevant ablation of the cost formulation for preference-aware planning. ganav2 [19] is a segmentation-based approach trained on the RUGD [16] dataset. We additionally train the fully-supervised baseline, in which the terrain cost function is learned end-to-end using supervised learning from a linear extrapolation of operator preferences. ganav and the fully-supervised baseline require supervision via terrain labels to learn and hence serve as references for comparison. We normalize the terrain cost predicted by all methods to be between 0 and 1 for a fair comparison. Footnote 2: [https://github.com/rayguan97/GANav-offroad](https://github.com/rayguan97/GANav-offroad)

Figure 2: Overview of the training architecture in sterling. Terrain patches \(v_{1}\) and \(v_{2}\) from different viewpoints of the same location are encoded as \(\phi_{v_{1}}\) and \(\phi_{v_{2}}\), respectively, and mapped into embeddings \(\psi_{v_{1}}\) and \(\psi_{v_{2}}\). Similarly, inertial, proprio, and tactile signals are encoded as \(\phi_{i}\) and mapped to \(\psi_{i}\). The self-supervision objectives \(\mathcal{L}_{VI}\) for viewpoint invariance and \(\mathcal{L}_{MM}\) for multimodal correlation are computed on the minibatch to perform gradient descent.

### Evaluating Terrain-Awareness via Robot Experiments In this subsection, we report on experiments that investigate the effectiveness of sterling features in enabling terrain awareness during off-road navigation. We quantitatively compare the performance of sterling with the baselines rca [7], ganav [19], se-r [8], and the fully-supervised baseline on the task of preference-aligned navigation. We identify six environments within the university campus, with eight different terrain types, as shown in Fig. 3. For this study, we use the same data collected on the robot to train rca, se-r, the fully-supervised baseline, and sterling, and the operator
Note that although Fully-Supervised also completes the task successfully, it requires privileged information such as terrain labels during training, whereas sterling does not require such supervision and can potentially be used on large datasets containing unlabeled, unconstrained robot experiences. ganav, trained on the RUGD dataset, fails to generalize to unseen real-world conditions. rca uses inertial spectral features to learn terrain traversability costs and hence does not adhere to operator preference. se-r does not address viewpoint invariance, which is a significant problem in vision-based off-road navigation, and hence performs poorly in Envs. 1 and 2. We perform additional experiments in an outdoor environment (Env. 6) to study adherence to operator preferences, detailed in Supplementary Sec. 9.1. Table 1 shows the success rate of preference alignment for all approaches in all environments, over five different trials. sterling outperforms other self-supervised baselines and performs on par with the fully-supervised approach. In summary, the physical experiments conducted in six environments quantitatively demonstrate the effectiveness of sterling features in enabling terrain awareness during off-road navigation.

Figure 3: Trajectories traced by different approaches in 5 environments containing 8 different terrains. The operator preferences are shown above. We see that sterling navigates in an operator-preference-aligned manner, by preferring cement sidewalk, red bricks, pebble sidewalk, and yellow bricks over mulch, grass, marble rocks, and bush, outperforming other baselines and performing on par with the Fully-Supervised approach.

### Evaluating Self-Supervision Objectives In this subsection, we investigate the effectiveness of sterling at learning discriminative terrain features and compare with baseline unsupervised terrain representation learning methods such as the Regularized Auto-Encoder (rae) and se-r [8], as well as large pretrained networks such as a ResNet-50 pretrained on ImageNet. sterling uses the multi-modal correlation (\(\mathcal{L}_{MM}\)) and viewpoint invariance (\(\mathcal{L}_{VI}\)) objectives for self-supervised representation learning, whereas se-r and rae use a soft-triplet-contrastive loss and a pixel-wise reconstruction loss, respectively. Additionally, we also perform an ablation study on the two objectives in sterling to understand their contributions to learning discriminative terrain features. To evaluate the different visual representations, we perform unsupervised classification using k-means clustering and compare their relative classification accuracies against manually labeled terrain labels. For this experiment, we train sterling, se-r, and rae on our training set and evaluate on a held-out validation set. Fig. 4 shows the results of this study. We see that sterling features using both self-supervision objectives perform the best among all methods. Additionally, we see that using a non-contrastive representation learning approach such as VICReg [37] within sterling performs better than contrastive learning methods such as se-r and reconstruction-based methods such as rae. This study shows that the proposed self-supervision objectives in sterling indeed help learn discriminative terrain features. ## 5 Limitations and Future Work sterling requires traversing over terrains in order to learn representations, which may be unsafe in certain situations.
Uncertainty-aware safe exploration and exploration focusing on informative and diverse terrains for data collection are promising directions for future work. Extending sterling features to work on unstructured non-flat environments such as stairs [48] and boulders [49] is another promising direction. Extending sterling by pretraining with large-scale off-road datasets, using modern architectures such as transformers that are known to scale well with large-scale data, is also an exciting avenue for future work. ## 6 Conclusion In this paper, we introduce _Self-supervised TErrain Representation LearnING_ (sterling), a novel framework for learning terrain representations from easy-to-collect, unconstrained (e.g., non-expert), and unlabeled robot experience. sterling utilizes non-contrastive representation learning through viewpoint invariance and multi-modal correlation self-supervision objectives to learn relevant terrain representations for visual navigation. We show how features learned through sterling can be utilized to learn operator preferences over terrains and integrated within a planner for preference-aligned navigation. We evaluate sterling against state-of-the-art alternatives on the task of preference-aligned visual navigation on a Spot robot and find that sterling outperforms other methods and performs on par with a fully-supervised baseline. We additionally perform a qualitative large-scale experiment by successfully hiking a 3-mile-long trail using sterling, demonstrating its robustness to off-road conditions in the real world.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Approach**} & \multicolumn{7}{c|}{**Environment**} \\ \cline{2-8} & **1** & **2** & **3** & **4** & **5** & **6 (a)** & **6 (b)** \\ \hline Geometric-only & 0/5 & 0/5 & 0/5 & 0/5 & 0/5 & 0/5 & 5/5 \\ \hline rca [7] & 2/5 & 4/5 & 2/5 & 0/5 & 1/5 & 5/5 & 0/5 \\ \hline ganav [19] & 5/5 & 0/5 & 0/5 & 5/5 & 0/5 & 4/5 & 5/5 \\ \hline se-r [8] & 1/5 & 0/5 & 5/5 & 1/5 & 3/5 & 5/5 & 4/5 \\ \hline Fully-Supervised & 5/5 & 5/5 & 5/5 & 5/5 & 5/5 & 5/5 & 5/5 \\ \hline \hline sterling (Ours) & 5/5 & 5/5 & 5/5 & 5/5 & 5/5 & 5/5 & 5/5 \\ \hline \end{tabular} \end{table} Table 1: Success rates of different algorithms on the task of preference-aligned off-road navigation

Figure 4: Ablation study depicting classification accuracy (value closer to \(1.0\) is better) from terrain representations learned using different approaches and objectives. The combined objective (vi + mm) proposed in sterling achieves the highest accuracy, indicating that the learned representations are sufficiently discriminative of terrains.

#### Acknowledgments This work has taken place in the Learning Agents Research Group (LARG) and Autonomous Mobile Robotics Laboratory (AMRL) at UT Austin. LARG research is supported in part by NSF (CPS-1739964, IIS-1724157, NRI-1925082), ONR (N00014-18-2243), FLI (RFP2-000), ARO (W911NF19-2-0333), DARPA, Lockheed Martin, GM, and Bosch. AMRL research is supported in part by NSF (CAREER-2046955, IIS-1954778, SHF-2006404), ARO (W911NF-19-2-0333, W911NF-21-20217), DARPA (HR001120C0031), Amazon, JP Morgan, and Northrop Grumman Mission Systems. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research.
2309.05111
Beyond Generalized Eigenvalues in Lattice Quantum Field Theory
Two analysis techniques, the generalized eigenvalue method (GEM) or Prony's (or related) method (PM), are commonly used to analyze statistical estimates of correlation functions produced in lattice quantum field theory calculations. GEM takes full advantage of the matrix structure of correlation functions but only considers individual pairs of time separations when much more data exists. PM can be applied to many time separations and many individual matrix elements simultaneously but does not fully exploit the matrix structure of the correlation function. We combine both these methods into a single framework based on matrix polynomials. As these algebraic methods are well known for producing extensive spectral information about statistically-noisy data, the method should be paired with some information criteria, like the recently proposed Bayesian model averaging.
George T. Fleming
2023-09-10T18:46:43Z
http://arxiv.org/abs/2309.05111v1
# Beyond Generalized Eigenvalues in Lattice Quantum Field Theory ###### Abstract Two analysis techniques, the _generalized eigenvalue method_ (GEM) or _Prony's (or related) method_ (PM), are commonly used to analyze statistical estimates of correlation functions produced in lattice quantum field theory calculations. GEM takes full advantage of the matrix structure of correlation functions but only considers individual pairs of time separations when much more data exists. PM can be applied to many time separations and many individual matrix elements simultaneously but does not fully exploit the matrix structure of the correlation function. We combine both these methods into a single framework based on _matrix polynomials_. As these algebraic methods are well known for producing extensive spectral information about statistically-noisy data, the method should be paired with some information criteria, like the recently proposed Bayesian model averaging. Footnote †: preprint: FERMILAB-PUB-23-495-T ## I Introduction We briefly review the construction of lattice correlation functions, which has been the subject of many reviews, _e.g._[1], and even textbooks. Our review is not comprehensive and serves only to fix the notation. A standard computational task of lattice quantum field theory is to generate statistical samples of Euclidean time correlation functions, which are often written as vacuum-to-vacuum transition amplitudes \[C_{ab}(\vec{p},|t-t_{0}|)=\sum_{\vec{x}}e^{i\vec{p}\cdot(\vec{x}-\vec{x}_{0})}\left\langle 0\left|\mathcal{O}_{a}(\vec{x},t)\mathcal{O}^{\dagger}_{b}(\vec{x}_{0},t_{0})\right|0\right\rangle \tag{1}\] where we have made explicit use of translation invariance of the ensemble average so that it is clear the correlation function is independent of the spatial source location \(\vec{x}_{0}\) and only depends on the relative time separation of the source and sink \(|t-t_{0}|\). \(\mathcal{O}^{\dagger}_{b}(\vec{x}_{0},t_{0})\) is a creation operator that creates a quantum state with a given set of quantum numbers associated with a particular irreducible representation of the lattice Hamiltonian and \(\mathcal{O}_{a}(\vec{x},t)\) is an annihilation operator that annihilates a quantum state of a given set of quantum numbers. If the two sets of quantum numbers are not commensurate then the correlation is zero by symmetry. There are many possible operators that can create and annihilate a given set of quantum numbers, so we give them labels \(a,b\in[1,N]\) and the correlation function can be a \(\mathbb{C}^{N\times N}\) matrix in this operator space. A standard exercise is to expand this correlation function in a complete set of eigenstates of the lattice Hamiltonian. Associated with each such state is an energy eigenvalue \(E_{k}\) and eigenvector \(|k\rangle\). For simplicity, we have suppressed labels associated with other conserved quantum numbers. The correlation function is now written \[C_{ab}(\vec{p},|t-t_{0}|)=\sum_{k}z_{ak}(\vec{p})\ z_{bk}^{*}(\vec{p})\ e^{-E_{k}(\vec{p})\ |t-t_{0}|} \tag{2}\] but from here on we will suppress the dependence on momentum \(\vec{p}\) and freely assume time translation invariance. We will assume for each creation operator \(\mathcal{O}^{\dagger}_{a}\) a conjugate annihilation operator can be defined \(\mathcal{O}_{a}\) such that \(z_{ak}=\left\langle 0\left|\mathcal{O}_{a}\right|k\right\rangle\) and \(z_{ak}^{*}=\left\langle k\left|\mathcal{O}^{\dagger}_{a}\right|0\right\rangle\).
We will also assume that any operators that are not linearly independent have been removed from the operator basis and so \(z_{ak}\) can be viewed as elements of a matrix with full rank. In matrix notation, the correlation function can be written \(\mathbf{C}(t)=\mathbf{Z}\mathbf{\Lambda}^{t}\mathbf{Z}^{\dagger}\). Formally, the total number of eigenstates should be considered very much larger than the dimension of \(\mathbf{C}(t)\) that can be computed in a standard lattice calculation. Hence, the typical data analysis problem is to extract as much _reliable_ information as possible about the eigenvalue spectrum \(E_{k}\) and the matrix elements \(z_{ak}\) given a finite number of samples of the matrix function \(\mathbf{C}(t)\). Throughout the paper we use the following notation:
* \(C_{ab}(t)\): Scalar correlation function \(\in\mathbb{C}^{1\times 1}\).
* \(t\): Integer-spaced Euclidean time separations.
* \(\mathbf{C}(t)\), \(\mathbf{C}_{t}\): Matrix correlation function \(\in\mathbb{C}^{N\times N}\).
* \(N\): Dimension of operator basis.
* \(a,b,n\): Operator indices \(\in[1,N]\).
* \(L\): Exact polynomial order (could be \(\infty\)).
* \(K\): Assumed polynomial order.
* \(\ell\): Index \(\in[0,L-1]\) or \([0,K-1]\).
* \(k\): Index of states contributing to \(\mathbf{C}(t)\).
* \(\lambda_{k}\): \(\exp(-E_{k})\), energy contributing to \(\mathbf{C}(t)\).
* \(\Lambda\): Diagonal matrix of energies \(\in\mathbb{R}^{LN\times LN}\) or \(\mathbb{R}^{KN\times KN}\).
* \(z_{ak}\): \(\left\langle 0\left|\mathcal{O}_{a}\right|k\right\rangle\), amplitude contributing to \(\mathbf{C}(t)\).
* \(Z\): Matrix of amplitudes \(\in\mathbb{C}^{N\times LN}\) or \(\mathbb{C}^{N\times KN}\).
* \(\mathbf{\mathcal{C}}\): Companion matrix \(\in\mathbb{C}^{LN\times LN}\) or \(\mathbb{C}^{KN\times KN}\).
* \(\mathbf{P}_{\ell}\): Linear prediction matrices \(\in\mathbb{C}^{N\times N}\).
* \(p(M|D)\): Bayesian model probability [2].
* \(q\): Number of model parameters, usually \(LN(N+1)\) or \(KN(N+1)\).
### Generalized Eigenvalue Method Early in the development of lattice quantum field theory, Wilson recognized [3; 4] that correlation function data could be analyzed by truncating the expansion in Eq. (2) to a relatively small number of states and that the energies computed in such a fashion would be variational estimates. A simple method to analyze a scalar correlation function is called _effective energy_ \[\lambda_{\text{eff}}(t,t_{0})^{t-t_{0}}=e^{-E_{\text{eff}}(t,t_{0})\ (t-t_{0})}=\frac{C(t)}{C(t_{0})} \tag{3}\] Asymptotically, \(E_{\text{eff}}(t,t_{0})=E_{1}+\mathcal{O}(e^{-(E_{2}-E_{1})t})\) as \(t\rightarrow\infty\), where \(E_{1},E_{2}\) are the lowest and next-to-lowest energy eigenvalues, assuming the states created by the operators used in the construction have at least some overlap with the true ground state, \(\left\langle 0\left|\mathcal{O}_{a}\right|1\right\rangle\neq 0\). The method of using generalized eigenvalues to extract information about the spectrum of the Hamiltonian is well-studied in lattice QFT [5; 6; 7]. For simplicity of discussion, we will assume that only \(N\) states contribute to the Hamiltonian and that we have a complete basis of \(N\) operators so the matrix \(\mathbf{Z}\) is of full rank and invertible.
Generalized eigenvalues are computed by solving the following problem \[\lambda_{\text{eff},k}(t,t_{0})^{t-t_{0}}\ \mathbf{C}(t_{0})\ \mathbf{v}_{k}(t,t_{0})=\mathbf{C}(t)\ \mathbf{v}_{k}(t,t_{0}) \tag{4}\] In our simple case, if \(\mathbf{v}_{k}\) is the \(k\)-th column vector of \([\mathbf{Z}^{\dagger}]^{-1}\) then \(\lambda_{\text{eff},k}(t,t_{0})=e^{-E_{k}}\) and there will be \(N\) such solutions. Since the general case is that the number of Hamiltonian eigenstates is much larger than the dimension of the operator basis, the generalized eigenvalues \(\lambda_{\text{eff},k}(t,t_{0})\) will only be variational estimates and will depend on the choice \((t,t_{0})\). Thus, the generalized eigenvalue method can be viewed as an extension of the effective energy method to matrix-valued correlation functions. One approach to bounding the generalized eigenvalues is the _eigenvalue interlacing theorem_. Simply stated, if \(\mathbf{A}\in\mathbb{C}^{K\times K}\) is a Hermitian matrix with eigenvalues \(\alpha_{1}\geq\alpha_{2}\geq\cdots\geq\alpha_{K}\) and \(\mathbf{B}\in\mathbb{C}^{N\times N},N<K\) is also a Hermitian matrix with eigenvalues \(\beta_{1}\geq\cdots\geq\beta_{N}\) and an orthogonal projection matrix \(\mathbf{P}\in\mathbb{C}^{N\times K}\) exists such that \(\mathbf{P}\mathbf{A}\mathbf{P}^{\dagger}=\mathbf{B}\) then the eigenvalues interlace: \(\alpha_{k}\geq\beta_{k}\geq\alpha_{k+K-N}\). To apply these bounds to generalized eigenvalues requires \(\mathbf{C}(t_{0})\) to be invertible so the problem can be transformed to a standard eigenvalue problem. Then, \(\lambda_{k}\geq\lambda_{\text{eff},k}(t,t_{0})\geq\lambda_{k+K-N}\), which is true for any \((t,t_{0})\). Of course, if \(K\gg N\) then the right-hand bound is not very restrictive, but the more important bound is the one on the left, which indicates that the effective energies \(E_{\text{eff},k}\) are always over-estimates of the true energies. Asymptotic bounds for \(\lambda_{\text{eff},k}(t,t_{0})\) as \(t\rightarrow\infty\) are also known [7]. For fixed \(t_{0}\) as \(t\rightarrow\infty\)[6] \[E_{\text{eff},k}(t,t_{0}) = E_{k}+\mathcal{O}\left(e^{-\Delta E_{k}t}\right)\] \[\Delta E_{k} = \min_{m\neq k}\left|E_{m}-E_{k}\right|. \tag{5}\] If \(t_{0}\) is increased such that \(t/2\leq t_{0}\) as \(t\rightarrow\infty\) then \[E_{\text{eff},k}(t,t_{0})=E_{k}+\mathcal{O}\left(e^{-(E_{N+1}-E_{k})t}\right). \tag{6}\] It would be desirable to find a method that extends the generalized eigenvalue method to simultaneously use the values of matrix-valued correlation functions computed for more than two time separations. The approach we would like to consider is viewing the generalized eigenvalue method as a matrix polynomial [8] of linear degree and ask whether we can construct higher degree matrix polynomials using more time separations. A monic \(N\times N\) matrix polynomial of degree \(L\) is \[\mathbf{\mathcal{P}}_{L}(\lambda)=\mathbf{I}\lambda^{L}+\sum_{\ell=0}^{L-1}\mathbf{P}_{\ell}\lambda^{\ell},\ \mathbf{P}_{\ell}\in\mathbb{C}^{N\times N} \tag{7}\] Generally, \(\det\mathbf{\mathcal{P}}_{L}(\lambda)=0\) admits \(NL\) solutions \(\lambda_{k}\). If the matrix coefficient of \(\lambda^{L}\) is not the identity, as long as it is invertible, then the monic form can be constructed. For example, a first-order monic matrix polynomial can be derived from Eq.
(4) \[\mathbf{\mathcal{P}}_{1}(\lambda)=\mathbf{I}\lambda^{t-t_{0}}-\mathbf{Q}^{\dagger}{}^{-1} \ \mathbf{C}(t)\ \mathbf{Q}^{-1} \tag{8}\] where \(\mathbf{C}(t_{0})=\mathbf{Q}^{\dagger}\mathbf{Q}\) and \(\det\mathbf{\mathcal{P}}_{1}(\lambda_{k})=0\) has the same \(N\) solutions as Eq. (4). However, it is not sufficient to construct just any matrix polynomial. For example, we could add Eq. (4) twice to form \(2\mathbf{C}(t_{0}+2)\mathbf{v}=\lambda\mathbf{C}(t_{0}+1)\mathbf{v}+\lambda^{2}\mathbf{C}(t_{0})\mathbf{v}\), which would be equivalent to a second-order matrix polynomial. While this equation yields two solutions in the \(N=1\) case, we leave as an exercise for the reader to see that the solutions are not exact when only two states contribute to the correlation function [9]. So, our goal is to construct degree \(L\) matrix polynomials from the correlation function data that admit \(NL\) solutions and those solutions are the exact solutions when only \(NL\) states contribute. In particular, the matrix polynomials should be equivalent to the polynomials produced by Prony's method when \(N=1\), since those are known to produce exact solutions. ### Prony's (and related) Method Another extension of the effective energy method would be Prony's method, first discovered in 1795 [10], and since rediscovered many times in several related guises. The method extracts multiple effective energies by using a single correlation function at more than two time separations. The basic idea is to assume that the correlation function over a range of time separations can be well-modeled by a spectrum of \(L\) effective energies \(\lambda_{k}=\exp(-E_{k})\). This spectrum is used to construct a characteristic polynomial \[\mathcal{P}_{L}(\lambda)=\prod_{k=1}^{L}(\lambda-\lambda_{k})=\lambda^{L}+\sum_ {\ell=0}^{L-1}p_{\ell}\lambda^{\ell} \tag{9}\] whose _linear prediction_ coefficients \(p_{\ell}\) can be computed as functions of the input spectrum \(\lambda_{k}\). \(\mathcal{P}_{L}(\lambda_{k})=0\) can be solved \[\lambda_{k}^{t}=\lambda_{k}^{t-L}\lambda_{k}^{L}=-\lambda_{k}^{t-L}\sum_{\ell= 0}^{L-1}p_{\ell}\lambda_{k}^{\ell},\ t\geq L \tag{10}\] and used in combination with Eq. (2) for a scalar function with a compact notation, \(C_{t}=C(t)\) to write the linear system \[\left[\begin{array}{c}C_{L}\\ C_{L+1}\\ \vdots\\ C_{2L-1}\end{array}\right]=-\left[\begin{array}{ccc}C_{0}&\cdots&C_{L-1}\\ C_{1}&\cdots&C_{L}\\ \vdots&\ddots&\vdots\\ C_{L-1}&\cdots&C_{2L-2}\end{array}\right]\left[\begin{array}{c}p_{0}\\ p_{1}\\ \vdots\\ p_{L-1}\end{array}\right] \tag{11}\] The structured matrix shown here is commonly called a Hankel matrix even though Hankel's work in this area occurred nearly a century later. Prony's solution is to take the computed scalar correlation function data \(C_{t}\) and solve the linear system to find the unknown \(p_{\ell}\). Then, given the \(p_{\ell}\), the polynomial equation \(\mathcal{P}_{L}(\lambda)=0\) can be solved to find the \(\lambda_{k}\). 
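The two steps just described, solving the Hankel system of Eq. (11) for the \(p_{\ell}\) and then rooting the characteristic polynomial of Eq. (9), are compact enough to sketch directly. The following is a minimal numpy sketch, assuming noise-free data generated from exactly \(L\) states:

```python
import numpy as np

def prony_spectrum(C, L):
    """Scalar Prony's method: C holds the correlator values C_0 ... C_{2L-1}."""
    # Hankel system, Eq. (11): row i is [C_i, ..., C_{i+L-1}] for i = 0..L-1.
    H = np.array([[C[i + j] for j in range(L)] for i in range(L)])
    p = np.linalg.solve(H, -np.asarray(C[L:2 * L]))   # linear-prediction coefficients
    # Characteristic polynomial, Eq. (9): lambda^L + p_{L-1} lambda^{L-1} + ... + p_0.
    return np.roots(np.concatenate(([1.0], p[::-1])))  # the lambda_k = exp(-E_k)
```

On such data the recovered roots reproduce the input \(\lambda_{k}\) to machine precision; with statistical noise, both the Hankel solve and the root-finding amplify fluctuations, the usual caveat of Prony-type methods.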
Once the solution for the \(\lambda_{k}\) has been computed, the coefficients can be determined by solving another structured linear system \[\left[\begin{array}{c}C_{0}\\ C_{1}\\ C_{2}\\ \vdots\\ C_{2L-1}\end{array}\right]=\left[\begin{array}{cccc}1&1&\cdots&1\\ \lambda_{1}&\lambda_{2}&\cdots&\lambda_{L}\\ \lambda_{1}^{2}&\lambda_{2}^{2}&\cdots&\lambda_{L}^{2}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda_{1}^{2L-1}&\lambda_{2}^{2L-1}&\cdots&\lambda_{L}^{2L-1}\end{array} \right]\left[\begin{array}{c}a_{1}\\ a_{2}\\ \vdots\\ a_{L}\end{array}\right] \tag{12}\] using singular value decomposition (SVD). The structured matrix here is called a Vandermonde matrix. Vandermonde, at least, was a contemporary of Prony but it's not clear the connection between their work was recognized at the time. There have been numerous approaches to combining data from multiple matrix elements but we have yet to see an approach that fully exploits the structure of the matrices. In particular, as the scalar Prony method involves solving a polynomial equation to find the spectrum, we anticipate that a fully structured approach to the matrix problem will involve solving a matrix polynomial. In preparation for constructing a block Prony method, we see how a naive application of linear prediction applied element-by-element to correlator matrices fails in a case where only two states contribute to the correlation function \[C_{ab}(t)=\lambda_{1}^{t}z_{a1}z_{b1}^{*}+\lambda_{2}^{t}z_{a2}z_{b2}^{*} \tag{13}\] Now, use \(L=2\) linear prediction \[C_{ab}(2) = -(p_{1}\lambda_{1}+p_{0})z_{a1}z_{b1}^{*}-(p_{1}\lambda_{2}+p_{0} )z_{a2}z_{b2}^{*} \tag{14}\] \[= -p_{1}C_{ab}(1)-p_{0}C_{ab}(0)\] \[\mathbf{C}_{t} = -\mathbf{C}_{t-1}(p_{1}\mathbf{I})-\mathbf{C}_{t-2}(p_{0}\mathbf{I}) \tag{15}\] Substituting Eq. (15) into Eq. (11) reveals the problem: solving the block linear system does not guarantee the resulting linear prediction matrices \(\mathbf{P}_{0}\), \(\mathbf{P}_{1}\) are proportional to the identity matrix or even have degenerate eigenvalues. ## II Block Prony Method As previously mentioned, there are a number of previous attempts to incorporate matrix-valued correlation functions into Prony's method, or other algebraically-related methods [11; 12; 13; 14]. We will name this method the _Block Prony Method_ (BPM) as we are not aware of any other method called by this name. To demonstrate the construction, we will assume we are given \(2L\) Hermitian positive definite correlation matrices \(\mathbf{C}_{t}\in\mathbb{C}^{N\times N}\) computed at equally-spaced time separations \(t\in[0,2L-1]\). We will also assume that precisely \(NL\) energies contribute to the correlation matrices, \(\lambda_{k}=\exp(-E_{k})\) and \(k\in[1,NL]\), according to Eq. (2). A matrix \(\mathbf{C}\in\mathbb{C}^{NL\times NL}\), called the second companion matrix, yields the correct characteristic polynomial \(\det[\lambda\mathbf{I}-\mathbf{C}]\propto\prod_{k=1}^{NL}(\lambda-\lambda_{k})\). In terms of \(\mathbb{C}^{N\times N}\) blocks \[\mathbf{C}=\left[\begin{array}{cccc}0&0&\cdots&0&-\mathbf{P}_{0}\\ \mathbf{I}&0&\cdots&0&-\mathbf{P}_{1}\\ 0&\mathbf{I}&\ddots&\vdots&-\mathbf{P}_{2}\\ \vdots&\ddots&\ddots&0&\vdots\\ 0&\cdots&0&\mathbf{I}&-\mathbf{P}_{L-1}\end{array}\right] \tag{16}\] and the matrix polynomial \(\mathbf{\mathcal{P}}_{L}(\lambda)\) of Eq. (8) using these blocks satisfies the same characteristic equation. To derive the block equivalent of Eq. 
(10), we consider an eigenvector \(\mathbf{v}_{k}\) of \(\mathbf{\mathcal{C}}\) partitioned into \(L\) \(\mathbb{C}^{N\times 1}\) blocks \[\lambda_{k}\left[\begin{array}{c}\mathbf{v}_{k,0}\\ \mathbf{v}_{k,1}\\ \vdots\\ \mathbf{v}_{k,L-1}\end{array}\right]=\left[\begin{array}{cccc}0&0&\cdots&-\mathbf{P}_{0}\\ \mathbf{I}&\ddots&\cdots&-\mathbf{P}_{1}\\ \vdots&\ddots&0&\vdots\\ 0&\cdots&\mathbf{I}&-\mathbf{P}_{L-1}\end{array}\right]\left[\begin{array}{c}\mathbf{v}_{k,0}\\ \mathbf{v}_{k,1}\\ \vdots\\ \mathbf{v}_{k,L-1}\end{array}\right] \tag{17}\] which can also be written \[\lambda_{k}\mathbf{v}_{k,0} = -\mathbf{P}_{0}\mathbf{v}_{k,L-1} \tag{18}\] \[\lambda_{k}\mathbf{v}_{k,\ell} = \mathbf{v}_{k,\ell-1}-\mathbf{P}_{\ell}\mathbf{v}_{k,L-1}. \tag{19}\] These equations can be reduced by substitution to \[\lambda_{k}^{L}\mathbf{v}_{k,L-1}=\left(-\sum_{\ell=0}^{L-1}\lambda_{k}^{\ell}\mathbf{P}_{\ell}\right)\mathbf{v}_{k,L-1} \tag{20}\] Given our assumptions, this can be repeated for any linear combination of eigenvectors of \(\mathbf{\mathcal{C}}\), which enables us to extend Eq. (11) to block form \[\left[\begin{array}{c}\mathbf{C}_{L}\\ \mathbf{C}_{L+1}\\ \vdots\\ \mathbf{C}_{2L-1}\end{array}\right]=-\left[\begin{array}{ccc}\mathbf{C}_{0}&\cdots&\mathbf{C}_{L-1}\\ \mathbf{C}_{1}&\cdots&\mathbf{C}_{L}\\ \vdots&\ddots&\vdots\\ \mathbf{C}_{L-1}&\cdots&\mathbf{C}_{2L-2}\end{array}\right]\left[\begin{array}{c}\mathbf{P}_{0}\\ \mathbf{P}_{1}\\ \vdots\\ \mathbf{P}_{L-1}\end{array}\right] \tag{21}\] Now, the algorithm is clear. Given the \(\mathbf{C}_{t}\), solve Eq. (21) to find the \(\mathbf{P}_{\ell}\) and then construct the second companion matrix \(\mathbf{\mathcal{C}}_{L}\) and solve for the eigenvalues \(\lambda_{k}\). As an aside, if more than \(2L\) equally-spaced \(\mathbf{C}_{t}\) are available, additional rows could be appended to the block Hankel in Eq. (21) while keeping the number of columns fixed. This transforms the problem into an overconstrained system that could be analyzed by least squares using SVD. ## III Bilinear system (BLS) of equations Once the \(\lambda_{k}\) have been computed following the block Prony method of the previous section, it is desirable to use the given \(\mathbf{C}_{t}\) and the previously computed \(\lambda_{k}\) to compute the coefficients \(z_{ak}\) of Eq. (2). In the scalar case, this can be done by solving the Vandermonde system Eq. (12), but in the matrix case the coefficients appear in pairs, leading to a bilinear system (BLS) of equations. A summary of current approaches for solving BLS problems can be found in [15]. First, let's count the number of data inputs and the number of free parameters to see whether such a solution can exist. Given \(2L\) Hermitian \(\mathbf{C}(t)\), there are \(2LN(N+1)/2=LN(N+1)\) inputs, \(LN\) eigenvalues \(\lambda_{k}\) and \(LN^{2}\) matrix elements \(z_{nk}\). So the equal number of inputs and free parameters suggests a solution is possible. If we naively construct the block Vandermonde system analogous to Eq. (12) \[\left[\begin{array}{c}\mathbf{C}_{0}\\ \mathbf{C}_{1}\\ \vdots\\ \mathbf{C}_{2L-1}\end{array}\right]=\left[\begin{array}{ccc}\mathbf{I}&\cdots&\mathbf{I}\\ \lambda_{1}\mathbf{I}&\cdots&\lambda_{LN}\mathbf{I}\\ \vdots&\ddots&\vdots\\ \lambda_{1}^{2L-1}\mathbf{I}&\cdots&\lambda_{LN}^{2L-1}\mathbf{I}\end{array}\right]\left[\begin{array}{c}\mathbf{A}_{1}\\ \vdots\\ \mathbf{A}_{LN}\end{array}\right] \tag{22}\] the input data is \(\mathbb{C}^{2LN\times N}\), the solution is \(\mathbb{C}^{LN^{2}\times N}\).
So, a unique solution exists by SVD if \(N\leq 2\) and for \(N=2\) the \(\mathbf{A}_{k}\) should be rank-1. In our numerical tests we have found that the \(N=2\) solution is always rank-1, perhaps because our test function always started from a rank-1 construction, but we don't have a proof that it should always be the case. For \(N\geq 3\), the system is underconstrained so an infinite number of general-rank solutions exist, but it is still possible to find a unique rank-1 solution [15]. In our particular case, where the number of equations equals the number of unknowns, it should be possible to construct a solution where one unknown is solved for each matrix element and the result used to eliminate that unknown from the remaining system of equations by substitution. For example, start with \[C_{12}(0)=\sum_{k=1}^{LN}z_{1k}z_{2k}^{*} \tag{23}\] and solve for \(z_{11}\) \[z_{11}=\frac{1}{z_{21}^{*}}\left[C_{12}(0)-\sum_{k=2}^{LN}z_{1k}z_{2k}^{*}\right] \tag{24}\] Next, write the equation for \(C_{13}(0)\), eliminate \(z_{11}\) by substitution \[C_{13}(0)=\frac{z_{31}^{*}}{z_{21}^{*}}\left[C_{12}(0)-\sum_{k=2}^{LN}z_{1k}z_{2k}^{*}\right]+\sum_{k^{\prime}=2}^{LN}z_{1k^{\prime}}z_{3k^{\prime}}^{*} \tag{25}\] and then solve for \(z_{12}\), _etc_. The challenge of this approach is that the reduced set of equations will become increasingly higher polynomial order in the remaining unknowns. Still, with a sufficiently small number of unknowns, it should be possible for a symbolic math program like Mathematica to find a solution. Another possibility would be to increase the amount of input data to \(LN\) equally spaced \(\mathbf{C}_{t}\) and form an overconstrained Hankel system Eq. (21) to extract using SVD a least-squares estimate of the same \(L\) \(\mathbf{P}_{\ell}\) coefficient matrices. Now, a square Vandermonde system Eq. (22) can be formed for any \(N\) and solved uniquely. There is no guarantee that the \(\mathbf{A}_{k}\) matrices found this way are rank-1. Also, the amount of input data exceeds the number of free parameters of a rank-1 solution, so even if a rank-1 solution is constructed it should be considered, at best, a least-squares solution. Another consideration is that in Euclidean lattice field theory calculations it is often computationally less expensive to increase \(N\) than it is to increase the number of available times \(t\). It is even worse if one considers that the signal-to-noise of such correlation matrices typically decreases exponentially with increasing time separations [16; 17]. In the absence of an exact method, the method we will advocate is: given \(\mathbf{C}_{t}\) and \(\lambda_{k}\) from the block Prony method, use non-linear least-squares minimization to find the solution for the \(z_{nk}\). If the numerical residual is sufficiently close to zero, we will take that as an indication that we have found the unique rank-1 solution. Setting up the minimization is not hard since the gradients and Hessian of the residual with respect to \(z_{nk}\) are easy to compute. ## IV Error-free examples First we will consider an \(N=2\) example of \(L=6\) polynomial order. We choose the spectrum to be equally-spaced in the interval \(0<\lambda_{k}<1\) \[\lambda_{k}=\frac{2L+1-k}{2L+1}\in\left\{\frac{12}{13},\frac{11}{13},\cdots,\frac{1}{13}\right\}. \tag{26}\]
The matrix elements \(z_{nk}\) are drawn at random in Mathematica using SeedRandom[11] and Transpose[RandomInteger[{-9,9},{\(LN\),\(N\)}]] \[\mathbf{Z}=\left[\begin{array}{cccccccccccc}-3&-1&6&-3&6&4&4&-6&6&0&-6&6\\ -3&2&-8&-5&6&5&-6&6&9&-1&-7&4\end{array}\right] \tag{27}\] \(\det\mathbf{Z}\mathbf{Z}^{\dagger}=93894\) confirms the linear independence of the rows. With these inputs \[\mathbf{C}_{0}=\left[\begin{array}{cc}267&90\\ 90&382\end{array}\right] \tag{28}\] \[\mathbf{C}_{1}=\left[\begin{array}{cc}\frac{1488}{13}&13\\ 13&\frac{2317}{13}\end{array}\right] \tag{29}\] and so forth. The reader can verify with these data that the block Prony method reproduces the spectrum and the solution to Eq. (22) reproduces the correct rank-1 solution \(\mathbf{Z}\). As mentioned previously, we do not have a proof for \(N=2\) that the solution will be rank-1 but it is the case in this example and all other random examples we tried for various sizes \(L\). Next, we consider an \(N=3\) example of \(L=4\) polynomial order so we can use the same input spectrum as Eq. (26). The matrix elements \(z_{nk}\) are drawn at random in Mathematica using SeedRandom[13] and Transpose[RandomInteger[{-9,9},{\(LN\),\(N\)}]] \[\mathbf{Z}=\left[\begin{array}{cccccccccccc}5&1&8&4&0&2&-8&6&-4&8&5&-7\\ 6&5&1&-7&4&8&0&-6&1&-7&-2&-5\\ 6&-1&0&2&8&-5&-6&-4&1&-7&-9&9\end{array}\right] \tag{30}\] \(\det\mathbf{Z}\mathbf{Z}^{\dagger}=38,448,718\) confirms the linear independence of the rows. With these inputs \[\mathbf{C}_{0}=\left[\begin{array}{ccc}364&-40&-117\\ -40&306&56\\ -117&56&394\end{array}\right] \tag{31}\] \[\mathbf{C}_{1}=\left[\begin{array}{ccc}\frac{2042}{13}&\frac{6}{13}&14\\ \frac{6}{13}&\frac{2098}{13}&\frac{489}{13}\\ 14&\frac{489}{13}&\frac{1856}{13}\end{array}\right] \tag{32}\] and so forth. Again, the reader can verify the block Prony method reproduces the spectrum. Eq. (22) cannot be used to compute the amplitudes as it is under-determined. If we define the components of the residual vector from Eq. (2) \[r_{abt}=C_{ab}(t)-\sum_{k=1}^{LN}z_{ak}z_{bk}^{*}\lambda_{k}^{t} \tag{33}\] then minimizing the squared norm of the residual vector \[r^{2}=\sum_{a=1}^{N}\sum_{b\geq a}\sum_{t=0}^{2L-1}|r_{abt}|^{2} \tag{34}\] using Mathematica's NMinimize[] with 30 digits of precision finds \(r\approx 0.00089\) and \[\mathbf{Z}\approx\left[\begin{array}{cccccccccccc}5.0&-0.94&-8.0&-4.2&-0.096&1.8&7.6&-7.1&-2.3&-8.1&5.2&-7.0\\ 6.0&-5.0&-1.1&6.8&4.6&8.0&0.48&5.0&4.2&6.3&-2.2&-5.0\\ 6.0&0.95&0.12&-2.3&7.8&-5.3&6.1&3.0&3.2&6.5&-9.2&9.0\end{array}\right] \tag{35}\] whereas computing the residual using the computed eigenvalues and the correct amplitudes from Eq. (30) gives zero to machine precision. Clearly finding the global minimum of the residual will be a challenging optimization problem. ## V Variational analysis In practical applications, the number of eigenstates exceeds the number of operators that is typically constructed, so any eigenvalues computed using the block Prony method (BPM) will, at best, be variational estimates. In Sec. I.1 we discussed various bounds and asymptotic limits of generalized eigenvalues which confirm their variational nature. We would like to test the BPM eigenvalues in a similar fashion. We continue to consider an error-free correlation function constructed from 12 states as in Eq. (26) and generate various combinations of \(LN=12\). We then assume the polynomial order is \(K=L-1\) and compute \(KN\) BPM eigenvalues for \(t_{\min}=t_{\max}-2K+1\), \(t\in[t_{\min},t_{\max}]\).
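The BPM computation used for these scans follows Sec. II directly. A minimal numpy sketch (here the assumed order \(K\) plays the role of \(L\), the input matrices must be supplied in time order, and statistical noise is ignored) is:

```python
import numpy as np

def block_prony(Cs, L, N):
    """Block Prony method: Cs is a time-ordered sequence of 2L N x N correlator matrices."""
    # Block Hankel system, Eq. (21): solve for the stacked prediction matrices P_0..P_{L-1}.
    H = np.block([[Cs[i + j] for j in range(L)] for i in range(L)])   # (LN) x (LN)
    P = np.linalg.solve(H, -np.vstack(Cs[L:2 * L]))                   # (LN) x N
    # Second companion matrix, Eq. (16): identity blocks below the block diagonal
    # and -P_0, ..., -P_{L-1} in the last block column.
    comp = np.zeros((L * N, L * N), dtype=complex)
    comp[N:, :-N] = np.eye((L - 1) * N)
    comp[:, -N:] = -P
    return np.linalg.eigvals(comp)   # the LN eigenvalue estimates lambda_k
```

Sweeping the window \([t_{\min},t_{\max}]\) over the error-free correlator and passing the corresponding matrices to this routine yields the effective energies analyzed next.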
For any time range, the eigenvalues are always bounded by the interlacing theorem. They also approach the true values as \(t_{\max}\to\infty\). In Figs. 1, 2, 3 and 4 we compute the exponential decay rate of the error of the effective energies and compare with the asymptotic estimate in Eq. (6). In all cases, the decay rate of the error approaches the estimate. In the figures we plot a vertical line where points to the right satisfy \(t_{\min}>t_{\max}/2\). In [7], this was considered to be a necessary condition for the asymptotic decay rate to dominate. In the figures, it seems that this is not always sufficient, particularly if it occurs at small time values. From this study, we conclude that the BPM eigenvalues are natural extensions of the generalized eigenvalues to higher polynomial order and are variational estimates of the true eigenvalues with well-defined asymptotic behavior. ## VI Model averaging Following the Bayesian model averaging proposal of [2], each candidate model \(M\) is assigned a probability \(p(M|D)\propto e^{-(\chi^{2}+2q)/2}\), where \(q\) is the number of model parameters and the \(\chi^{2}\) statistic is computed from residuals of Eq. (33) and some estimate of the inverse of the data covariance matrix in the usual way. In the sense described in [18], the full set of BPM parameters describes a "perfect" model in that \(\chi^{2}=0\). Even though the BPM model describes the data perfectly, it is unlikely to maximize \(p(M|D)\). To see this, imagine a state \(k\) encodes mostly noise. Removing parameters \(\lambda_{k}\) and \(z_{nk}\) will reduce \(2q\) by \(2(N+1)\) but will not increase \(\chi^{2}\) by anywhere near the same amount due to the noise. Thus, the model with state \(k\) removed will be more probable than the full model. So, our heuristic in dealing with noisy data will be to partition the BPM parameters into states which are mostly signal and states which are mostly noise, by finding the subset of states \(k\) which maximizes the model probability. We will describe this process more completely in a future work. ## VII Conclusion In this work, we have noted that the generalized eigenvalue method (GEM) is an extension of the usual effective energy method to Hermitian matrix-valued correlation functions and can be reformulated as a first-order matrix polynomial. We then extend GEM to higher-order matrix polynomials by finding the generalization of Prony's method to Hermitian matrix-valued correlation functions, which we call the block Prony method (BPM). The companion matrix plays a central role as it enables us to understand the space of solutions of the matrix polynomial as the eigenspace of the companion matrix. We provided some error-free numerical examples, including a demonstration that the BPM eigenvalues are variational estimates with the expected asymptotic error bounds [7]. We also provided some comments on our future plans for applying the BPM to the analysis of data with noise.
2309.13773
GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust Parameters of Unseen Limited Precision Neural Networks
Graph Hypernetworks (GHN) can predict the parameters of varying unseen CNN architectures with surprisingly good accuracy at a fraction of the cost of iterative optimization. Following these successes, preliminary research has explored the use of GHNs to predict quantization-robust parameters for 8-bit and 4-bit quantized CNNs. However, this early work leveraged full-precision float32 training and only quantized for testing. We explore the impact of quantization-aware training and/or other quantization-based training strategies on quantized robustness and performance of GHN predicted parameters for low-precision CNNs. We show that quantization-aware training can significantly improve quantized accuracy for GHN predicted parameters of 4-bit quantized CNNs and even lead to greater-than-random accuracy for 2-bit quantized CNNs. These promising results open the door for future explorations such as investigating the use of GHN predicted parameters as initialization for further quantized training of individual CNNs, further exploration of "extreme bitwidth" quantization, and mixed precision quantization schemes.
Stone Yun, Alexander Wong
2023-09-24T23:01:00Z
http://arxiv.org/abs/2309.13773v1
# GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust Parameters of Unseen Limited Precision Neural Networks ###### Abstract Graph Hypernetworks (GHN) can predict the parameters of varying unseen CNN architectures with surprisingly good accuracy at a fraction of the cost of iterative optimization. Following these successes, preliminary research has explored the use of GHNs to predict quantization-robust parameters for 8-bit and 4-bit quantized CNNs. However, this early work leveraged full-precision float32 training and only quantized for testing. We explore the impact of quantization-aware training and/or other quantization-based training strategies on quantized robustness and performance of GHN predicted parameters for low-precision CNNs. We show that quantization-aware training can significantly improve quantized accuracy for GHN predicted parameters of 4-bit quantized CNNs and even lead to greater-than-random accuracy for 2-bit quantized CNNs. These promising results open the door for future explorations such as investigating the use of GHN predicted parameters as initialization for further quantized training of individual CNNs, further exploration of "extreme bitwidth" quantization, and mixed precision quantization schemes. ## 1 Introduction Low-bit neural networks that use limited precision quantization for inference [1; 2] can make state-of-the-art models much more practical. However, perturbations induced by quantization of weights and activations can change DNN behaviour in non-trivial ways. In some cases, state-of-the-art performance can degrade significantly after quantization of weights and activations [3]. So how can we find highly performant, quantization-robust CNN parameters? Recent works by [4] and [5] have shown remarkable performance using Graph Hypernetworks (GHN) to predict all trainable parameters of unseen DNNs in a _single forward pass_. Preliminary research in [6] has explored the use of GHNs to predict quantization-robust parameters for 8-bit and 4-bit quantized CNNs. However, this work trained GHNs to predict parameters for full-precision float32 candidate CNNs and only quantized them for testing. Building on this, we explore quantization-specific training and find that quantization-aware training of GHNs (which we refer to as GHN-QAT) can significantly improve quantized accuracy for GHN predicted parameters of 4-bit quantized CNNs and even achieve greater-than-random accuracy for 2-bit CNNs. More specifically, we simulated quantization (SimQuant) in sampled CNNs such that GHN-QAT adapts to the quantization errors induced by quantizing GHN-predicted models (see Fig. 1). By finetuning GHNs on a mobile-friendly, **quantized** CNN architecture space, GHNs learn representations specifically for efficient quantized CNNs. ## 2 Experiment We first investigate SimQuant-based quantization training (commonly referred to as quantization-aware training, or QAT) on a target design space for limited precision quantization. We evaluate a few low bit-width settings (W4/A4, W4/A8, and W2/A2, where W indicates weight bitwidth and A indicates activation bitwidth). Using SimQuant for W2/A2 proved to be unstable, and we found that modelling quantization as uniform noise (NoiseQuant) led to much better results. The reported W2/A2 results are from training with NoiseQuant where the sampling distribution is computed based on 2-bit precision. In all cases, GHN-QAT training is precision/bitwidth-specific.
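To make the two training modes concrete, the following is a minimal PyTorch sketch of tensorwise, asymmetric, uniform fake-quantization with a straight-through estimator (how we model SimQuant here) and of uniform-noise injection for NoiseQuant; the noise width of one LSB centered at zero is an illustrative assumption, not the paper's exact sampling distribution:

```python
import torch

def simquant(x, bits):
    """Simulated (fake) quantization: tensorwise, asymmetric, uniform."""
    qmax = 2 ** bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / qmax
    zero_point = torch.round(-x.min() / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, 0, qmax)
    x_hat = (q - zero_point) * scale
    # Straight-through estimator: quantized values in the forward pass,
    # identity gradient in the backward pass.
    return x + (x_hat - x).detach()

def noisequant(x, bits):
    """Model quantization error as additive uniform noise in [-LSB/2, LSB/2)."""
    lsb = (x.max() - x.min()).clamp(min=1e-8) / (2 ** bits - 1)
    return x + (torch.rand_like(x) - 0.5) * lsb
```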
Encoding bitwidth into the CNN graph could potentially remove the need for bit-width specific finetuning. We finetuned a CIFAR-10, DeepNets-1M pretrained GHN-2 model obtained from [7] on the ConvNets-250K graph dataset from [6]. We use tensorwise, asymmetric, uniform quantization throughout. Figure 1 shows how GHN-QAT is finetuned to predict efficient, quantization-robust CNNs. GHN-QAT was finetuned on ConvNets-250K for 100 epochs using CIFAR-10 [8]. We follow a testing procedure similar to [4], [6] and evaluate GHN-QAT by comparing the mean CIFAR-10 test accuracy at the stated target precisions. Table 1 shows the top-1 and top-5 accuracy results on different testing splits. To establish the benefits of QAT, we also include results from [6] where the authors used full-precision float32 training and only quantized CNNs for testing. In [6], W2/A2 degraded to random accuracy. We use weight-tiling and normalization as described in [4] and use \(s^{(max)}=10\) for max shortest path of virtual edges.

Figure 1: GHN-QAT finetuned on ConvNets-250K. We generate a large number of CNN graphs which are then quantized to target bitwidth for training. Once trained, GHN-QAT can predict robust parameters for unseen CNNs.

## 3 Discussion As demonstrated, we can easily simulate quantization of CNNs to arbitrary precision. Furthermore, we could even model other scalar quantization methods. Thus, GHN-QAT becomes a powerful tool for quantization-aware design of efficient CNN architectures. The parameters predicted by GHN-QAT are remarkably robust and the QAT finetuning results (see Table 1) show a significant improvement over the full-precision float32 finetuning used in [6]. This shows a clear benefit to adapting GHNs specifically to predict parameters for quantization-aware graphs. Additional possibilities/challenges of leveraging quantization-aware training, such as learned quantization thresholds or reducing QAT oscillations like in [9], should be explored to further improve GHN-QAT, especially for "extreme" low bitwidths. It's possible that such improvements to QAT would make SimQuant more stable for 2-bit quantization. From GHN-QAT, we can see that introducing quantization into our GHN training allows for greater use of GHNs for quantization-specific neural network parameter prediction. Besides leveraging GHN-QAT for quantized versions of floating point operations, we should be able to encode quantization information such as bit-width and quantization scheme into the graphs. If used as a form of quantized accuracy prediction, GHN-QAT could greatly accelerate the process of searching for accurate, quantized CNNs. Additionally, GHN-QAT could be a useful weight initialization for quantized CNN training. The authors of [10] found noticeable differences in quantized accuracy of CNNs depending on their initialization. Thus, a quantization-aware weight initialization could be more robust than random initialization. If GHN-QAT-predicted parameters can be used as initialization for quantization-aware training rather than first training models to convergence in full float precision before additional QAT, then the training time of quantized models would be significantly reduced. Unfortunately, GHNs do not yet match the accuracy of iterative optimization and further improvements should be explored to bridge this gap. However, the aforementioned benefits could still greatly reduce the costs of designing quantized CNNs.
2309.06418
C4CAM: A Compiler for CAM-based In-memory Accelerators
Machine learning and data analytics applications increasingly suffer from the high latency and energy consumption of conventional von Neumann architectures. Recently, several in-memory and near-memory systems have been proposed to remove this von Neumann bottleneck. Platforms based on content-addressable memories (CAMs) are particularly interesting due to their efficient support for the search-based operations that form the foundation for many applications, including K-nearest neighbors (KNN), high-dimensional computing (HDC), recommender systems, and one-shot learning among others. Today, these platforms are designed by hand and can only be programmed with low-level code, accessible only to hardware experts. In this paper, we introduce C4CAM, the first compiler framework to quickly explore CAM configurations and to seamlessly generate code from high-level TorchScript code. C4CAM employs a hierarchy of abstractions that progressively lowers programs, allowing code transformations at the most suitable abstraction level. Depending on the type and technology, CAM arrays exhibit varying latencies and power profiles. Our framework allows analyzing the impact of such differences in terms of system-level performance and energy consumption, and thus supports designers in selecting appropriate designs for a given application.
Hamid Farzaneh, João Paulo Cardoso de Lima, Mengyuan Li, Asif Ali Khan, Xiaobo Sharon Hu, Jeronimo Castrillon
2023-09-12T17:30:34Z
http://arxiv.org/abs/2309.06418v1
# C4CAM: A Compiler for CAM-based In-memory Accelerators ###### Abstract Machine learning and data analytics applications increasingly suffer from the high latency and energy consumption of conventional von Neumann architectures. Recently, several in-memory and near-memory systems have been proposed to remove this _von Neumann_ bottleneck. Platforms based on _content-addressable memories_ (CAMs) are particularly interesting due to their efficient support for the search-based operations that form the foundation for many applications, including K-nearest neighbors (KNN), high-dimensional computing (HDC), recommender systems, and one-shot learning among others. Today, these platforms are designed by hand and can only be programmed with low-level code, accessible only to hardware experts. In this paper, we introduce C4CAM, the first compiler framework to quickly explore CAM configurations and to seamlessly generate code from high-level TorchScript code. C4CAM employs a hierarchy of abstractions that progressively lowers programs, allowing code transformations at the most suitable abstraction level. Depending on the type and technology, CAM arrays exhibit varying latencies and power profiles. Our framework allows analyzing the impact of such differences in terms of system-level performance and energy consumption, and thus supports designers in selecting appropriate designs for a given application. Content addressable memories (CAM), compute in memory (CIM), TCAM, MLIR ## I Introduction Search operations come in numerous forms at the heart of many comparison-intensive applications. In the past decade, the revolution in machine learning, data analytics, and bioinformatics has played a significant role in driving the demand for efficient hardware acceleration of these operations. Domains such as network security [1], bioinformatics [2], data mining and data analytics [3] heavily rely on _exact matching_ of the query pattern with pre-stored patterns. In other applications, such as K-nearest neighbors (KNN) and genome analysis [4, 5], the emphasis lies on identifying similarities rather than exact pattern matching. In approximate search, when the dissimilarity between a stored pattern and the query pattern is within a predefined threshold, the stored pattern is regarded as a "match". From the computational standpoint, both exact and approximate search operations are time-consuming and are often bottlenecks in comparison-intensive kernels [6]. Recently, there has been a surge in the adoption of _content addressable memory_ (CAM)-based system designs for efficient search operations. CAMs were originally used in network routing and CPU caching [7], and have since found applications in a wider range of data-intensive domains [4, 5, 8]. CAMs allow massively parallel search operations for an input query, enabling the search to be performed across the entire memory with a single operation. CAM's high-speed parallel search makes it a popular component for constructing cutting-edge _compute-in-memory_ (CIM) systems, aiming to address the von Neumann bottleneck in terms of both latency and energy consumption. CAM designs are broadly classified into binary, ternary, multi-state, and analog CAMs (BCAM, TCAM, MCAM, ACAM, respectively), with implementations based on either conventional CMOS or emerging non-volatile memory (NVM) technologies [4, 6, 9, 10].
Compared to CMOS technologies, NVM technologies, like magnetic RAM (MRAM), resistive RAM (ReRAM) or ferroelectric FETs (FeFETs), are denser and more energy efficient, yielding more efficient CAM arrays [11, 12, 13]. BCAMs and TCAMs use a bit-wise Hamming distance (HD) to compare the query and stored data, whereas MCAMs and ACAMs apply a specific distance metric to compare the query with memory entries and determine which memory entries match the query based on the distance metric. In terms of match types, CAMs can be classified into exact match (EX), best match (BE), and threshold match (TH) [13]. Although CAM designs have shown better performance than traditional methods for computing similarity in many domains [4, 5, 8], effectively mapping applications written in high-level programming languages onto CAM-based accelerators remains a challenge. This is due to the disparity in the abstractions of the applications (high-level) and the (low-level) set of commands needed to program the CAM arrays. Presently, CAM arrays are programmed manually with low-level code that only the device experts understand. Existing design automation and compilation tools for in-memory computing [14, 15] do not provide support for CAM primitives, highlighting the need for solutions that can support mapping a wider range of applications and accelerate the design process. This paper proposes C4CAM, the first end-to-end automated framework that enables efficient mapping of applications from high-level TorchScript code onto CAM arrays. C4CAM leverages the multi-level intermediate representation (MLIR) framework to seamlessly optimize and offload comparison-intensive kernels to CAM-enabled systems. Concretely, we make the following contributions: * An automated end-to-end compilation flow that (i) makes CAM accelerators accessible to non-experts and (ii) enables device/circuit/architecture experts to explore design trade-offs. C4CAM takes applications written in TorchScript along with an architectural model for retargetability and generates code for the given architecture (see Section III). * An extension to the MLIR front-end to express search operations in PyTorch applications (see Section III-C). * We extend the CIM abstraction from [16] to cater for CAM accelerators. Specifically, we propose analyses to detect computational primitives in applications that can be rewritten as search operations (see Section III-D1). * A novel CAM abstraction in MLIR that supports different CAM types and search operations (see Section III-D2). * Transformation passes to optimize for latency, power, and array utilization (see Section III-D2). * A comprehensive evaluation of the generated code, including validation and comparison to a GPU target and the hand-crafted implementations (see Section IV). Our evaluation of C4CAM demonstrates that the generated code achieves comparable results to hand-crafted designs. We also showcase the capabilities of C4CAM in performing design space exploration on different CAM architectures. ## II Background and related work This section presents background on the MLIR framework and CAM-based structures and describes our proposed architecture. It also motivates the need for automatic compilation tools by explaining the challenges in the state-of-the-art programming models for CAMs.
### _MLIR compiler infrastructure_ MLIR is a framework that enables representing and transforming intermediate representations (IR) at various abstraction levels, catering to diverse application domains and heterogeneous hardware targets [17]. It offers a customizable IR, with minimal built-in features, enabling compiler developers to incorporate their own abstractions. This empowers them to optimize for specific domains or targets by leveraging matching techniques at the appropriate levels of abstraction. MLIR consists of a collection of reusable abstractions organized into _dialects_. Each dialect incorporates custom types, operations, and attributes, which serve as fundamental building blocks of the IR. In MLIR, values are associated with compile-time known types, while attributes provide compile-time information linked to operations. Dialects in MLIR maintain preconditions for transformation validity within their IR, reducing the complexity and cost of analysis passes. Dialects are typically designed for specific domains (e.g., linalg for linear algebra, TOSA for tensor operations), representations (e.g., affine for the polyhedral model, scf for control flow), or targets (e.g., gpu, cim). The llvm dialect models LLVM IR constructs. Abstractions in MLIR can be progressively lowered (from high-level domain-specific dialects to low-level platform-specific dialects) and raised [18]. ### _Content addressable memories_ Content addressable memories (CAMs) are one type of CIM fabric that enables fast and energy-efficient search operations. CAMs support two main functions: search, which identifies the memory entries that match the input query, and write, which stores data entries in the memory cells. With CAMs, parallel searches can be performed on all stored data in memory in constant time (O(1)). The most common type of CAM is the ternary CAM (TCAM), where each element of queries and stored data can be either 0, 1, or don't care ('x'), which is a wildcard state matching both 0 and 1. Figure 1 illustrates a TCAM array with \(R\) rows and \(C\) columns. Each cell in a row is connected to a common match line (ML) and stores one of the tree states. During a search operation, each cell \(C_{ij}\) in row \(i\) performs an XNOR operation between its content and the query element \(q_{j}\). The ML implements a logic OR operation of all the cells in the row to determine the result for that row. Different sensing circuits can be designed to realize different match schemes, such as EX, BE, and TH. EX search is the fastest search type due to its simple sensing requirement, whereas best match search reports the row with the least number of mismatching cells and is widely used for nearest neighbor search. To find the best match, more sophisticated sensing circuits are needed, e.g., analog-digital-converters or a winner-take-all circuit, with the latter being more energy and area efficient but limited to finding the best matches only within a certain number of mismatch cells [19]. CAM's efficient data retrieval capabilities, i.e., outputting the addresses of stored data that match the input search query based on a specific distance metric, make it highly suitable for applications that rely heavily on large-scale matching or search operations. Recent works have demonstrated the use of CAMs in various fields, e.g., bioinformatics [21], high dimensional computing [22], reinforcement learning [23], few-shot learning [24] and recommender systems [8]. 
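As a software illustration of these match schemes, a TCAM search over stored ternary words is easy to model. The sketch below (numpy; encoding the don't-care state as \(-1\) is an arbitrary choice, not from the paper) returns the matching row indices for the EX, TH, and BE schemes:

```python
import numpy as np

def tcam_search(stored, query, mode="exact", threshold=0):
    """Software model of a TCAM search.
    stored: R x C array over {0, 1, -1}, with -1 encoding the don't-care state.
    query:  length-C array over {0, 1}."""
    care = stored != -1
    mismatches = ((stored != query) & care).sum(axis=1)  # per-row Hamming distance
    if mode == "exact":      # EX: no mismatching cell on the match line
        return np.where(mismatches == 0)[0]
    if mode == "threshold":  # TH: at most `threshold` mismatching cells
        return np.where(mismatches <= threshold)[0]
    return np.where(mismatches == mismatches.min())[0]  # BE: fewest mismatches
```

In hardware, all rows evaluate their match lines simultaneously, which is what gives the CAM its O(1) search behavior; the loop-free array expressions above mirror that row-parallel evaluation only functionally.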
Fig. 1: Structure of a FeFET-based CAM array [20]

### _Accelerator architecture_

For this work, we consider a general CAM-accelerator design based on the state of the art [22]. As illustrated in Figure 2, the CAM structure is organized into a four-level hierarchy comprising \(B\) banks, each bank containing \(T\) mats, where each mat consists of \(A\) CAM arrays which are further partitioned into \(S\) subarrays. The subarrays can be operated and accessed independently. This hierarchical organization allows for scalable and flexible computation, as the number of banks, mats, and arrays can be allocated according to the computational requirements of each application. Within each bank, all mats and arrays can perform parallel search operations using the \(S\) CAM subarrays, either in a sequential or parallel manner, providing further granularity for parallel processing and resource allocation. The \(B\) banks operate independently to allow for task-level parallelism. RecSys [8], for instance, can profit from CAMs in both its filtering and ranking stages, where each stage executes different tasks on different banks in parallel.

### _State-of-the-art programming models for CIM-CAMs_

Most of the current proposals for compilers targeting CIM architectures primarily focus on crossbar-based accelerators [15, 16] or general-purpose logic-in-memory architectures [14]. While the fundamental programming model of the CIM abstraction introduced by CINM [16] is generic and can be applied to any CIM architecture, it does not provide support for CAM accelerators. CINM primarily focuses on facilitating arithmetic and logic operations, whereas CAMs are specifically designed to handle search operations that involve computing distances, similarities, or comparisons. Proposals for supporting CAM-based accelerators are relatively rare; one example is DT2CAM [25], a framework for mapping and simulating decision trees onto TCAMs. However, this mapping tool does not generalize to other comparison-intensive kernels and requires programmers with a deep understanding of both the application and the accelerator architecture. Therefore, there is considerable demand for a generalized framework capable of efficiently lowering high-level language programs for diverse input applications. This framework should also incorporate CAM array optimizations, such as selective row precharging, to generate optimized code for the underlying architecture. In the following section, we introduce how the C4CAM framework effectively addresses this gap.

## III The C4CAM framework

This section presents C4CAM, including its abstractions and its lowering, analysis and optimization passes.

### _An overview of the compilation flow_

Figure 3 shows a high-level overview of C4CAM. The TorchScript functions chosen for offloading to the CAM accelerators are transformed into MLIR's representation using the PyTorch MLIR converter (see Section III-C). This produces the Torch IR, which is the entry point into C4CAM and encompasses ATen tensor library operations. Torch MLIR is then lowered to the cim abstraction, which is a comprehensive solution for various CIM technologies, taking over the shared responsibilities of host-device interfacing and device mapping (see Section III-D1). The cim abstraction has been previously investigated in [26] and [16], where a programming model for CIM devices was introduced. C4CAM extends this abstraction by incorporating the necessary analysis for CAM devices.
To enable the mapping, cim supports partitioning, rewriting, or modifying the functions to include device-compatible sizes and operations, which the low-level dialects can then process. Subsequently, the cim dialect is lowered either to cam, to loop, or to other device dialects. The cam dialect and other device dialects at the same level, such as crossbar, offer an abstraction for programming and executing functions on the target device (see Section III-D2). The cam dialect also provides transformation passes that enable mapping and optimization of the selected kernel, while accounting for the concrete hierarchy and other characteristics of CAM-based architectures.

### _Architecture specification_

In addition to the input application, C4CAM takes the architectural configuration as input, as shown in Figure 3. This configuration outlines the hierarchy of the proposed architecture (as discussed in Section II-C), as well as the access mode for each level of the hierarchy, i.e., whether it supports sequential or parallel accesses. Note that all active rows within a subarray are accessed in parallel. However, through selective row accessing [27], it is possible to activate and pre-charge only a subset of rows within a subarray. Furthermore, this input file specifies the optimization target, which can be set to latency, power, or array utilization.

### _C4CAM front end_

The PyTorch MLIR converter [28] is responsible for converting Python code written in TorchScript. However, certain operations from the ATen library, particularly those used in search-based applications such as norm and topk, are not supported. From the CAM perspective, these are essential primitives in any input application. Since C4CAM is built upon the MLIR framework and this is the only available front end that enables lowering TorchScript input to the MLIR torch dialect, we extend the front end to support the norm and topk primitives that are commonly accelerated on CAM arrays.

### _C4CAM progressive lowering_

The compilation flow begins with the Torch dialect, as depicted in Figure 3. This dialect includes most of the ATen tensor library operations. To enable support for the cim abstraction, we have introduced a torch-to-cim conversion pass. This pass lowers the operations that are compatible with the cim abstraction. Examples of these operations include topk, norm, sub, and matmul, which can be executed individually or as part of a kernel on a CIM device.

Fig. 2: Hierarchical structure of a CAM-based accelerator

To demonstrate how the IR of the application transforms at each hierarchy level, we use the similarity kernel in hyperdimensional computing (HDC) as a running example. Figure 4(a) shows the TorchScript code of the input kernel, and Figure 4(b) presents its MLIR representation at the Torch abstraction, as produced by the MLIR PyTorch front end. The conversion from Torch to cim is accomplished with target-agnostic transformations. The outcome of the conversion primarily showcases the interface with a generic CIM device.
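Figure 4(a) is not reproduced here; the following is a hypothetical TorchScript-style kernel of the same shape (transpose, matmul, topk), i.e., the kind of input the front end converts and the torch-to-cim pass consumes. The function name and tensor sizes are illustrative, not taken from the paper:

```
import torch

# Hypothetical dot-product similarity kernel in the spirit of Fig. 4(a):
# compare a batch of query hypervectors against all stored class
# hypervectors and return the k best matches per query.
@torch.jit.script
def dot_similarity(queries: torch.Tensor, memory: torch.Tensor, k: int):
    # queries: (Q, D), memory: (N, D)
    scores = torch.matmul(queries, memory.t())   # transpose + matmul
    values, indices = torch.topk(scores, k, dim=1)
    return values, indices

q = torch.randn(4, 64)
mem = torch.randn(100, 64)
vals, idx = dot_similarity(q, mem, 3)
```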
#### III-D1 The extended cim abstraction

cim is the primary high-level dialect encompassing device-supported operations and the essential transformations required to prepare kernels to run on target devices. This abstraction is mainly responsible for: (i) analyzing the input code to identify CIM-amenable primitives that can be offloaded to the accelerator, (ii) if a CIM-executable pattern is identified but the operand sizes exceed the array sizes specified in the given architecture, dividing the input into smaller partitions to ensure compatibility with the accelerator, and (iii) providing an abstract programming model to enable the execution of kernels on a device. The programming model employed in C4CAM's cim abstraction for CIM devices is derived from the concept proposed in [16]. This model encompasses three main functions: to allocate an accelerator, cim uses the cim.acquire function, which returns a handle to the device. The cim.execute function uses this handle and specifies the operations that are to be executed on this accelerator. Finally, the device is released using the cim.release function. For CAM architectures, we show how these functions are lowered to different CAM functions in Section III-D2. Figure 5(a) shows the IR for the running example at the cim abstraction, which can be produced by running the conversion pass torch-to-cim at the Torch abstraction. As the Torch abstraction does not, and is not supposed to, specify the kernel type, the fundamental assumption of the torch-to-cim conversion is that each supported operation can be executed on a separate (non-)CIM device. Since all the torch operations are supported by the cim dialect (as they are part of the dot similarity), they are lowered to their corresponding cim versions.

**Pattern matching and fusing:** cim implements analysis and optimization passes to recover patterns that can be offloaded to a CIM accelerator and, when possible, optimizes them for the target. The analysis pass identifies blocks containing operations that cannot be directly lowered to the accelerator and fuses them. Once the code analysis is complete, the execution blocks can be transformed and offloaded to CIM accelerators, or they can follow the standard MLIR pipeline to generate llvm code for execution on the host processor.
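A schematic sketch of such pattern recovery is shown below, assuming a toy IR in which operations are reduced to their names; the pattern encoding is hypothetical and far simpler than the real MLIR matching machinery:

```
# Schematic peephole matcher over a linear sequence of op names,
# illustrating how a chain like sub -> norm -> topk is recognized as a
# Euclidean-distance search and can then be fused into one offloadable block.
PATTERNS = {
    'dot_similarity': ['transpose', 'matmul', 'topk'],
    'euclidean_search': ['sub', 'norm', 'topk'],
}

def match_patterns(ops):
    """Return (pattern_name, start_index) for every occurrence in `ops`."""
    hits = []
    for name, pat in PATTERNS.items():
        for i in range(len(ops) - len(pat) + 1):
            if ops[i:i + len(pat)] == pat:
                hits.append((name, i))
    return hits

kernel = ['sub', 'norm', 'topk']          # toy IR: op names only
print(match_patterns(kernel))             # [('euclidean_search', 0)]
```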
In outline, the pattern-matching analysis operates on a cim.ExecuteOp as sketched in the following listing:

```
Input: cim.ExecuteOp
DotProdSimPattern: transpose() -> v1;  matmul(v1) -> v2;  topk(v2) -> v3
EuclideanPattern:  sub() -> v1;        norm(v1) -> v2;    topk(v2) -> v3
CosinePattern:     norm() -> v1;  norm() -> v2;  transpose() -> v3;
                   matmul(v3) -> v4;  div(v4, v2, v1) -> v5
For each pattern, scan the operation list of the ExecuteOp and, on a
match, fuse the matched operations into a single offloadable block.
```

The smallest block within the system is the subarray. Therefore, when partitioning the application, it is important to consider this level of granularity and divide it accordingly. To support this, C4CAM includes a partitioning transformation within the cim abstraction. This transformation can be likened to tiling in compiler terminology, with some hardware-specific considerations. It enables the efficient partitioning of kernels to facilitate their execution on the device(s), but requires an abstraction to accumulate partial results. To this end, the cim dialect includes the cim.merge_partial operation. It considers both the type of operation for which partial results are generated and the direction in which these results are accumulated. The partitioned version of Figure 5(c) for a device of size 32x32 is demonstrated in Figure 5(d). The cim abstraction focuses on identifying operations that can be offloaded to the CAM accelerator and partitions them based on the subarray size. It does not address the mapping of an input application or its partitions onto the CAM accelerator, nor does it incorporate any device-specific optimizations. The latter are performed by the device-specific cam abstraction, which is discussed in the next section.
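A minimal sketch of this partitioning, using a plain NumPy stand-in for the device and a Euclidean search kernel, is given below; the tile sizes and the merge strategy (accumulating partial distances across column tiles, then taking a global top-k, in the role of cim.merge_partial) are illustrative:

```
import numpy as np

# A database too large for one subarray is tiled into blocks no larger than
# the subarray; partial squared distances are accumulated across column
# tiles and the final top-k is taken over all rows.
def tiled_euclidean_topk(memory, query, k, sub_rows=32, sub_cols=32):
    n, d = memory.shape
    dist2 = np.zeros(n)
    for r0 in range(0, n, sub_rows):
        for c0 in range(0, d, sub_cols):
            blk = memory[r0:r0 + sub_rows, c0:c0 + sub_cols]
            qb = query[c0:c0 + sub_cols]
            # partial result computed "inside" one subarray-sized tile
            dist2[r0:r0 + sub_rows] += ((blk - qb) ** 2).sum(axis=1)
    idx = np.argsort(dist2)[:k]           # merge: global best matches
    return idx, np.sqrt(dist2[idx])

mem = np.random.rand(128, 96)
q = np.random.rand(96)
print(tiled_euclidean_topk(mem, q, k=3))
```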
#### III-D2 The cam abstraction

To convert the cim IR into the cam IR, C4CAM introduces the cim-to-cam conversion pass. This pass requires the specification of the target CAM device type (e.g., ACAM, TCAM or MCAM) as an input, which also determines the search type and metric to be utilized during the conversion process. The cam dialect is responsible for mapping the high-level functions from the cim dialect to the CAM-device calls. After applying this conversion pass, occurrences of a sequence of cim.acquire, cim.execute, and cim.release working on the same device handle are substituted with calls to allocate a simple system consisting of a bank, a mat, an array, and a single subarray. This system executes the targeted execution block intended to run on the chosen CAM. More concretely, the cam.alloc_bank function is used to allocate a CAM bank, taking the row and column sizes of the desired CAM as parameters. Furthermore, allocating a mat from the bank, a CAM array from the mat, and a subarray from an array is accomplished using the cam.alloc_mat, cam.alloc_array, and cam.alloc_subarray functions, respectively. Similarly, the cim.execute function is lowered into three CAM function calls: cam.write_value, cam.search, and cam.read_value. The write operation programs the CAM arrays with the input data. The search operation performs the actual search on the data based on the specified search type and metric. The supported search types include exact match, best match, and range match, while the available distance metrics are Euclidean and Hamming. The read operation reads the values and indices of the search results from the device.

The original program underwent partitioning at the cim dialect without considering the hierarchy. This approach was chosen because dealing with synchronization and accumulation of partial results across different levels of the hierarchy often requires hardware-specific information, which goes against the principles of the cim dialect. To map an application onto the CAM abstraction, the cam-map pass within the cam dialect can be employed. This pass transforms the application into a nested loop structure according to the provided specifications, incorporating the required hardware calls at each loop level. The code in Figure 6 shows the mapping. The required devices (bank, mat, array, subarray) and buffers for storing the results (values and indices) are allocated at each level of the loop. Additionally, the necessary functions responsible for accumulating these partial results are called. In cases where the system size precisely matches the data size, the levels of the nested loop, starting from the outermost level, iterate over the banks, the mats within each bank, the arrays within each mat, and the subarrays within each array. However, if the data size exceeds the system's capacity, an additional loop is introduced. This loop iterates over banks, allowing the system to be called multiple times to process the data effectively. The cim-to-cam conversion pass also performs bufferization of the tensors involved in executing a kernel on the CAM. This process determines how the memory is handled between the host and the device. During the process of lowering from cam to scf and subsequently to llvm, the cam operations are mapped to function calls of a CAM simulator.

Fig. 5: cim IR of the HDC similarity function, after different analysis and optimization passes

Fig. 6: cam IR after mapping
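The host-side call sequence produced by this lowering can be sketched as follows; ToyCamSim is a hypothetical stand-in for the CAM simulator that the llvm-level calls are mapped to, and its Python API is illustrative only:

```
import numpy as np

# Hypothetical host-side trace mirroring the lowered sequence
# cam.alloc_bank/alloc_mat/alloc_array/alloc_subarray ->
# cam.write_value -> cam.search -> cam.read_value.
class ToyCamSim:
    def alloc_bank(self, rows, cols):
        return {'rows': rows, 'cols': cols}
    def alloc_mat(self, bank):
        return dict(bank)
    def alloc_array(self, mat):
        return dict(mat)
    def alloc_subarray(self, array):
        return {'data': None, **array}
    def write_value(self, sub, data):
        sub['data'] = data                      # program the CAM rows
    def search(self, sub, query, search_type='best', metric='euclidean'):
        dist = np.linalg.norm(sub['data'] - query, axis=1)
        return int(dist.argmin()), float(dist.min())
    def read_value(self, result):
        return result                           # indices/values of matches

sim = ToyCamSim()
bank = sim.alloc_bank(32, 32)
sub = sim.alloc_subarray(sim.alloc_array(sim.alloc_mat(bank)))
sim.write_value(sub, np.random.rand(32, 32))
best = sim.search(sub, np.random.rand(32), 'best', 'euclidean')
print(sim.read_value(best))                     # (row index, distance)
```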
**Built-in optimizations:** C4CAM provides an extensible and flexible framework that enables future research in code optimizations and auto-tuning. Currently, the framework uses simple heuristics to optimize for different metrics, namely latency/performance, power consumption and device utilization. This is enabled by device-specific transformations that can be further composed by performance engineers. For example, in order to minimize latency, C4CAM prioritizes maximizing the utilization of parallel-executing arrays in the system. In contrast, to minimize power consumption, C4CAM reduces the number of subarrays enabled at a time inside an array. For devices that support selective search, and if the standard data placement is not feasible due to a smaller number of rows compared to the array, multiple batches of data can be placed on the same array. By utilizing selective search, different queries can be searched on the corresponding rows of the same array in multiple cycles. As demonstrated in Section IV-C1, employing the same hierarchy specification (mat, array, and subarray sizes) consistently leads to longer latency. However, the impact on latency varies depending on the dimensions of the subarray, which subsequently alters the number of banks. This variation can either increase or decrease the energy consumption.

## IV Evaluation

This section presents our experimental setup and gives a detailed analysis of the code generated with C4CAM.

### _Experimental setup_

#### IV-A1 System setup and technology parameters

For the CAM technology parameters, we consider the 2FeFET CAM design proposed in [20] at the 45nm technology node. Energy and latency numbers for TCAM and MCAM operations were extracted from Eva-CAM [29]. Since we vary the array size for design space exploration, the search latency ranges from \(860ps\) to \(7.5ns\) for array sizes of \(16\times 16\) and \(256\times 256\), respectively. For the GPU results, we use the NVIDIA Quadro RTX 6000 GPU (16 nm process). The power consumption is measured using the NVIDIA System Management Interface nvidia-smi, and the energy is derived therefrom.

#### IV-A2 Simulation infrastructure

We use the same simulation infrastructure as in [8, 22]. It models the architecture and performs functional simulation of the functions called by C4CAM. We extend the simulator to include performance and energy estimation. To handle large data dimensions and entry sizes, the extended simulator allows for fine-grain control of the hierarchy and models CAM queries to obtain energy and latency based on real hardware behavior. The functional simulation generates CAM outputs that can be analyzed to assess application accuracy. Moreover, the tool supports different underlying CAM designs and performs architectural modeling for chip-level estimations. Additionally, it estimates the performance cost of additional peripherals to provide application-level results. For a fair comparison, we were also granted access to the same simulation and evaluation parameters.

#### IV-A3 Benchmarks

We evaluate C4CAM on two benchmarks. The first one is _K-nearest neighbors (KNN)_, a popular algorithm used for classification, regression, and anomaly detection tasks. It works by identifying the \(K\) closest training examples in the feature space to a given test sample. It is especially interesting because of its versatility and interpretability, with no training required. KNN is both memory- and compute-intensive, which strongly limits its scalability and performance on conventional systems. We evaluated KNN on chest X-ray images from the Pneumonia dataset.
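As a sketch of why KNN maps naturally onto CAMs, the inner loop reduces to a single associative best-match search per query; the NumPy stand-in below is illustrative only and is not the accelerator code generated by C4CAM:

```
import numpy as np

# Training samples are the "stored rows", the test sample is the query, and
# a best-match search with a Euclidean metric returns the K nearest rows in
# one associative lookup; only the majority vote runs on the host.
def knn_predict(train_x, train_y, query, k=5):
    d = np.linalg.norm(train_x - query, axis=1)   # per-row distance (in-CAM)
    nearest = np.argsort(d)[:k]                   # K best matches
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[counts.argmax()]                # majority vote on host

x = np.random.rand(200, 16)
y = np.random.randint(0, 2, 200)
print(knn_predict(x, y, np.random.rand(16)))
```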
The second benchmark is _hyperdimensional computing (HDC)_, a framework inspired by the human brain's ability to process information. It utilizes high-dimensional vectors known as hypervectors as its fundamental building block. Hypervectors are large binary vectors with thousands of dimensions. We evaluated HDC on the MNIST dataset with 8k dimensions. HDC on MNIST is a commonly used benchmark that allows us to validate the results of C4CAM and compare them against existing manual implementations on GPUs and CAM-based accelerators. The PyTorch GPU implementation operates on int32 elements.

### _Validation_

In order to validate the C4CAM framework, we use the CAM design and hand-optimized mapping from [22] as a baseline. We generate code for binary and multi-bit implementations of HDC for different CAM architectures, i.e., with array sizes of \(32\times C\), where \(C\) is varied over 16, 32, 64, and 128. For this evaluation, we use the same system configuration as in the baseline, i.e., four mats per bank, four arrays per mat, eight sub-arrays per array, and as many banks as needed to store the whole dataset. The validation results for latency and energy are shown in Figure 7(a) and Figure 7(b), respectively. In this experiment, the observed deviation in the latency and the energy consumption is, on average (geomean), 0.9% and 5.5%, respectively (notice that the y-axes do not start at 0 for better visualization). These small deviations can be attributed to slight differences in the versions of the simulation environment rather than to fundamental differences in the implementations. Hence, C4CAM effectively matches the quality of the manual designs. To understand the latency results in Figure 7(a), it is important to note that all search operations happen in parallel, and the ML discharges more slowly for larger columns. As for the energy numbers shown in Figure 7(b), larger \(C\) leads to lower energy consumption because fewer peripherals and fewer levels (arrays, mats, and banks) are required as \(C\) increases. Moreover, as observed in [22], we corroborate that binary implementations are more energy efficient than multi-bit ones. This improvement is associated with the higher ML and data-line voltages of the multi-bit implementations.

**GPU comparison:** We compared end-to-end performance against the GPU setup using the CIM setup described in [22]. The improvement in execution time is \(48\times\), which deviates by only 5% from the manual design, while the energy consumption improvement translates to \(46.8\times\), which is nearly the same, since CAMs contribute minimally to the overall energy consumption in their CIM system.

### _Design space exploration_

#### IV-C1 Fixed architectural parameters

While we could reproduce the results of single manual designs, the automation provided by C4CAM allows for quick exploration of different software and hardware implementations. To demonstrate this, we evaluated systems consisting of sub-arrays with sizes of \(R\times C\), where \(C=R\) assumes values of 16, 32, 64, 128, and 256, with different configurations for the same system, as outlined in Section IV-A1, namely:

* _cam-base_: In this configuration, applications are allocated to the CAM accelerator without incorporating the optimizations discussed in Section III-D2. In this setup, parallel execution is enabled at each level.
* _cam-power_: In this configuration, we implement a restriction on the maximum number of sub-arrays activated concurrently.
Specifically, for each application, we have chosen to enable only one sub-array per array at a time.
* _cam-density_: This configuration demonstrates the impact of employing selective search [27] to enhance both the utilization of arrays and the system's overall capacity, as shown in Table I.
* _cam-power+density_: This configuration imposes limitations on the number of sub-arrays enabled at a time. Simultaneously, it incorporates the selective search technique to enhance the system's capacity.

For all sub-array sizes, the configuration remains consistent, with 4 mats per bank, 4 arrays per mat, and 8 sub-arrays per array. We use as many banks as needed to accommodate the input data. Figure 8(a) and Figure 8(b) illustrate the energy consumption and latency, respectively, of the configurations mentioned above when executing the HDC application on the MNIST dataset with 8k dimensions. In the _cam-power_ configuration, only one sub-array within the array is active at a time. With a sub-array of size \(16\times 16\), the power consumption is reduced to approximately \(0.57\times\) with respect to the base configuration (Figure 8(c)). Similarly, the power requirement for the largest array size is merely 20% of the base configuration. However, this reduction in power consumption results in increased latency. For instance, executing the application on a \(32\times 32\)-sized subarray incurs a latency increase of approximately \(2\times\) compared to the baseline. As the array size increases, the latency rises, reaching up to \(4.86\times\) the baseline for the largest sub-array size. The overall energy consumption remains the same between the two configurations, _cam-power_ and _cam-base_.

The analysis of the KNN benchmark is similar to that for HDC. For space reasons, we summarize the results in Table II for EDP and power. The absolute values of energy and latency are considerably higher than in the HDC case. This is simply due to the sheer size of the Pneumonia dataset, which requires many banks in the CAM accelerator.

The _cam-density_ configuration uses selective search to improve resource utilization, as shown in Table I. In the case of the smallest array size (\(16\times 16\)), the execution time is less than twice that of the base configuration. This trend scales further, and with the largest subarray size (\(256\times 256\)), the execution time is nearly \(23\times\) longer than in the _cam-base_ configuration. The energy consumption for subarray sizes ranging from \(16\times 16\) to \(64\times 64\) in the _cam-density_ configuration is, on average, \(0.6\times\) that of the corresponding sub-array size in the baseline configuration. However, for sub-arrays of \(128\times 128\) or \(256\times 256\), the energy consumption increases compared to the baseline, reaching \(1.4\times\) and \(5.1\times\), respectively. It is worth noting that by fixing the system configuration and enabling selective search, the number of banks required for application execution is reduced, thus reducing the overall power consumption.

\begin{table} \begin{tabular}{c|c|c|c|c|c} & \(16\times 16\) & \(32\times 32\) & \(64\times 64\) & \(128\times 128\) & \(256\times 256\) \\ \hline cam-base & 512 & 256 & 128 & 64 & 32 \\ \hline cam-density & 512 & 86 & 22 & 6 & 2 \\ \hline \end{tabular} \end{table} TABLE I: Number of subarrays used to implement HDC.

Fig. 7: C4CAM validation against manual designs [22].
The _cam-power+density_ configuration combines the approaches of both _cam-power_ and _cam-density_ to achieve the most significant reduction in power consumption. A \(16\times 16\)-sized subarray utilizes only 23.4% of the base power, while the largest sub-array requires only 4.2% of the base power. However, this reduction in power consumption comes at the cost of a significantly increased execution time. In the case of the largest subarray configuration, the execution time is approximately \(121\times\) longer compared to the base configuration.

#### IV-C2 Iso-capacity analysis

With the iso-capacity experiments, we investigate the relationship between energy consumption and latency by changing the size of the subarrays and the number of subarrays per array while keeping the capacity fixed at \(2^{16}\) TCAM cells per array. To achieve this, we modify the subarray size, starting from \(256\times 256\), which corresponds to one subarray per array, and gradually decrease it to \(16\times 16\), resulting in 256 subarrays per array. The numbers of arrays per mat and mats per bank are fixed as in the previous sections. It is important to note that these systems are not iso-area, since each subarray has its own set of peripherals. This means that as the size of the subarrays is reduced, more peripherals are needed and the chip area increases. Figure 9 shows that the energy consumption in _iso-capacity-base_ remains nearly constant across subarray sizes. Moreover, _cam-density_ and _cam-power+density_, on average, achieve a \(1.75\times\) energy improvement over _iso-capacity-base_, except for large subarray sizes like \(128\times 128\) and \(256\times 256\). The total execution time across different subarray sizes also varies within a moderate range, i.e., from \(58\mu s\) for \(16\times 16\) to \(150\mu s\) for \(256\times 256\), as shown in Figure 9. Again, as the search latency increases for larger columns, the execution time also increases despite the consistent number of cells within an array. As for the _cam-density_ and _cam-power+density_ transformations, Figure 9 shows a significant decrease in power consumption, offering potential CAM configurations that can be used in power-constrained system setups.

## V Conclusions

We present C4CAM, a framework that enables the exploration of CAM configurations and seamless code generation from high-level TorchScript code. We introduce an MLIR abstraction named cam that is specifically tailored for CAM-based accelerators. This abstraction provides control knobs that allow for the tuning of various metrics by adjusting the mapping of applications to the CAM arrays. To validate the effectiveness of C4CAM, we compare our results with those obtained from a hand-crafted design and demonstrate that C4CAM produces comparable results.

\begin{table} \begin{tabular}{c|c|c|c|c|c||c|c|c|c|c} & \multicolumn{5}{c||}{**EDP (n-s)**} & \multicolumn{5}{c}{**Power (W)**} \\ \hline subarray size & 16x16 & 32x32 & 64x64 & 128x128 & 256x256 & 16x16 & 32x32 & 64x64 & 128x128 & 256x256 \\ \hline cam-base & 0.75 & 0.30 & 0.15 & 0.08 & 0.05 & 44.14 & 16.30 & 5.97 & 2.34 & 0.86 \\ \hline cam-power & 1.32 & 0.61 & 0.44 & 0.29 & 0.23 & 25.23 & 8.15 & 2.10 & 0.66 & 0.19 \\ \hline \end{tabular} \end{table} TABLE II: EDP and power for KNN execution.

Fig. 8: Impact of subarray size and C4CAM optimizations on latency, energy, and power.

Fig. 9: Impact of optimizations on iso-capacity setups.
Moreover, we demonstrate C4CAM's capabilities by automatically generating implementations optimized for performance, power and device utilization. Finally, we show how C4CAM's retargetability facilitates design space exploration by varying architectural parameters without any application recoding effort. The architecture specification supported by C4CAM, along with its compilation flow, also enables the specification of heterogeneous systems. However, determining the optimal mapping strategy for heterogeneous systems based on different optimization targets remains a subject for future research.

## Acknowledgments

This work is partially funded by the German Research Council (DFG) through the HetCIM (502388442) project.
2302.14408
LES-informed resolvent-based estimation of turbulent pipe flow
A resolvent-based methodology is employed to obtain spatio--temporal estimates of turbulent pipe flow from probe measurements of wall shear-stress fluctuations. Direct numerical simulations (DNS) and large-eddy simulations (LES) of turbulent pipe flow at friction Reynolds number of 550 are used as databases. We consider a DNS database as the true spatio--temporal flow field, from which wall shear-stress fluctuations are extracted and considered as measurements. A resolvent-based estimator is built following our earlier work (Amaral et al. J. Fluid Mech., vol. 927, 2021, p. A17), requiring a model for the nonlinear (or forcing) terms of the Navier-Stokes equations system, which are obtained from another DNS database, as in our earlier work, and from a series of computationally cheaper LES databases with coarser grids; the underlying idea is that LES may provide accurate statistics of non-linear terms related to large-scale structures at a low computational cost. Comparisons between the DNS and the estimates indicate that sufficiently accurate results can be achieved with estimators built with statistics from LES with an order of magnitude less grid points than the DNS, with estimates closely matching the reference DNS results up to the buffer layer and reasonable agreement up to the beginning of the log layer.
Filipe Ramos do Amaral, André Valdetaro Gomes Cavalieri
2023-02-28T08:43:34Z
http://arxiv.org/abs/2302.14408v3
# LES-informed resolvent-based estimation of turbulent pipe flow

###### Abstract

A resolvent-based methodology is employed to obtain spatio-temporal estimates of turbulent pipe flow from low-rank probe measurements of wall shear-stress fluctuations. Direct numerical simulations (DNS) and large-eddy simulations (LES) of turbulent pipe flow at a friction Reynolds number of 550 are used as databases. We consider a DNS database as the true spatio-temporal flow field, from which wall shear-stress fluctuations are extracted and considered as measurements. A resolvent-based estimator is built following our earlier work (Amaral et al. _J. Fluid Mech._, vol. 927, 2021, p. A17), requiring a model for the nonlinear (or forcing) terms of the Navier-Stokes equations system, which are obtained from another DNS database, as in our earlier work, and from a series of computationally cheaper LES databases with coarser grids; the underlying idea is that LES may provide accurate statistics of the nonlinear terms related to large-scale structures at a low computational cost. Comparisons between the DNS and the estimates indicate that sufficiently accurate results can be achieved with estimators built with statistics from LES with an order of magnitude fewer grid points than the DNS, with estimates closely matching the reference DNS results up to the buffer layer and showing reasonable agreement up to the beginning of the log layer.

## I Introduction

Estimation of space-time flow fluctuations from noisy, low-rank measurements is an attractive option for understanding turbulence physics, designing flow control strategies and reconstructing missing or corrupted data. For wall-bounded turbulent flows, wall quantities such as shear stress and/or pressure are usually employed as inputs for the estimation algorithms, as for practical applications the measurement of such quantities is easier than that of, e.g., the velocity components at a given distance from the wall. To build the estimator, model-based methodologies can be used [1; 2; 3; 4; 5; 6], although it is also possible to conduct flow estimations based solely on data [7; 8; 9; 10]. For both model- and data-driven methodologies the basic idea is to obtain transfer functions between the measurements and the estimated flow state, bearing in mind that model-based methodologies have the additional advantage of providing insight into the underpinning physics. In recent years, the use of linear models to understand the physics that drive turbulent wall-bounded flows has become widespread. Linear models provide a simple framework to work with, and the emergence of tools such as resolvent analysis enables modeling of coherent structures and self-sustaining mechanisms in flows [11; 12; 13; 14; 15; 16]. In the resolvent framework the Navier-Stokes system is written in state-space form, and the nonlinear terms are interpreted as external forcing terms [17; 18; 19; 20], hence providing a convenient input-output formulation relating the flow response to the forcing, which represents the nonlinear terms in the Navier-Stokes system. The input-output formulation enables the use of control theory tools [21], which can be adapted to estimate the flow state components from low-rank measurements. Towne et al. [22] introduced a resolvent-based estimator for flow statistics, which was further generalized by Martini et al. [23] for time-domain estimates.
For the latter case, in order to build the transfer functions between the low-rank measurements and the flow state components, it is necessary to inform the algorithm with the cross-spectral density (CSD) of the nonlinear terms of the Navier-Stokes system, treated as forcing. If the true forcing CSD is used, optimal estimates of time-varying flow quantities are obtained. Other forcing models provide sub-optimal estimates. Such estimates are not causal, as the full time series of sensor data is required for estimation; an extension to causal estimation, using only past sensor information, is proposed by [24]. In our previous work we have successfully applied the methodology developed by Martini et al. [23] to direct numerical simulation (DNS) of turbulent channel flow, using wall shear-stress and pressure low-rank measurements [25]. Results show close agreement between estimates and reference DNS fluctuations in the near-wall region, and good agreement for large-scale structures throughout the channel. A key feature is the use of forcing statistics extracted from the DNS database, which leads to an optimal estimator but requires expensive simulation and post-processing to obtain the forcing CSD. One option to model the forcing statistics is to consider them as spatially white noise. Amaral et al. [25] show that this is a good choice for near-wall structure estimates, close to where the measurements were taken, whereas far from the wall the estimator failed to deliver good results. That work also explored the use of a standard Cess eddy-viscosity model [26; 27] embedded within the linear operator to approximately account for the nonlinear terms that are missing in the linearized Navier-Stokes (LNS) system. The eddy viscosity improved the channel flow large-scale structure estimates, but at the cost of worsening the near-wall structure estimates. Chinta and Luhar [28] employed the resolvent framework to estimate the velocity field of turbulent channel flow from low-rank measurements. Their method differs from that of Martini et al. [23], as they compute the resolvent modes for various wavenumber/frequency combinations and then assume that the flow state is a linear combination of such modes, calibrating the linear coefficients using the measurement data. In other words, the method by Chinta and Luhar [28] seeks to identify the resolvent modes that best represent the measurement data. It is interesting to note that the inclusion of an eddy-viscosity model in the linear operator also had a dual effect in the results by Chinta and Luhar [28]: although it improved the flow statistics, it also increased the velocity field estimation errors. Obtaining optimal resolvent-based estimators for turbulent flow was shown to be feasible in Amaral et al. [25], but with a high computational cost related to the extraction of forcing statistics from a DNS database. In the present paper we employ the Martini et al. [23] resolvent-based methodology to estimate the space-time velocity fluctuation components of turbulent pipe flow at friction Reynolds number \(Re_{\tau}\approx 550\) using wall shear-stress measurements. In addition to DNS, we also explore the capability of wall-resolved large-eddy simulations (LES) to construct estimators. A first DNS database is used as the reference case from which we extract low-rank measurements of wall shear stresses. A second DNS and the other LES databases provide the forcing (nonlinear) statistics to build the linear estimators.
The underlying assumption is that LES provides sufficiently accurate statistics of large-scale structures, including the associated nonlinear terms, and thus may be used to construct estimators with near-optimal performance. Here we investigate the capability of such LES databases for the reconstruction of the space-time flow field, aiming to obtain a reliable and lower-cost estimator that could be used for various high-Reynolds-number flows of practical interest. The remainder of the manuscript is organized as follows. Section II presents the methods employed in this work, including the resolvent-based estimator algorithm and details on the turbulent pipe flow simulations. Section III contains the results, including a direct comparison among the reference DNS and the estimators using the different strategies to model the nonlinear terms of the LNS equations, as well as metrics to evaluate the performance of each estimator. Finally, section IV presents the conclusions of this study.

## II Methodology

### Resolvent-based estimator

Let us begin by applying a Reynolds decomposition over the flow state, i.e. \(\mathbf{q}=\bar{\mathbf{q}}+\mathbf{q}^{\prime}\), where \(\bar{\mathbf{q}}\) and \(\mathbf{q}^{\prime}\) denote the mean flow and the fluctuation around the mean flow, respectively. In the resolvent framework, the forcing term is obtained by gathering all terms nonlinear in \(\mathbf{q}^{\prime}\). The vector of velocity components is given by \(\mathbf{u}=\left[u_{x}\ u_{r}\ u_{\theta}\right]\), with \(u_{x}\), \(u_{r}\) and \(u_{\theta}\) as the streamwise, radial and azimuthal velocity components, respectively. The full state vector is written as
\[\mathbf{q}=\left[\mathbf{u}\ p\right]^{T}, \tag{1}\]
with \(\mathbf{q}=\mathbf{q}\left(x,r,\theta,t\right)\), where \(r\) and \(\theta\) indicate the radial and azimuthal directions, respectively, \(t\) denotes time and \(p\) the pressure component. The linear Navier-Stokes (LNS) equations in cylindrical coordinates can be written as
\[\partial_{t}\mathbf{u}+u_{r}\partial_{r}\bar{U}\mathbf{e}_{\mathbf{x}}+\bar{U}\partial_{x}\mathbf{u} = -\nabla p+\frac{1}{Re}\nabla^{2}\mathbf{u}+\mathbf{f}, \tag{2a}\]
\[\nabla\cdot\mathbf{u} = 0, \tag{2b}\]
where \(\partial\) denotes partial derivatives with respect to \(t\), \(r\) or \(x\) for the time, radial and streamwise directions, respectively, \(\bar{U}\) is the mean turbulent velocity profile, \(\nabla\) is the gradient operator, \(Re\) is the Reynolds number based on bulk velocity, \(\mathbf{f}\) denotes the forcing components and \(\mathbf{e}_{\mathbf{x}}\) is the unit vector in the streamwise direction. For use in resolvent analysis, the forcing \(\mathbf{f}\) includes the nonlinear terms in the Navier-Stokes equation, \(\mathbf{f}=\left(-\mathbf{u}\cdot\nabla\mathbf{u}\right)\), such that equations 2 are an exact rearrangement of the full Navier-Stokes system. The forcing components are structured as
\[\mathbf{f}=\left[f_{x}\ f_{r}\ f_{\theta}\right]^{T}, \tag{3}\]
with \(\mathbf{f}=\mathbf{f}\left(x,r,\theta,t\right)\). Only molecular viscosity is considered in the present work, such that Eq. 2 is exact if the full forcing \(\mathbf{f}\) is used [29]. In the following, primes (\({}^{\prime}\)) will be dropped from the state and forcing notations for simplicity.
In a discretized state-space form, considering a grid with \(N_{r}\) points in the radial direction, the LNS equations take the form
\[\mathbf{M}\frac{d\mathbf{q}(t)}{dt} = \mathbf{A}\mathbf{q}(t)+\mathbf{B}\mathbf{f}(t), \tag{4a}\]
\[\mathbf{z}(t) = \mathbf{C}\mathbf{q}(t)+\mathbf{n}(t), \tag{4b}\]
where \(\mathbf{A}\) denotes the linearized Navier-Stokes operator, \(\mathbf{B}\) is the input matrix that restricts the forcing terms to appear only in the momentum equations, \(\mathbf{z}\) is the system observation (measurements), \(\mathbf{C}\) is the observation matrix that selects \(N_{s}\) sensor readings from the state vector (in the present paper, wall shear stresses in the axial and azimuthal directions), and \(\mathbf{n}\) is the measurement noise. \(\mathbf{M}\) is a diagonal matrix whose entries are set to one for the momentum equations and to zero for the continuity equation. The dependencies on wall-normal distance \(y\) (or radial distance \(r\)) were dropped to simplify the notation. The state components can be written as a superposition of Fourier modes as
\[\mathbf{q}\left(x,r,\theta,t\right)=\sum_{m}\sum_{\alpha}\int_{-\infty}^{\infty}\mathbf{\hat{q}}(\alpha,r,m,\omega)e^{i(\alpha x+m\theta-\omega t)}d\omega, \tag{5}\]
where \(\omega\) denotes frequency, \(\alpha\) and \(m\) indicate the longitudinal and azimuthal wavenumbers, respectively, hats are used for Fourier-transformed quantities, \(m\) is constrained to be an integer and \(i=\sqrt{-1}\). Similar to the azimuthal direction, a Fourier series is taken along \(x\), since periodic boundary conditions are applied in the axial direction in the simulations considered here. Equation 4 can be written in the frequency domain as
\[\mathbf{\hat{z}}(\omega)=\left[\mathbf{C}(-i\omega\mathbf{M}-\mathbf{A})^{-1}\mathbf{B}\right]\mathbf{\hat{f}}(\omega)+\mathbf{\hat{n}}(\omega), \tag{6}\]
where the dependencies on the longitudinal (\(\alpha\)) and azimuthal (\(m\)) wavenumbers were also dropped to simplify the notation. The term \(\mathbf{R}=(-i\omega\mathbf{M}-\mathbf{A})^{-1}=\mathbf{L}^{-1}\) is the resolvent operator, which is well posed once no-slip boundary conditions are enforced for the three velocity components. For pipe flow, the linear operator \(\mathbf{A}\) is written in cylindrical coordinates [30] and the linearization is around the mean turbulent profile, considered as known; the linearized operator becomes
\[\mathbf{A}=\left[\begin{array}{cccc}-i\alpha\mathbf{\bar{U}}+\frac{\mathbf{\Delta}+\mathbf{r}^{-2}}{Re}&-\mathbf{D}\mathbf{\bar{U}}&\mathbf{Z}&-i\alpha\mathbf{I}\\ \mathbf{Z}&-i\alpha\mathbf{\bar{U}}+\frac{\mathbf{\Delta}}{Re}&-\frac{2im\mathbf{r}^{-2}}{Re}&-\mathbf{D}\\ \mathbf{Z}&\frac{2im\mathbf{r}^{-2}}{Re}&-i\alpha\mathbf{\bar{U}}+\frac{\mathbf{\Delta}}{Re}&-im\mathbf{r}^{-1}\\ i\alpha\mathbf{I}&\mathbf{D}+\mathbf{r}^{-1}&im\mathbf{r}^{-1}&\mathbf{Z}\end{array}\right], \tag{7}\]
where \(\mathbf{\Delta}=-\alpha^{2}\mathbf{I}-\left(m^{2}+1\right)\mathbf{r}^{-2}+\mathbf{r}^{-1}\mathbf{D}+\mathbf{D}^{2}\), \(\mathbf{D}=\mathrm{d}/\mathrm{d}r\) is the radial derivative operator discretized with a finite-difference scheme, \(\mathbf{I}\) is the identity matrix, \(\mathbf{r}\) and \(\mathbf{\bar{U}}\) are diagonal matrices containing the radial discretization and the mean turbulent velocity profile, respectively, and \(\mathbf{Z}\) is the zero matrix.
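A minimal numerical sketch of assembling these operators and the resolvent for one \((\alpha,m,\omega)\) triplet is given below. The grid, derivative stencils and mean profile are placeholders only; a real implementation would use the turbulent mean profile, a wall-clustered grid and explicitly enforce the no-slip boundary conditions:

```
import numpy as np

# Sketch of Eqs. (4)-(10): assemble A, M, B, C and the resolvent
# R = (-i*omega*M - A)^{-1} for one (alpha, m, omega) triplet.
Nr, alpha, m, Re, omega = 64, 1.0, 2, 19000.0, 0.5   # illustrative values

r = np.linspace(1e-3, 1.0, Nr)            # radial grid (axis excluded)
dr = r[1] - r[0]
D = (np.diag(np.ones(Nr - 1), 1) - np.diag(np.ones(Nr - 1), -1)) / (2 * dr)
D[0, :3] = np.array([-3.0, 4.0, -1.0]) / (2 * dr)    # one-sided stencils
D[-1, -3:] = np.array([1.0, -4.0, 3.0]) / (2 * dr)
I, Z = np.eye(Nr), np.zeros((Nr, Nr))
rinv = np.diag(1.0 / r)
rinv2 = rinv @ rinv
Ubar_vec = 2.0 * (1.0 - r**2)             # placeholder mean profile
Ubar = np.diag(Ubar_vec)
dUdr = np.diag(D @ Ubar_vec)              # dUbar/dr as a diagonal operator
Lap = -alpha**2 * I - (m**2 + 1) * rinv2 + rinv @ D + D @ D   # Delta

A = np.block([
    [-1j*alpha*Ubar + (Lap + rinv2)/Re, -dUdr, Z, -1j*alpha*I],
    [Z, -1j*alpha*Ubar + Lap/Re, -2j*m*rinv2/Re, -D],
    [Z, 2j*m*rinv2/Re, -1j*alpha*Ubar + Lap/Re, -1j*m*rinv],
    [1j*alpha*I, D + rinv, 1j*m*rinv, Z]])
M = np.block([[I, Z, Z, Z], [Z, I, Z, Z], [Z, Z, I, Z], [Z, Z, Z, Z]])
B = np.block([[I, Z, Z], [Z, I, Z], [Z, Z, I], [Z, Z, Z]])
C = np.block([[D, Z, Z, Z], [Z, Z, D, Z]])   # wall shear-stress readings

R = np.linalg.inv(-1j*omega*M - A)        # resolvent operator of Eq. (6)
```

Note that the mean-shear coupling is implemented as the diagonal of the differentiated mean profile, which is the action of the \(-\mathbf{D}\mathbf{\bar{U}}\) entry on \(u_{r}\).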
The actuation/input operator \(\mathbf{B}\) is defined as
\[\mathbf{B}=\left[\begin{array}{ccc}\mathbf{I}&\mathbf{Z}&\mathbf{Z}\\ \mathbf{Z}&\mathbf{I}&\mathbf{Z}\\ \mathbf{Z}&\mathbf{Z}&\mathbf{I}\\ \mathbf{Z}&\mathbf{Z}&\mathbf{Z}\end{array}\right], \tag{8}\]
whereas the observation/output operator \(\mathbf{C}\) is given by
\[\mathbf{C}=\left[\begin{array}{cccc}\mathbf{D}&\mathbf{Z}&\mathbf{Z}&\mathbf{Z}\\ \mathbf{Z}&\mathbf{Z}&\mathbf{D}&\mathbf{Z}\end{array}\right], \tag{9}\]
and the diagonal matrix \(\mathbf{M}\) is written as
\[\mathbf{M}=\begin{bmatrix}\mathbf{I}&\mathbf{Z}&\mathbf{Z}&\mathbf{Z}\\ \mathbf{Z}&\mathbf{I}&\mathbf{Z}&\mathbf{Z}\\ \mathbf{Z}&\mathbf{Z}&\mathbf{I}&\mathbf{Z}\\ \mathbf{Z}&\mathbf{Z}&\mathbf{Z}&\mathbf{Z}\end{bmatrix}. \tag{10}\]
From the definitions above, it is possible to obtain an optimal linear transfer function (\(\mathbf{\hat{T}_{q}}\)) between the system observation (\(\mathbf{\hat{z}}\)) and the estimated flow state components (\(\mathbf{\hat{\tilde{q}}}\)), such that
\[\mathbf{\hat{\tilde{q}}}=\mathbf{\hat{T}_{q}}\mathbf{\hat{z}}, \tag{11}\]
where \(\mathbf{\hat{T}_{q}}\) is the transfer function and the dependency on frequency \(\omega\) was dropped to simplify the notation. The tilde superscript denotes estimates. Martini et al. [23] derived an expression for \(\mathbf{\hat{T}_{q}}\) that is based on the minimization of the error between the true (\(\mathbf{\hat{f}}\)) and estimated (\(\mathbf{\hat{\tilde{f}}}\)) forcing terms. The resulting transfer function is given by
\[\mathbf{\hat{T}_{q}}=\mathbf{R}\mathbf{B}\mathbf{P_{ff}}\mathbf{H}^{*}\left(\mathbf{H}\mathbf{P_{ff}}\mathbf{H}^{*}+\mathbf{P_{nn}}\right)^{-1}, \tag{12}\]
where \(\mathbf{H}=\mathbf{C}\mathbf{R}\mathbf{B}\) is the resolvent operator including the observation (\(\mathbf{C}\)) and actuation (\(\mathbf{B}\)) matrices, and \(\mathbf{P_{nn}}=\left\langle\mathbf{\hat{n}}\mathbf{\hat{n}}^{*}\right\rangle\) and \(\mathbf{P_{ff}}=\left\langle\mathbf{\hat{f}}\mathbf{\hat{f}}^{*}\right\rangle\) are the CSDs of the sensor noise and of the forcing, respectively. The asterisk (\({}^{*}\)) indicates a Hermitian transpose. To build the transfer function (equation 12), it is necessary to specify the forcing CSD (\(\mathbf{P_{ff}}\)) _a priori_. When the true forcing statistics are known, equation 12 provides the optimal linear estimator, whereas other models for the forcing CSD provide sub-optimal estimators. The snapshots are reconstructed in space and time according to the procedure briefly addressed below. First, it is necessary to take the inverse Fourier transform of the transfer function \(\mathbf{\hat{T}_{q}}\), equation 12, in order to return to the time domain and obtain \(\mathbf{T_{q}}\),
\[\mathbf{T_{q}}(\alpha,r,m,t)=\int_{-\infty}^{\infty}\mathbf{\hat{T}_{q}}(\alpha,r,m,\omega)e^{i\omega t}d\omega. \tag{13}\]
The time-domain transfer function \(\mathbf{T_{q}}\) must then be convolved with the measurements/observations \(\mathbf{z}\) to evaluate the state estimate in the time domain, \(\mathbf{\tilde{q}}\),
\[\mathbf{\tilde{q}}(\alpha,r,m,t)=\int_{-\infty}^{\infty}\mathbf{T_{q}}(\alpha,r,m,\tau)\mathbf{z}(\alpha,m,t-\tau)d\tau. \tag{14}\]
Finally, double inverse Fourier transforms in the azimuthal and longitudinal directions are taken in order to return from the wavenumber domain to physical space,
\[\mathbf{\tilde{q}}(x,r,\theta,t)=\sum_{m}\sum_{\alpha}\mathbf{\tilde{q}}(\alpha,r,m,t)e^{i(\alpha x+m\theta)}. \tag{15}\]
It is also possible to model the forcing statistics (\(\mathbf{P_{ff}}\)), which provides a cheaper, sub-optimal estimator. Another option is to consider an eddy-viscosity model in the linear operator \(\mathbf{A}\), equation 7, to approximately account for the nonlinear terms of the Navier-Stokes system, as discussed by Symon et al. [31], Morra et al. [29] and Amaral et al. [25]. Appendix A addresses the linear operator containing an eddy-viscosity model. In this case, the forcing statistics are considered as white noise in space. We consider both of these cheaper estimators for comparison with the LES-based ones.

### Numerical simulations

To generate the databases, we conducted numerical simulations with the Openpipeflow code [32]. Periodic boundary conditions were assumed in the streamwise and azimuthal directions. Figure 1 shows a sketch of the geometry and coordinate system employed in this study. For all simulations the pipe length is \(L_{x}=10R\), where \(R\) is the pipe radius, and the bulk Reynolds number is \(Re_{b}=\frac{U_{b}D}{\nu}=19{,}000\), where \(U_{b}\) is the bulk velocity, \(D=2R\) is the pipe diameter and \(\nu\) is the kinematic viscosity. Table 1 shows the parameters for all cases, including the number of radial (\(N_{r}\)), streamwise (\(N_{x}\)) and azimuthal (\(N_{\theta}\)) grid points, the mesh discretization in the streamwise (\(\Delta x^{+}\)), azimuthal (\((R\Delta\theta)^{+}\)) and radial (\(\Delta r^{+}\)) directions and the ratio of mesh points with respect to the DNS case (\(N/N_{DNS}\)). Plus symbols denote inner (wall, or viscous) units. All simulations contain 5,000 snapshots and the time step based on outer units is \(\Delta t=0.1\). Cases starting with D denote DNS, whereas the letter L indicates LES, carried out using the Smagorinsky [33] subgrid-scale model, with the Smagorinsky constant set as \(C_{s}=0.05\); the wall damping function of van Driest [34] was used. As estimations lose accuracy for large wavenumbers, only the lowest 16 and 32 streamwise and azimuthal wavenumbers were used to construct the estimators. Welch's method [35] was employed to evaluate the forcing and state component statistics, with blocks containing \(N_{fft}=512\) time steps and 75% overlap. Appendix B shows the results of the block-size convergence test, justifying the use of \(N_{fft}=512\). A Hann window was applied to each block to minimize spectral leakage. The simulations were validated against reference DNS results by El Khoury et al. [36], as shown in figure 2. Cases D1, D2 and L1 show close agreement with the reference simulations regarding the mean flow profile and the axial, azimuthal and radial velocity fluctuations, almost perfectly matching the results by El Khoury et al. [36]. The coarser grid cases (L2, L3, L4 and L5) progressively deteriorate the agreement, with the L5 case showing a strong mismatch in all quantities, as expected for coarser LES. The estimators are built from databases that may have different grids than the measurements database D1 (see table 1). Hence, after the evaluation of the transfer functions \(\mathbf{\hat{T}_{q}}\), they are interpolated to a grid equivalent to that of the measurements database D1. Moreover, when the database employed to extract the forcing statistics does not contain data for a given pair (\(\alpha\),\(m\)), the forcing terms are modelled as white noise in space. The noise CSD (\(\mathbf{P_{nn}}\)) was set to machine precision, as we deal with DNS data mimicking measurements.
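As an illustration of how the forcing statistics and the estimator of equations 12-14 can be assembled numerically, consider the following sketch; the Welch parameters mirror those quoted above, while the array sizes and the sensor-noise level are illustrative:

```
import numpy as np
from scipy.signal import csd

# F holds time series of the discretized forcing terms (n_dof x n_time)
# taken from a DNS or LES database; R, B, C are the resolvent blocks of the
# previous listing.
def forcing_csd(F, fs, nfft=512):
    """CSD matrix P_ff(omega) via Welch's method (Hann window, 75% overlap).
    Illustrative O(n^2) loop; production code would vectorize this."""
    n = F.shape[0]
    freqs, _ = csd(F[0], F[0], fs=fs, nperseg=nfft, noverlap=3 * nfft // 4)
    Pff = np.zeros((freqs.size, n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, Pff[:, i, j] = csd(F[i], F[j], fs=fs,
                                  nperseg=nfft, noverlap=3 * nfft // 4)
    return freqs, Pff

def transfer_function(R, B, C, Pff, noise=1e-12):
    """T_q(omega) of Eq. (12) at one frequency, with white sensor noise."""
    H = C @ R @ B
    Pnn = noise * np.eye(H.shape[0])
    return R @ B @ Pff @ H.conj().T @ np.linalg.inv(H @ Pff @ H.conj().T + Pnn)
```

Stacking transfer_function over the Welch frequencies, an inverse Fourier transform gives \(\mathbf{T_{q}}(t)\) (equation 13), which is then convolved with the measurement time series (equation 14), e.g., with scipy.signal.fftconvolve.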
## III Results

### Snapshot estimates

Figures 3, 4 and 5 show sample snapshots of the streamwise velocity fluctuations from the D1 database, filtered to retain only the lower axial and azimuthal wavenumbers (\(N_{\alpha}\) and \(N_{m}\)), as indicated in table 2, and the corresponding estimates obtained using D2, L1, L3, L4 and L5 forcing statistics. Results are shown at radial distances from the pipe wall of \(y^{+}=(1-r)^{+}\approx 15\), 100 and 200, respectively. In the figures we use a pseudo-spanwise coordinate \(z=r\theta\) (\(\lambda_{z}=r\lambda_{\theta}\)) to enable comparisons with structures found in planar wall-bounded flows, in particular with the turbulent channel results of our previous work [25].

\begin{table} \begin{tabular}{l l l l l l l l} Case & \(N_{\alpha}\) & \(N_{m}\) & \(\alpha_{cut}\) & \(m_{cut}\) & \(\lambda_{x\,cut}^{+}\) & \((R\lambda_{\theta})_{cut}^{+}\) & Downsampling ratio \\ \hline D1 & 32 & 48 & 9.425 & 47 & 365.873 & 73.368 & 182 \\ D2 & 32 & 48 & 9.425 & 47 & 365.873 & 73.368 & 182 \\ L1 & 32 & 48 & 9.425 & 47 & 365.873 & 73.368 & 54 \\ L2 & 32 & 48 & 9.425 & 47 & 365.873 & 73.368 & 24 \\ L3 & 32 & 48 & 9.425 & 47 & 365.873 & 73.368 & 14 \\ L4 & 24 & 12 & 6.912 & 12 & 498.918 & 313.480 & 6 \\ L5 & 24 & 12 & 6.912 & 12 & 498.918 & 313.480 & 3 \\ \end{tabular} \end{table} Table 2: Wavenumber cut-off parameters used to downsample the simulations.

Figure 2: Validation of the numerical simulations with reference DNS results by El Khoury et al. [36]: mean flow velocity profile (top-left frame), axial (top-right frame), radial (bottom-left frame) and azimuthal (bottom-right frame) velocity profiles.

The white-noise forcing statistics model (White forcing) and the linear operator containing an eddy-viscosity model (EV forcing, see appendix A) are also considered in the figures for comparison, since with these two models there is no need for prior evaluation of the forcing statistics, which are considered as white noise. When considering the buffer layer, at a wall-normal distance of \(y^{+}\approx 15\), the resemblance between the DNS results and the estimates is remarkable, even when the white, eddy-viscosity and L5 forcing are employed. This is somewhat expected, as the channel flow analysis in [25] showed that assuming the forcing statistics as spatial white noise to build the estimator also provides accurate estimates at distances close to the wall. Moving further from the wall, the estimates are not as accurate, especially for the two coarser LES (L4 and L5) estimators, although most of the large-scale structures present in the DNS snapshots are still recognizable in all but the L4 and L5 estimators, as well as the white-noise and eddy-viscosity estimators, in agreement with recent literature [5, 29, 37]. As one moves further from the wall, only the largest turbulent structures are estimated. An interesting observation is that the well-resolved LES L1 and L2 lead to estimators with accuracy similar to the optimal one, built using the DNS statistics taken from the D2 database. This indicates that LES is a viable approach to obtain forcing statistics to build estimators of wall turbulence, without significant performance degradation if standard grid requirements for wall-resolved LES are used. This may be understood by considering that large-eddy simulations are able to accurately resolve the larger turbulent structures, which are the ones that may be estimated from the wall, as seen in earlier studies [7, 25, 38].
Regarding the radial and azimuthal velocity components, not shown here, results similar to those of the streamwise velocity component were observed. The small-scale structures can be estimated from the pipe wall even for the coarser LES, white-noise and eddy-viscosity cases, whereas large-scale structures can be observed up to \(y^{+}\approx 50\) only for the finer mesh cases. Note that [25] could observe structures up to \(y^{+}\approx 100\) using the same resolvent-based estimator and only wall shear-stress measurements, but that work built an optimal estimator using the forcing statistics extracted from the reference DNS.

Figure 3: Comparison between a streamwise velocity component instantaneous snapshot of the D1 database and resolvent-based estimates using wall measurements of shear stress and considering white, eddy-viscosity (EV), D2, L1, L3, L4, and L5 forcing statistics at \(y^{+}\approx 15\). Fluctuations shown in outer units.

Figure 4: Comparison between a streamwise velocity component instantaneous snapshot of filtered DNS and resolvent-based estimates at \(y^{+}\approx 100\). See comments in the caption of figure 3.

Figure 5: Comparison between a streamwise velocity component instantaneous snapshot of filtered DNS and resolvent-based estimates at \(y^{+}\approx 200\). See comments in the caption of figure 3.

### Estimates accuracy

Figure 6 displays normalized correlations (\(Corr\), left frame), r.m.s. errors (\(Err\), middle frame) and variance (\(\left(q^{\prime}q^{\prime}\right)^{+}\), right frame) for the streamwise velocity component. The correlation and error metrics are defined, as a function of wall-normal distance \(y\), as
\[Corr(y) = \frac{\int q_{D1}\left(y,t\right)q_{est}\left(y,t\right)dt}{\sqrt{\int q_{D1}\left(y,t\right)^{2}dt}\sqrt{\int q_{est}\left(y,t\right)^{2}dt}}, \tag{16a}\]
\[Err(y) = \frac{\sqrt{\int\left(q_{est}\left(y,t\right)-q_{D1}\left(y,t\right)\right)^{2}dt}}{\sqrt{\int q_{D1}\left(y,t\right)^{2}dt}}, \tag{16b}\]
where \(q_{D1}\) denotes a flow state component, e.g. a streamwise velocity fluctuation, extracted from the baseline DNS database, and \(q_{est}\) denotes an estimated component, from one of the estimators considered here. Accurate estimates correspond to a low normalized error \(Err\), close to 0, and a high correlation \(Corr\), close to 1. A sample probe at a given \(y\) position was used to evaluate the correlation and error metrics, scanning the complete time series of the baseline (D1) and of the estimates (D2, L1-L5, white-noise and eddy-viscosity). The first and last \(N_{fft}/2\) instants of the time series were discarded, as these snapshots cannot be estimated due to end effects when computing the convolution using finite time series. All estimators are accurate up to \(y^{+}\approx 10\), showing correlation and r.m.s. errors of approximately 0.95 and 0.35, respectively, and were able to correctly identify the streaky structures of the reference DNS. This correlation level very close to the wall is slightly lower than that obtained for \(Re_{\tau}\approx 550\) turbulent channel flow estimates [25]. However, the channel estimation made use of pressure and wall shear stress from both walls, and the forcing terms were directly extracted from the reference DNS, i.e. the same DNS employed for the observations of wall shear stress and pressure. Moving farther from the wall, only estimators D2, L1 and L2 maintain the same accuracy, especially regarding the correlation and r.m.s. metrics; the correlation is higher than 0.5 up to \(y^{+}\approx 100\).
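In discrete form, the metrics of equations 16a and 16b reduce to the following sketch (the synthetic signals serve only to exercise the functions):

```
import numpy as np

# Discrete versions of Eqs. (16a)-(16b): q_dns and q_est are fluctuation
# time series sampled at one wall-normal station.
def corr(q_dns, q_est):
    return np.sum(q_dns * q_est) / np.sqrt(np.sum(q_dns**2) * np.sum(q_est**2))

def rms_err(q_dns, q_est):
    return np.sqrt(np.sum((q_est - q_dns)**2) / np.sum(q_dns**2))

t = np.linspace(0.0, 100.0, 2000)
q_dns = np.sin(t) + 0.1 * np.random.randn(t.size)
q_est = np.sin(t)
print(corr(q_dns, q_est), rms_err(q_dns, q_est))
```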
All estimators are accurate up to \(y^{+}\approx 10\), showing correlation and r.m.s. errors of approximately 0.95 and 0.35, respectively, and were able to correctly identify the streaky structures of the reference DNS. This correlation level very close to the wall is slightly lower than that obtained for \(Re_{\tau}\approx 550\) turbulent channel flow estimates [25]. However, the channel estimation made use of pressure and wall shear stress from both walls, and forcing terms were directly extracted from the reference DNS, i.e. the same DNS employed for the observations of wall shear stress and pressure. Moving farther from the wall, only estimators D2, L1 and L2 maintain the same accuracy, especially regarding the correlation and r.m.s. metrics; correlation is higher than 0.5 up to \(y^{+}\approx 100\). It is interesting that estimator L2, which has a grid with less than 10% of the points used for the DNS-based estimator, could attain such accuracy. This indicates that the large scales of interest are well captured in the LES, as expected, and their statistics may be used to build an accurate estimator at a fraction of the computational cost of the DNS-based estimator considered in Amaral et al. [25]. Figures 7 and 8 show the estimate performance metrics for the radial and azimuthal velocity components, respectively. The plots follow the same trends as the streamwise velocity component metrics, although the effects of loss of coherence, higher error and mismatch in variance for the two coarser LES grids are more dramatic for the radial and azimuthal velocity components. The estimators built with the finer large-eddy simulations maintain an accuracy similar to the DNS estimator based on D2. Figure 6: Flow state comparison metrics for the streamwise velocity fluctuation component. The dotted grey curve in the left frame denotes DNS (D1) results. Frames, from left to right: correlation, normalized r.m.s. and variance. The DNS variance refers solely to wavenumbers retained for estimation. To establish a lower bound on what can be achieved with cheaper modeling of the forcing terms, we included a case in which the forcing statistics are modeled as spatially white noise (dashed lines in figures 6-8) and a case in which the linear operator contains an eddy-viscosity model (dash-dotted lines in figures 6-8). It is interesting to note that in the near-wall region, up to \(y^{+}\approx 20\), white noise slightly outperforms the L5 estimator, and the use of the L4 estimator is only justifiable for \(y^{+}\geq 20\). On the other hand, the eddy-viscosity estimator outperforms even the L4 estimator for \(y^{+}\geq 55\), corroborating previous studies [25] that showed the eddy-viscosity model to be a good option for modeling the nonlinear terms of the Navier-Stokes system at distances far from the wall. Overall, the quantitative metrics in figures 6-8 confirm the qualitative results shown in figures 3-5, although for the azimuthal and radial velocity components the eddy-viscosity model can even outperform the L4 estimator, but not the better-resolved LES estimators. The normalized r.m.s. error as a function of the wavenumber pair (\(\alpha\),\(m\)) and wall distance \(y\) is shown in figure 9 for wall distances of \(y^{+}\approx 15\) and \(100\). The error metric is defined as \[Err(\alpha\text{,}y\text{,}m)=\frac{\sqrt{\int\sum_{i=1}^{3}\left|q_{est}^{i}\left(\alpha\text{,}y\text{,}m\text{,}t\right)-q_{D1}^{i}\left(\alpha\text{,}y\text{,}m\text{,}t\right)\right|^{2}dt}}{\sqrt{\int\sum_{i=1}^{3}\left|q_{D1}^{i}\left(\alpha\text{,}y\text{,}m\text{,}t\right)\right|^{2}dt}}, \tag{17}\] with the superscript \(i\) in \(q^{i}\), for \(i=1\), \(2\) and \(3\), denoting the streamwise (\(u_{x}{}^{\prime}\)), wall-normal/radial (\(u_{r}{}^{\prime}\)) and azimuthal (\(u_{\theta}{}^{\prime}\)) velocity fluctuation components, respectively. Note that for cases L4 and L5 (rows 5 and 6 of figure 9) the region delimited by the dashed lines is the region within which the streamwise and azimuthal wavenumbers are contained in the LES databases. Figure 7: Flow state comparison metrics for the radial velocity fluctuation component. See comments in the caption of figure 6. Figure 8: Flow state comparison metrics for the azimuthal velocity fluctuation component. See comments in the caption of figure 6. 
For the streamwise and azimuthal wavenumbers outside that region, the white-noise estimator is considered in the calculations, as can be observed in row 8 of figure 9. The large structures, which are characterized by small \(\alpha\) and \(m\), are accurately estimated for the D2, L1 and L2 cases, with virtually zero r.m.s. error at both planes. The estimates for smaller structures (large \(\alpha\) and \(m\)), on the other hand, display higher r.m.s. error, especially for the \(y^{+}\approx 100\) plane. For the coarser L4 and L5 estimators, even for the \(y^{+}\approx 15\) plane, the accuracy of smaller-structure estimates is quite low. This indicates that such very coarse LES are not suitable to build estimators; notice that the grid spacing of L5, of about 100 wall units in the azimuthal direction, is close to the typical streak spacing; hence, near-wall structures cannot be captured by the coarser LES, which compromises their potential to build resolvent-based estimators. However, the finer LES allow an accuracy close to the one from the optimal D2 estimator, showing that LES is a viable approach to obtain the forcing statistics required for resolvent-based estimation. This is far superior to the estimates built from the white-noise assumption, also shown in figure 9, which have a large error as one moves away from the wall. Inclusion of an eddy viscosity in the operator, also shown in figure 9, and consideration of white-noise forcing likewise lead to inaccurate estimates. This finding is in agreement with recent literature [31] showing that the use of eddy-viscosity models may lead to errors in the modeling of turbulent structures. Figure 9: Normalized r.m.s. error as a function of (\(\alpha\),\(m\)) state comparison metrics for D2, L1, L2, L3, L4, L5, white-noise and eddy-viscosity estimators. The works by Illingworth et al. [5] and Oehler et al. [12], which employed a Kalman filter to estimate streamwise and spanwise velocity components of turbulent channel flows, at \(Re_{\tau}=1000\) and \(2000\), respectively, also show the estimate error as a function of the structure wavelengths. Although it is difficult to make a direct comparison between the present study and theirs, since here we deal with turbulent pipe flow at \(Re_{\tau}\approx 550\), overall our resolvent-based estimates using LES forcing are more accurate than what is observed with the Kalman filter employed by Illingworth et al. [5] and Oehler et al. [12], especially for the finer LES L1, L2 and L3. ### Structures observable from the pipe wall Smits et al. [39] and Jimenez [40], among other authors, have shown that, for wall-bounded flows, many structures leave their footprint on the walls, even the ones present in the outer layer. In order to explore which flow structures leave a footprint on the walls, in Amaral et al. [25] we introduced a metric consisting of a distance from the wall, the maximum observed height \(y_{obs}=(1-r_{obs})\), up to which the estimate normalized error satisfies \(Err(\alpha\),\(y_{obs}\),\(m)\leq 0.5\). We select an error value of 0.5, considering that an estimator that attains 50% accuracy or more is good enough, but this value can be calibrated with a more restrictive criterion if desired. As observed in the previous results, the estimators used in this study, based on wall measurements of shear stress, lose accuracy far from the wall, and only the largest scales may be estimated from wall measurements. 
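A sketch of how the maximum observed height can be extracted from the wavenumber-resolved error of eq. (17) is given below; the array layout is an assumption made for illustration, not the layout used in our processing.

```python
import numpy as np

def max_observed_height(err, y, threshold=0.5):
    """y_obs(alpha, m): largest wall distance with Err(alpha, y, m) <= threshold.

    err : (n_alpha, n_y, n_m) array of normalized errors from eq. (17)
    y   : (n_y,) wall distances, increasing away from the wall
    """
    observable = err <= threshold                       # (n_alpha, n_y, n_m)
    any_obs = observable.any(axis=1)                    # (n_alpha, n_m)
    # index of the farthest observable plane for each wavenumber pair
    last_true = observable.shape[1] - 1 - np.argmax(observable[:, ::-1, :], axis=1)
    return np.where(any_obs, y[last_true], 0.0)         # shape (n_alpha, n_m)
```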
Figure 10 shows the maximum observed height in plus units \(({y_{obs}}^{*})\) as a function of streamwise and pseudo-spanwise wavelengths \(({\lambda_{x}}^{+}\),\({\lambda_{z}}^{+})\), with \(\lambda_{z}=r\lambda_{\theta}\). Contour levels of \(y^{*}=5,10,15,30\) and 50 are displayed in the maps. For all estimators, the smaller structures (of small wavelength pairs) can only be well estimated very close to the wall, whereas far from the wall only large structures (of large wavelength pairs) attain some level of accuracy. Estimators D2, L1, L2 and L3 display similar behavior, keeping accuracy for large structures up to \(y^{*}\approx 50\). The worse performance of estimators L4 and L5 is mostly associated with smaller structures, which cannot be accurately resolved by the coarser grids. It is interesting to note that for larger structures, \(({\lambda_{x}}^{+}\),\({\lambda_{z}}^{+})\approx(4000,\,1000)\), even the coarse-LES estimator L4 maintains some accuracy, which is in line with the idea that even coarse LES are able to resolve the largest turbulent structures [41]. However, if the grid is too coarse there is a worsening of estimates at all scales, as observed from the L5 results, which show performance lower than the white-noise and eddy-viscosity estimators. ## IV Conclusions In this paper we employed a (linear) resolvent-based estimation methodology [23] to obtain the space-time flow field of a \(Re_{\tau}\approx 550\) turbulent pipe flow from low-rank wall-shear stress measurements. A DNS database was used to extract the wall-shear stress measurements, whereas other DNS and LES databases were employed for the modeling of the statistics of the non-linear terms (which constitute a forcing in resolvent analysis) necessary for state estimation. Hence, an optimal estimator, built from DNS statistics, is compared to sub-optimal resolvent-based estimators informed by LES. We compared the accuracy of the estimators in terms of snapshot reconstruction, correlation, normalized error and variance. Satisfactory results were obtained with the forcing statistics from LES, especially up to the buffer layer. The accuracy progressively deteriorates for distances far from the wall and for coarser LES meshes, although for cases D2, L1 and L2 the large-scale structures can still be recognized up to \(y^{+}\approx 200\). The LES-based estimators using typical grids for wall turbulence maintain an accuracy similar to the optimal estimation built from DNS statistics (D2). The present results are in agreement with recent literature on estimation of wall-bounded flows from wall measurements. For instance, Encinar and Jimenez [7] estimated turbulent structures of channel flows in the range \(932\leq Re_{\tau}\leq 5300\) using a linear stochastic estimator. The authors attained a good level of accuracy near the walls, i.e. at \(y/H\approx 0.2\), where \(H\) is the channel height. Guastoni et al. [9], on the other hand, employed two different convolutional-network algorithms to estimate the instantaneous velocity components of channel flows at \(Re_{\tau}=180\) and \(550\), obtaining good agreement with reference results up to \(y^{+}=50\). The question that remains is at what computational cost the two strategies cited above can attain the same accuracy as the resolvent-based estimator employed together with forcing statistics provided by LES. 
An LES estimator containing approximately \(10\%\) of the grid points of the DNS database, case L3 (see tables 1 and 2), attains an accuracy very close to that obtained with the DNS estimator, case D2. It would be desirable to be able to estimate flow fluctuations from wall measurements with a simple model for the forcing statistics, but the consideration of white-noise forcing leads to inaccurate estimates; hence, _linear_ estimation requires information on the statistics of _non-linear_ terms. The present results show that such statistics of non-linearity do not need unrealistic levels of accuracy, and moderately coarse large-eddy simulations may provide the required information on dominant non-linear effects. LES-informed resolvent-based estimation is thus a viable approach for accurate estimates of turbulent flow at high Reynolds numbers and can also be seen as a more accurate alternative, with moderate additional cost, to the use of eddy-viscosity models that have been recently explored in the literature for wall-bounded flows [18; 27; 42; 43]. **Funding.** F. R. Amaral received funding from Sao Paulo Research Foundation (FAPESP/Brazil), grant #2019/02203-2. A. V. G. Cavalieri was supported by the National Council for Scientific and Technological Development (CNPq/Brazil), grant #313225/2020-6. The authors were also funded by FAPESP/Brazil, grant #2019/27655-3. **Author ORCID.** F. R. Amaral, [https://orcid.org/0000-0003-1158-3216](https://orcid.org/0000-0003-1158-3216); A. V. G. Cavalieri, [https://orcid.org/0000-0003-4283-0232](https://orcid.org/0000-0003-4283-0232) **Declaration of Interests.** The authors report no conflict of interest. ## Appendix A Eddy viscosity model If an eddy-viscosity model is considered, the LNS equations can be written as \[\partial_{t}\mathbf{u}+u_{r}\partial_{r}\bar{U}\mathbf{e}_{\mathbf{x}}+\bar{U}\partial_{x}\mathbf{u} = -\nabla p+\frac{1}{Re}\nabla\cdot\left[\nu_{T}\left(r\right)\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{T}\right)\right]+\mathbf{f}, \tag{A1a}\] \[\nabla\cdot\mathbf{u} = 0, \tag{A1b}\] where \(\nu_{T}\) is the eddy viscosity, which can be modeled as [26] \[\frac{\nu_{T}(y)}{\nu}=\frac{1}{2}\left\{1+\frac{\kappa^{2}\hat{Rc}^{2}\hat{B}}{9}\left(2y-y^{2}\right)^{2}\left(3-4y+2y^{2}\right)^{2}\left[1-e^{-y\hat{Rc}\sqrt{\hat{B}}/A^{\ast}}\right]^{2}\right\}^{1/2}+\frac{1}{2}, \tag{A2}\] where \(y=1-r\), the constants \(\kappa\) and \(A^{\ast}\) are given as \(0.42\) and \(27\), respectively [44], \(\hat{Rc}=Re/2\) and \(\hat{B}=-2\partial_{x}p\). 
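The profile of eq. (A2) can be evaluated directly; the sketch below assumes the exponent written above, \(-y\hat{Rc}\sqrt{\hat{B}}/A^{\ast}\), and is an illustration with placeholder parameter values rather than the code used in this work.

```python
import numpy as np

def cess_eddy_viscosity(y, Re, dpdx, kappa=0.42, A_star=27.0):
    """Cess eddy-viscosity profile nu_T(y)/nu of eq. (A2).

    y    : wall distance, 0 <= y <= 1 (pipe radius R = 1)
    Re   : Reynolds number; Rc = Re / 2
    dpdx : mean axial pressure gradient; B = -2 * dpdx
    """
    Rc = Re / 2.0
    B = -2.0 * dpdx
    damping = 1.0 - np.exp(-y * Rc * np.sqrt(B) / A_star)
    poly = (2.0 * y - y**2)**2 * (3.0 - 4.0 * y + 2.0 * y**2)**2
    return 0.5 * np.sqrt(1.0 + (kappa**2 * Rc**2 * B / 9.0) * poly * damping**2) + 0.5

# Placeholder values for illustration only
y = np.linspace(0.0, 1.0, 200)
nu_t = cess_eddy_viscosity(y, Re=7.8e3, dpdx=-0.01)
```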
Following Willis et al. [45], the linear operator accounting for the eddy-viscosity model is given by \[\mathbf{A}=\begin{bmatrix}-i\alpha\mathbf{\bar{U}}+\frac{1}{Re}\big{(}\nu_{T}\left(\mathbf{\Delta}+\mathbf{r}^{-2}\right)+\mathbf{E}\big{)}&-\mathbf{D}\mathbf{\bar{U}}+\frac{1}{Re}i\alpha\nu_{T}{}^{\prime}&\mathbf{Z}&-i\alpha\mathbf{I}\\ \mathbf{Z}&-i\alpha\mathbf{\bar{U}}+\frac{1}{Re}\big{(}\nu_{T}\mathbf{\Delta}+2\mathbf{E}\big{)}&-\frac{1}{Re}\mathbf{F}&-\mathbf{D}\\ \mathbf{Z}&\frac{1}{Re}\big{(}\mathbf{F}+i\beta\nu_{T}{}^{\prime}\mathbf{r}^{-1}\big{)}&-i\alpha\mathbf{\bar{U}}+\frac{1}{Re}\left(\nu_{T}\mathbf{\Delta}+\mathbf{G}\right)&-i\beta\mathbf{r}^{-1}\\ i\alpha\mathbf{I}&\mathbf{D}+\mathbf{r}^{-1}&i\beta\mathbf{r}^{-1}&\mathbf{Z}\end{bmatrix}, \tag{A3}\] where \[\mathbf{\Delta}=\mathbf{\nabla}^{2}-\mathbf{r}^{-2} = -\alpha^{2}\mathbf{I}-\left(\beta^{2}+1\right)\mathbf{r}^{-2}+\mathbf{r}^{-1}\mathbf{D}+\mathbf{D}^{2}, \tag{A4a}\] \[\nu_{T}{}^{\prime} = \mathbf{D}\nu_{T}, \tag{A4b}\] \[\mathbf{E} = \nu_{T}{}^{\prime}\mathbf{D}, \tag{A4c}\] \[\mathbf{F} = 2i\beta\nu_{T}\mathbf{r}^{-2}, \tag{A4d}\] \[\mathbf{G} = \nu_{T}{}^{\prime}\left(\mathbf{D}-\mathbf{r}^{-1}\right). \tag{A4e}\] ## Appendix B Block size convergence tests Figure 11 displays a convergence test for various block sizes (\(128\leq N_{fft}\leq 2048\)) while keeping the block overlap fixed at \(75\%\). The metrics depicted in the figure are the correlation, the normalized error and the variance of the streamwise velocity fluctuation component. To obtain these results, the forcing terms were extracted from case D2. It is observed that the statistics are well converged in the \(256\leq N_{fft}\leq 1024\) range. This shows the typical compromise in the application of the Welch method to obtain frequency-domain statistics: blocks that are too short have a low frequency resolution, while long blocks may lead to worse statistical convergence. It is nonetheless reassuring that various signal-processing choices lead to similar estimation properties. We have taken the intermediate value \(N_{fft}=512\) to obtain the forcing statistics used to build the estimators. Similar results, not shown here, were obtained for the radial and azimuthal velocity fluctuation components. Figure 11: Flow state comparison metrics for the streamwise velocity fluctuation component and different block sizes using D2 forcing statistics. Dashed curves in the left frame denote DNS (D1) results. Frames, from left to right: correlation, normalized r.m.s. and variance. The DNS variance refers solely to wavenumbers retained for estimation.
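For reference, the frequency-domain forcing statistics discussed here are cross-spectral densities obtained with the Welch method; with scipy they would be computed along the following lines (an illustration with placeholder signals, not the processing pipeline of this work):

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)
f1 = rng.normal(size=8192)      # placeholder forcing time series
f2 = rng.normal(size=8192)

n_fft = 512                     # chosen block size
freqs, S12 = csd(f1, f2, fs=1.0, window='hann',
                 nperseg=n_fft,
                 noverlap=3 * n_fft // 4)   # 75% block overlap
```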
2309.08353
Continual Learning with Deep Streaming Regularized Discriminant Analysis
Continual learning is increasingly sought after in real-world machine learning applications, as it enables learning in a more human-like manner. Conventional machine learning approaches fail to achieve this, as incrementally updating the model with non-identically distributed data leads to catastrophic forgetting, where existing representations are overwritten. Although traditional continual learning methods have mostly focused on batch learning, which involves learning from large collections of labeled data sequentially, this approach is not well-suited for real-world applications where we would like new data to be integrated directly. This necessitates a paradigm shift towards streaming learning. In this paper, we propose a streaming version of regularized discriminant analysis as a solution to this challenge. We combine our algorithm with a convolutional neural network and demonstrate that it outperforms both batch learning and existing streaming learning algorithms on the ImageNet ILSVRC-2012 dataset.
Joe Khawand, Peter Hanappe, David Colliaux
2023-09-15T12:25:42Z
http://arxiv.org/abs/2309.08353v1
# Continual Learning with Deep Streaming Regularized Discriminant Analysis ###### Abstract Continual learning is increasingly sought after in real-world machine learning applications, as it enables learning in a more human-like manner. Conventional machine learning approaches fail to achieve this, as incrementally updating the model with non-identically distributed data leads to catastrophic forgetting, where existing representations are overwritten. Although traditional continual learning methods have mostly focused on batch learning, which involves learning from large collections of labeled data sequentially, this approach is not well-suited for real-world applications where we would like new data to be integrated directly. This necessitates a paradigm shift towards streaming learning. In this paper, we propose1 a streaming version of regularized discriminant analysis as a solution to this challenge. We combine our algorithm with a convolutional neural network and demonstrate that it outperforms both batch learning and existing streaming learning algorithms on the ImageNet ILSVRC-2012 dataset. Footnote 1: [https://github.com/SonyCSLParis/Deep_SRDA.git](https://github.com/SonyCSLParis/Deep_SRDA.git) ## 1 Introduction Continual learning, also known as lifelong learning, refers to the ability of a learning system to sequentially acquire and adapt knowledge over time. This type of learning mimics animal learning [8] and is increasingly sought after in various domains such as medical diagnostics [22], autonomous vehicles [40], and finance [35], where the learner needs to continually adapt to changing data. The major challenge in continual learning is the phenomenon of _catastrophic forgetting_[9, 31]. It refers to the situation where a naively incrementally trained deep neural network forgets previously learned representations to specialise to the new task at hand. Traditionally, the bulk of research [45] in continual learning has primarily concentrated on batch learning approaches, which process data in fixed batches. In this setting, a continual learner typically iterates multiple times over the given task in an offline manner, allowing them to achieve satisfactory performance. However, this approach requires storing all data from the current task for training, which is not suitable for on-device learning. As a result, recent research has emerged in the field of Online Continual learning [27], where data arrives in small, incremental batches and previously seen batches from the current or previous tasks are no longer accessible. Therefore, a model must effectively learn from a single pass over the online data stream, even when encountering new classes (Online Class Incremental, OCI) or data non-stationarity, such as new background, blur, noise, illumination, and occlusion (Online Domain Incremental, ODI). Figure 1: Deep SRDA model diagram. We take this scenario one step further and consider a **streaming case of the online continual learning** scenario, where a learner learns from batches of size 1. This particular case of Streaming learning aims to develop methodologies that can efficiently learn from streaming data, enabling continuous adaptation without relying on complete batches. Specifically, we concentrate on the application of classification in computer vision, focusing on the general scenario of Online Class Incremental. 
In this paper, we aim to contribute to the field of continual learning by proposing a novel approach called Deep Streaming Regularized Discriminant Analysis (SRDA). Building upon the foundations of Deep Streaming Linear Discriminant Analysis (SLDA) [14], our method combines SLDA with a streaming version of Quadratic Discriminant Analysis (QDA) to achieve state-of-the-art performance. As done in [14], we combine our model with a convolutional neural network (CNN) and empirically demonstrate its superiority over other streaming and batch learning methods on the ImageNet ILSVRC-2012 dataset [39]. To the best of our knowledge, this use of a regularized discriminant analysis method with a neural network represents a novel contribution that has not been explored before. **This paper makes the following contributions:** 1. We present the SQDA and SRDA algorithms. We show that SQDA does not generalize to high-dimensional problems and present SRDA as a solution. 2. We demonstrate that SRDA outperforms state-of-the-art streaming and continual learning algorithms. ## 2 Related work ### 2.1 Continual Learning Continual learning addresses the challenges of catastrophic forgetting that occur when training a model incrementally, breaking the usual i.i.d. assumption on the training data. This problem arises from the stability-plasticity dilemma [1] and has been heavily studied in recent years [45]. Various continual learning scenarios have been developed to continually train models, with the three main ones being task incremental, class incremental, and domain incremental. In the task incremental setting, different training steps are identified by a label. Models trained in this setting tend to perform well because there is an indication of the task in the data. However, this scenario is not very realistic, as real-world data is not typically labelled by tasks. The class incremental scenario considers the real-world situation of adding new classes without any task delimiters. Lastly, in the domain incremental scenario, the focus is on dealing with the addition of new domains or changing environments without any explicit task labels. To mitigate catastrophic forgetting, several methods have been employed. The main ones include regularization [26, 44, 20, 49, 3, 23], rehearsal or pseudo-rehearsal [38, 41, 37], combined [18, 5, 24], and architectural [29, 25] approaches. Notably, the rehearsal and pseudo-rehearsal categories have shown the most promising results. In these approaches, the learner stores previously encountered samples in a buffer for future training [38]. In some instances, pseudo-rehearsal techniques explore the replacement of the buffer with a generative model [41, 37]. ### 2.2 Online Continual Learning Online continual learning is a more challenging subset of continual learning where data arrives in an online fashion, one tiny batch at a time, and previously encountered batches are not accessible. This field builds upon existing methods for continual learning while adding specific tricks to tackle this scenario. In this online setting, recent works [30, 16, 47, 2, 27] have shown that the Softmax layer and its associated Fully-Connected layer suffer from _task-recency bias_, where those layers tend to be biased toward the most recently encountered classes. This has prompted the creation of multiple tricks to alleviate this problem. 
One example is the application of various tricks in replay-based scenarios: * **Labels Trick** [50]: Cross-entropy loss calculation considers only the classes present in the mini-batch, preventing excessive penalization of logits for classes absent from the mini-batch. * **Multiple Iterations** [4]: A single mini-batch is stored in a buffer and iterated upon multiple times. In addition to that, previously stored examples are also replayed. * **Nearest Class Mean Classifier**: Replaces the last, biased fully connected classification layer by a nearest-mean classifier, as in iCarl [36]. * **Separated Softmax** [2]: Since a single Softmax layer results in the bias explained in [30, 16, 47, 2, 27], this technique employs two Softmax layers, one for old classes and one for new classes. Thus, training new classes will not overly penalize the old logits. * **Review trick** [6]: Adds an additional fine-tuning step using a balanced subset of the memory buffer. This trick is used in the End-to-End method included in our benchmarks (section 5.3). However, when the batch size is reduced to one, the Stochastic Gradient Descent (SGD) usually employed in most of these methods becomes noisy, making convergence challenging. This is precisely where streaming learning comes into play. ### 2.3 Streaming Learning Streaming learning, a field of study since 1980 [33], primarily focuses on i.i.d. data streams and utilizes online learning methods. However, due to the Softmax bias and SGD instability for batches of size 1, regular online learning methods are not optimal for streaming learning on **non-i.i.d. data streams**. For this case, one area of streaming learning considers the use of streaming decision trees [21], where Hoeffding decision trees [17] are adapted to avoid catastrophic forgetting. These can also be combined into streaming forests [21, 46]. However, the issue with these types of methods is that they are slow to train [11] and require extensive hyperparameter tuning, making them unsuited for fast-paced streaming scenarios and real-time on-device learning. Another approach involves employing a Nearest Mean Classifier [32] instead of a Softmax layer. Research conducted by [28] has demonstrated that this simple yet effective substitute not only addresses recency bias but also avoids structural changes in the Fully-Connected layer when new classes are encountered. Notably, this method has been effectively employed by iCarl [36]. Another method is ExStream [13], which only updates the fully connected layers of a CNN while maintaining a prototype for each class. It also has a policy for managing the buffer when it is full, merging the two closest exemplars. But as we will see in section 5.5, this method suffers in terms of computation time, as it requires, in this case, 64 hours to run on our experiment, whereas SRDA requires 12 hours. Especially relevant to this paper is Streaming LDA [34], which was first used for data streams and has since been adapted in [14] to work with CNNs. SLDA uses running class means and a common covariance matrix for all classes to assign labels to inputs based on the closest Gaussian distribution. ## 3 Problem Setting We consider sets \(\mathcal{X}\) and \(\mathcal{Y}\), containing our datapoints and labels, respectively. We aim to train a model \(F\) with parameters \(\theta\) to accurately classify classes in \(\llbracket 1,C\rrbracket\), where \(C\in\mathbb{N}^{*}\). 
To achieve this, we adopt a streaming approach, where each datapoint \(x\in\mathcal{X}\) is individually sent to the model for fitting. Additionally, we adopt a class incremental scenario by ordering the samples in batches of classes. We consider this type of scenario to be the most general, as it is similar to animal and human learning scenarios. ## 4 Deep Streaming RDA Similar to Hayes and Kanan's work [14], our model can be formally divided into a composition of two distinct functions \(G\) and \(F\), such that \(y=F(G(x))\). \(G\) comprises the initial layers of a CNN, specifically a ResNet-18 [15] in our case, while \(F\) represents our SRDA head. The early layers of a CNN, such as those in \(G\), tend to learn filters that exhibit minimal variation across large natural image datasets and demonstrate high transferability [48]. Therefore, we made the decision to freeze the parameters of \(G\) and solely focus on training \(F\). The following subsections present our SRDA model. We start by presenting discriminant analysis before introducing an initial quadratic streaming version that led to our deep SRDA algorithm. ### 4.1 Discriminant Analysis Discriminant analysis is a traditional machine learning algorithm that can be used for classification [12]. It works on the hypothesis that the data follows a multivariate Gaussian distribution, which is used to calculate the log posterior probability using Bayes' rule. For each training example \(x\in\mathcal{X}\) and \(k\in\llbracket 1,C\rrbracket\), the goal is to calculate the posterior probability in order to classify correctly. Bayes' rule for the posterior probability of an element \(x\) being in class \(k\) is: \[P(y=k|\mathbf{x})=\frac{P(\mathbf{x}|y=k)P(y=k)}{P(\mathbf{x})} \tag{1}\] with \(P(\mathbf{x}|y=k)\) modeled as a multivariate Gaussian distribution with mean \(\mathbf{\mu}_{k}\) and covariance \(\Sigma_{k}\): \[P(\mathbf{x}|y=k)=\frac{\exp{(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu}_{k})^{T}\Sigma_{k}^{-1}(\mathbf{x}-\mathbf{\mu}_{k}))}}{(2\pi)^{d/2}|\Sigma_{k}|^{1/2}} \tag{2}\] where \(d\) is the dimension of \(\mathbf{x}\). According to equations 1 and 2, the log of the posterior, or **discriminant** \(\gamma_{k}\), is given as follows: \[\gamma_{k}=-\frac{1}{2}\log{|\Sigma_{k}|}-\frac{1}{2}(\mathbf{x}-\mathbf{\mu}_{k})^{T}\Sigma_{k}^{-1}(\mathbf{x}-\mathbf{\mu}_{k})+\log{P(y=k)}+B \tag{3}\] where \(B\in\mathbb{R}\) is a constant. Finally, the classification rule is written as: \[F(\mathbf{x})=\operatorname*{arg\,max}_{k}\gamma_{k} \tag{4}\] With no further assumptions, this is referred to as Quadratic Discriminant Analysis (QDA). Linear Discriminant Analysis (LDA), and the streaming version of it [14], constrain equation 3 by assuming equal covariance matrices between classes. ### 4.2 Streaming Discriminant Analysis #### 4.2.1 Quadratic In order to adapt equation 3 to streams of data, we need to calculate \(\mu_{k}\), \(\Sigma_{k}\), and \(\Sigma_{k}^{-1}\) in a streaming fashion. We choose to replace those values by their **empirical estimators**. We consider a new element \(z_{t}\) and \(k\in\llbracket 1,C\rrbracket\). 
The update functions are written as follows, where: * \(c\) is the vector of encountered class counts: \[c_{(k=y,t+1)}=c_{(k=y,t)}+1 \tag{5}\] * \(\hat{\mu}\) holds the running class means: \[\hat{\mathbf{\mu}}_{(k=y,t+1)}=\frac{c_{(k=y,t)}\hat{\mathbf{\mu}}_{(k=y,t)}+\mathbf{z}_{t}}{c_{(k=y,t)}+1} \tag{6}\] * \(\hat{\Sigma}\) is the vector containing all the class covariance matrices: \[\hat{\mathbf{\Sigma}}_{(k,t+1)}=\frac{t\hat{\mathbf{\Sigma}}_{(k,t)}+\mathbf{\Delta}_{t}}{t+1} \tag{7}\] \[\mathbf{\Delta}_{t}=\frac{t(\mathbf{z}_{t}-\hat{\mathbf{\mu}}_{(k=y,t)})(\mathbf{z}_{t}-\hat{\mathbf{\mu}}_{(k=y,t)})^{T}}{t+1} \tag{8}\] * \(\Lambda\) is the inverse of \(\Sigma\) regularized with a shrinkage coefficient \(\epsilon\): \[\mathbf{\Lambda}_{(k,t)}=[(1-\epsilon)\hat{\mathbf{\Sigma}}_{(k,t)}+\epsilon\mathbf{I}]^{-1} \tag{9}\] * \(P(y=k)\) is calculated by incrementally and uniformly updating it for seen classes at time \(t\). For a balanced dataset, this factor can be considered constant, but it is important for unbalanced datasets, serving as a corrective term: \[P(y=k)_{t}=\frac{c_{(k=y,t)}}{\sum_{n=1}^{C}c_{(k=n,t)}} \tag{10}\] Applying these updates to equation 3 leads to a streaming version of QDA, mentioned by Hayes and Kanan [14]. The problem with SQDA, however, is that in high dimensionality the number of datapoints needed to correctly estimate the empirical covariance matrix of each class is high [10]. As we will show in section 5, this approach struggles to translate to our high-dimensional problem and mostly works with **low-dimensional** datasets or ones with numerous examples per class. **This prompted us to look for a regularized alternative that solves this issue.** #### 4.2.2 Regularized Friedman [10] proposed a compromise between LDA and QDA that shrinks the separate covariances of QDA toward a common covariance, as in LDA. Using a coefficient \(\alpha\in[0,1]\), the regularization targets the class covariance matrices as follows: \[\tilde{\mathbf{\Sigma}}_{(k,t)}=\alpha\hat{\mathbf{\Sigma}}_{(k,t)}+(1-\alpha)\hat{\mathbf{\Sigma}}_{\nu} \tag{11}\] where \(\hat{\mathbf{\Sigma}}_{(k,t)}\) is the empirical class covariance calculated through QDA (eq. 7), and \(\hat{\mathbf{\Sigma}}_{\nu}\) is the empirical shared covariance matrix calculated with SLDA, given in equation 12: \[\hat{\mathbf{\Sigma}}_{\nu+1}=\frac{t^{\prime}\hat{\mathbf{\Sigma}}_{\nu}+\mathbf{\Delta}_{\nu}}{t^{\prime}+1} \tag{12}\] Replacing this new regularized \(\tilde{\Sigma}\) in equations 9 and 3 gives us **SRDA**. The coefficient \(\alpha\) controls the degree of shrinkage of the individual class covariance matrix estimates towards the pooled estimate. Since even small amounts of regularization can often largely eliminate quite drastic instability [43], some values of \(\alpha\) have the potential for superior performance when the population class covariances substantially differ [10]. This performance boost is clearly shown in section 5. 
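To make the streaming recipe concrete before turning to the experiments, the following is a minimal numpy sketch of the SRDA updates (eqs. 5-12) and the discriminant scoring (eqs. 3-4). It is an illustrative reconstruction rather than the released implementation; the shrinkage value, the shared counter for both covariance updates, and the skipping of unseen classes are placeholder choices.

```python
import numpy as np

class StreamingRDA:
    """Minimal SRDA sketch: streaming means/covariances + regularized scoring."""

    def __init__(self, d, n_classes, alpha=0.55, eps=1e-4):
        self.alpha, self.eps, self.d = alpha, eps, d
        self.counts = np.zeros(n_classes)            # c, eq. (5)
        self.means = np.zeros((n_classes, d))        # class means, eq. (6)
        self.cov_k = np.zeros((n_classes, d, d))     # class covariances, eq. (7)
        self.cov_shared = np.zeros((d, d))           # pooled SLDA covariance, eq. (12)
        self.t = 0

    def fit_one(self, z, y):
        delta = z - self.means[y]
        outer = np.outer(delta, delta) * self.t / (self.t + 1)               # eq. (8)
        self.cov_k[y] = (self.t * self.cov_k[y] + outer) / (self.t + 1)      # eq. (7)
        # single counter t used for both updates, a simplification of eq. (12)
        self.cov_shared = (self.t * self.cov_shared + outer) / (self.t + 1)
        self.means[y] = (self.counts[y] * self.means[y] + z) / (self.counts[y] + 1)
        self.counts[y] += 1
        self.t += 1

    def predict(self, z):
        scores = np.full(len(self.counts), -np.inf)
        priors = self.counts / self.counts.sum()                             # eq. (10)
        for k in np.flatnonzero(self.counts):        # score only seen classes
            sigma = self.alpha * self.cov_k[k] \
                    + (1 - self.alpha) * self.cov_shared                     # eq. (11)
            reg = (1 - self.eps) * sigma + self.eps * np.eye(self.d)
            lam = np.linalg.inv(reg)                                         # eq. (9)
            diff = z - self.means[k]
            _, logdet = np.linalg.slogdet(reg)
            scores[k] = (-0.5 * logdet - 0.5 * diff @ lam @ diff
                         + np.log(priors[k]))                                # eq. (3)
        return int(np.argmax(scores))                                        # eq. (4)
```

In practice, for ImageNet-scale problems the per-class inverses in `predict` would be precomputed once after the stream (or whenever \(\alpha\) changes) rather than on every query.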
## 5 Experiments & Results ### 5.1 Baselines We conducted a comprehensive analysis by comparing our method with both streaming and batch learning methods. To evaluate the performance of our model, we utilized the metric described in [13, 19], which normalizes a model's performance by the offline model's performance: \[\Omega_{all}=\frac{1}{T}\sum_{t=1}^{T}\frac{\rho_{t}}{\rho_{\textit{offline},t}} \tag{13}\] In our case, \(\rho_{t}\) refers to the top-5 accuracy of our model at time \(t\). An \(\Omega_{all}\) of 1 indicates that the continual learner performs as well as the offline model. While it is theoretically possible to achieve results higher than one if the continual learner outperforms the offline model, such instances are rare in practice. Figure 2: Top-5 Accuracy on ImageNet ILSVRC-2012. We compare our SRDA with \(\alpha=0.55\) to SQDA and SLDA with a plastic (non-fixed) covariance matrix. For comparison, we use the models and results of [14], as we follow the same experiment settings. We don't use models that require task labels, as this is not compatible with this more general class incremental learning setting. ### 5.2 Initialization As was done in previous works [14, 6, 36], we initialize \(F\) and \(G\) with 100 fixed, randomly selected classes. We use the same weights for \(G\) as [14] for these first 100 classes. The 900 remaining classes are trained incrementally with a fixed representation for \(G\), as mentioned in section 4. ### 5.3 Results As shown in table 1, our method outperforms the streaming state-of-the-art by **5%** and is very close to the offline training of the last layer. SRDA also outperforms iCarl and End-to-End, which are methods that update the whole model and can iterate multiple times on the data. It should be noted that SQDA, as mentioned earlier, struggles in high-dimensional settings due to the limited per-class availability of data points required for accurate estimation of covariance matrices. To address this limitation, SRDA serves as a corrective measure by leveraging the well-estimated LDA covariance matrix in combination with the estimated class covariance matrices. Figure 3 provides visual evidence supporting our findings, with a grid search CV revealing the optimal \(\alpha\) value of 0.55 for this experiment. Better results can potentially be achieved by using a more recent backbone, such as EfficientNets [42], enabling higher accuracy with a lighter model more adapted to on-device learning. ### 5.4 Hyperparameter tuning Because this method requires the adjustment of a hyperparameter, alpha, one would think that it cannot be readily used out of the box. However, in contrast to regular machine learning hyperparameters, alpha can be modified at the end of training with minimal additional computational cost. This is due to the independent calculation of the two covariance matrices. Consequently, the model can be trained using SRDA and tuned with a quick grid search CV at the end, utilizing the validation dataset, without re-training. In cases where a validation dataset is unavailable, a potential solution is to maintain a small, class-balanced buffer specifically for hyperparameter tuning, which can be employed at the end of training. This makes this classification technique directly usable and competitive with other streaming learning algorithms. ### 5.5 Computation Due to its quadratic complexity, our algorithm takes 12 hours to run, considerably longer than SLDA, which takes 30 minutes on ImageNet. Nonetheless, this remains manageable compared to other batch learning and streaming algorithms. For example, according to [14] and our experiments, ExStream takes 64 hours, and iCarl [36] 35 hours on the same hardware. 
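The post-hoc tuning of \(\alpha\) described in section 5.4 adds little to this computational cost, since the two covariance estimates are accumulated independently of \(\alpha\) and only the blending of eq. (11) changes at evaluation time. A minimal sketch, reusing the illustrative `StreamingRDA` class above:

```python
import numpy as np

def tune_alpha(model, X_val, y_val, alphas=np.linspace(0.0, 1.0, 21)):
    """Post-hoc grid search over alpha; no retraining, only re-scoring."""
    accuracies = []
    for a in alphas:
        model.alpha = a      # only the eq. (11) blending changes
        acc = np.mean([model.predict(x) == y for x, y in zip(X_val, y_val)])
        accuracies.append(acc)
    model.alpha = alphas[int(np.argmax(accuracies))]
    return model.alpha, max(accuracies)
```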
### 5.6 Memory usage As with the computational cost (section 5.5), SRDA consumes more memory than SLDA, as it has to store a covariance matrix per class compared to the single covariance matrix of SLDA. For instance, in the case of ImageNet ILSVRC-2012 [39], one needs \(1000\times 4\times(512^{2}+512)\) bytes, which is equivalent to 1.051 GB. For comparison, SLDA requires 0.001 GB, ExStream requires 0.041 GB, and iCarl requires 3.011 GB. \begin{table} \begin{tabular}{l c c} \hline \hline Models & Streaming & CLS-IID \\ \hline **Output Layer Only:** & & \\ Fine-Tuning* & Yes & 0.146 \\ ExStream* [13] & Yes & 0.569 \\ SLDA [14] & Yes & 0.752 \\ SQDA (ours) & Yes & 0.677 \\ **SRDA (ours)** & Yes & **0.801** \\ \hline **Representation Learning:** & & \\ Fine-Tuning* & Yes & 0.121 \\ iCaRL* [36] & No & 0.692 \\ End-to-End* [6] & No & 0.780 \\ \hline \hline **Offline Upper Bounds:** & & \\ Offline (Last Layer) & No & 0.853 \\ Offline & No & 1.000 \\ \hline \hline \end{tabular} \end{table} Table 1: \(\Omega_{all}\) accuracy on ImageNet. The results marked with * are taken from [14], as our experiment follows the same conditions. Figure 3: Variations of Top-5 Accuracy on ImageNet ILSVRC-2012 with regard to \(\alpha\). An \(\alpha=0\) corresponds to regular SLDA, whereas \(\alpha=1\) corresponds to SQDA. ## 6 Conclusion & Discussions We presented Deep Streaming Regularized Discriminant Analysis, a generative classifier able to adapt to non-i.i.d. data streams and outperform existing batch and streaming learning algorithms when paired with a CNN. We outperformed SLDA by 5%, iCarl by 11%, and End-to-End by 2%. This is an impressive result considering that both iCarl and End-to-End update the whole network and should intuitively beat a method focusing only on the last layer. This method provides better results than SLDA at the cost of computation and memory, but remains manageable compared to other methods. SQDA is better suited for low-dimensional problems with low class counts, while SRDA manages to adapt to high-dimensional problems with the correct regularization parameter alpha, which can be found at the end of training with minimal additional computational cost. For use cases where one would like to combine the speed of SLDA and the performance of SRDA, one can imagine a model where SLDA is used for rapid learning while SRDA slowly trains in the background, enabling improved accuracy in the long run. Finally, this method represents a step forward in research on Sustainable AI. As presented by [7], Continual Learning is a promising candidate for achieving Sustainable AI. This case of Streaming Learning justifies this choice even further, as it presents a more realistic application that respects the principles of Sustainable AI, including efficiency, privacy, and robustness. Our deep SRDA has many potential applications, including robotics, edge learning, and human-machine interfaces. It removes the need to store the data, as the model can learn on data streams, learning at approximately 28 Hz for our experiment on ImageNet. More importantly, it enables on-device Continual Learning, removing the need for retraining and thus saving resources. ## Appendix The models were trained using these parameters: * **Offline**: Same parameters as [14]. SGD for 90 epochs, with \(lr=0.1\) with decay at 10, 30, and 60 epochs, \(momentum=0.9\) and weight decay of \(10^{-4}\). * **iCarl**: Parameters from [36], and stored 20 exemplars per class. * **ExStream**: Same parameters as offline with 20 exemplars per class. ## Acknowledgments This work was performed as part of the DREAM project. 
The DREAM project received funding from the European Union's European Innovation Council (EIC) under grant agreement No 101046451.
2309.17273
Characterizing losses in InAs two-dimensional electron gas-based gatemon qubits
The tunnelling of Cooper pairs across a Josephson junction (JJ) allows for the nonlinear inductance necessary to construct superconducting qubits, amplifiers, and various other quantum circuits. An alternative approach using hybrid superconductor-semiconductor JJs can enable superconducting qubit architectures with all-electric control. Here we present continuous-wave and time-domain characterization of gatemon qubits and coplanar waveguide resonators based on an InAs two-dimensional electron gas. We show that the qubit undergoes a vacuum Rabi splitting with a readout cavity and we drive coherent Rabi oscillations between the qubit ground and first excited states. We measure qubit relaxation times to be $T_1 =$ 100 ns over a 1.5 GHz tunable band. We detail the loss mechanisms present in these materials through a systematic study of the quality factors of coplanar waveguide resonators. While various loss mechanisms are present in III-V gatemon circuits, we detail future directions in enhancing the relaxation times of qubit devices on this platform.
William M. Strickland, Lukas J. Baker, Jaewoo Lee, Krishna Dindial, Bassel Heiba Elfeky, Patrick J. Strohbeen, Mehdi Hatefipour, Peng Yu, Ido Levy, Jacob Issokson, Vladimir E. Manucharyan, Javad Shabani
2023-09-29T14:23:28Z
http://arxiv.org/abs/2309.17273v2
# Characterizing losses in InAs two-dimensional electron gas-based gatemon qubits ###### Abstract The tunnelling of Cooper pairs across a Josephson junction (JJ) allows for the nonlinear inductance necessary to construct superconducting qubits, amplifiers, and various other quantum circuits. An alternative approach using hybrid superconductor-semiconductor JJs can enable a superconducting qubit architecture with full electric field control. Here we present continuous-wave and time-domain characterization of gatemon qubits based on an InAs 2DEG. We show that the qubit undergoes a vacuum Rabi splitting with a readout cavity and we drive coherent Rabi oscillations between the qubit ground and first excited states. We measure qubit relaxation times to be \(T_{1}=100\) ns over a 1.5 GHz tunable band. While various loss mechanisms are present in III-V gatemon circuits, we detail future directions for enhancing the coherence times of qubit devices on this platform. The superconducting qubit is a hallmark solid-state system that displays quantum coherence and strong light-matter coupling [1; 2; 3; 4]. Recently, the coherence times of fixed-frequency planar transmon qubits have exceeded 300 μs [5], and further improvements are expected with improved materials and fabrication [6]. A common design choice is to introduce flux tunability of a qubit or coupler for fast, high-fidelity single-qubit control and two-qubit gates [7; 8], almost exclusively realized by flux-sensitive superconducting quantum interference devices (SQUIDs) [9; 10; 11; 12]. An architecture based on flux-biased SQUIDs may lead to future complications, however. The heat load induced by milliampere-level currents flowing through resistive wires can impose substantial cooling requirements as the scale of superconducting qubit chips increases. Stray magnetic fields in higher-density qubit arrays can also cause irremediable crosstalk. In addition, low-frequency \(1/f\)-type flux noise can limit the dephasing times of flux-tunable qubits, and its origin and mitigation is an active area of research [13; 14; 15; 16; 17]. An all-electric tunability scheme may prove to be beneficial in large-scale quantum processors, and JJs based on hybrid superconductor-semiconductor (S-Sm) materials are one interesting candidate to realize this. A hybrid S-Sm Josephson junction device has current flow facilitated by Andreev bound states in the semiconductor weak link. By biasing with an applied gate voltage, one can tune the Fermi level in the semiconductor and the occupation of Andreev bound states, effectively controlling the conduction through the junction. It was shown that InAs is an excellent candidate for a proximitized semiconductor because it forms an Ohmic contact with superconducting metals such as Al [18]. It was discovered later that thin films of Al (111) can grow epitaxially on InAs (100) by molecular beam epitaxy [19; 20; 21], enabling a high-quality contact [22; 23; 24]. The incorporation of an S-Sm junction in a superconducting qubit was demonstrated in Refs. [25] and [26] using an InAs nanowire. Voltage-tunable Josephson junctions have since made many appearances in qubits [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], couplers [37; 38; 39; 40; 41; 42; 43; 44], and other elements, such as amplifiers and nonreciprocal elements [45; 46; 47]. Of note, a wafer-scale architecture was implemented in the form of an InAs two-dimensional electron gas (2DEG) in Ref. [27], making the qubit processing more amenable to bottom-up fabrication. 
While fabrication reproducibility and lossy III-V materials have prevented the wider adoption of the 2DEG-based gatemon qubit architecture, the prospect of using the voltage-tunable junction in tunable couplers between qubits to implement low-power, fast two-qubit gates makes this a potentially useful and interesting near-term material platform to study and integrate with state-of-the-art superconducting qubits. We report on the coherent manipulation of InAs 2DEG-based gatemon qubits. We tune the qubit frequency over 1.5 GHz and show that the qubit undergoes a vacuum Rabi splitting with the readout resonator. We drive coherent Rabi oscillations between the ground and first excited states of the qubit, and by fitting the decay of these oscillations we find a characteristic decay time \(T_{2}^{\rm Rabi}=98\) ns. We measure the energy relaxation times \(T_{1}\) of the gatemon qubits over a wide gate voltage range and find a maximum \(T_{1}=102\) ns, where \(T_{1}\) generally increases with decreasing qubit frequency. Finally, we outline in detail future steps which can enhance gatemon \(T_{1}\) times. The 2DEG is realized by an InAs quantum well grown near the surface by molecular beam epitaxy. A schematic of the heterostructure is shown in Fig. 1(a). An epi-ready, 500 μm thick, Fe-doped, semi-insulating InP substrate is loaded into an ultra-high-vacuum molecular beam epitaxy chamber. The native oxide is thermally desorbed, followed by the growth of an In\({}_{0.53}\)Ga\({}_{0.47}\)As/In\({}_{0.52}\)Al\({}_{0.48}\)As superlattice, a 100 nm thick In\({}_{0.52}\)Al\({}_{0.48}\)As layer, and a 400 nm thick In\({}_{x}\)Al\({}_{1-x}\)As graded buffer layer. The composition is graded from \(x=0.52\) to 0.81. The quantum well is then grown, consisting of layers of InGaAs, InAs, and InGaAs with thicknesses of 4 nm, 4 nm, and 10 nm, respectively. The temperature is then lowered and a 30 nm thick layer of Al is deposited _in-situ_. Details of the growth are given in [48; 49; 50]. Schematics of the device fabrication process are also shown in Fig. 1, with a view of the surface shown in the left-hand panel, and a tilted, side view of the layers shown on the right panel. The process starts by dicing a \(7\times 7\) mm piece from the wafer. We use electron beam lithography to define the patterns, and polymethyl methacrylate is used as the electron beam resist. Figure 1: **Fabrication procedure and optical image:** The left column shows a top view of the surface, while the right column shows the side view, where each layer is clearly visible and noted. The layer structure is shown in (a), where layers of InAlAs (teal), InAs/InGaAs (green/dark green), and Al (blue) are grown on an InP substrate (gray). The Al and III-V layers are etched in order to define the microwave circuit, as shown in (b). The Josephson junction shown in (c) is then defined with an Al etch. The width of the superconducting electrodes is nominally 5 μm and they are separated by 100 nm (not to scale). We then blanket deposit a layer of AlO\({}_{x}\) (white), pattern the gate electrodes, and deposit an Al gate electrode, as shown in (d). An optical image of the device after fabrication is shown in (e). The qubit has a characteristic frequency \(f_{Q}\) controlled by the gate voltage \(V_{G}\) and is coupled to a readout resonator with a coupling strength \(g\), set by the coupling capacitance \(C_{g}\). An external drive line is coupled with a strength of \(\kappa\), set by the capacitance \(C_{\kappa}\), which drives transitions in the qubit. The equivalent circuit diagram is shown in (f). The Josephson inductance \(L_{J}\) is shunted to ground by a capacitance \(C_{S}\). Input and output lines are coupled inductively to the readout resonator (orange). 
Figure 2: **Continuous wave measurement** (a) Measuring \(|S_{21}|\) across the transmission line we find absorption at four distinct frequencies corresponding to the resonant frequencies of the readout resonators. (b) The junction gate voltage \(V_{G}\) tunes the qubit frequency, and near the readout resonator frequency a vacuum Rabi splitting is observed. We find this corresponds to a coupling strength of \(g/2\pi\) = 95 MHz. To etch the native Al layer, we use the wet chemical etchant Transene Type D, and to etch the epitaxial III-V layers, we use a solution consisting of phosphoric acid (H\({}_{3}\)PO\({}_{4}\), 85%), hydrogen peroxide (H\({}_{2}\)O\({}_{2}\), 30%), and deionized water in a volumetric ratio of 1:1:40. The first lithography and etching step defines the microwave circuit, where we etch the native aluminum layer and the III-V layers in successive steps. The resulting pattern (to scale) can be seen in Fig. 1(b). We next expose and etch a thin, 100 nm long (the separation of the two aluminum leads), 5 μm wide strip to define the planar Josephson junction. This can be seen in Fig. 1(c), where the inset of the left panel shows a zoomed-in image of the junction area (exposed semiconductor region enlarged for visibility). Following the deposition of a 40 nm blanket layer of AlO\({}_{x}\) to serve as a gate dielectric, we pattern the gates and deposit a 50 nm thick Al layer for the gate electrodes, either by thermal evaporation or by sputtering. The AlO\({}_{x}\) gate dielectric can be seen in Fig. 1(d) as an opaque white layer over the whole chip. We report on a chip containing four qubits, each coupled capacitively to a readout resonator, drive line, and gate electrode. The readout resonators are coupled inductively to a common feedline. An optical image of one such qubit is shown in Fig. 1(e) with an equivalent circuit diagram shown in Fig. 1(f). We use Ansys Q3D Extractor to calculate the Maxwell capacitance matrix. We find that the qubit shunt capacitance is \(C_{S}=62.7\) fF, giving an estimated charging energy of \(E_{C}/h=e^{2}/2C_{S}h=309\) MHz, where \(e\) is the elementary charge and \(h\) is Planck's constant. The Josephson junction provides a nonlinear inductance \(L_{J}\) in parallel with a shunt capacitance. By tuning the current through the junction, the top gate electrode sets a voltage \(V_{G}\) which controls the qubit frequency \(f_{Q}\). At a qubit frequency of \(f_{Q}=6\) GHz, the qubit is detuned from the readout resonator by \(>1\) GHz, allowing for dispersive readout of the qubit state. The Josephson energy at this frequency would be \(E_{J}=16\) GHz, giving a ratio of \(E_{J}/E_{C}=52\) and satisfying the transmon condition \(E_{J}\gg E_{C}\). We note that the critical current through the Josephson junction at this frequency is \(I_{C}=30\) nA. 
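These device parameters can be cross-checked with the standard transmon relations \(f_{Q}\approx\sqrt{8E_{J}E_{C}}-E_{C}\) and \(E_{J}=\hbar I_{C}/2e\); the short sketch below reproduces the numbers quoted above (a consistency check for the reader, not part of the device analysis).

```python
import numpy as np

e = 1.602176634e-19          # elementary charge (C)
h = 6.62607015e-34           # Planck constant (J s)

C_S = 62.7e-15               # shunt capacitance (F)
E_C = e**2 / (2 * C_S) / h   # charging energy in Hz -> ~309 MHz
E_J = 16e9                   # Josephson energy in Hz, as quoted in the text

f_Q = np.sqrt(8 * E_J * E_C) - E_C   # transmon frequency -> ~6 GHz
I_C = 4 * np.pi * e * E_J            # I_C = 2e E_J / hbar -> ~32 nA

print(f"E_C/h = {E_C / 1e6:.0f} MHz, E_J/E_C = {E_J / E_C:.0f}")
print(f"f_Q = {f_Q / 1e9:.2f} GHz, I_C = {I_C * 1e9:.0f} nA")
```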
A \(\lambda/4\) readout resonator with a frequency of \(f_{\mathrm{r}}=7.14\) GHz is coupled capacitively to the qubit with an estimated coupling strength \(g/2\pi=109\) MHz. We note that the readout resonator frequency is shifted down from the design value of \(f_{\mathrm{r}}^{0}=7.56\) GHz [51] due to an appreciable kinetic inductance of the thin-film Al, corresponding to a kinetic inductance fraction of 10%. An external drive line is coupled to the qubit with a coupling strength of \(\kappa/2\pi=396\) kHz. The devices are mounted in a commercially available low-loss cavity designed by QDevil. The chip is connected to a printed circuit board by aluminum wire bonds. The cavity is mounted on the mixing chamber of a cryogen-free dilution refrigerator with a base temperature of 15 mK. Signals are attenuated before being passed through the transmission line on the chip. Outgoing signals are then sent through a travelling-wave, quantum-limited parametric amplifier and further amplified by a low-noise amplifier mounted on the 4 K plate, followed by 2 subsequent room-temperature amplifiers. Pulsed signals are generated by an arbitrary waveform generator with a 1 GSa/s sampling rate and mixed with a continuous microwave source. The outgoing signal is then demodulated by an IQ mixer before being recorded by a digitizer with a sampling rate of 500 MSa/s. Measuring the complex transmission \(|S_{21}|\) across the feedline as a function of frequency, we find that there are four sharp dips in magnitude at distinct frequencies, shown in Fig. 2(a), corresponding to each of the four readout resonators on the chip, labelled according to their placement on the chip from left to right. Sweeping the top gate voltage \(V_{G}\), and looking at resonator R3, we find that the qubit and readout resonator modes exhibit a vacuum Rabi splitting, where the splitting in frequency is equal to twice \(g/2\pi\). This can be seen in Fig. 2(b), where the minimum detuning of the two modes is noted by an arrow. The bare frequencies of the two modes as a function of gate voltage are shown as white dashed lines. Extracting the measured coupling strength, we find that \(g/2\pi=95\) MHz, within 15% of the value expected from finite element simulations. Figure 3: **Two tone spectroscopy** (a) Applying a drive tone on the qubit at a frequency of \(f_{Q}=6.55\) GHz, detuned from the readout resonator by \(\Delta=610\) MHz, the readout resonator dispersively shifts when the drive is turned on. (b) Sweeping the top gate voltage \(V_{G}\) and applying a drive tone with varying frequency \(f_{\mathrm{drive}}\), we find that two-tone spectroscopy reveals the qubit response to gate voltage. We utilize the strong coupling to the readout resonator to perform dispersive measurements of the qubit state [4]. In a coupled qubit-resonator system in the dispersive regime, with detuning large compared to the coupling, \(g/\Delta\ll 1\), we expect the readout resonator to have a qubit-state-dependent frequency, \(f_{r}\pm\chi\), where \(\chi\) is the dispersive shift. Thus, for a sufficiently small linewidth \(\kappa\lesssim\chi\), a measurement of the transmission through the resonator allows for the unique determination of the qubit state. 
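As a rough consistency check (an estimate using the standard transmon dispersive-shift expression \(\chi=-g^{2}E_{C}/[\Delta(\Delta-E_{C})]\) from circuit QED, not a calculation from this work), the parameters quoted above give a shift of the same order as the qubit-state-dependent shift reported below.

```python
g = 95e6          # coupling strength g/2pi (Hz)
E_C = 309e6       # charging energy (Hz)
Delta = -610e6    # qubit-resonator detuning (Hz), qubit below the resonator

chi = -g**2 * E_C / (Delta * (Delta - E_C))
print(f"chi/2pi = {chi / 1e6:.1f} MHz, 2*chi = {2 * chi / 1e6:.1f} MHz")
# -> roughly -5 MHz and -10 MHz
```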
We detune the qubit from the readout resonator by applying a gate voltage of \(V_{G}=-3.535\) V. We then continuously measure the transmission across the feedline, with the probe tone set to the measured frequency of the resonator with no drive applied. Sweeping the frequency of the drive tone at a power of \(P_{\text{drive}}=\) -36 dBm, we find that at a frequency \(f_{\text{drive}}=5.62\) GHz, the readout resonator exhibits a shift down in frequency corresponding to the excitation of the qubit \(|0\rangle\) to \(|1\rangle\) transition. This can be seen in Fig. 3(a), where we measure a shift of 8.66 MHz in the resonator frequency when the drive is on. The continuous microwave tone incoherently drives the qubit \(|0\rangle\) to \(|1\rangle\) transition, therefore shifting the readout resonator by less than \(2\chi\). We then repeat this measurement while sweeping the gate voltage, modifying the current through the junction and thus the qubit frequency. As shown in Fig. 3, we find that the drive frequency that causes a characteristic shift in the resonator generally decreases with gate voltage, which is expected for a depleting junction, which decreases the qubit frequency. Furthermore, the qubit response to gate voltage is non-monotonic and is similar to what has been observed in the past for gatemon qubits in Refs. [25, 26, 27, 32, 36]. It was discussed in Ref. [32] that in an InAs nanowire device, at very low junction critical currents on the order of 10 nA, the junction becomes subject to universal conductance fluctuations. We find wide tunability, \(>1\) GHz, of the qubit frequency via gate voltage. A two-level quantum system undergoes Rabi oscillations when driven at the transition frequency between the two levels. The drive coherently rotates the qubit between the \(|0\rangle\) and \(|1\rangle\) states, where the final qubit state depends on the width of the drive pulse \(\tau_{\text{Rabi}}\). We measure the dynamics of the qubit in the time domain by coherently driving Rabi oscillations and measuring the characteristic lifetime. Pulsed signals are generated by an arbitrary waveform generator with a 1 GSa/s sampling rate and mixed with a continuous microwave source. Simultaneously, a continuous probe tone set to the readout resonator frequency is used to dispersively measure the qubit state. The outgoing signal is then demodulated and a homodyne detection voltage \(V_{H}\) is measured by a digitizer with a 500 MSa/s sampling rate. As shown in Fig. 4(a), we send pulses to the drive line of the qubit at a frequency of \(f_{01}=6.56\) GHz with the gate voltage set to \(V_{G}=\) -3.530 V. As we vary the drive pulse width, the homodyne detection voltage \(V_{H}\) undergoes oscillations between two values, periodic in the pulse width \(\tau_{\text{Rabi}}\). We find that the frequency of these oscillations decreases with decreasing power, as is expected for Rabi oscillations. Taking a linecut at high power, \(P_{\text{drive}}=\) -52.5 dBm, we fit the Rabi oscillations to an exponentially decaying sine wave with a linear slope. This linear contribution to the signal \(V_{H}\) was identified in Ref. [36] and could possibly be due to leakage into higher levels [52]. There are four free parameters in the fits to the Rabi oscillations: the time constant characterizing the exponential decay \(T_{2}^{\text{Rabi}}\), the oscillation frequency, and the slope and y-intercept of the linear contribution. Fitting the data with the method of least squares, we extract a time constant of \(T_{2}^{\text{Rabi}}=98\) ns. In Fig. 4(c) we plot the extracted Rabi frequency (blue markers) versus the square root of the drive power. In the low-power regime, the frequency follows \(\sqrt{P_{drive}}\), shown as an orange line. By calibrating the pulse width to half a Rabi period, one is able to coherently drive the qubit to the \(|1\rangle\) state. 
By calibrating the pulse width to half a Rabi period, one is able to coherently drive the qubit to the \(|1\rangle\) state. By then measuring the decay to the \(|0\rangle\) state averaged over many runs, one can fit the characteristic time of this decay to extract the energy relaxation time \(T_{1}\). At a qubit frequency of \(f_{Q}=6.51\) GHz, we apply a 10 ns wide pulse with a drive power of \(P_{\text{drive}}=-41\) dBm and average over \(2\times 10^{5}\) runs. The decay of the measured signal \(V_{H}\) is fit to a decaying exponential, and we extract a time constant of \(T_{1}=102\pm 18\) ns, as shown in the inset of Fig. 5(a). We conduct similar measurements on the other two qubits and obtain a spread of \(T_{1}\) values: the averages \(\overline{T_{1}}\) are 50 ns for device Q2 and 110 ns for device Q3. The maximum \(T_{1}\) measured for the three qubits is 140 ns.

We measure \(T_{1}\) over a range of gate voltages and qubit frequencies. As shown in Fig. 5(a), the qubit frequency \(f_{Q}\) again exhibits a non-monotonic tuning between 6.5 GHz and 5.2 GHz over the gate voltage range \(-3.545\) V \(<V_{G}<-3.535\) V, consistent with the measurement from Fig. 3(b). We find that the measured \(T_{1}\) values also vary over this range, generally increasing with decreasing qubit frequency.

Qubit lifetimes can be limited by a number of different loss mechanisms, such as capacitive, inductive, radiative, Purcell, and quasiparticle losses [53]. To elucidate the effects of some of these loss mechanisms, in Fig. 5(b) we plot the measured \(T_{1}\) values in terms of an equivalent qubit quality factor, \(Q=2\pi f_{Q}T_{1}\), as a function of qubit frequency \(f_{Q}\). We compare the measured qubit quality factors to low-power measurements of the internal quality factor \(Q_{\mathrm{int}}\) of three CPWs. These three CPWs underwent varying fabrication conditions in order to pinpoint some of the dominant loss mechanisms in our current qubit device. Each CPW is fabricated with an identical center conductor width and gap to ground, and the complex transmission is fit using the algorithm detailed in Ref. [54].

The first CPW is fabricated using a procedure nominally identical to that of the qubit chip, consisting of a 30 nm thick epitaxial Al layer as the superconductor and a blanket 40 nm thick AlO\({}_{x}\) gate dielectric layer. We measure a \(Q_{\mathrm{int}}\) of around \(2.5\times 10^{3}\). In a second device, we deposit an additional 100 nm Al layer by sputtering following a 5 minute argon plasma cleaning, and then deposit a 40 nm blanket AlO\({}_{x}\) layer. We find the measured \(Q_{\mathrm{int}}\) increases to about \(1.0\times 10^{4}\), suggesting that inductive loss in the thin Al layer could contribute significantly to the loss [55]. Finally, we measure \(Q_{\mathrm{int}}\) of a third device, consisting of an additional 100 nm of sputtered Al but no deposited AlO\({}_{x}\) layer. We find that the measured \(Q_{\mathrm{int}}\) increases to \(3.7\times 10^{4}\), suggesting that dielectric loss in the 40 nm AlO\({}_{x}\) layer dominates the loss in the second device. The measured \(Q_{\mathrm{int}}\) of the third device is similar to what has been measured previously on InP in Ref. [27]; however, an accurate measurement of the loss of the bulk InP substrate would be of interest and is a topic of future study.
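The exponential fit used to extract \(T_{1}\) and its conversion to an equivalent quality factor \(Q=2\pi f_{Q}T_{1}\) are summarized in the sketch below; the decay trace is synthetic and stands in for the averaged homodyne signal.

```python
# Fit of a relaxation trace to a decaying exponential, and conversion of
# the extracted T1 into an equivalent qubit quality factor Q = 2*pi*f_Q*T1.
# The trace here is synthetic and stands in for the averaged V_H data.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, t1, amp, offset):
    return amp * np.exp(-t / t1) + offset

t = np.linspace(0, 600e-9, 121)            # delay after the pi pulse (s)
v_h = decay(t, 102e-9, 1.0, 0.0)
v_h += np.random.default_rng(1).normal(0, 0.02, t.size)

popt, _ = curve_fit(decay, t, v_h, p0=[100e-9, 1.0, 0.0])
t1_fit = popt[0]
f_q = 6.51e9                               # qubit frequency (Hz)
print(f"T1 = {t1_fit * 1e9:.0f} ns")
print(f"Q  = {2 * np.pi * f_q * t1_fit:.0f}")  # ~4x10^3 for T1 ~ 100 ns
```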
It is also possible to extend the lifetimes of InAs 2DEG gatemons beyond the limit set by dielectric loss in the InP substrate by employing strategies such as deep reactive ion etching [56], flip-chip [39], epitaxial lift-off [57], substrate backside etching or polishing, or the growth of III-Vs directly on Si [58]. The CPW measurements presented here detail immediate next steps to enhance gatemon coherence times by reducing inductive and capacitive losses. Dielectric loss from the blanket AlO\({}_{x}\) layer can be reduced by patterning and lifting off the gate dielectric, or by using hexagonal boron nitride, a low-loss, small form factor gate dielectric [59, 60]. In order to reduce inductive losses, a thicker _in-situ_ Al layer can be deposited, or a thick, low-loss superconducting layer can be deposited _ex-situ_ via sputtering, given a sufficient cleaning step before the deposition. Decreasing inductive losses in the superconducting film will also manifest as decreased Purcell loss through the readout resonator. We note that inductive loss in the junction, radiative loss through the gate line, and quasiparticle loss may also play a role in limiting \(T_{1}\) [61].

Figure 4: **Rabi oscillations:** (a) Qubit \(|1\rangle\) state population \(P_{|1\rangle}\) as a function of Rabi pulse width \(\tau_{\mathrm{Rabi}}\) with the drive power \(P_{\mathrm{drive}}\) varied. (b) Fitting a linecut of the data in (a) at a power of -52.5 dBm to a decaying sinusoid, we find a time constant \(T_{2}^{\mathrm{Rabi}}=98\,\mathrm{ns}\) characterizing the coherence. (c) Extracting the Rabi oscillation frequency \(\nu_{\mathrm{Rabi}}\) at different drive powers, we find at low power that the data fit roughly to \(\sqrt{P_{\mathrm{drive}}}\). At high power, the data deviate from the expected square root dependence, possibly due to high power nonlinearities.

## I Conclusion

We have presented measurements on an InAs 2DEG-based gatemon qubit. We observe a qubit-state-dependent shift of 9 MHz in the readout resonator due to driving of the qubit, where the qubit and resonator modes undergo a vacuum Rabi splitting with a coupling strength of \(g/2\pi=95\) MHz. Measuring two-tone spectroscopy of the qubit with gate voltage, we find the qubit frequency is tunable by over 1 GHz and undergoes mesoscopic conductance fluctuations. By varying the drive pulse width we observe the qubit undergo Rabi oscillations between the \(\ket{0}\) and \(\ket{1}\) states, and we measure coherence times across three qubits with a maximum \(T_{1}\) of 140 ns. We find that over a \(>1\) GHz frequency range, \(T_{1}\) generally decreases with increasing qubit frequency. We investigate the dominant inductive and capacitive loss mechanisms in our device and outline a path forward for InAs 2DEG gatemon qubits with coherence times approaching and possibly exceeding the limits set by dielectric losses in the InP substrate.

## II Acknowledgements

We thank Joseph O'Connell Yuan for fruitful conversations. We acknowledge support from the Army Research Office agreements W911NF2110303 and W911NF2210048. We also acknowledge support from MURI ONR award no. N00014-22-1-2764 P00001. W. M. S. acknowledges funding from the ARO/LPS QuaCGR Graduate Fellowship.

Figure 5: \(T_{1}\) **measurements** (a) Applying a \(\pi\) pulse and measuring in time \(t\), we find that the decay of the qubit state can be fit to an exponential to extract the time constant for qubit decay, \(T_{1}=143\) ns. The \(T_{1}\) measurement is repeated over a range of gate voltages.
Qubit \(T_{1}\) values (green) and frequencies (pink) as a function of gate voltage are plotted. The qubit quality factor \(Q=2\pi f_{Q}T_{1}\) is plotted versus qubit frequency in (b). For reference, we include quality factor measurements from three test coplanar waveguide resonators (black star, square, and triangle).
2309.12684
Visualization According to Statisticians: An Interview Study on the Role of Visualization for Inferential Statistics
Statisticians are not only one of the earliest professional adopters of data visualization, but also some of its most prolific users. Understanding how these professionals utilize visual representations in their analytic process may shed light on best practices for visual sensemaking. We present results from an interview study involving 18 professional statisticians (19.7 years average in the profession) on three aspects: (1) their use of visualization in their daily analytic work; (2) their mental models of inferential statistical processes; and (3) their design recommendations for how to best represent statistical inferences. Interview sessions consisted of discussing inferential statistics, eliciting participant sketches of suitable visual designs, and finally, a design intervention with our proposed visual designs. We analyzed interview transcripts using thematic analysis and open coding, deriving thematic codes on statistical mindset, analytic process, and analytic toolkit. The key findings for each aspect are as follows: (1) statisticians make extensive use of visualization during all phases of their work (and not just when reporting results); (2) their mental models of inferential methods tend to be mostly visually based; and (3) many statisticians abhor dichotomous thinking. The latter suggests that a multi-faceted visual display of inferential statistics that includes a visual indicator of analytically important effect sizes may help to balance the attributed epistemic power of traditional statistical testing with an awareness of the uncertainty of sensemaking.
Eric Newburger, Niklas Elmqvist
2023-09-22T07:47:58Z
http://arxiv.org/abs/2309.12684v1
Visualization According to Statisticians: An Interview Study on the Role of Visualization for Inferential Statistics

###### Abstract

Statisticians are not only one of the earliest professional adopters of data visualization, but also some of its most prolific users. Understanding how these professionals utilize visual representations in their analytic process may shed light on best practices for visual sensemaking. We present results from an interview study involving 18 professional statisticians (19.7 years average in the profession) on three aspects: (1) their use of visualization in their daily analytic work; (2) their mental models of inferential statistical processes; and (3) their design recommendations for how to best represent statistical inferences. Interview sessions consisted of discussing inferential statistics, eliciting participant sketches of suitable visual designs, and finally, a design intervention with our proposed visual designs. We analyzed interview transcripts using thematic analysis and open coding, deriving thematic codes on statistical mindset, analytic process, and analytic toolkit. The key findings for each aspect are as follows: (1) statisticians make extensive use of visualization during all phases of their work (and not just when reporting results); (2) their mental models of inferential methods tend to be mostly visually based; and (3) many statisticians abhor dichotomous thinking. The latter suggests that a multi-faceted visual display of inferential statistics that includes a visual indicator of analytically important effect sizes may help to balance the attributed epistemic power of traditional statistical testing with an awareness of the uncertainty of sensemaking.

**Keywords:** Inferential statistics, qualitative interview study, thematic coding, statistical visualization.

## 1 Introduction

Statistics was one of the early adopters of data visualization, and graphical methods in themselves are a valid form of inference [7]. Even today, statisticians remain some of the more prolific users of data visualization, with certain statistical tests routinely involving visual inspection and graphical inference. As a case in point, John W. Tukey's 1977 book on _exploratory data analysis_ (EDA) [30] established the field of _visual statistics_ [4], where interactive visual representations are used to inform and generate hypotheses, or even confirm them. While workflows differ between each practicing statistician, it is clear that most have an intimate and working knowledge of visualization for sensemaking.

Interestingly, visualization is also commonly described as a key enabling technology for helping people understand and make decisions based on data [15, 19, 24]. Unlike arcane statistical tests and mathematical formalism, interactive visual representations straightforwardly invite users to overview, filter, and drill into [27] data with little prerequisite knowledge. Given appropriate prompting and visual representations, even laypeople can manually perform statistical tests such as comparing averages in time-series data [10, 1], fitting trend lines to point clouds [9], and making mean value judgments in multi-class scatterplots [14]. The prevalence of visualization for both novices and experts alike suggests that **visualization could become a bridge for giving people access to advanced statistical workflows**.
In this paper, we perform an interview study with professional statisticians to understand their practices for analyzing and making decisions using visualization. Our research team is well placed to do this work: the first author is a former practicing statistician with a career spanning two decades in the U.S. Census Bureau. Using this author's professional network, we recruit a total of 18 statisticians with a combined 350 years of experience (average 19.7 years). We focus on three questions:

1. How do statisticians use visualization in their daily analytic work?
2. What mental models of inferential statistics do statisticians have?
3. How can we design a representation for statistical inference that builds on the current practices of professional statisticians?

Our interview study was conducted as a semi-structured interview over Zoom videoconference. Each session involved three phases: (1) statistical practice; (2) graphical elicitation of the participant's internal understanding of inferential statistics; and (3) review of a design probe [13]: prototype visual representations for statistical inference. All sessions were professionally transcribed, and the first author used their statistical expertise and experience to code the transcripts using an open-coding approach [20]. We then derived our findings using thematic analysis. At a high level, we found that visualization tends to be a key activity in most statisticians' daily workflow, and not just during presentation. Furthermore, our participants mostly reported mental models that are visually based. Finally, we found that our participants tend to abhor dichotomous thinking and distrust insights lacking multiple lines of evidence.

## 2 Background

Here we review the key background research on statistical inference, visual statistics, and visualization.

### Statistical Inference

Inferential statistics involves using statistical methods to make inferences about a population based on a sample of data [6]. The field can be traced back to the work of Ronald Fisher, who is considered one of the founding fathers of modern statistics [16, 12]. The traditional approach to statistical inference is _confirmatory data analysis_ (CDA) [21, 26], which typically involves specifying a hypothesis and using a statistical test to determine whether a counter (null) hypothesis might explain the data. CDA is often used in experimental research and is a critical component of modern science. _Exploratory data analysis_ (EDA), on the other hand, first coined by John Tukey in the 1970s [30], is an approach to analyzing data that involves exploring and summarizing the data to gain insights and identify patterns. Tukey argued that EDA was an essential first step in data analysis that is necessary for understanding the data before applying confirmatory techniques. His work became part of the foundation for visualization, where interactive visual representations are used to inform, generate, or even confirm hypotheses. Most statisticians agree that both EDA and CDA have important roles to play in data analysis.

### Visual Statistics

There are many routine tasks for which a practicing statistician will turn to graphical methods [7], including model selection (using residual plots or Q-Q plots to verify assumptions), outlier detection (using scatterplots or histograms), and quality control (plotting the data). There has been a growing interest in the visualization community in how graphical representations of data can facilitate higher-order tasks, such as making inferences.
To address this issue, Buja et al. [4] proposed frameworks for _visual statistics_, which rely on visual representations to serve as a test statistic while human cognition functions as the statistical test. They demonstrated their approach using what amounts to a "Rorschach test" of random data in a lineup of small multiples, where only one of the multiples utilized real data. In subsequent research, Wickham et al. [32] adapted this concept for the visualization community and discussed how it can be applied to common visualizations to reveal new insights while minimizing false positives. Beecham et al. [3] implemented the lineup protocol for graphical inference in geographic clustering visualizations, while Correll et al. [11] used it to examine the effectiveness of common distribution graphics in displaying gaps or outliers.

Figure 1: **Visualization as a bridge.** Understanding how professional statisticians use and think about visualization may help designing effective visualizations to support sensemaking for everyone. Note that these characters bear no likeness to the original participants and their quotes have been slightly edited for brevity. (Images by MidJourney v5.)
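To make the lineup protocol concrete, the sketch below shows a minimal version of it under our own choice of null model (permuting the response to destroy any association); the plotting details are illustrative, not those of the cited implementations. The real scatterplot is hidden among nineteen decoys, so a viewer who can pick it out provides evidence against the null at roughly \(p=1/20=0.05\).

```python
# Minimal sketch of the lineup protocol: the real data are hidden among
# decoy panels drawn under the null hypothesis (here, permuting y to
# destroy any x-y association). A viewer who picks out the real panel
# provides evidence against the null at roughly p = 1/n_panels.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y_real = 0.6 * x + rng.normal(scale=0.8, size=50)  # real data: a weak trend

n_panels = 20                                 # 1/20 = 0.05, the usual alpha
true_pos = int(rng.integers(n_panels))        # where the real plot hides
fig, axes = plt.subplots(4, 5, figsize=(10, 8), sharex=True, sharey=True)
for i, ax in enumerate(axes.flat):
    y = y_real if i == true_pos else rng.permutation(y_real)
    ax.scatter(x, y, s=8)
    ax.set_title(str(i), fontsize=8)
plt.show()
print(f"The real data are in panel {true_pos}.")
```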
### Everyone a Statistician

Novices often struggle with choosing methods, understanding assumptions, interpreting results, and implementing tests when they are in need of statistics to analyze and understand the data they deal with in their everyday lives [22]. Interestingly, a visual approach to statistics can help even these novice users perform advanced statistical tests [15, 19, 24]. As a case in point, recent work has shown how even novice users can compare averages in time-series data [10, 1], fit trend lines to point clouds [9], and make mean value judgments in multi-class scatterplots [14] without specialized training or knowledge.

The theoretical basis of our work is grounded in the observation that visualization represents common ground between novices and experts alike, and thus that visualization can become a bridge for giving people access to advanced statistical workflows. However, to achieve this, we must first understand the visualization practices of expert statisticians. Below we describe our approach to achieving this goal.

## 3 Method

We conducted a qualitative study through semi-structured interviews with professional statisticians. Our questioning focused on participants' relationship with visualization and their understanding of statistical inference, with an emphasis on frequentist, parametric statistical tests (even if many expressed their understanding of non-parametric and/or Bayesian approaches as well). Data were coded according to a composite scheme that mixed a priori and emergent codes.

### Positionality Statement

Given the qualitative nature of our evaluation study, the positionality of the authors may have had an impact on our reporting and interpretation of our findings. The first author is a recent Ph.D. in information science and a former statistician with the U.S. Federal government, and the second is a faculty member in information and computer science. We have implemented the following strategies to mitigate the impact of these potential biases:

* During **participant recruitment**, we drew from a mix of analytic communities to ensure a diversity of analytic viewpoints, but also to reach statisticians with experiences separate from ours.
* While **creating the collection instrument**, we mixed three different collection modes (open-ended questions, graphic elicitation, and observations on design probes). This diversity of data types gives the data collection resilience against potential biases.
* In **conducting interviews**, we endeavored to create an environment which would provide participants the comfort of feeling they were having a conversation with an interested and supportive colleague; to both create a space in which they could openly discuss the most technical aspects of their work without fear of alienating their listener, and safely share private thoughts about their work. There is evidence this environment succeeded, as more than one participant expressed their relief that the study's anonymity precautions would ensure none of their employers would know what they had been saying.
* During **coding of the findings**, we used a combination of a priori and emergent codes. A priori coding required us to take a disciplined approach to some of the results, with the potential to directly falsify our initial hypotheses.

### Participants

We recruited 18 paid participants via direct email request. Participants were offered $25 compensation in the form of a gift card. Two thirds of the participants were selected from among members of the American Statistical Association, with guidance from the organization's leadership. With one exception, these participants had no prior professional contact with the study authors. The remaining one third of participants were reached through the authors' professional networks. Our recruitment efforts explicitly targeted statisticians working in three broad industries: government statistical agencies, academia, and private industry. We used the following screening criteria: at least 18 years of age; at least one relevant degree (undergraduate or graduate); 5 or more years of experience as a professional statistician after the completion of their education; and job duties that included statistical inference, such as statistical tests or confidence intervals. Table 1 summarizes the study and Table 2 gives an overview of the participants. As can be seen from the tables, participants varied in their professional experience, educational experience, and demographics.

### Data Collection

All interviews were conducted via live videoconferencing on Zoom using a laptop or desktop computer. Sessions were both video and audio recorded. Video allowed for screen-captures of sketches drawn by participants. Audio recordings were submitted to a transcription service ([http://rev.com](http://rev.com)) to provide accurate text for coding. The researcher took notes during the sessions, including sketches of participant-described visualizations, which the researcher then showed to the participants via the video feed for their approval. The interviewer also took notes immediately after each session to record general impressions and recall details not captured in the moment.

There were a few exceptions to the interview protocol. The first participant used their smartphone for the videoconferencing, which resulted in lower-fidelity transmission of graphics for the third section of the interview. Two of the 18 interviews failed to record, and, therefore, only the researcher's notes captured the outcomes of these two sessions. One interview skipped the graphic elicitation section of the interview, while completing the other two sections in full.
### Interview Script

Interviews followed a script (see supplemental material). Questions in bold were asked word for word of all participants; optional follow-up questions appear indented in the table. Prior to data collection, the researchers conducted two practice interviews using early versions of the interview script and a sketch version of a strawman graphic (see below). Scripts and graphic were refined based upon feedback during these practice runs, and a second strawman graphic was added.

### Design Probes: Strawman Graphics

One phase of our evaluation involved the use of _design probes_ [13, 31] to elicit feedback from our expert participants. In this case, our design probes were visual representations designed for supporting graphical inference by experts. We call these probes "strawman graphics" because we hope that participants will have constructive feedback on each. During the course of our study, we ended up developing three separate strawman graphics (labeled #1, #2, and #3) based on participant feedback and suggestions collected from the interviews. The first was designed prior to the first session, the second during the pilot interviews, and the third came out of the first four interviews. We report on the design iterations of each graphic in the results section.

### Procedure

**Pre-interview.** Scheduling was conducted over email. Participants were asked to fill out a consent form before their interview. This included a small number of demographic and qualifying questions.

**Interview: Preliminaries.** At the start of each session, before recording began, participants were given the opportunity to ask any questions they had. With these preliminaries cleared, recording began.

**Interview: I - Analytic Process (RQ1).** We first asked participants to recall the steps of their analytic process. This framed subsequent discussions as focused on their day-to-day work process. Probes into these processes sought to capture how visualization played a part (or not) in their workflow. Additional probes sought to uncover the statisticians' tacit understanding of their processes.
**Interview: II - Graphic Elicitation (RQ2).** Here we used any references to inferential statistics from the prior interview section as a bridge to turn the conversation toward participants' understanding of inferential statistics in general, and Student t-tests [28] in particular. Participants were asked whether they had a picture in their heads of what a two-mean t-test looked like. They were then asked to draw a picture of that image and walk the interviewer through their sketch.

**Interview: III - Strawman Graphics (RQ3).** The interviewer then presented participants with the design probes: strawman graphics of a t-test constructed by the research team.
Participants were walked through each version (2 versions for the first three interviews, with strawman #3 added to the rest based upon the early results). They were first asked to use the graphics to answer whether the displayed example met a selected significance level (\(p=\{0.1,0.05,0.01,0.001\}\) for strawman #1 and #2, and only \(p=.05\) for strawman #3). The same underlying data were displayed with all three graphics, so that differences in response would reflect the graphic rather than a difference in data samples. After capturing participant performance in using each graphic, 'correct' answers were provided before moving on to the next graphic. Afterward, participants answered a series of questions designed to probe for their understanding or initial misunderstanding, what was missing or wrong about the graphics, or other design recommendations.

**Interview: Closing.** The interviews ended in two phases. Participants were first thanked, and asked on camera whether they had additional questions or reactions to share. They were then given the opportunity to share unrecorded feedback.

### Coding Process

We used thematic coding methodology derived from grounded theory [20] (although we did not use the full grounded theory machinery). The three parts of the interview were coded by separate processes, with some overlap. We used a combination of open and closed coding [20]; the details of the coding scheme are given in the results section. Here is the rough procedure we followed: We first reviewed our notes, looking for themes. Based upon the themes, we created the coding schemas (Table 3) combining a priori codes with open spaces for capturing emergent themes. Tables 3 and 4 denote open codes with '(list)'; other codes are a priori. We then coded the transcripts, first by deriving primary codes and refining them into detailed thematic codes. Finally, we grouped all codes into broad themes.

To check the validity of the coding process, we engaged a second coder external to the research team. Given the nature of the data, this coder had to be a statistician of similar experience to the researchers and participants. The external coder was provided with a 10% random sample of texts (with codes stripped), along with the final state of the codebook. After calculating the inter-coder reliability, we conferred with the external coder to update the codebook appropriately.

## 4 Results

We transcribed and coded interviews for all participants. Here we describe the coding scheme and the design probes. Then we give an overview of the coding results and the inter-rater reliability. Finally, we report our detailed findings for each of the themes. Note that this treatment only gives a high-level overview of our findings; the supplemental material contains all of the detailed results (including additional tables and charts).

### Coding Scheme

We applied three parallel coding schemas to the main body of the interviews based upon three analytical perspectives: expertise, tasks, and tools. Each perspective answers a different fundamental question about participant expectations from inferential statistical methods, potential visualizations of inferential statistics, and visualizations in general.

* **Expertise perspective:** The explicit as well as tacit knowledge that professional statisticians have about their field.
* **Tasks perspective:** Analysis steps that statisticians pursue.
* **Tools perspective:** The tools statisticians employ in their work.
#### 4.1.1 Coding Process

The three perspectives constitute three different dimensions of analysis, orthogonal to one another, any or all of which might be indicated by a single participant statement. For example, P7 described part of their analytic process as:

_And then once we got the data, [...] half the time is understanding the data [...] using descriptive statistics, a graphical illustration, [to see] missing data pattern or some unusual outliers._

This response generated the following summary:

* Preparing and understanding data is a major task.
* Structure, background, distribution, estimates, etc.
* Multitool: descriptive graphics.

We then reviewed the summary to produce these primary codes:

* Data prep is much of stat work (Tacit).
* Structure, Background, Distribution, Estimates, Discontinuities.
* Descriptive Graphics, a broad class of visualizations.

Had a specific tool (such as Histogram or Scatterplot) been mentioned, these would also have been coded. We further grouped primary codes from the expertise perspective into a detailed thematic code, Reality is the authority, which we further group under the broad theme Warrant.

#### 4.1.2 Expertise Perspective

The **Expertise** perspective contrasts the mindset of experts with that of novices. During interviews, the perspective can be summarized by the question, "What is the participant thinking during their interaction with inferential statistics?" The answers come under two broad groupings.

The first group of codes captured the Epistemic Warrants experts attribute to different inferential methods, that is, the degree to which a particular analytic method provides support for or against some claim. Two emergent codes made up this group:

* Build Validity captures those elements of analysis that shore up a reasoned line of evidence.
* Violations to Validity captures those elements of analysis that tend to undercut evidence.

The second code group includes only a single emergent code: Tacit captures the knowledge that experienced statisticians have, through work experience, come to believe about their methods.

#### 4.1.3 Tasks Perspective

The **Tasks** perspective is composed primarily of a priori codes. It emerged out of initial reviews of the interview results, and presumes that there are common steps in the analytic process that all statisticians pursue, regardless of the subfield in which they work. Identifying the core tasks of analysis as perceived by experienced statisticians provides a context in which to place the kinds of inferential statistics tools this research effort hopes to design, and may suggest features the design should incorporate. More broadly, the Tasks perspective sets the context for understanding the other two dimensions of analysis. The initial review of interviews indicated five analytic phases within Tasks (Table 3). As the focus of this research is inferential statistics, a later-stage part of the analytic process, we sacrificed some detail during coding of the earliest work phase. This meant that we consolidated the code groups of understanding the research questions and understanding data sources into a single code, Gather Background. While we initially expected this perspective to only consist of a priori codes, we found during early review that the Communicate Findings codes (labeled Myth and Comm) often included details of what needed to be communicated that gave further insight into the mindset of participants.
Thus, these were recoded as emergent codes, with details captured and summarized for thematic coding using the Expertise (mindset) thematic codes list.

#### 4.1.4 Tools Perspective

The **Tools** perspective captures the specific interfacing tools statisticians choose to employ during their work. Broadly speaking, these tools can either be Visual Metaphors (data visualization) or Symbolic Tools (such as code or written equations). The Visual Metaphor code group includes three summary codes:

* Broad-focus visualizations that may have many uses (such as histograms, from which an analyst might discern the mean, median, variance, skewness, range, location and number of modes, gaps in the distribution, outliers, and shape);
* Narrow-focus visualizations with a single use (such as QQ-norm plots, which are used almost exclusively to test the normality of a distribution); and
* Limits: observations on the limitations of visualization and the human visual system offered by participants.

Since visualization is the focus of this research, we captured the non-visual Symbolic Tools code group with only one summary code and then listed each tool by name.

#### 4.1.5 Graphic Elicitation Coding

Participant graphics were screen-captured along with participant descriptions of the images. We labeled these by type and then grouped them according to their analytic focus.

#### 4.1.6 Strawman Graphics Coding

The strawman graphics section was coded using a combination of a priori and emergent codes (Table 4). These codes captured the following:

1. **Participant Understanding**: Whether the participant correctly assessed the significance at the given alpha value with the graphic;
2. **Participant Affinity for the Strawman Graphic**: Whether participants found the strawman graphic useful, informative, or otherwise saw it in a positive or negative light; and
3. **Participant Design Recommendations**: Any directly stated or implied design recommendations to improve the graphic.

The intention of this phase was to generate informative conversation about, and observations on, potential elements for graphic inference tools. By asking participants to actually use a graphic tool to perform a statistical inference, we hoped to generate deeper insights than merely asking them to comment on a novel graphic. This worked to such an extent that, in addition to revealing several design recommendations, this section of the interviews generated responses that were informative of expert statisticians' understanding of inferential statistics in general.

### Strawman Graphic: Design Iterations

We produced a total of three versions of the strawman graphics used as design probes during the interview study.

**Strawman #1.** This graphic illustrates the overlap of two sample distributions--the blue and orange samples--represented both as a pair of histograms and as a pair of normal curves that had been fit to the samples (Figure 2(a)). The goal for users was to determine whether the two samples were similar enough in their means that they were likely drawn from a common population, or whether they were so unlikely to have been drawn from a single population that they probably represented sub-populations. The graphic included an aid for users to make this determination in the form of a _difference ruler_: a pair of grey lines connected by a double-headed arrow signifying the amount of separation two sample means needed to show to represent a statistically significant difference for a given confidence level (alphas corresponding to a \(p\)-value of .1, .05, .01, or .001). We drew on our own work on fitting bell curves [23] in designing overlapping histograms to represent a statistical test of two samples.
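The length of such a ruler can be made concrete with a standard pooled two-sample t computation; the sketch below is our own illustration of the idea (the paper does not specify the exact construction), with placeholder sample statistics.

```python
# One way to compute the length of strawman #1's "difference ruler"
# (our illustration; the paper does not give the exact construction):
# the smallest separation of two sample means that reaches significance
# at a given alpha under a two-sided pooled two-sample t-test.
import numpy as np
from scipy import stats

def ruler_length(s1, s2, n1, n2, alpha):
    """Critical mean difference for a two-sided pooled t-test."""
    df = n1 + n2 - 2
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)  # pooled SD
    se = sp * np.sqrt(1.0 / n1 + 1.0 / n2)   # std. error of the difference
    return stats.t.ppf(1 - alpha / 2, df) * se

# Placeholder sample statistics (SDs and sizes of the blue/orange samples)
for alpha in (0.1, 0.05, 0.01, 0.001):
    print(f"alpha = {alpha}: ruler = {ruler_length(1.0, 1.2, 40, 40, alpha):.3f}")
```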
**Strawman #2.** This iteration (Figure 2(b)) was suggested by a participant during one of the pre-interviews. It attempts to represent the distribution of the test statistic for the two sample means, with arrows to indicate how far out on the distribution a test statistic needed to fall to indicate statistical significance at a given alpha level. The graphic was created using a simulation process, in which 10,000 pairs of samples were drawn from a normally distributed population with a mean set as the mean of the two original samples (orange and blue) and a variability (standard deviation) set as the joint variability of the two original samples. For each pair of samples, a difference of means was calculated, and these mean differences were plotted in a 50-bin histogram to yield a fairly smooth distribution. We added arrows to indicate the bar in which the mean difference resided, representing the minimum difference required for statistical significance at a given confidence level.

**Strawman #3.** Three of the first four participants indicated that some version of overlapping confidence intervals was their internal model of a t-test. Inspired by these interviews, the third iteration was designed as a pair of overlapping 95% confidence intervals (Figure 2(c)). Therefore, we included strawman #3 in all subsequent interviews.
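The simulation procedure behind strawman #2 is straightforward to reproduce in outline; in the sketch below, the population parameters and sample sizes are placeholders rather than the authors' values.

```python
# Outline of the simulation behind strawman #2: draw 10,000 pairs of
# samples from a single normal population (pooled mean and SD of the two
# original samples), histogram the differences of means, and find the
# difference needed for significance at each alpha. The population
# parameters and sample sizes here are placeholders.
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n = 0.0, 1.1, 40

diffs = np.array([
    rng.normal(mu, sigma, n).mean() - rng.normal(mu, sigma, n).mean()
    for _ in range(10_000)
])
counts, edges = np.histogram(diffs, bins=50)   # the 50-bin display histogram

for alpha in (0.1, 0.05, 0.01, 0.001):
    thr = np.quantile(np.abs(diffs), 1 - alpha)  # two-sided threshold
    print(f"alpha = {alpha}: significant if |mean difference| > {thr:.3f}")
```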
### Overview

Transcripts from 16 participants were assigned a total of 845 codes. Table 5 gives a coding overview for each of the three perspectives. In all, 313 texts were assigned codes associated with the Expertise perspective. Tool codes amounted to 149 mentions of specific tools, including 115 listing broad-focus tools, 38 narrow-focus, and 33 symbolic ones (numbers). Finally, there were 383 occurrences of Task codes. This likely reflects the structure of the interviews, which was focused on the analytic process. Some mentions of specific tools and analytic tasks were also collected from researcher notes.

### Intercoder Reliability

Coding was tested for intercoder reliability via a 10% sample of transcripts. Since coding was done on individual texts, the context of the full interview--such as participant comments just prior to and following each text in the sample--is missing. This may have reduced reliability. The validity code agreement between coders was 71%; Cohen's Kappa was found to be .588 (moderate agreement). Tacit coding was also tested for intercoder reliability via a 10% sample at the broad themes and detailed thematic code levels. Among broad codes, there was 67.7% agreement, with a Cohen's Kappa of .611 (substantial agreement). Agreement between coders for thematic codes was 51.7%, with Cohen's Kappa at .495 (moderate agreement).

### Broad Themes

The six broad themes have wide support from participant interviews (Table 6). A majority of participants made comments coded into all six, and no participant made comments coded into fewer than three.

* **Caution.** This theme expressed the multifaceted concerns statisticians have in pursuing their work, and their approaches to addressing those concerns. It included thematic codes such as People act upon our results, an admonition to remember that people trust statistical work, take action based upon it, and, therefore, it is a statistical professional's responsibility to put in whatever time and effort is required to always provide the best possible advice. The group also includes Distrust findings, which captured the many ways participants remind themselves to always check and recheck their work, since statistical analysis is a fundamentally complex endeavor which allows for many points of failure. The Caution theme also touched upon statisticians' relationship to inferential statistical methods, with P-values encourage bad thinking referring to concerns in the statistical community about the tendency for statistical testing to foment dichotomous thinking about complex realities, and Stat tests required for publication expressing participants' belief that whatever risks parametric statistical tests entail, they are nonetheless required for acceptance within many scientific communities, and thus must be used as best they can.
* **Expertise.** This group applies to the several aspects of acquired analytic understanding which, as a body, represent a divide between statistical professionals and people outside the field. It includes codes such as Analysis takes a statistician, capturing participants' expressions of their belief that people outside the statistical field typically misunderstand at least some aspects of quantitative work. It also included Stat testing is hard for statisticians, too, which captured participants' expressions that statistical inference is a subject so complex that they don't trust their own knowledge without the use of references.
* **Limits.** This theme captured several observations from participants describing ways that details of a data collection can limit the range of statistical tools available to apply, but also the ways in which the choice of statistical methods can limit the scope of analytic research. These codes are not specifics about the various limitations discussed, but, rather, the awareness among participants that quantitative work entails limitations. Example codes include Sample size important, expressing the multiple dependencies between sample size and the validity of inferences made about sampled populations, and Stat methods define scope, capturing expressions of how the tools of statistics define the kinds of questions statistical research can address.
* **Planning.** These codes capture the importance of planning in quantitative work: its utility, costs, and pitfalls. For example, the code Predict to escape rationalization captures the participants' understanding that post hoc rationalization is a constant temptation during analytic work which threatens the validity of results, and that the way to avoid this is through planning ahead; they plan the analyses they will run, the test statistics they will accept, etc. Plan defines scope captures the participants' awareness of how the planning process, while vital to the work, once entered, limits possible discoveries the work may yield.
* **Viz.** This theme captures participants' understanding of data visualization as a tool in their analytic work.
* **Warrant.** These codes capture participants' sometimes contradictory understandings of what elements within, or conditions are required by, their quantitative work to support statements about the world. These codes can be broad, such as Reality is the authority, expressing participants' focus on always connecting their computations as directly as possible to the subject of their study, or checking results against expected values extracted from "facts on the ground" sources, such as news reports. Some are more specific, such as Effect size \(\geq\) p-value, which expressed the common feeling among participants that statistical significance was less important, or at least no more important, than the practical significance in their results. For example, a trial on a cholesterol drug with a large enough sample size might show a statistically significant reduction in blood cholesterol levels, but that reduction could still be so small as to have no expected effect on clinical outcomes for patients.

### Analytical Tasks

The Tasks perspective derived from the initial review of researcher notes proved to be well supported by subsequent formal coding processes. Table 7 gives an overview. Thirteen of 18 participants reported performing work steps which fell within all 5 of the proposed tasks, and no participant reported fewer than 3. Pre-collection activities, such as meeting with clients to determine their needs, gathering background information on available datasets, or proposing analytic methods, were universally reported among participants, with other steps nearly so. Note that to qualify for the study, all participants began by confirming that they performed statistical testing as part of their regular work process, or had at some point. Similarly, 17 of 18 participants reported publishing their work publicly, and the 18th reported sharing their work internally within their organization, all of which constitutes communicating results. Therefore, it is likely that while these steps were not universally captured during the coding exercise, all participants did, in fact, perform these steps during their work. This may further substantiate the Tasks perspective.

### Tools Reported

Parsing transcripts for mentions of specific analytic tools (named methods, procedures, or statistical routines encapsulated within software packages) resulted in three lists: broad-focus, narrow-focus, and symbolic tools. Unsurprisingly given the visual-analytic focus of these conversations, visual tools outnumbered purely symbolic ones. However, we were surprised by the diversity of narrow-focus tools. Some broad-focus statistical graphics have a long history, wide availability in software packages, and ubiquitous appearances in literature with statistical content. The authors interpret use of such tools as an indicator of the degree to which participants fold visualization into their work. While we captured all mentions of visualization (see the supplemental material for details), we focused on seven graphic forms based on frequency and familiarity: Scatterplot, Histogram, Boxplot, Line Diagram, Bar Chart, Table,1 and Pie Chart.

Footnote 1: While numeric in content, tables make use of a visual schema [2].

Table 8 gives an overview of this list. All 18 participants used some visualization from this list, likely with analytic intent. Scatterplots were the most widely named graphic form, mentioned by 13 of 18 participants. Histograms followed closely, with 12.
Both of these forms are explicitly analytic in function, compared with other forms (pie charts) which are less useful for analysis but are sometimes favored in communicating findings. Among participants reporting only one tool used, it was either a Scatterplot or a Histogram. All graphic forms on this list were widely used, with the exception of the Pie Chart (2/18). On average, participants reported using 3.8 of these 7 tools.

### Graphic Elicitation

Twelve of 18 participants provided sketches representing what a statistical test looks like to them. Of the remaining six, two participants' sketches (P-A and P-B) were captured via researcher notes. Some participants indicated that they did not have a mental image other than the definitions/equations of the test; thus, they had nothing to draw. All participants who provided a sketch were asked to walk the researcher through their work. These walkthroughs and sketches are summarized in Figure 3.

In total, 9 participants communicated that their vision of a statistical test was a pair of sample centers (mean or median) displayed side by side, with some indicator of sample variability around those centers (such as overlapping confidence intervals or side-by-side boxplots). Five participants indicated they envisioned the distribution of the relevant test statistic, with the location of the realized test statistic noted, and with an indicator of the likelihood of being at that location on the distribution. Two participants expressed that their mental image of a statistical test was that of an equation and provided no sketch (a third drew an equation). One participant named a forest plot as their internal image of a statistical test, and provided an example from the literature. One participant did not participate in the graphic elicitation portion of the interview.

## 5 Discussion

Here we report on our findings from the interview study, including the role of visualization in statistics, the need for evidentiary warrants, and our design guidelines. We also discuss the limitations of our work.

### Visualization According to Statisticians

**Statisticians make extensive use of visualization.** Every participant in this study employed visualization as a regular part of their analytic process. All 18 made use of one or more of the common broad-use visualizations. Most used several. Most also used narrow-use visualizations; 14 of 18 participants reported making use of at least one specialized visualization with a narrow analytic focus. Indeed, the participants reporting the fewest of the common broad-focus visualizations all reported making use of a specialized narrow-use visualization. A third of participants (6/18) shared observations on the limits of data visualizations while still making use of them.

**Statisticians often conceptualize their work in visual terms.** Fourteen of 17 participants indicated that their internal conception of at least one inferential method is visual. Eleven of 16 participants felt that visualization _is_ analysis, fully integrated into their quantitative work.

_Maybe I just take that for granted, the image thing. I think once we have data and the first thing probably is to plot the data and see what [it] looks like. I think [...] to understand the data, visualization is a very important tool._ (P6)

**Visualizations are not always shared.** Yet despite their frequent use of visualization and their understanding of their analyses in visual terms, it is not necessarily the visualization work they share with others.
_Oh yes. I use line graphs. I use bar graphs. I use scatterplots. Absolutely. [...] Not something I would share typically..._ (P13)

**Not good enough for visualization.** Some participants described using visualization, but did not think that it counted as visualization. Two (P4 and P13) felt that artifacts which rose to the level of deserving the label "visualization" had to exceed mere utilitarian analysis and achieve aesthetic value, something they had little confidence they could themselves create: "_I am really bad at visualizing things._" (P13)

**Charts are for communication.** Statisticians find visualization an indispensable tool for communicating results. Ten analysts talked about visualization's ability to communicate stories:

_You have your analysis, your P-value, your confidence intervals, your hypothesis test, what decision did you make, all of that. That's not what I'm going to put in a presentation typically. It's going to be the graph. [...] It [numeric representation of results] is important, I'm not trying to minimize the importance of it, but it's not what people understand when you're trying to communicate a result [...] unless you're in a room with statisticians._ (P12)

**Chart or calculation?** For some, visualization may be preferable to purely calculative methods. Five participants reported that visualization provides more powerful evidence than numeric methods:

_I tell every class I teach when we go through inference, do the graphs first. And if the inference that you do doesn't support what you saw in the graph, something's terribly wrong._ (P12)

Yet nearly as many (4) reported the opposite, that math is a greater evidentiary warrant, and gave a reason, namely, that visualization relies too much upon the judgement of the analyst. As participant 5 said, "_Yeah, you don't want to rely on your eyes. Having a number is better._"

Figure 3: **Graphic elicitation examples.** Examples of graphics for statistical inference elicited from participants.

Other strikes against visualization include the primacy of calculative methods in achieving publication. Two participants reported some version of being wary of aspects of p-value-based statistical tests, but using them because publication required it.

**The cost of vis.** Visualization can also result in a lot of work for the statistician, without necessarily earning them concomitant rewards. Intuitive design is critical for visualizations meant to communicate findings, but intuitive design tends to disappear from view. Thus, a statistician who works for hours creating a display which their client can understand at a glance is, in effect, hiding their work. This is the "designer's dilemma," where the more successful the work, the less it might be noticed (or appreciated) by its consumers:

_So, [clients] think maybe you hand them a graph and the analysis results and [they think], "Oh, that probably took them 20 minutes to do that." [...] But they miss all the hours and hours of work that go into that final result._ (P12)

**A strained relationship.** The relationship between statisticians and visualization is complex, despite visualization apparently being integrated into every phase of a statistician's workflow. Different practitioners give more or less emphasis to its use, though all the participants in this study report using visualization for at least some phases of their work. While it is possible that this outcome is a result of the Hawthorne effect, we note that our interview protocol used deliberately open-ended questions.
Our participants were also seasoned professionals and not impressionable novices. They variously reported using process diagrams for planning analyses, visual exploratory data analysis for both data cleaning and hypothesis formation, specialized narrow-use visualizations during confirmatory analysis either as a reasonableness check or a source of primary findings, and, finally, visualization for communication of results. Further, most think about their analyses in visual terms, and find in visualization a powerful tool for supporting evidence-based results.

P3 may provide the clearest example of the conflicted relationship statisticians appear to have with visualization. This person reported both a preference for calculative math over visualization, and vice versa. On the one hand, referring to one of the strawman graphics:

_So it seems like it's more subjective in a way. And I don't think that a new statistician or anyone who's been through the school that I went through [...] has seen this often enough to make the best decisions with it._

But on the other hand, when describing the process of understanding a hypothetical regression output during their workflow:

_Then I see where's the plot? How does it look here? What's the shape? So all those things go on in my head._ (P3)

### Evidentiary Warrants

**The human element.** If visualization acts as an interface between the human and the computer in extended cognition [25], then the resulting compound system hinges upon human judgment--our visual intuitions, visual acuity, subject matter pre-knowledge, and imagination. It is thus subject to human biases, mental blind spots, and the various shortcomings of our eyes, as pointed out by 5 of 16 participants (4 describing visualization as too subjective, and a fifth making much the same point when suggesting reliance upon visual systems can reduce validity). By contrast, traditional, equation-based statistical tests fully externalize the critical decision point, ostensibly removing the human element. Provided results criteria are selected in advance (said 6/16 participants) and all test assumptions are met (3/16), results from statistical tests _feel_ more objective.

Five of 16 participants described relying upon statistical tests to provide evidentiary warrants for their findings, despite nearly as many warning that p-values encourage bad thinking (4/16). For example, participant 15 expressed both ideas. On the one hand, statistical tests encourage dichotomous thinking about situations:

_There's nothing wrong with frequentist methods [such as parametric statistical tests], but it's kind of black and white._ (P15)

Yet at the same time:

_... [valid statistical testing] tells me whether my results should be actionable and meaningful to someone... the actual [sample] estimates themselves being different does not necessarily mean that things truly are different in the population._ (P15)

This reliance upon calculative statistical tests and distrust of visualization appears to conflict with visualization's wide use as an analytic method by participants; it is in direct conflict with participants' expressions that visualization is an important check on the tests themselves.

**A resolution of apparent conflicts.** A central thread may explain this conflict: the statisticians in our study argue against their own judgements wherever they can, constantly seeking to verify their findings with multiple independent methods.
In short, it is not that these statisticians trust this or that method and distrust another--they distrust them all, or more precisely, never trust any one method by itself. Ten of 16 participants expressed the idea that they should always distrust their own findings. For example:

_So, you want to use simulation to test that and also compare to the previous methods [...] and then you apply your methods [...] and at this time you also need to talk with your collaborators [...] And then they will help you to evaluate whether the results make sense or not. [...] And then you want to analyze whether it is because there is a bug in your method or in your code [...]_ (P6)

_There's always a reason. [...] It sometimes will come down to sample size or the overall variation if it's a two-sample test of maybe one group has larger variation than the other. [...] Maybe I'm wrong, but in my head there's always a reason..._ (P12)

When numbers would seem to support their favored hypotheses, these statisticians look to visualization to see whether unknown outliers, gaps, or the like, explain away their findings. But if they see a pattern in a visualization, they look to calculative methods to act as a check on their eyes. Presented with an apparent difference between sample means, they test whether there is any likelihood that random chance can explain that away. They check their assumptions (7/16 participants) and conduct parallel analyses (4/16). With already checked and tested findings in hand, they ask subject matter experts whether the results make sense to them (3/16 participants). In each of these cases, we speculate that the statisticians are seeking ways to undercut apparent findings. They pit one analytic method against another, and all must line up for the experienced analyst to accept a finding as probably, or even possibly, true.

### Staying connected to reality.

Among the most frequently expressed understandings captured during this study is the constant effort by participants to link their results back to the reality those results are meant to describe. Fourteen of 16 participants discussed this during their interviews. For example, said P9: "_if necessary, go back and revise the statistical analysis plan in light of the reality of the situation as we've discovered it._" P11 put it this way: "_... you should trust the data and not come in with [...] strong priors._" P17's thoughts were shared by many: "_And I find generally data will tell the truth._"

### Effect sizes over math.

Five participants (P1, P5, P6, P8, P13) talked about the validity of their results hinging upon a clear and well understood link between their measures and their phenomena of interest. Indeed, seven participants described how thinking through facts was key to understanding the math of their analyses. For example:

_I don't understand statistics. I will freely admit I don't understand it, purely mathematically. I understand it if I'm looking at something tangible that has meaning to me..._ (P1)

It appears that a key outgrowth of this reality-focus is a preference among many statisticians for privileging effect sizes over p-values. Nine of 16 participants expressed this idea. Effect sizes are typically some ratio of the difference in a key measure to the variability in that measure, where large values indicate analytically important, rather than merely statistically improbable, results. Effect sizes thus concern the practical significance of a finding, rather than statistical (or probabilistic) significance.
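To make the contrast concrete, the following minimal sketch (ours, not a participant's; it assumes NumPy and SciPy are available) shows how a result can be statistically significant while its effect size marks it as practically negligible:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two large samples whose true means differ only slightly.
a = rng.normal(loc=0.00, scale=1.0, size=100_000)
b = rng.normal(loc=0.03, scale=1.0, size=100_000)

# Statistical significance: with n this large, the p-value is tiny...
t, p = stats.ttest_ind(a, b)

# ...while the effect size (Cohen's d: mean difference over pooled SD)
# shows the difference is practically negligible.
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p:.2e}, Cohen's d = {d:.3f}")  # p clears any alpha; d stays near 0.03
```

Declaring both thresholds in advance, as several participants advocated, forces the analyst to confront this gap rather than stop at the p-value.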
In the current climate of concern over p-value statistical testing, focusing on effect sizes is one potential answer. This is also what current best practices in statistical reporting suggest [5]. It is also supported by prior work showing that confidence intervals can cause people to overestimate effect sizes [18], and that more sophisticated visual representations beyond typical error bars are needed to convey effect size nuances [8]. If nothing else, focusing on effect sizes necessarily means focusing on the reality of the subject.

### Uncertainty is certain.

Nine of the participants expressed their understanding of a t-test as some version of a comparison of overlapping confidence intervals, i.e., sample means with some indicator of variability. This approach emphasizes the samples themselves, rather than the unseen population they are meant to represent. The t-test and p-value focus ultimately on the population, by posing the likelihood of drawing such a sample if the population were random with regard to some independent variable. Yet the majority of participants in this study keep their focus on the samples, while asking whether their results achieve statistical significance, and thus provide an evidentiary warrant to make statements about the population. The practical result is logically equivalent, but speaks to the difficulty of statistical inference. In many cases, even the experts don't fully embrace the meaning of the mathematics they rely upon, instead mentally falling back on simpler heuristics.

### Design Recommendations

Given statisticians' own use of visualization in their analytic process, it appears that, as a general approach, visualizing inferential statistics is acceptable to the statistical community. However, it also appears that no single visualization approach will have the confidence of that community. Rather, visualization should be paired with other confirmatory tools. This will provide the analyst both an intuitive understanding of the data (visual), and "more objective" numbers (where the decision point is determined by the calculation rather than the user's eye). Furthermore, our findings indicate that visual inferential tools should include an indicator of effect size that is declared in advance of seeing the data, just like alpha (the largest p-value accepted as significant), to avoid post-hoc rationalization. Selecting effect sizes requires users to understand their data, usually by speaking to subject matter experts. It also automatically combats dichotomous thinking, as having two measures to choose (alpha and effect size) makes the significance decision multi-dimensional. We base this recommendation on prior work showing that visualizations that show nuanced aspects of data can reduce dichotomous thinking [17], as well as on work on general inferential statistics [29].

### Limitations

Our study involved a total of 18 professional statisticians, and while we took care to choose participants from many different fields, educational backgrounds, and demographics, there are certainly several threats to generalizing these results too widely. Given the qualitative and highly personal nature of these practices, we feel that it can be hard to draw conclusive findings from our work. Furthermore, as discussed in our positionality statement (Section 3.1), we ourselves as researchers represent only a small fraction of the worldwide statistician population. Our study was focused on parametric inferential statistics.
Statistical inference is obviously a much larger field, and includes topics such as non-parametric and Bayesian methods. This may limit the generality of our design recommendations and suggests avenues for future research. While design probes have been proven effective because they provide a common ground for discussion [31], which can be helpful for laypersons, they may also constrain ideation. Furthermore, our strawman graphics (Figure 2) are not radical or even particularly novel, and perhaps a more radical set of design ideas could have sparked more innovation. However, we took care to ask for graphic elicitation (Phase II) prior to showing our graphics (Phase III) to avoid biasing the participants. Furthermore, the purpose of this study was mostly to understand the mindsets and practices of professional statisticians, and more effort will be needed to develop these graphics in the future. ## 6 Conclusion We have presented results from a qualitative interview study involving 18 professional participants with the goal of understanding their use of visualization (RQ1), mental models of inferential statistics (RQ2), and thoughts on designs for visual inference (RQ3). Our findings, which were coded and summarized from interview transcripts, suggest a significant influence of visualization even in the workflows of statisticians who self-report as "traditionalists." In fact, many of their mental models of statistical inference appear to be at least somewhat visually based. We use these findings to suggest several design guidelines for how to design new statistical and visualization tools that can help people make sense of their own data. ## Acknowledgments We thank the anonymous participants for their time, effort, and willingness to share their knowledge and wisdom. We also thank Jennifer Cheeseman Newburger for her parallel coding work.
2309.03789
Pilot-reference-free continuous-variable quantum key distribution with efficient decoy-state analysis
Continuous-variable quantum key distribution (CV QKD) using optical coherent detectors is practically favorable due to its low implementation cost, flexibility of wavelength division multiplexing, and compatibility with standard coherent communication technologies. However, the security analysis and parameter estimation of CV QKD are complicated due to the infinite-dimensional latent Hilbert space. Also, the transmission of strong reference pulses undermines the security and complicates the experiments. In this work, we tackle these two problems by presenting a time-bin-encoding CV protocol with a simple phase-error-based security analysis valid under general coherent attacks. With the key encoded into the relative intensity between two optical modes, the need for global references is removed. Furthermore, phase randomization can be introduced to decouple the security analysis of different photon-number components. We can hence tag the photon number for each round, effectively estimate the associated privacy using a carefully designed coherent-detection method, and independently extract encryption keys from each component. Simulations manifest that the protocol using multi-photon components increases the key rate by two orders of magnitude compared to the one using only the single-photon component. Meanwhile, the protocol with four-intensity decoy analysis is sufficient to yield tight parameter estimation with a short-distance key-rate performance comparable to the best Bennett-Brassard-1984 implementation.
Anran Jin, Xingjian Zhang, Liang Jiang, Richard V. Penty, Pei Zeng
2023-09-07T15:40:05Z
http://arxiv.org/abs/2309.03789v1
Pilot-reference-free continuous-variable quantum key distribution with efficient decoy-state analysis

###### Abstract

Continuous-variable quantum key distribution (CV QKD) using optical coherent detectors is practically favorable due to its low implementation cost, flexibility of wavelength division multiplexing, and compatibility with standard coherent communication technologies. However, the security analysis and parameter estimation of CV QKD are complicated due to the infinite-dimensional latent Hilbert space. Also, the transmission of strong reference pulses undermines the security and complicates the experiments. In this work, we tackle these two problems by presenting a time-bin-encoding CV protocol with a simple phase-error-based security analysis valid under general coherent attacks. With the key encoded into the relative intensity between two optical modes, the need for global references is removed. Furthermore, phase randomization can be introduced to decouple the security analysis of different photon-number components. We can hence tag the photon number for each round, effectively estimate the associated privacy using a carefully designed coherent-detection method, and independently extract encryption keys from each component. Simulations manifest that the protocol using multi-photon components increases the key rate by two orders of magnitude compared to the one using only the single-photon component. Meanwhile, the protocol with four-intensity decoy analysis is sufficient to yield tight parameter estimation with a short-distance key-rate performance comparable to the best Bennett-Brassard-1984 implementation.

## I Introduction

Quantum key distribution (QKD) allows the generation of random secure keys between distant communication parties, of which the security is guaranteed by quantum physical laws. Apart from its theoretical advances, QKD is also one of the few quantum information processing technologies that can be robustly deployed in the field, where photonic systems are considered the most suitable carriers of QKD operation. In general, two types of QKD protocols exist based on the detection methods: discrete-variable (DV) QKD [1; 2] uses the single-photon detector or photon-number-resolving detector to generate discrete detection information, while continuous-variable (CV) QKD [3; 4; 5] applies optical homodyne or heterodyne detection to generate continuous measurement information. CV QKD has its advantages over DV QKD at short distances, mainly owing to the distinct features of the coherent detectors used. The homodyne and heterodyne detectors are compatible with standard classical communication and can be operated under much milder conditions than single-photon detectors. The spatial-temporal filtering of the local oscillators (LO) allows dense wavelength-division multiplexing with intense classical channels [6; 7; 8], and the high quantum efficiency and operation rate give CV QKD high key rates at metropolitan distances [9; 10; 11]. Moreover, the feasibility of on-chip implementations of the coherent detectors [12] promises large-scale integrated quantum networks. CV QKD is therefore considered highly practical and promising. However, there exist two major limitations to the reliability of CV QKD.
First, the transmission of the strong local oscillators is usually necessary to set up the phase reference between the communication parties, yet this complicates the implementation in the multiplexing separation and the relative phase shift calibration with the signals [13]. The LO transmission also opens up security loopholes where the eavesdropper Eve can affect the estimation of the signal variance by manipulating the LO intensity [14; 15], input time [16] and wavelength [17]. The "local" local oscillator scheme [13] is a valid solution, yet it requires a carefully designed phase tracking and compensation system, which increases the experimental complexity. Second, the security of CV QKD in the finite-data regime under coherent attacks is still incomplete. In fact, for the traditional entanglement-distillation approach [18], the finite-size coherent-attack security is only tackled for Gaussian-modulated CV QKD [19], which is, however, impractical since continuous modulation is never possible in reality. Recently, several remarkable works on CV QKD have been proposed, aiming at closing its security loopholes. In Ref. [5; 20], Matsuura et al. proposed a DV-like security analysis for the binary phase shift keying CV QKD. Their analysis intrinsically covers the finite-size regime and coherent attacks since it follows the phase-error complementarity approach [21], yet their protocol still assumes the transmission of local oscillators. Qi [22] and Primaatmaja et al. [23] respectively proposed CV QKD protocols with two-mode encoding, generating dual-rail qubits that do not require global references. Their security analyses, however, do not cover the finite-data regime and coherent attacks. In fact, Qi's analysis requires repeated measurements and is only valid for individual attacks, and Primaatmaja et al.'s analysis is based on the Devetak-Winter formula [18] and is only valid for collective attacks. Hence, the gap in CV QKD between theory and practice is still a tough problem to be tackled. In this work, we close this gap by proposing a new time-bin-encoding CV QKD protocol that enjoys both a simple security proof and a practical implementation. We remove the necessity of LO transmission by the two-mode encoding, hence closing the security loophole while simplifying the experimental setups. We make use of the phase-error complementarity approach by invoking the squashing channel idea from Ref. [5; 24; 25]. In fact, the phase error rate can be defined soundly since the received optical modes are squashed into a qubit. What is more, referring to an equivalent hypothetical protocol with photon-number-resolving measurements, we can group the received signals based on the transmitted photon numbers and restore the tagging-based security analysis [24; 26]. By doing this, we can discard the possibly attacked components and reduce the phase-error rate. We build a direct connection between CV QKD and the normal Bennett-Brassard-1984 (BB84) protocol: we clearly show how the multi-photon components in the CV QKD protocol contribute to a higher key rate at short distances. Our tagging-based security analysis gives an intuitive illustration of the idea of reverse reconciliation [27] from a DV point of view: we clearly show that in the reverse reconciliation scheme, the received vacuum components are considered secure, thus improving the key rate. We can reuse the traditional decoy-state analysis based on photon numbers [28; 29], thus simplifying the parameter estimation.
Our DV-like security analysis of CV QKD can extend to coherent attacks and handle the finite-size effect easily based on the standard martingale analysis [30]. Compared with a similar protocol in Ref. [23] with numerical optimization under collective attacks, our protocol generates higher key rates with four decoy levels under coherent attacks. Our parameter estimation scheme is simple and intuitive since we identify the key components essential for key generation. We can thus utilize the multi-photon components with simple parameter estimation, whilst the complexity of the numerical approach grows biquadratically as the photon number increases. We will start with the protocol description of the time-bin-encoding CV QKD in Sec. II. We present its security analysis based on phase error correction [21; 31; 32] in Sec. III, identifying an equivalent protocol squashing the optical modes into qubits with identical key mapping statistics in Sec. III.1. We exploit the block-diagonal structures of both the source and the receiver in Sec. III.2, thus invoking the photon-number tagging technique [24; 26] standard in DV QKD in this CV protocol. In Sec. III.3, we calculate the parameters in the key-rate formula with quantities on optical modes. The estimation of these quantities will be explained in Sec. IV with homodyne tomography [33] and the decoy method [28; 29]. We finally simulate the performance of the time-bin CV QKD under realistic fiber-channel setups in Sec. V.

## II Protocol description

We present the proposed time-bin-encoding CV QKD protocol in Table 1 and depict its schematic diagram in Fig. 1. The two communication parties, Alice and Bob, employ the time-bin degree of freedom to encode keys. They use the \(Z\)-basis for key generation and the \(X\)-basis for parameter estimation. At the moment, we do not present the details of the \(X\)-basis parameter settings. We will specify the choices of the random phase factors, \(\varphi_{a}^{1},\varphi_{a}^{2},\varphi_{b}^{1}\) and \(\varphi_{b}^{2}\), and the light intensity, \(\mu_{a}\in\{\mu,\nu_{1},\nu_{2},0\}\), in Sec. IV. Here, we briefly explain the idea behind the protocol design. The source states of our scheme resemble the ones in the time-bin-encoding BB84 protocol with coherent states [1], where the light intensities of the consecutive pulses naturally encode the key-bit information. As the key information is encoded in the relative intensity between the two modes, Alice does not need to send a pilot phase reference as in common CV QKD. In our scheme, Bob decodes the key bit information, namely the \(Z\)-basis information, by measuring the light intensity of the pulses using the homodyne detectors. For instance, when Bob observes \(q_{1}\) to be close to \(0\) and \(q_{2}\) to be far away from \(0\), he may naturally guess that the original state sent by Alice corresponds to \(k_{a}=0\). However, unlike key decoding with photon-number detectors, the result of measuring a coherent state's quadrature is subject to a Gaussian distribution rather than being a fixed value. The inherent shot noise of the homodyne detection introduces an intrinsic error in distinguishing a vacuum state from a pulse with a non-zero intensity [34]. To suppress the bit error, we introduce a threshold value, \(\tau\), in key decoding. The pulse intensity will be considered non-zero only when the quadrature magnitude is larger than \(\tau\). The choice of \(\tau\) should be optimized with respect to the channel transmittance and pulse intensity. The \(X\)-basis is designed to estimate the information leakage of different photon-number components of the \(Z\)-basis. Thanks to the phase randomization for both the sources and the detectors, the \(Z\)-basis states are block-diagonal on the total photon-number basis on the two optical modes after being emitted from the source and before being measured by the homodyne detectors.
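Before the protocol steps, here is a minimal sketch (ours; the threshold \(\tau\) and the quadrature conventions follow the steps listed next) of the \(Z\)-basis key mapping that turns Bob's two quadratures into a raw key bit or the abort symbol \(\emptyset\):

```python
from typing import Optional

def decode_z_basis(q1: float, q2: float, tau: float) -> Optional[int]:
    """Z-basis key mapping: 0 if |q1| < tau and |q2| > tau,
    1 if |q1| > tau and |q2| < tau, abort (None) otherwise."""
    if abs(q1) < tau and abs(q2) > tau:
        return 0
    if abs(q1) > tau and abs(q2) < tau:
        return 1
    return None  # the abort symbol, written as "empty set" in the text

# Example: a strong pulse in the second time bin decodes as k_b = 0.
print(decode_z_basis(q1=0.2, q2=3.1, tau=1.0))  # -> 0
```

Raising \(\tau\) suppresses the bit error at the cost of more aborted rounds, which is the trade-off behind optimizing \(\tau\) against the channel transmittance and pulse intensity.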
The \(X\)-basis is designed to estimate the information leakage of different photon-number components of the \(Z\)-basis. Thanks to the phase randomization for both the sources and the detectors, the \(Z\)-basis states are block-diagonal on the total photon-number basis on the two optical modes after emitted from the source and be 1. On Alice's side (source): * \(Z\)_-basis_: * Randomly select a key bit \(k_{a}\in\{0,1\}\), a phase factor \(\varphi_{a}\in[0,2\pi)\), and a light intensity \(\mu_{a}\in\{\mu,\nu_{1},\nu_{2},0\}\). * Prepare a coherent state of \(\left|0\right\rangle_{A1}\left|\sqrt{\mu_{a}}e^{i\varphi_{a}}\right\rangle_{A2}\) for \(k_{a}=0\) or \(\left|\sqrt{\mu_{a}}e^{i\varphi_{a}}\right\rangle_{A1}\left|0\right\rangle_{A2}\) for \(k_{a}=1\). * \(X\)_-basis_: * Randomly select two phase factors \(\varphi_{a}^{1}\) and \(\varphi_{a}^{2}\) and a light intensity \(\mu_{a}\in\{\mu,\nu_{1},\nu_{2},0\}\). * Prepare a coherent state of \(\left|\sqrt{\mu_{a}/2}e^{i\varphi_{a}^{1}}\right\rangle\left|\sqrt{\mu_{a}/2} e^{i\varphi_{a}^{2}}\right\rangle\). 2. Alice sends the state through an authenticated channel to Bob. 3. On Bob's side (detection): * \(Z\)_-basis_: * Randomly select a phase factor \(\varphi_{b}\in[0,2\pi)\). * Use homodyne detectors both with LO phases \(\varphi_{b}\) to measure the modes and obtain quadratures \(q_{1}\) and \(q_{2}\). * Decode the key bit as \(0\) if \(|q_{1}|<\tau\wedge|q_{2}|>\tau\), \(1\) if \(|q_{1}|>\tau\wedge|q_{2}|<\tau\), and \(\emptyset\) otherwise. * \(X\)_-basis_: * Randomly select two phases \(\varphi_{b}^{1}\) and \(\varphi_{b}^{2}\) independently. * Use homodyne detectors with LO phases \(\varphi_{b}^{1}\) and \(\varphi_{b}^{2}\) to measure the modes and obtain quadratures \(q_{1}\) and \(q_{2}\). * Use \(\varphi_{1},\varphi_{2}\) and \(q_{1},q_{2}\) for phase-error estimation (see Sec. IV). 4. Alice and Bob perform basis sifting, where they obtain raw keys in the rounds they both choose \(Z\)-basis with light intensity \(\mu_{a}=\mu\) and \(k_{b}\neq\emptyset\). 5. Based on parameter estimation, Alice and Bob perform information reconciliation and privacy amplification to obtain final keys. Figure 1: Schematic diagram of the experimental setup. The setups of Alice and Bob are shaded in green and blue, respectively. Alice prepares two-mode phase-randomized states according to the basis choice and raw key value in key generation rounds, as shown in the table. In this work, we consider a time-bin encoding, where one obtains two modes via time delay. The state modulation consists of intensity modulation (IM), phase modulation (PM), and necessary attenuation (ATTN). Upon receiving the state, Bob measures each mode with homodyne detectors. He uses a synchronized clock to distinguish adjacent modes and applies phase modulation (PM) to the local oscillator (LO). fore being measured by the homodyne detectors. As we will clarify in Sec. III, we can equivalently introduce total photon-number measurements at these two locations. As a result, Eve's eavesdropping strategy is effectively "twirled" to a photonic channel that only acts on the states incoherently with respect to the total photon numbers. One can thus virtually tag the emitted and received pulses according to the photon-number space, allowing the Gottesman-Lutkenhaus-Lo-Preskill (GLLP) framework [24] for analyzing the key privacy contained in each photon-number subspace. In particular, dealing with photon-number spaces effectively brings our security analysis to the DV regime. In Sec. 
III, we shall construct observables to estimate the \(m\)-photon component phase-error rates \(e_{m,m}^{X}\) for privacy estimation. Intuitively, the phase-error rates \(e_{m,m}^{X}\) provide upper bounds on the key information leakage to the eavesdropper, Eve. To estimate the \(m\)-photon component phase-error rates, \(e_{m,m}^{X}\), ideally, we need a source emitting the photon-number cat states, \((\ket{0}\ket{m}\pm\ket{m}\ket{0})/\sqrt{2}\), and photon-number-resolving measurements that distinguish the cat states. While this is not directly implementable, we can use only coherent states and homodyne measurements to establish unbiased estimators of \(e_{m,m}^{X}\), as shown in Table 2. On the source side, we employ a generalized decoy-state method to estimate the behaviors of photon-number-cat states using coherent states with various intensities [28; 29], which shall be discussed in Sec. IV.2. On the detection side, ideally, we also want to measure the photon-number-cat states to obtain an unbiased estimation of the phase-error rates, \(e_{m,m}^{X}\). While this is not directly measurable in practice, we employ the homodyne tomography technique and estimate the photon-number-cat state measurement via quadrature measurement results [35; 36; 37; 38; 39; 40], which shall be discussed in Sec. IV.1. We briefly remark on the performance of homodyne detection. In key decoding, one may regard homodyne detection as an ill-performing single-photon detector that introduces an inevitable bit-error rate. On the other hand, homodyne detection allows for more efficient parameter estimation than single-photon detection. As we shall discuss later, the set of all quadrature operators spans the underlying mode, thus allowing one to express any linear operator in terms of the quadrature operators. Therefore, with a proper transformation of the quadrature measurement results, homodyne detection allows one to obtain an unbiased estimation of linear operator expectations. This is why repeated homodyne measurements can accurately estimate the phase-error rates, including those of the multi-photon components. In comparison, as single-photon detection is not information-complete, the estimation of multi-photon observables requires more complex setups such as sequential beam splitting [41], and one can only obtain upper and lower bounds rather than an unbiased estimation.

## III Security Analysis

We analyze the security of our phase-randomized time-bin-encoding CV QKD protocol following the complementarity approach [21; 32; 24]. As outlined in Fig. 2, we shall set up a series of equivalent protocols of the realistic implementation that do not change the statistics of any observer, with which we define the phase-error observable and estimate the key privacy. In Sec. III.1, we shall prove that raw key generation can be effectively regarded as qubit measurements on a pair of entangled qubits, which allows us to borrow the mature complementarity-based security analysis in the DV regime. In brief, on the source side, we transform the preparation of key states to an entanglement-based protocol [31; 32], where a qubit measurement controls the key-encoding process, as shown in Fig. 2(b). On the detection side, we prove that the homodyne measurement can be squashed into an effective qubit measurement, as shown in Fig. 2(c). Moreover, in Sec.
III.2, we shall rigorously prove that phase randomization twirls the photonic modes into diagonal states on the Fock basis and explain how to apply the tagging idea of the GLLP framework [26; 24]. We also show how to estimate the phase-error rates for different photon-number components from Fock-basis observables in Sec. III.3. Later in Sec. IV, we show that the estimation can be realized in the realistic implementation with coherent states and homodyne detection. To focus on the essence of security analysis, we present the result in a single-round analysis in this section, where one can interpret it as the quantum Shannon limit under collective attacks. Nevertheless, the complementarity-based security analysis is inherently adapted to the most general case, namely the coherent attack, where the statistics over the rounds may not be independent and identically distributed (i.i.d.) [42]. For parameter estimation with non-i.i.d. finite statistics, statistical techniques like martingale theory can help. We shall come to this issue in Sec. IV.3.

### Entanglement-based squashing protocol

Here, we show the equivalence of the time-bin CV QKD protocol to a qubit-based entanglement distribution protocol, where the protocols generate the same transmitted quantum states and measurement statistics. The latter protocol enables us to simplify the security analysis and estimate the information leakage from phase-error rates. We first focus on the key-generation rounds in the protocol where both users choose the \(Z\)-basis, of which the whole procedure is depicted in Fig. 2(a). In the realistic implementation, Alice prepares phase-randomized coherent states, \[\int_{0}^{2\pi}\frac{d\varphi_{a}}{2\pi}\left|\Psi(k_{a})_{\varphi_{a}}\right\rangle_{A_{1}A_{2}}\left\langle\Psi(k_{a})_{\varphi_{a}}\right|, \tag{1}\] where \[\left|\Psi(k_{a})_{\varphi_{a}}\right\rangle_{A_{1}A_{2}}=\left\{\begin{array}{l}\left|0\right\rangle_{A_{1}}\left|\sqrt{\mu}e^{i\varphi_{a}}\right\rangle_{A_{2}}\text{, if }k_{a}=0,\\ \left|\sqrt{\mu}e^{i\varphi_{a}}\right\rangle_{A_{1}}\left|0\right\rangle_{A_{2}}\text{, if }k_{a}=1.\end{array}\right. \tag{2}\] We denote the optical modes sent to Bob as \(A_{1}\) and \(A_{2}\), which are CV systems. Throughout this paper, we treat the phase of optical modes, \(\varphi_{a}\), as fully randomized over \([0,2\pi)\). Finite phase randomization, \(\varphi_{a}\in\{2j\pi/D\}_{j\in[D]}\), suffices for a practical implementation, where its difference from the full phase randomization is negligible when \(D\) is sufficiently large [43]. This is also the case in later discussions on the detector phase randomization.

Figure 2: Equivalent quantum circuits in key generation rounds. Reductions in each step are plotted with red dashed boxes. (a) The realistic implementation. The operations on Alice's and Bob's sides are shaded in green and blue, respectively. Alice prepares weak coherent states on two modes, which depend on the basis choice and the raw key value. On Bob's side, Bob measures the two modes with homodyne detectors (HD) and obtains quadratures \(q_{1}\) and \(q_{2}\). Afterward, Bob performs classical post-processing (CP) on the data and obtains a raw key \(k_{b}\) probabilistically, where the key decoding may fail due to the key mapping threshold, denoted as \(\emptyset\). The blue rounded boxes represent phase randomization processes in state preparation or for the LOs in homodyne detection. (b) Equivalent entanglement-based state preparation. Key encoding can be interpreted as a qubit control operation on two modes where the control qubit measurement gives Alice's raw key \(k_{a}\). The joint state on the two modes is diagonal on the Fock basis after phase randomization. One can insert a photon-number measurement, \(\hat{M}_{n_{a}}\), and read out the total photon number, \(m\), without changing the state. (c) Equivalent key-decoding measurement. The joint state of the two modes becomes diagonal on the Fock basis due to detector phase randomization. In key decoding, the modes are first squashed into a qubit probabilistically, where the failure gives the abort signal \(\emptyset\). Upon successful squashing into a qubit, the computational-basis measurement gives the raw key bit. (d) Due to detector phase randomization, one can insert a photon-number measurement, \(\hat{M}_{n_{b}}\), and read out the total photon number, \(n\), without changing the state. (e) Equivalent circuit for security analysis. After the above reductions, the key generation measurements can be equivalently defined on a pair of (sub-normalized) qubit states.

Alice's key-state preparation can be effectively seen as an entanglement-based protocol [31; 32]. Given the phase value, \(\varphi_{a}\), Alice first prepares the following entangled state, \[\left|\Psi_{\varphi_{a}}\right\rangle_{A^{\prime}A_{1}A_{2}}=\frac{1}{\sqrt{2}}\big{(}\left|0\right\rangle_{A^{\prime}}\left|\Psi(k_{a}=0)_{\varphi_{a}}\right\rangle_{A_{1}A_{2}}+\left|1\right\rangle_{A^{\prime}}\left|\Psi(k_{a}=1)_{\varphi_{a}}\right\rangle_{A_{1}A_{2}}\big{)}, \tag{3}\] where system \(A^{\prime}\) is a qubit system that superposes the two possible key states. The entangled state can be prepared by the quantum circuit in Fig. 2(b). Up to phase randomization, systems \(A^{\prime}\) and \(A_{1}A_{2}\) are initialized in \(\left|+\right\rangle\) and \(\left|0\right\rangle\left|\sqrt{\mu}\right\rangle\), and a controlled-swap operation is then applied from the qubit system to the optical modes. Alice obtains raw key bit \(k_{a}\) by measuring system \(A^{\prime}\) on the computational basis, and the optical modes are prepared into the corresponding key state, \(\left|\Psi(k_{a})_{\varphi_{a}}\right\rangle\). The complementary observable of Alice's key-generation measurement can thus be defined over qubit system \(A^{\prime}\), which measures the complementary basis of \(\{\left|+\right\rangle,\left|-\right\rangle\}:=\{(\left|0\right\rangle\pm\left|1\right\rangle)/\sqrt{2}\}\). At the detection side in Fig. 2(a), Bob receives two optical modes \(B_{1}\) and \(B_{2}\), takes homodyne measurements, and maps the quadratures to a raw key or an abort signal.
This process can be described by a trace-non-preserving completely positive map, \[\mathcal{F}_{\text{rand}}^{B_{1}B_{2}\to B^{\prime}}(\hat{\rho}_{B_{1}B_{2}})=\int_{0}^{2\pi}\frac{d\varphi_{b}}{2\pi}\int_{\mathbf{R_{0}}}dq_{1}dq_{2}\,\hat{K}^{(q_{1},q_{2},\varphi_{b})}\hat{\rho}_{B_{1}B_{2}}\hat{K}^{(q_{1},q_{2},\varphi_{b})\dagger}, \tag{4}\] where \[\hat{K}^{(q_{1},q_{2},\varphi_{b})}:=\left|0\right\rangle_{B^{\prime}}\left\langle q_{1}(\varphi_{b}),q_{2}(\varphi_{b})\right|_{B_{1},B_{2}}+\left|1\right\rangle_{B^{\prime}}\left\langle q_{2}(\varphi_{b}),q_{1}(\varphi_{b})\right|_{B_{1},B_{2}}, \tag{5}\] \(\left|q(\varphi)\right\rangle\) is the rotated position eigenstate of the quadrature observable \[\hat{Q}_{\varphi}=\hat{a}e^{-i\varphi}+\hat{a}^{\dagger}e^{i\varphi}, \tag{6}\] with \(\hat{a}\) and \(\hat{a}^{\dagger}\) denoting the annihilation and creation operators, respectively, and \(\mathbf{R_{0}}\subset\mathbb{R}^{2}\) records the region that decodes the real-valued tuple, \((q_{1},q_{2})\), as \(k_{b}=0\). Note that in our protocol, \(\mathbf{R_{0}}=\{\left|q_{1}\right|<\tau\}\times\{\left|q_{2}\right|>\tau\}\), and the region that decodes the quadratures as \(k_{b}=1\) is obtained under the mapping \((q_{1},q_{2})\mapsto(q_{2},q_{1})\), which we denote as \(\mathbf{R_{1}}=\{\left|q_{1}\right|>\tau\}\times\{\left|q_{2}\right|<\tau\}\). The LOs of the homodyne measurements are synchronously randomized, as denoted by \(\varphi_{b}\) in Eq. (4). As the key-decoding region does not cover the entire parameter space, \(\mathcal{F}_{\text{rand}}^{B_{1}B_{2}\to B^{\prime}}\) is not trace-preserving, and \(\text{Tr}[\mathcal{F}_{\text{rand}}^{B_{1}B_{2}\to B^{\prime}}(\hat{\rho}_{B_{1}B_{2}})]\) gives the probability of obtaining a raw key bit \(k_{b}\in\{0,1\}\). Bob's raw key can be equivalently seen as obtained by measuring the squashed sub-normalized qubit on the computational basis, and the probabilities are given by \[\begin{split}\Pr(k_{b}=0)&=\bra{0}\mathcal{F}_{\text{rand}}^{B_{1}B_{2}\to B^{\prime}}(\hat{\rho}_{B_{1}B_{2}})\ket{0}=\int_{0}^{2\pi}\frac{d\varphi_{b}}{2\pi}\int_{\mathbf{R_{0}}}dq_{1}dq_{2}\bra{q_{1}(\varphi_{b}),q_{2}(\varphi_{b})}\hat{\rho}_{B_{1}B_{2}}\ket{q_{1}(\varphi_{b}),q_{2}(\varphi_{b})},\\ \Pr(k_{b}=1)&=\bra{1}\mathcal{F}_{\text{rand}}^{B_{1}B_{2}\to B^{\prime}}(\hat{\rho}_{B_{1}B_{2}})\ket{1}=\int_{0}^{2\pi}\frac{d\varphi_{b}}{2\pi}\int_{\mathbf{R_{1}}}dq_{1}dq_{2}\bra{q_{1}(\varphi_{b}),q_{2}(\varphi_{b})}\hat{\rho}_{B_{1}B_{2}}\ket{q_{1}(\varphi_{b}),q_{2}(\varphi_{b})}.\end{split} \tag{7}\] Similar to the treatment of \(A^{\prime}\), we can define the complementary observable of Bob's key-generation measurement on qubit system \(B^{\prime}\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline basis & \multicolumn{2}{c}{source} & \multicolumn{2}{c}{detection} \\ \cline{2-5} & ideal & real & ideal & real \\ \hline \multirow{2}{*}{\(Z\)} & \(\left|0\right\rangle\left|m\right\rangle\) & \(\left|0\right\rangle\left|\sqrt{\mu}e^{i\varphi_{a}}\right\rangle\) & \multirow{2}{*}{\(\hat{\Pi}(q_{1},q_{2})\), Eq. (9)} & \multirow{2}{*}{\(\hat{\Pi}(q_{1},q_{2})\)} \\ & \(\left|m\right\rangle\left|0\right\rangle\) & \(\left|\sqrt{\mu}e^{i\varphi_{a}}\right\rangle\left|0\right\rangle\) & & \\ \hline \multirow{2}{*}{\(X\)} & \(\frac{1}{\sqrt{2}}(\left|0\right\rangle\left|m\right\rangle\pm\left|m\right\rangle\left|0\right\rangle)\) & \(\left|\sqrt{\frac{\mu}{2}}e^{i\varphi_{a}^{1}}\right\rangle\left|\sqrt{\frac{\mu}{2}}e^{i\varphi_{a}^{2}}\right\rangle\) & \(\frac{1}{2}\left(\left|0\right\rangle\left|m\right\rangle\pm\left|m\right\rangle\left|0\right\rangle\right)\left(\left\langle 0\right|\left\langle m\right|\pm\left\langle m\right|\left\langle 0\right|\right)\) & \(\hat{Q}_{\varphi_{1}}\otimes\hat{Q}_{\varphi_{2}}\), Eq. (6), estimation via Eq. (28) \\ \hline \hline \end{tabular} \end{table}

Table 2: State preparation and detection settings in the ideal implementation and the realistic implementation. For brevity, we omit the subscripts of modes and express the detection with the measurement operators. In key generation rounds, Bob applies phase-randomized homodyne detection for key decoding. The expression of the measurement operator \(\hat{\Pi}(q_{1},q_{2})\) is given in Eq. (9), where \(q_{1}\) and \(q_{2}\) represent the quadratures of the two modes. The operator is block-diagonal on the total photon-number basis. For parameter estimation, ideally, Alice sends photon-number-cat states, and Bob performs a corresponding projective measurement. In the realistic setting, Alice can only prepare phase-randomized weak coherent states, and Bob can only perform phase-randomized homodyne measurements. The homodyne measurement operator, \(\hat{Q}_{\varphi_{1}}\otimes\hat{Q}_{\varphi_{2}}\), is given in Eq. (6). Afterward, Bob estimates the photon-number-cat state measurement expectations via homodyne tomography methods, as shown in Eq. (28).

### Photon-number tagging of the source and receiver

In the last section, we have shown that raw keys can be equivalently seen as generated from qubit measurements on \(A^{\prime}\) and \(B^{\prime}\). Should Alice and Bob instead measure the qubit systems on the complementary bases, the probability that they obtain different results, namely the phase-error rate, \(e^{X}\), could be used to upper-bound the average privacy amplification cost per round as \(h(e^{X})\), where \(h(p)=-p\log p-(1-p)\log(1-p)\) is the binary entropy function. Nevertheless, the actual privacy leakage may be less than this direct calculation. Note that the above privacy leakage estimation is averaged over the overall quantum state transmitted from Alice to Bob. The contribution to the privacy leakage of different components in the quantum signals can differ. For instance, Eve can apply the photon-number-splitting (PNS) attack in the rounds in which Alice transmits two photons and Bob receives only a single photon [44; 45]; hence no privacy should be expected, rendering the phase-error probability \(1/2\) in these rounds. If Alice and Bob can distinguish such rounds from the others, they can simply discard them in privacy amplification. The GLLP framework makes the above statement rigorous [24; 26]. Suppose Alice and Bob can categorize the transmitted quantum signals into different groups, or tags, and evaluate the phase-error probabilities separately. The privacy amplification cost can be evaluated by \(\sum_{i}Q_{i}h(e_{i}^{X})\), where \(Q_{i}\) is the probability that a signal in the \(i\)'th group is transmitted and detected, namely the gain, and \(e_{i}^{X}\) is the phase-error probability of the group. Due to the concavity of the entropy function, this estimation is no larger than \(h(\sum_{i}Q_{i}e_{i}^{X})\). In DV QKD, the tagging idea has been well practiced.
In the coherent-state-based BB84 protocol, phase randomization on the source side diagonalizes the quantum signals on the Fock basis [28; 29], and an ideal single-photon detector naturally distinguishes the single-photon components from the other detected Fock components, allowing Alice and Bob to tag the quantum states with respect to the photon number [46]. Similarly, we now prove that the photon-number tag can also be applied to the phase-randomized CV QKD protocol in Table 1. On the source side, the phase randomization diagonalizes the state on the joint Fock basis, \[\begin{split}\hat{\rho}^{Z}&=\int_{0}^{2\pi}\frac{d\varphi_{a}}{2\pi}\,\frac{1}{2}\left(\left|0\right\rangle_{A_{1}}\left\langle 0\right|\otimes\left|\sqrt{\mu}e^{i\varphi_{a}}\right\rangle_{A_{2}}\left\langle\sqrt{\mu}e^{i\varphi_{a}}\right|+\left|\sqrt{\mu}e^{i\varphi_{a}}\right\rangle_{A_{1}}\left\langle\sqrt{\mu}e^{i\varphi_{a}}\right|\otimes\left|0\right\rangle_{A_{2}}\left\langle 0\right|\right)\\ &=\sum_{m=0}^{\infty}\Pr(m)\frac{1}{2}\left(\left|0m\right\rangle_{A_{1}A_{2}}\left\langle 0m\right|+\left|m0\right\rangle_{A_{1}A_{2}}\left\langle m0\right|\right),\end{split} \tag{8}\] where \(\Pr(m)=e^{-\mu}\mu^{m}/m!\) is the Poisson distribution. Consequently, one can virtually insert a photon-number measurement after phase randomization to measure the total photon number on the two modes without changing the state, as shown in Fig. 2(b). On the detection side, when Bob takes the \(Z\)-basis measurement, the phase-randomized homodyne detector POVM elements can be expanded on the Fock basis [23], \[\begin{split}\hat{\Pi}(q_{1},q_{2})&=\int_{0}^{2\pi}\frac{d\varphi_{b}}{2\pi}\left|q_{1}(\varphi_{b})\right\rangle_{B_{1}}\left\langle q_{1}(\varphi_{b})\right|\otimes\left|q_{2}(\varphi_{b})\right\rangle_{B_{2}}\left\langle q_{2}(\varphi_{b})\right|\\ &=\sum_{n=0}^{\infty}\sum_{k_{0}=0}^{n}\sum_{l_{0}=0}^{n}\psi_{k_{0}}(q_{1})\psi_{l_{0}}(q_{1})\psi_{n-k_{0}}(q_{2})\psi_{n-l_{0}}(q_{2})\left|k_{0},n-k_{0}\right\rangle_{B_{1}B_{2}}\left\langle l_{0},n-l_{0}\right|,\end{split} \tag{9}\] where \[\psi_{n}(q_{j})=\frac{1}{\sqrt{2^{n}n!\sqrt{2\pi}}}H_{n}(q_{j}/\sqrt{2})e^{-q_{j}^{2}/4} \tag{10}\] is the coordinate representation of the Fock state \(\left|n\right\rangle\), with \(H_{n}\) being the \(n\)-th Hermite polynomial. Therefore, one can virtually insert another photon-number measurement after phase randomization before the squashing channel of Eq. (4) on the detection side to measure the total photon number of the received state, as shown in Fig. 2(d). Based on the above results, we depict a virtual quantum circuit of the protocol when both Alice and Bob choose the \(Z\)-basis in Fig. 2(e). We denote the photon-number measurement results on the source side and the detection side as \(m\) and \(n\), respectively. Alice and Bob can thus distill secret keys separately based on the photon-number tag of \((m,n)\). A lower bound on the key rate can then be given by [24; 26] \[r\geq\sum_{m=0}^{\infty}Q_{m,m}[1-h(e_{m,m}^{X})]-fQ^{Z}h(e^{Z}), \tag{11}\] where \(Q_{m,m}\) and \(e_{m,m}^{X}\) denote the gain and the phase-error rate in the rounds where \(m\) photons are sent and \(m\) photons are accepted, \(Q^{Z}\) is the \(Z\)-basis gain, \(e^{Z}\) is the bit-error rate, and \(f\) is the efficiency of information reconciliation. Note that the gains should not be confused with the quadrature observables. In addition, since Bob's key decoding succeeds only probabilistically, as he accepts only quadratures beyond the threshold, we use the term "accepting" to mean receiving a certain state and passing the post-selection. All the gains and error rates in the key-rate formula are restricted to the rounds with light intensity \(\mu_{a}=\mu\).
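As a numerical illustration of Eq. (11), the sketch below (ours; all gains and error rates are made-up values, not results from this paper) evaluates the tagged key rate and checks the concavity point made above, namely that evaluating the entropy cost per tag never yields a worse bound than pooling the tags:

```python
import numpy as np

def h(p):
    """Binary entropy, h(p) = -p log2 p - (1-p) log2 (1-p)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Illustrative numbers only: gains Q_{m,m}, phase-error rates e_{m,m}^X,
# Z-basis gain Q^Z, bit-error rate e^Z, reconciliation efficiency f.
Q = {1: 0.040, 2: 0.008}
eX = {1: 0.02, 2: 0.10}
QZ, eZ, f = 0.050, 0.01, 1.1

# Eq. (11): sum over photon-number tags, minus the error-correction cost.
r = sum(Q[m] * (1 - h(eX[m])) for m in Q) - f * QZ * h(eZ)
print(f"key rate per round: {r:.4f}")

# Tagging helps: the per-tag privacy cost never exceeds the pooled cost.
Qtot = sum(Q.values())
pooled = Qtot * h(sum(Q[m] * eX[m] for m in Q) / Qtot)
assert sum(Q[m] * h(eX[m]) for m in Q) <= pooled + 1e-12
```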
We discard the rounds where the total photon number decreases after state transmission, as the photons that are lost may come from Eve's interception, with which Eve can apply a PNS attack. The corresponding phase-error probability is \(1/2\); hence these rounds do not contribute to key generation. In addition, as the transmission channel is naturally lossy in a usual setting, we do not account for the terms where the total photon number increases. Note that the key-rate formula in Eq. (11) assumes forward reconciliation, where Bob reconciles his raw keys to Alice's, \(k_{a}\), and then the users perform privacy amplification. The rounds where Alice sends a non-vacuum state while Bob receives a vacuum state are hence insecure, since the information carriers are lost through the channel. Instead, if reverse reconciliation is used, where Alice reconciles her raw keys to Bob's, the rounds where Bob receives a vacuum state become secure. One can interpret Bob's raw keys in these rounds as generated from local random numbers, and no information is known _a priori_ in transmission. This is a common practice in usual CV QKD and is in accordance with the observation in Ref. [22]. We present as Theorem 1 the key-rate lower bound with reverse reconciliation as the main key-rate formula to be used throughout this paper.

**Theorem 1**.: _For the time-bin CV QKD protocol in Table 1 with reverse reconciliation, in the asymptotic limit of an infinite data size, the distillable secure key rate \(r\) is lower bounded by \(r_{\mathrm{rev}}\),_ \[r\geq r_{\mathrm{rev}}=Q_{*,0}+\sum_{m=1}^{\infty}Q_{m,m}[1-h(e_{m,m}^{X})]-fQ^{Z}h(e^{Z}), \tag{12}\] _where \(Q_{m,m}\) and \(e_{m,m}^{X}\) denote the gain and the phase-error rate in the rounds where \(m\) photons are sent and \(m\) photons are accepted, \(Q^{Z}\) is the \(Z\)-basis gain, \(e^{Z}\) is the bit-error rate, and \(f\) is the efficiency of information reconciliation. \(Q_{*,0}\) represents the gain of the rounds where Bob accepts a vacuum state, regardless of the state sent by Alice._

### Phase-error probability calculation

We now evaluate the key-rate formula in Eq. (12) with Fock-basis observables [5]. The bit-error rate \(e^{Z}\) can be directly measured, as the \(Z\)-measurement statistics in the entanglement-based squashing model are the same as the realistic statistics. To evaluate the gains and phase-error probabilities, we first determine the state before the phase-error measurement under each photon-number tag. Define \(\hat{P}_{m}\) as the projector onto the \(m\)-photon subspace of modes \(A_{1}\) and \(A_{2}\). When sending \(m\) photons, the source in Fig.
2(b) collapses to \[\hat{P}_{m}^{A_{1}A_{2}}\hat{\rho}_{A^{\prime}A_{1}A_{2}}\hat{P}_{m}^{A_{1}A_{2}}=\Pr(m)\left|\Psi_{m}\right\rangle_{A^{\prime}A_{1}A_{2}}\left\langle\Psi_{m}\right|, \tag{13}\] where \[\begin{split}\left|\Psi_{m}\right\rangle_{A^{\prime}A_{1}A_{2}}&=\frac{1}{\sqrt{2}}(\left|0\right\rangle_{A^{\prime}}\left|0m\right\rangle_{A_{1}A_{2}}+\left|1\right\rangle_{A^{\prime}}\left|m0\right\rangle_{A_{1}A_{2}}),\\ \Pr(m)&=\frac{e^{-\mu}\mu^{m}}{m!}.\end{split} \tag{14}\] Upon transmitting the \(m\)-photon state, \(\left|\Psi_{m}\right\rangle_{A^{\prime}A_{1}A_{2}}\), the \(n\)-photon state is selected on the detection side after the squashing channel, \[\mathcal{F}_{\mathrm{rand}}^{B_{1}B_{2}\to B^{\prime}}\left\{\hat{P}_{n}^{B_{1}B_{2}}\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}\left[\Pr(m)\left|\Psi_{m}\right\rangle_{A^{\prime}A_{1}A_{2}}\left\langle\Psi_{m}\right|\right]\hat{P}_{n}^{B_{1}B_{2}}\right\}=Q_{m,n}\hat{\rho}_{A^{\prime}B^{\prime}}^{(m,n)}, \tag{15}\] where \(\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}\) represents Eve's channel, and \(Q_{m,n}\) denotes the probability of sending an \(m\)-photon state and accepting an \(n\)-photon state, namely the gain for the states tagged by the photon-number tuple, \((m,n)\). Note that the probability that Bob aborts the signal is reflected in \(Q_{m,n}\). The normalized state, \(\hat{\rho}_{A^{\prime}B^{\prime}}^{(m,n)}\), is a bipartite qubit state, with which we evaluate the phase-error probability, \[e_{m,n}^{X}=\mathrm{Tr}\left[\hat{\rho}_{A^{\prime}B^{\prime}}^{(m,n)}\left(\left|+-\right\rangle_{A^{\prime}B^{\prime}}\left\langle+-\right|+\left|-+\right\rangle_{A^{\prime}B^{\prime}}\left\langle-+\right|\right)\right]. \tag{16}\] With respect to the complementary-basis measurement result on the qubit \(A^{\prime}\), \(+\) or \(-\), the state on modes \(A_{1}\) and \(A_{2}\) collapses to \[\left|\Psi_{m}^{\pm}\right\rangle_{A_{1}A_{2}}=\frac{1}{\sqrt{2}}(\left|0m\right\rangle\pm\left|m0\right\rangle)_{A_{1}A_{2}} \tag{17}\] with equal probabilities. For the state on Bob's systems \(B_{1}\) and \(B_{2}\) under tag \((m,n)\), \(\hat{\rho}_{B_{1}B_{2}}^{(m,n)}\), the statistics of the complementary measurement are given by \[\mathrm{Tr}\left[\hat{\rho}_{B_{1}B_{2}}^{(m,n)}\hat{M}_{\pm}\right]=\mathrm{Tr}\left[\hat{\rho}_{B_{1}B_{2}}^{(m,n)}\hat{P}_{n}\hat{M}_{\pm}\hat{P}_{n}\right], \tag{18}\] where \[\hat{M}_{\pm}=\frac{1}{2}\int_{\mathbf{R_{0}}}dq_{1}dq_{2}\int_{0}^{2\pi}\frac{d\varphi_{b}}{2\pi}\left[|q_{1}(\varphi_{b}),q_{2}(\varphi_{b})\rangle\pm|q_{2}(\varphi_{b}),q_{1}(\varphi_{b})\rangle\right]\left[\langle q_{1}(\varphi_{b}),q_{2}(\varphi_{b})|\pm\langle q_{2}(\varphi_{b}),q_{1}(\varphi_{b})|\right]. \tag{19}\] In the last equation in Eq. (18), we utilize the fact that \(\hat{\rho}_{B_{1}B_{2}}^{(m,n)}\) acts on the \(n\)-photon space of system \(B_{1}B_{2}\).
Combining the above results, we can express the phase-error rate for each tag with observables on the optical modes:

**Proposition 1**.: _The phase-error rate \(e_{m,n}^{X}\) of the rounds where \(m\) photons are sent and \(n\) photons are accepted can be calculated by:_ \[\frac{Q_{m,n}e_{m,n}^{X}}{\Pr(m)}=\frac{1}{2}\mathrm{Tr}\big{[}\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}(|\Psi_{m}^{+}\rangle_{A_{1}A_{2}}\,\langle\Psi_{m}^{+}|)\hat{P}_{n}\hat{M}_{-}\hat{P}_{n}+\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}(|\Psi_{m}^{-}\rangle_{A_{1}A_{2}}\,\langle\Psi_{m}^{-}|)\hat{P}_{n}\hat{M}_{+}\hat{P}_{n}\big{]}, \tag{20}\] _where \(Q_{m,n}\) denotes the probability of sending an \(m\)-photon state and accepting an \(n\)-photon state, and \(\Pr(m)\) is the probability of the source emitting \(m\) photons. \(\mathcal{N}_{E}\) denotes Eve's channel on the two optical modes and \(\hat{P}_{n}\) denotes the projector onto the \(n\)-photon subspace. \(|\Psi_{m}^{\pm}\rangle\) and \(\hat{M}_{\pm}\) are defined in Eqs. (17) and (19), respectively._

* For the single-photon component, \[\begin{split}\frac{Q_{1,1}e_{1,1}^{X}}{\Pr(1)}=&\frac{c_{1}}{2}\bigg{\{}\mathrm{Tr}\left[\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}(|\Psi_{1}^{+}\rangle_{A_{1}A_{2}}\,\langle\Psi_{1}^{+}|)\frac{1}{2}(|01\rangle_{B_{1}B_{2}}-|10\rangle_{B_{1}B_{2}})(\langle 01|_{B_{1}B_{2}}-\langle 10|_{B_{1}B_{2}})\right]\\ +&\mathrm{Tr}\left[\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}(|\Psi_{1}^{-}\rangle_{A_{1}A_{2}}\,\langle\Psi_{1}^{-}|)\frac{1}{2}(|01\rangle_{B_{1}B_{2}}+|10\rangle_{B_{1}B_{2}})(\langle 01|_{B_{1}B_{2}}+\langle 10|_{B_{1}B_{2}})\right]\bigg{\}},\end{split}\] (21) where \[c_{1}=\int_{\mathbf{R_{0}}}dq_{1}dq_{2}[\psi_{0}^{2}(q_{1})\psi_{1}^{2}(q_{2})+\psi_{0}^{2}(q_{2})\psi_{1}^{2}(q_{1})],\] (22) and the gain \(Q_{1,1}\) is given by \[\frac{Q_{1,1}}{\Pr(1)}=c_{1}\mathrm{Tr}\left\{\mathcal{N}_{E}\left[\mathrm{Tr}_{A^{\prime}}(|\Psi_{1}\rangle_{A^{\prime}A_{1}A_{2}}\,\langle\Psi_{1}|)\right](|01\rangle_{B_{1}B_{2}}\,\langle 01|+|10\rangle_{B_{1}B_{2}}\,\langle 10|)\right\}.\] (23) Up to the less-than-unity factor \(c_{1}\) that arises from the data post-selection in key mapping, the formulae are the same as the complementary-basis result in the coherent-state-based BB84 protocol [47].
* For the two-photon subspace, based on Eq. (9) and Eq.
(20), we have, \[\begin{split}\frac{Q_{2,2}e_{2,2}^{X}}{\Pr(2)}=&\frac{1}{2}(c_{2}^{02}-c_{2}^{11})\mathrm{Tr}\left[\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}(|\Psi_{2}^{+}\rangle_{A_{1}A_{2}}\,\langle\Psi_{2}^{+}|)\frac{1}{2}(|02\rangle_{B_{1}B_{2}}-|20\rangle_{B_{1}B_{2}})(\langle 02|_{B_{1}B_{2}}-\langle 20|_{B_{1}B_{2}})\right]\\ &+\frac{1}{2}(c_{2}^{02}+c_{2}^{11})\mathrm{Tr}\left[\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}(|\Psi_{2}^{-}\rangle_{A_{1}A_{2}}\,\langle\Psi_{2}^{-}|)\frac{1}{2}(|02\rangle_{B_{1}B_{2}}+|20\rangle_{B_{1}B_{2}})(\langle 02|_{B_{1}B_{2}}+\langle 20|_{B_{1}B_{2}})\right]\\ &+c_{2}^{11}\mathrm{Tr}\left[\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}(|\Psi_{2}^{-}\rangle_{A_{1}A_{2}}\,\langle\Psi_{2}^{-}|)\,|11\rangle_{B_{1}B_{2}}\,\langle 11|\right],\end{split}\] (24) where \[\begin{split}c_{2}^{02}&=\int_{\mathbf{R_{0}}}dq_{1}dq_{2}[\psi_{0}^{2}(q_{1})\psi_{2}^{2}(q_{2})+\psi_{2}^{2}(q_{1})\psi_{0}^{2}(q_{2})],\\ c_{2}^{11}&=\int_{\mathbf{R_{0}}}dq_{1}dq_{2}[2\psi_{1}^{2}(q_{1})\psi_{1}^{2}(q_{2})],\end{split}\] (25) and the two-photon gain is given by \[\begin{split}\frac{Q_{2,2}}{\Pr(2)}=&c_{2}^{02}\mathrm{Tr}\left\{\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}[\mathrm{Tr}_{A^{\prime}}(|\Psi_{2}\rangle_{A^{\prime}A_{1}A_{2}}\,\langle\Psi_{2}|)](|02\rangle_{B_{1}B_{2}}\,\langle 02|+|20\rangle_{B_{1}B_{2}}\,\langle 20|)\right\}\\ &+c_{2}^{11}\mathrm{Tr}\left\{\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}[\mathrm{Tr}_{A^{\prime}}(|\Psi_{2}\rangle_{A^{\prime}A_{1}A_{2}}\,\langle\Psi_{2}|)]\,|11\rangle_{B_{1}B_{2}}\,\langle 11|\right\}.\end{split}\] (26)
* The probability of accepting a vacuum state when employing reverse reconciliation is given by \[Q_{*,0}=\mathrm{Tr}\left[\hat{P}_{0}^{B_{1}B_{2}}\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B_{2}}\left(\hat{\rho}^{Z}\right)\hat{P}_{0}^{B_{1}B_{2}}\right]\int_{\mathbf{R_{0}}}2\psi_{0}^{2}(q_{1})\psi_{0}^{2}(q_{2})\,dq_{1}dq_{2},\] (27) where \(\hat{\rho}^{Z}\) is the \(Z\)-basis state sent by the source, given in Eq. (8); hence \(Q_{*,0}\) is given by the product of the probability of receiving a vacuum state in the \(Z\)-basis rounds and a post-selection-related integration factor. Note that the former value is independent of the post-selection.

## IV Parameter estimation and practical protocol

We briefly show how to estimate the parameters derived in Sec. III.3 with a practical setup. In the actual protocol, we do not have photon-number-resolving detectors with which one could directly measure the above parameters. In addition, the phase-error probabilities and gains are defined by particular Fock-basis states, yet the actual photon source emits coherent states. Nevertheless, we can construct unbiased estimators with the available states and detection settings to evaluate these values. On the detection side, we apply the homodyne tomography technique to evaluate the photon-number observables [35; 36; 37; 38; 39; 40]. Homodyne tomography allows unbiased estimation of the expectation values of a variety of observables, including the photon-number observables, on an unknown quantum state. On the source side, we extend the decoy-state method [28; 29] to evaluate the statistics defined by the non-classical Fock states with the use of the coherent states at hand. We will give a practical version of the protocol at the end of this section. A fully detailed discussion of how the specific parameters entering the key-rate calculation can be practically estimated is given in Appendix A.
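Before turning to the estimation itself, note that post-selection factors such as \(c_{1}\) in Eq. (22) are fixed Gaussian integrals over the acceptance region \(\mathbf{R_{0}}\) and can be evaluated numerically. A minimal sketch (ours; the threshold value \(\tau=1\) is illustrative) using the Fock wavefunctions of Eq. (10):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite, factorial

def psi_sq(n, q):
    """|psi_n(q)|^2 for the Fock-state wavefunction of Eq. (10)."""
    norm = 2.0**n * factorial(n) * np.sqrt(2.0 * np.pi)
    return eval_hermite(n, q / np.sqrt(2.0))**2 * np.exp(-q**2 / 2.0) / norm

tau = 1.0  # illustrative key-mapping threshold

# 1D pieces of the separable integral over R0 = {|q1| < tau} x {|q2| > tau}.
p0_in = quad(lambda q: psi_sq(0, q), -tau, tau)[0]         # vacuum inside the band
p1_in = quad(lambda q: psi_sq(1, q), -tau, tau)[0]         # one photon inside
p0_out = 1.0 - p0_in
p1_out = 1.0 - p1_in

c1 = p0_in * p1_out + p1_in * p0_out  # Eq. (22), using separability of R0
print(c1)
```

The same decomposition applies to \(c_{2}^{02}\) and \(c_{2}^{11}\) in Eq. (25) with \(\psi_{2}\) in place of \(\psi_{1}\).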
### Effective photon-number resolving via homodyne tomography

Since the eigenstates of the quadrature observables, \(\ket{q(\varphi)}\), form a complete basis on an optical mode, one can reconstruct a general observable on an optical mode with homodyne measurements. In our study, the parameters to be estimated involve photon-number measurements on two modes in the form of \(\hat{O}_{1}\otimes\hat{O}_{2}=\ket{n_{1}}_{B_{1}}\bra{m_{1}}\otimes\ket{n_{2}}_{B_{2}}\bra{m_{2}}\). Their expectation values on an arbitrary state, \(\hat{\rho}\), can be obtained from two independent homodyne measurements with randomized LO phases, \[\begin{split}\langle\hat{O}_{1}\otimes\hat{O}_{2}\rangle&=\mathrm{Tr}[(\hat{O}_{1}\otimes\hat{O}_{2})\hat{\rho}]\\ &:=\int_{0}^{\pi}\frac{d\varphi_{1}}{\pi}\int_{-\infty}^{\infty}dq_{1}\int_{0}^{\pi}\frac{d\varphi_{2}}{\pi}\int_{-\infty}^{\infty}dq_{2}\,\mathcal{R}[\hat{O}_{1}](q_{1},\varphi_{1})\mathcal{R}[\hat{O}_{2}](q_{2},\varphi_{2})p(q_{1},q_{2}|\varphi_{1},\varphi_{2}),\end{split} \tag{28}\] where \(p(q_{1},q_{2}|\varphi_{1},\varphi_{2})\) is the joint probability of the quadrature measurements on the two modes conditioned on phases \(\varphi_{1}\) and \(\varphi_{2}\). The estimators, \(\mathcal{R}[\hat{O}_{1}](q_{1},\varphi_{1})\) and \(\mathcal{R}[\hat{O}_{2}](q_{2},\varphi_{2})\), link the quadrature measurement statistics with \(\langle\hat{O}_{1}\otimes\hat{O}_{2}\rangle\). For a homodyne detector with efficiency \(\eta\), the estimator for the observable \(\ket{n}\bra{n+d}\) is given by \[\mathcal{R}_{\eta}[\ket{n}\bra{n+d}](q,\varphi)=e^{id(\varphi+\frac{\pi}{2})}\sqrt{\frac{n!}{(n+d)!}}\int_{-\infty}^{\infty}dk\,|k|\exp\left(\frac{1-2\eta}{2\eta}k^{2}-ikq\right)k^{d}L_{n}^{d}(k^{2}), \tag{29}\] where \(L_{n}^{d}\) is the generalized Laguerre polynomial. The estimator is shown to be bounded for detector efficiency \(\eta>1/2\)[40; 38], a mild requirement for current technologies [48; 49]. Consequently, repeated measurements allow the users to obtain an unbiased estimation of the photon-number observables that converges in probability. In Appendix A.1, we shall provide more details of the homodyne tomography techniques.

### Generalized decoy-state method

To effectively realize the non-classical states on the source side, we extend the standard decoy-state method [28; 29]. We take advantage of two-mode coherent states with simultaneous phase randomization on the two modes. We denote the state with phase difference \(\varphi\) as \[\hat{\rho}_{\mu}^{\varphi}=\int_{0}^{2\pi}\frac{d\theta}{2\pi}\left|\sqrt{\frac{\mu}{2}}e^{i\theta}\right\rangle\left\langle\sqrt{\frac{\mu}{2}}e^{i\theta}\right|\otimes\left|\sqrt{\frac{\mu}{2}}e^{i(\theta+\varphi)}\right\rangle\left\langle\sqrt{\frac{\mu}{2}}e^{i(\theta+\varphi)}\right|, \tag{30}\] where we specify the light intensity with the subscript, \(\mu\). With a proper linear combination of these states, we can effectively construct the photon-number-cat states that we are interested in. It is well known that \((|01\rangle\pm|10\rangle)/\sqrt{2}\) is the single-photon component of \(\hat{\rho}_{\mu}^{0(\pi)}\), \[\Pr_{\mu}(1)\left|\Psi_{1}^{+(-)}\right\rangle\left\langle\Psi_{1}^{+(-)}\right|=\hat{P}_{1}\hat{\rho}_{\mu}^{0(\pi)}\hat{P}_{1}, \tag{31}\] where \(\Pr_{\mu}\) represents the Poisson distribution determined by the light intensity \(\mu\), as given in Eq. (14).
Thus, the estimation problem is transformed into the estimation of the single-photon yields of \(\hat{\rho}_{\mu}^{0}\) and \(\hat{\rho}_{\mu}^{\pi}\). For the multi-photon components \((|0m\rangle\pm|m0\rangle)/\sqrt{2}\), a direct calculation shows \[\Pr_{\mu}(m)\left|\Psi_{m}^{+}\right\rangle\left\langle\Psi_{m}^{+}\right|= \hat{P}_{m}\left(\hat{\rho}_{\mu}^{Z}+\frac{2^{m-2}}{m}\sum_{k=0}^{m-1}\hat{ \rho}_{\mu}^{\frac{2\pi k}{m}}-\frac{2^{m-2}}{m}\sum_{k=0}^{m-1}\hat{\rho}_{ \mu}^{\frac{2\pi k}{m}+\delta}\right)\hat{P}_{m}, \tag{32}\] \[\Pr_{\mu}(m)\left|\Psi_{m}^{-}\right\rangle\left\langle\Psi_{m}^{-}\right|= \hat{P}_{m}\left(\hat{\rho}_{\mu}^{Z}-\frac{2^{m-2}}{m}\sum_{k=0}^{m-1}\hat{ \rho}_{\mu}^{\frac{2\pi k}{m}}+\frac{2^{m-2}}{m}\sum_{k=0}^{m-1}\hat{\rho}_{ \mu}^{\frac{2\pi k}{m}+\delta}\right)\hat{P}_{m}, \tag{33}\] where \(\delta=\pi\) for odd \(m\) and \(\pi/2\) for even \(m\), and \(\hat{\rho}_{\mu}^{Z}\) is the state emitted from the source in a key generation round. Consequently, the terms that define \(e_{m,m}^{X}\) and \(Q_{m,m}\) can be constructed from the statistics when emitting the states of \(\hat{\rho}_{\mu}^{Z}\) and \(\hat{\rho}_{\mu}^{\varphi}\) with \(\varphi\in\{2\pi k/m,2\pi k/m+\delta\}_{k=0}^{m-1}\). Notably, the extended decoy method allows estimating the gains with the number of parameters increasing only linearly in the photon number. In later discussions, we shall utilize up to the two-photon components. Specifically, for \(m=2\), \[\Pr_{\mu}(2)\left|\Psi_{2}^{\pm}\right\rangle\left\langle\Psi_{2 }^{\pm}\right|= \hat{P}_{2}\big{[}\hat{\rho}_{\mu}^{Z}\pm\left(\frac{1}{2}\hat{ \rho}_{\mu}^{0}+\frac{1}{2}\hat{\rho}_{\mu}^{\pi}\right) \tag{34}\] \[\mp\left(\frac{1}{2}\hat{\rho}_{\mu}^{\frac{\pi}{2}}+\frac{1}{2} \hat{\rho}_{\mu}^{\frac{3\pi}{2}}\right)\big{]}\hat{P}_{2}.\] ### General parameter estimation under the coherent attack In this section, we discuss the security analysis and parameter estimation in the most general case. In the most general adversarial scenario, namely the coherent attack, Eve can apply a joint quantum operation over the rounds for eavesdropping, which may correlate or even entangle the states transmitted to Bob. Eve collects all the side information leaked to her in the protocol and then guesses the legitimate users' keys. Under such an attack, the measurement statistics obtained by Bob are generally correlated over the rounds [42]. The complementarity-based security analysis remains valid with finite statistics under a coherent attack [21]. The information leakage is quantified via the number of phase errors, while the occurrence of a phase error in each round may be non-i.i.d. That is, one should interpret the gains and phase-error rates in Eq. (12) as frequencies in non-i.i.d. statistics. For instance, \(Q_{1,1}\) should be regarded as the frequency of the events that Alice sends a single-photon state, and Bob accepts a single-photon state among key generation rounds in the virtual experiment. The remaining problem is to estimate these parameters via observed statistics. To tackle the non-i.i.d. parameter estimation problem, we can apply a martingale-based analysis. We shall present the details in Appendix B. Here, we explain its basic idea. 
As a starting point, in the \(i\)'th round, the users can evaluate the probability of choosing some experimental setting and observing a particular event conditioned on the experimental history, including the events of sending an \(m\)-photon state and accepting an \(n\)-photon state and the occurrence of a phase error if they choose the key generation setting, and observing a particular homodyne detection result if they choose to perform the parameter estimation operations. The events' correlations with the experimental history are inherently taken into account in the definitions of conditional probabilities. We can set up martingales for a series of events, such as the occurrence of phase errors in each round of the virtual protocol, and link their frequencies with the associated conditional probabilities via concentration results like Azuma's inequality [30]. Note that such concentration results work for general non-i.i.d. correlations. Furthermore, the settings randomly chosen by Alice and Bob are independent of the experimental history and unknown to Eve. Therefore, conditioned on the experimental history, the probabilities of different possible events in a round are linked. For instance, the probability that the users take key generation measurements and a phase error occurs in a round is measurable via the probability that they instead take parameter estimation measurements and observe certain statistics. The relation is in the form of Eq. (20), while now the probabilities are interpreted as conditional ones that cover the correlations. The relations between conditional probabilities then link the martingales for the parameter estimation measurement with the ones for the gains and phase-error rates, completing the parameter estimation. In the end, the total number of secure key bits that can be distilled from finite statistics under the coherent attack is given by a formula of the following form: **Theorem 2** (Informal).: _The finite-size key rate \(r\) is lower bounded by:_ \[r\geq \bar{Q}_{\star,0}^{L}+\sum_{m=1}^{\infty}\left(\bar{Q}_{m,m}^{L} \left\{1-h[\bar{e}_{m,m}^{X(U)}]\right\}+\frac{1}{N_{\mu}^{zz}}\log\varepsilon _{m,m}^{\mathrm{pa}}\right)\] \[-fQ^{Z}h(e^{Z}), \tag{35}\] _where the barred notations specify that the quantities should be considered as average values of non-i.i.d. statistics and superscripts \(L\) and \(U\) represent the estimated lower and upper bounds. \(\varepsilon_{m,m}^{\mathrm{pa}}\) denotes the failure probability [50, 21] of the estimation for the group of rounds where \(m\) photons are sent and \(m\) photons are accepted. \(N_{\mu}^{zz}\) is the number of key-generation rounds._ Note that the failure probability \(\varepsilon_{m,m}^{\mathrm{pa}}\) comes from the possible estimation inaccuracy of the decoy state method, homodyne tomography, and the convergence of the martingale-based concentration results, introducing an additional cost of \(-\log\varepsilon_{m,m}^{\mathrm{pa}}\) key bits. In the asymptotic limit where the number of key generation rounds \(N_{\mu}^{zz}\to\infty\), the average cost of this term per round converges to zero, and the key rate formula reduces to that in Eq. (12). ### Practical protocol Combining the above ingredients, we provide a practical protocol that utilizes up to the two-photon components in Table 3.
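To make the homodyne-tomography subroutine of this protocol concrete, here is a sketch (assuming numpy/scipy; the efficiency \(\eta=0.85\) and the sample list are illustrative assumptions, not values from this work) that numerically evaluates the kernel of Eq. (29) and the sample average of Eq. (28):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

def kernel(n, d, q, phi, eta=0.85):
    """Estimator R_eta[|n><n+d|](q, phi) of Eq. (29); bounded only for eta > 1/2."""
    a = (1.0 - 2.0 * eta) / (2.0 * eta)   # Gaussian damping, < 0 for eta > 1/2
    pref = np.exp(1j * d * (phi + np.pi / 2)) * np.sqrt(
        factorial(n) / factorial(n + d))
    amp = lambda k: abs(k) * np.exp(a * k * k) * k**d * eval_genlaguerre(n, d, k * k)
    # Split exp(-ikq) into real and imaginary parts for the quadrature routine.
    re, _ = quad(lambda k: amp(k) * np.cos(k * q), -np.inf, np.inf, limit=400)
    im, _ = quad(lambda k: -amp(k) * np.sin(k * q), -np.inf, np.inf, limit=400)
    return pref * (re + 1j * im)

# Tomographic estimate: average the kernel over homodyne samples (q_i, phi_i),
# with phi_i drawn uniformly from [0, pi); the data below are placeholders.
samples = [(0.31, 0.12), (-1.24, 2.05), (0.68, 1.13)]
estimate = np.mean([kernel(1, 0, q, phi) for q, phi in samples])
```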
In parameter estimation, Bob applies homodyne tomography to estimate the statistics of measuring photon-number observables, including \(|00\rangle\), \((|01\rangle\pm|10\rangle)/\sqrt{2}\), \((|02\rangle\pm|20\rangle)/\sqrt{2}\), and \(|11\rangle\), on the various states transmitted from the source, originally \(\hat{\rho}_{\mu_{a}}^{Z}\) and \(\hat{\rho}_{\mu_{a}}^{\varphi_{a}}\). Afterward, the users can obtain upper and lower bounds on the gains and phase-error rates by applying the extended decoy-state method. We provide the detailed parameter estimation procedures in Appendix A and discuss dealing with general non-i.i.d. statistics under a coherent attack in Appendix B. In the end, we make some remarks on the protocol. Alice's announcement of the relative phase does not reveal key information since the key is encoded in the relative intensity between the two modes. In addition, we parameterize the discrete phase randomization with the same value, \(D\), for both state preparation and homodyne detection in different basis choices. This does not need to be the case in practice, and one can take different phase-randomization precisions in these procedures for higher accuracy or to match the devices' capabilities. ## V Performances and comparison We demonstrate in this section the key-rate-versus-distance performance of the time-bin CV QKD protocol. We consider a thermal noise channel with a unit-efficiency homodyne detector. An inefficient detector with thermal electronic noise can be equated to a fiber section with transmittance equal to the detector efficiency, with the electronic noise absorbed into the channel excess noise (Eq. (C2)). The fiber attenuation is 0.2 dB/km, and the error-correction efficiency \(f\) is taken to be 1. The simulation formulae can be found in Appendix C. According to the key rate formula Eq. (12), the rounds where Alice sends \(m\) photons and Bob receives \(m\) photons are guaranteed to generate secure keys. We plot in Fig. 3a the asymptotic key rate of the \(i\)-photon protocol assuming perfect decoy estimation and no excess noise, where an \(i\)-photon protocol extracts secure keys only from components of at most \(i\) photons. In this ideal case, the phase error rates of all the protocols are zero. The optimized source intensities \(\mu\) and the post-selection thresholds \(\tau\) are listed in Table 4, together with the resulting \(Z\)-basis error rates. Notice that the two-photon-protocol key rate derived from our DV method is similar to that from Ref. [23] using the CV method, both with reverse reconciliation. This points to the connection between the DV and CV security analyses, as well as the validity of the DV reverse-reconciliation idea in Sec. III.2. To facilitate the discussion, we also plot in Fig. 3b the contribution of each photon component to the key rate at different distances. In each group of bars, the relative contributions of the vacuum, one-, two-, three-, and four-photon components are plotted respectively, where the \(m\)-photon contribution of the \(i\)-photon protocol is defined to be \(Q_{m,m}/(Q_{\star,0}+\sum_{m=1}^{i}Q_{m,m})\) and the vacuum contribution is \(Q_{\star,0}/(Q_{\star,0}+\sum_{m=1}^{i}Q_{m,m})\), i.e., the relative contribution to the raw key rate. It can be seen that the key rate improves as we make use of the multi-photon components. The improvement is most pronounced between the one- and two-photon protocols.
This is reasonable since in the one-photon protocol, the multi-photon components are considered insecure, thus limiting the source intensity. The low source intensity results in a severe bit error rate and higher post-selection thresholds, which in turn suppress the key rate. In the two-photon protocol, where the two-photon components are considered secure, the limit on the source intensity can be lifted and the bit error rate drops, resulting in higher key rates. This is manifested in Fig. 3b, where the single-photon protocol sees a significant vacuum contribution, whilst the two-photon protocol, at short distances, does not. Since the vacuum component yields a 50% bit error rate, the two-photon protocol has a lower bit error rate than the single-photon protocol, as seen in Table 4. When we further make use of the three-photon components, the key rate as well as the source intensity still increase, yet less markedly. This is mainly because the fraction of the rounds where three photons are sent and three photons are received, decaying cubically with the channel transmittance, is no longer dominant, especially at longer distances. For example, we see in Fig. 3b that at 20 km, the contribution of the three-photon component is less than that of the single- and two-photon components, and at 40 km the three-photon component contributes little to the key rate. This trend continues in the four-photon protocol, where in Fig. 3b we see the four-photon-component contribution is quite small at longer distances, and in turn the key rate of the four-photon protocol improves only marginally over that of the three-photon protocol. Simulation shows that resorting to higher-than-four photon-number components yields a negligible increase in the key rate. Hence, if we consider the protocol with infinite photon-number components, the 0-km key rate is around 0.31 bit/channel, while the BB84 protocol with currently the best single-photon detector of 80% efficiency [53; 54] has a 0-km key rate of 0.29 bit/channel, based on the model from Ref. [46]. Our key rate thus matches the best BB84 key rate with practically favorable devices. The practical performances of the two-photon time-bin CV QKD protocol are illustrated in Fig. 4. For a reasonable range of excess noise \(\xi\) from \(10^{-3}\) to \(10^{-2}\) with respect to the channel output, the key rate decays mildly, as shown in Fig. 4a. Notice that the key rate is almost unaffected at 0 km, since no noise photon is introduced to give a phase error, and the bit error is almost unchanged for a negligible increase in the shot-noise variance. This demonstrates the robustness of the phase-error analysis to the excess noise. Fig. 4b illustrates the key rate against the mode \begin{table} \begin{tabular}{l} 1. On Alice's side (source): \\ * \(Z\)_-basis_: \\ * Randomly select a key bit \(k_{a}\in\{0,1\}\), a phase factor \(\varphi_{a}\in\{2\pi j/D\}_{j=0}^{D-1}\), and a light intensity \(\mu_{a}\in\{\mu,\nu_{1},\nu_{2},0\}\). \\ * Prepare the coherent state \(\ket{0}_{A_{1}}\ket{\sqrt{\mu_{a}}e^{i\varphi_{a}}}_{A_{2}}\) for \(k_{a}=0\) or \(\ket{\sqrt{\mu_{a}}e^{i\varphi_{a}}}_{A_{1}}\ket{0}_{A_{2}}\) for \(k_{a}=1\). \\ * \(X\)_-basis_: \\ * Randomly select two phase factors \(\varphi_{a}^{1},\varphi_{a}^{2}\in\{2\pi j/D\}_{j=0}^{D-1}\) and a light intensity \(\mu_{a}\in\{\mu,\nu_{1},\nu_{2},0\}\). \\ * Prepare the coherent state \(\ket{\sqrt{\mu_{a}/2}\,e^{i\varphi_{a}^{1}}}\ket{\sqrt{\mu_{a}/2}\,e^{i\varphi_{a}^{2}}}\). \\ 2.
Alice sends the state through an authenticated channel to Bob. \\ 3. On Bob's side (detection): \\ * \(Z\)_-basis_: \\ * Randomly select a phase factor \(\varphi_{b}\in\{2\pi j/D\}_{j=0}^{D-1}\). \\ * Use homodyne detectors with LO phases \(\varphi_{b}\) to measure the modes and obtain quadratures \(q_{1}\) and \(q_{2}\). \\ * Decode the key bit as 0 if \(|q_{1}|<\tau\wedge|q_{2}|>\tau\), 1 if \(|q_{1}|>\tau\wedge|q_{2}|<\tau\), and \(\emptyset\) otherwise. \\ * \(X\)_-basis_: \\ * Randomly select two phases \(\varphi_{b}^{1},\varphi_{b}^{2}\in\{\pi j/D\}_{j=0}^{D-1}\). \\ * Use homodyne detectors with LO phases \(\varphi_{b}^{1}\) and \(\varphi_{b}^{2}\) to measure the modes and obtain quadratures \(q_{1}\) and \(q_{2}\). \\ 4. Alice announces the light intensity in each round and the relative phase between the two modes in \(X\)-basis states, \(\varphi_{a}=\varphi_{a}^{2}-\varphi_{a}^{1}\). \\ 5. Alice and Bob perform basis sifting, where they obtain raw keys in the rounds where they both choose the \(Z\)-basis with light intensity \(\mu_{a}=\mu\) and \(k_{b}\neq\emptyset\). \\ 6. Bob estimates the gains and phase-error rates from the statistics in the rounds where Alice sends the \(Z\)-basis states \(\hat{\rho}_{\mu_{a}}^{Z}\) or \(X\)-basis states \(\hat{\rho}_{\mu_{a}}^{\varphi_{a}}\) with \(\varphi_{a}\in\{0,\pi/2,\pi,3\pi/2\}\). \\ 7. Alice and Bob perform reverse information reconciliation and privacy amplification to obtain final keys. \\ \end{tabular} \end{table} Table 3: Practical time-bin CV QKD with decoy states using up to two photons reference misalignment, where the two optical modes generating the time-bin qubit differ by \(\delta\) in their reference phases intrinsically. The misalignment in relative phases does not affect the \(Z\) basis, as we encode the key bits into the relative intensities; it only affects the \(X\) basis, where the phase error is defined via flips in the relative phases. Our protocol is thus robust against misalignment. Fig. 4(c) illustrates the key rates of the decoy-state protocol in Sec. IV.2. We set one decoy level at vacuum, and heuristically optimize the two decoy intensities \(\nu_{1}\) and \(\nu_{2}\) and the signal intensity \(\mu\). The decoy estimations are done by linear programming with a cutoff photon number \(10\) [55; 56]. A detailed treatment of the decoy estimation of the two-photon protocol is given in Appendix A. For both the noiseless setup, with no excess noise or misalignment, and the practical setup, with \(10^{-3}\) excess noise and \(5^{\circ}\) misalignment, the 4-level decoy estimation is almost exact. This clearly surpasses the practical performance of the protocol in Ref. [23], since our protocol uses a simpler estimation of the phase error by identifying the principal components in key generation. The optimized parameters of the practical setup are listed in Table 5. ## VI Conclusion and Outlook In summary, we present the time-bin-encoding CV QKD protocol with a phase-error-based security analysis. Similar to the ideas in DV protocols [25] and other CV protocols [5], we introduce a squashing channel to "squash" the original privacy-estimation problem on two optical modes to a single qubit, enabling the definition of a phase-error rate. The phase randomization on both the source and detector enables the introduction of the photon-number-tagging method, identifying the central components for key generation.
Combined with the decoy-state estimation, our parameter estimation is made \begin{table} \begin{tabular}{c||c|c|c|c||c|c|c|c||c|c|c|c} \hline \hline & \(\mu_{1}\) & \(\mu_{2}\) & \(\mu_{3}\) & \(\mu_{4}\) & \(\tau_{1}\) & \(\tau_{2}\) & \(\tau_{3}\) & \(\tau_{4}\) & \(e_{Z}^{1}\) & \(e_{Z}^{2}\) & \(e_{Z}^{3}\) & \(e_{Z}^{4}\) \\ \hline 0 km & 0.356 & 1.487 & 2.395 & 2.395 & 1.437 & 1.641 & 1.845 & 1.845 & 30.95\% & 10.52\% & 5.31\% & 5.31\% \\ 10 km & 0.137 & 0.924 & 1.887 & 2.395 & 3.476 & 2.253 & 2.457 & 2.457 & 29.80\% & 14.84\% & 5.66\% & 4.17\% \\ 20 km & — & 0.728 & 1.487 & 1.887 & — & 3.068 & 3.068 & 3.272 & — & 15.48\% & 6.91\% & 3.85\% \\ 40 km & — & 0.356 & 0.728 & 1.172 & — & 4.495 & 4.495 & 4.699 & — & 28.52\% & 17.07\% & 8.81\% \\ \hline \hline \end{tabular} \end{table} Table 4: The optimized intensities and post-selection thresholds of protocols using one to four photons respectively at different distances. \(\mu_{i}\), \(\tau_{i}\) and \(e_{Z}^{i}\) denote the optimized intensity, post-selection threshold and the \(Z\)-basis error rate of the \(i\)-photon protocol. These parameters generate the four key rate plots in Fig. 3(a), assuming infinite decoy levels. Figure 3: **(a)** The solid lines illustrate the asymptotic key rates of protocols using at most one, two, three and four photons to generate keys. The dotted line is the linear key rate bound [51; 52]. We plot the PLOB bound here. The channel and devices are assumed to be ideal with no excess noise and inefficiency. **(b)** The relative contribution of \(Q_{m,m}\), i.e. the gain of the rounds where \(m\) photons are sent and \(m\) photons are received. The \(m\)-photon contribution of the \(i\)-photon protocol is relative to the raw key rate. Each group of bars illustrates the contribution of the vacuum, one-, two-, three- and four-photon components of the protocol at a certain distance. \begin{table} \begin{tabular}{c c c c c} \hline \hline Distance (km) & Signal intensity \(\mu\) & Threshold \(\tau\) & Decoy intensity \(\nu_{1}\) & Decoy intensity \(\nu_{2}\) \\ \hline 0 & 1.487 & 1.641 & \(1.737\times 10^{-1}\) & \(1.000\times 10^{-4}\) \\ 5 & 1.172 & 2.049 & \(3.406\times 10^{-3}\) & \(2.740\times 10^{-4}\) \\ 10 & 0.924 & 2.457 & \(2.993\times 10^{-2}\) & \(1.000\times 10^{-4}\) \\ 15 & 0.924 & 3.068 & \(1.861\times 10^{-2}\) & \(1.000\times 10^{-4}\) \\ 20 & 0.728 & 3.476 & \(1.355\times 10^{-2}\) & \(2.441\times 10^{-4}\) \\ 25 & 0.728 & 4.291 & \(1.355\times 10^{-2}\) & \(1.562\times 10^{-4}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Optimal protocol parameters in generating the key rate plot with \(10^{-3}\) excess noise and \(5^{\circ}\) misalignment, using 4 decoy levels, as in Fig. 4c. The heuristically optimized signal intensity, post-selection threshold and the decoy intensities are as given. One further decoy intensity is set to vacuum. The decoy estimation is done by linear programming with cutoff photon number 10. Figure 4: Practical performances of the two-photon protocol. **(a)** Key rate against excess noise with respect to channel output, assuming infinite decoy levels. **(b)** Key rate against misalignment, i.e., the phase-reference difference between the two optical modes generating the time-bin qubit, assuming infinite decoy levels. **(c)** Key rate derived using decoy methods. The noiseless setup (blue curves) uses fixed decoy levels at \(1.2\times 10^{-4}\), \(1\times 10^{-4}\) and vacuum, and the optimized protocol parameters of the practical setup (red curves) are listed in Table 5.
simple and efficient. We expect that our methods of constructing squashing models and applying phase randomization can be applied to many other CV protocols. One of our major observations is that coherent detectors can be used to estimate the privacy of multi-photon signals. This may also be helpful to protocols with DV detectors. In fact, we may consider a hybrid protocol: single-photon detectors for key generation and homodyne detectors for parameter estimation. The multi-photon components in such a protocol can contribute to key generation, unlike in the single-photon BB84 protocol. We provide a general framework for finite-size analysis based on martingales for this CV protocol. Due to the photon-number-tagging method, the finite-size analysis is greatly simplified. A direct follow-up of this work is to complete the finite-size analysis, encompassing the effects on the distillable key rate, the decoy-method accuracy, and the deviation of the homodyne tomography. In the literature, variants of Azuma's inequality, such as Kato's inequality [59], have been applied for faster convergence of parameter estimation in quantum key distribution [57, 58]. One can apply such techniques to the protocol in this work for better practicality. It is tempting to enhance the key rate as well as the maximal distance of this protocol. We may consider high-dimensional time-bin encoding, which is relatively easy to implement experimentally [60, 61, 62]. The high-dimensional complementarity security analysis [63] can be invoked, and the squashing channel should map the optical modes to a qudit. We can also apply the trusted-noise model to alleviate the effect of the detector noise [64, 65]. This requires a modification of the detector POVM, which is shown to remain block-diagonal in the Fock basis [23]. One may also consider using squeezed states as the light source to reduce the shot noise in one quadrature and use the other quadrature for parameter estimation only. This may tackle the large bit error rate due to the shot noise, which currently renders our 0-km performance below that of the usual CV QKD scheme. The measurement-device-independent-type schemes [66, 67] and their extensions, including the twin-field-type [68, 69, 70] and the mode-pairing schemes [71, 72], may also be helpful to enhance the long-distance performance of our protocol. ###### Acknowledgements. We acknowledge insightful discussions with Hoi-Kwong Lo and Xiongfeng Ma. A.J. and R.V.P. acknowledge support from the UK EPSRC Quantum Communications Hub, project EP/T001011/1. A.J. acknowledges funding from Cambridge Trust. X. Z. acknowledges support from the National Natural Science Foundation of China Grant No. 11975222, Grant No. 12174216, and the Innovation Program for Quantum Science and Technology Grant No. 2021ZD0300804. P. Z. and L. J. acknowledge support from the ARO(W911NF-23-1-0077), ARO MURI (W911NF-21-1-0325), AFOSR MURI (FA9550-19-1-0399, FA9550-21-1-0209), NSF (OMA-1936118, ERC-1941583, OMA-2137642), NTT Research, Packard Foundation (2020-71479), and the Marshall and Arlene Bennett Family Research Program. A. J. and X. Z. contributed equally to this work. ## Appendix A Parameter estimation In this section, we show how to estimate the quantities in the key-rate formula with realistic devices. We first state parameter estimation in terms of probabilities in a single round, and one can interpret the results as obtained from sufficiently many rounds under a collective attack.
In the next section, we will generalize the results to the finite-size regime under a coherent attack. For convenience, we first review the terms to be estimated in the virtual protocol that utilizes up to the two-photon component, as given in Sec. III.3. As a reminder, note that we distinguish the terms "receiving" and "accepting." 1. \(Q_{*,0}\): The probability of accepting a vacuum state, given by Eq. (27). 2. \(Q_{1,1}\): The probability of sending \(\ket{\Psi_{1}}_{A^{\prime}A_{1}A_{2}}\) and accepting a single-photon state, given by Eq. (23). 3. \(Q_{2,2}\): The probability of sending \(\ket{\Psi_{2}}_{A^{\prime}A_{1}A_{2}}\) and accepting a two-photon state, given by Eq. (26). 4. \(e^{X}_{1,1}\): the phase-error probability when sending a single-photon state and accepting a single-photon state, determined by the probabilities of sending \((\ket{01}\pm\ket{10})/\sqrt{2}\) and accepting \((\ket{01}\mp\ket{10})/\sqrt{2}\), given by Eq. (21). 5. \(e^{X}_{2,2}\): the phase-error probability when sending a two-photon state and accepting a two-photon state, determined by the probabilities of sending \((\ket{02}\pm\ket{20})/\sqrt{2}\) and accepting \((\ket{02}\mp\ket{20})/\sqrt{2}\) and sending \((\ket{02}-\ket{20})/\sqrt{2}\) and accepting \(\ket{11}\), given by Eq. (24). ### Homodyne tomography The first issue we need to tackle is the estimation of photon-number statistics. Due to the lack of photon-number-resolving detectors, these operators are not directly measurable. Nevertheless, we can apply homodyne tomography and obtain unbiased estimates [35; 36; 37; 38; 39; 40]. For a systematic review, we recommend the tutorial textbook of Ref. [33]. We start with a single-mode system. Consider the displacement operators given by \[\begin{split}\hat{D}(\alpha)=&\exp(\alpha\hat{a}^{ \dagger}-\alpha^{*}\hat{a})\\ =&\exp\left[(-ik)\frac{\hat{a}^{\dagger}e^{i\varphi} +\hat{a}e^{-i\varphi}}{2}\right]\\ :=&\exp(-ik\hat{Q}_{\varphi}),\end{split} \tag{28}\] where \(\hat{a}\) and \(\hat{a}^{\dagger}\) are the annihilation and creation operators of the mode, respectively, \(\alpha\) is a complex scalar, and \(\alpha^{*}\) denotes the complex conjugate of \(\alpha\). In the second equation, we use the polar variables to represent \(\alpha\), \(\alpha=(-i/2)ke^{i\varphi}\). We call \(\hat{Q}_{\varphi}\) the quadrature operator. Measuring \(\hat{Q}_{\varphi}\) corresponds to the homodyne measurement, where the phase of the LO is \(\varphi\). By definition, \(\hat{D}(\alpha)\) is a unitary operator. The set of all displacement operators forms an orthogonal and complete operator basis on a mode; hence any linear operator on a mode, \(\hat{O}\), can be expanded with displacement operators, \[\hat{O}=\int_{0}^{\pi}\frac{d\varphi}{\pi}\int_{-\infty}^{\infty}\frac{dk|k|}{ 4}\text{Tr}(\hat{O}e^{ik\hat{Q}_{\varphi}})e^{-ik\hat{Q}_{\varphi}}.
\tag{29}\] When measuring \(\hat{O}\) on a state, \(\hat{\bar{\rho}}\), the expected value is given by \[\begin{split}\langle\hat{O}\rangle=&\text{Tr}(\hat {O}\hat{\bar{\rho}})\\ =&\int_{0}^{\pi}\frac{d\varphi}{\pi}\int_{-\infty}^{ \infty}\frac{dk|k|}{4}\text{Tr}(\hat{O}e^{ik\hat{Q}_{\varphi}})\text{Tr}(\hat{\bar{\rho}}e^{-ik\hat{Q}_{\varphi}})\\ :=&\int_{0}^{\pi}\frac{d\varphi}{\pi}\int_{-\infty}^ {\infty}dq\text{Tr}[\hat{O}K(\hat{Q}_{\varphi}-q)]p(q|\varphi)\\ :=&\int_{0}^{\pi}\frac{d\varphi}{\pi}\int_{-\infty}^ {\infty}dq\mathcal{R}[\hat{O}](q,\varphi)p(q|\varphi),\end{split} \tag{30}\] where the value of the term \[K(q):=\int_{-\infty}^{\infty}\frac{dk}{4}|k|e^{ikq} \tag{31}\] should be determined by the Cauchy principal value, \(p(q|\varphi)\) is the conditional probability of obtaining quadrature \(q\) when the phase of the homodyne measurement is \(\varphi\), and \(\mathcal{R}[\hat{O}](q,\varphi)\) is the kernel function of \(\hat{O}\) with respect to the homodyne measurement. Eq. (30) gives a sampling procedure to estimate \(\left\langle\hat{O}\right\rangle\) for a general unknown system using homodyne measurements [37, 38, 40]. Namely, 1. Repeat the following process \(N\) times: 1. Choose the LO phase of the homodyne measurement, \(\varphi_{i}\in[0,\pi]\), uniformly at random. 2. Measure the system and record the result, \(q_{i}\). 2. Calculate the average value of the kernel function with respect to the observed statistics, \(\sum_{i=1}^{N}\mathcal{R}[\hat{O}](q_{i},\varphi_{i})/N\). When the kernel function is bounded, the law of large numbers guarantees the convergence, \[\left\langle\hat{O}\right\rangle=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{i= 1}^{N}\mathcal{R}[\hat{O}](q_{i},\varphi_{i}). \tag{11}\] In our protocol, we are interested in the photon-number operators, \(\left|n\right\rangle\left\langle n+d\right|\), where \(\left|n\right\rangle\) is the \(n\)-photon Fock state. For a single mode, the estimator of this operator is given by \[\begin{split}\mathcal{R}_{\eta}[\left|n\right\rangle\left\langle n +d\right|](q,\varphi)=& e^{id(\varphi+\frac{\pi}{2})}\sqrt{ \frac{n!}{(n+d)!}}\int_{-\infty}^{\infty}dk|k|\\ &\exp\left(\frac{1-2\eta}{2\eta}k^{2}-ikq\right)k^{d}L_{n}^{d}(k ^{2}),\end{split} \tag{12}\] where \(\eta\) is the detector efficiency, and \(L_{n}^{d}\) is the generalized Laguerre polynomial [37, 38, 73]. The kernel function is bounded when \(\eta>1/2\), allowing for a converging tomography result with increasing samples [38, 40]. Homodyne tomography can be generalized to estimate the statistics of a multiple-mode observable. In our case, one needs to estimate separable observables on two modes, \(\hat{O}_{1}\otimes\hat{O}_{2}\), where we specify the modes with subscripts. One can apply independent homodyne measurements to each mode for the estimation. Notably, as \(\hat{D}(\alpha_{1})\otimes\hat{D}(\alpha_{2})\) forms a complete basis on the joint system, then \[\hat{O}_{1}\otimes\hat{O}_{2}=\int_{0}^{\pi}\frac{d\varphi_{1}}{\pi}\int_{- \infty}^{\infty}\frac{dk_{1}|k_{1}|}{4}\mathrm{Tr}(\hat{O}_{1}e^{ik_{1}\hat{Q }_{\varphi_{1}}})\int_{0}^{\pi}\frac{d\varphi_{2}}{\pi}\int_{-\infty}^{\infty }\frac{dk_{2}|k_{2}|}{4}\mathrm{Tr}(\hat{O}_{2}e^{ik_{2}\hat{Q}_{\varphi_{2}}} )(e^{-ik_{1}\hat{Q}_{\varphi_{1}}}\otimes e^{-ik_{2}\hat{Q}_{\varphi_{2}}}).
\tag{13}\] Consequently, \[\begin{split}\left\langle\hat{O}_{1}\otimes\hat{O}_{2}\right\rangle &=\mathrm{Tr}[(\hat{O}_{1}\otimes\hat{O}_{2})\hat{\rho}]\\ &=\int_{0}^{\pi}\frac{d\varphi_{1}}{\pi}\int_{-\infty}^{\infty} dq_{1}\int_{0}^{\pi}\frac{d\varphi_{2}}{\pi}\int_{-\infty}^{\infty}dq_{2} \mathcal{R}[\hat{O}_{1}](q_{1},\varphi_{1})\mathcal{R}[\hat{O}_{2}](q_{2}, \varphi_{2})p(q_{1},q_{2}|\varphi_{1},\varphi_{2}).\end{split} \tag{14}\] As a remark, note that the quantum state of the two modes, \(\hat{\tilde{\rho}}\), can generally be entangled. In the experiment, the users simply need two independently phase-randomized homodyne detectors and record the joint conditional probability distribution of quadratures \((q_{1},q_{2})\) given the LO phases \((\varphi_{1},\varphi_{2})\). ### Parameter estimation procedures Now we present the parameter estimation procedures based on the homodyne tomography and extended decoy methods. In the experiment, given that they choose certain bases and light intensity, the users can collect data and evaluate the conditional probabilities for taking \((\varphi_{b}^{1},\varphi_{b}^{2})=\vec{\varphi}_{b}\) and observing \((q_{1},q_{2})=\vec{q}\) in the homodyne measurements, or the yields. For simplicity, we use the notations \(Y^{Z}_{\mu_{a},(\vec{\varphi}_{b},\vec{q})}\) and \(Y^{\varphi}_{\mu_{a},(\vec{\varphi}_{b},\vec{q})}\). In the subscript, \(\mu_{a}\) denotes the light intensity, and \((\vec{\varphi}_{b},\vec{q})\) denotes the homodyne measurement results of Bob. When the superscript is \(Z\), it denotes that Alice chooses the \(Z\)-basis; when the superscript is \(\varphi\), it denotes that Alice chooses the \(X\)-basis and \(\varphi_{a}^{2}-\varphi_{a}^{1}=\varphi\). Via homodyne tomography, these values can be used to estimate \(Y_{\mu_{a},\hat{O}}^{Z}\) and \(Y_{\mu_{a},\hat{O}}^{\varphi}\), namely, the expected value of measuring observable \(\hat{O}\) conditioned on the corresponding input settings. Following Eq. (17), the yields are given by \[\begin{split} Y^{Z}_{\mu_{a},\hat{O}}&=\int_{0}^{\pi} d\varphi^{1}_{b}\int_{0}^{\pi}d\varphi^{2}_{b}\int_{-\infty}^{\infty}dq_{1}\int_{- \infty}^{\infty}dq_{2}\mathcal{R}[\hat{O}](\vec{q},\vec{\varphi}_{b})Y^{Z}_{ \mu_{a},(\vec{\varphi}_{b},\vec{q})},\\ Y^{\varphi}_{\mu_{a},\hat{O}}&=\int_{0}^{\pi}d\varphi^ {1}_{b}\int_{0}^{\pi}d\varphi^{2}_{b}\int_{-\infty}^{\infty}dq_{1}\int_{- \infty}^{\infty}dq_{2}\mathcal{R}[\hat{O}](\vec{q},\vec{\varphi}_{b})Y^{ \varphi}_{\mu_{a},(\vec{\varphi}_{b},\vec{q})}.\end{split} \tag{23}\] In the second step, we apply the extended decoy state methods to estimate the gains and phase-error probabilities in Eq. (12), the key-rate formula. With finite decoy states, the users can obtain upper and lower bounds on these quantities [26]. Since the estimation procedure involves many quantities, for convenience, we list the estimation procedures and the involved quantities in Table 6. Note that the original data can be re-used to estimate various quantities in homodyne tomography by varying the kernel function with respect to the observable under consideration. ## Appendix B Martingale-based analysis against coherent attacks To tackle the most general attack, namely, a coherent attack, we apply a martingale-based approach. We first review the basics of martingale theory.
**Definition 1** (Martingale).: _Consider a probability space, \((\Omega,\mathcal{F},P)\), where \(\Omega\) is the sample space, \(\mathcal{F}\) is the event space, and \(P\) is the probability measure, and a filtration \(\mathbb{F}=\{\mathcal{F}_{i}\}_{i\in\mathbb{N}},\mathcal{F}_{i}\subseteq \mathcal{F}_{j}\subseteq\mathcal{F},\forall i\leq j\). A sequence of random variables, \(X_{0},X_{1},\cdots\), such that \(\forall i,X_{i}\) is \(\mathcal{F}_{i}\)-measurable, is called a martingale with respect to filtration \(\mathbb{F}\) if \(\forall i\),_ \[\begin{split}\mathbb{E}(|X_{i}|)&<\infty,\\ \mathbb{E}(X_{i+1}|\mathcal{F}_{i})&=X_{i}.\end{split} \tag{24}\] For a martingale, the sum of its increments concentrates around the corresponding conditional expectations in probability, as shown by Azuma's inequality [30] and its variants. **Theorem 3** (Azuma's inequality [30]).: _Given a probability space, \((\Omega,\mathcal{F},P)\), and a filtration, \(\mathbb{F}=\{\mathcal{F}_{i}\}_{i\in\mathbb{N}}\), suppose \(\mathbb{X}=\{X_{i}\}_{i\in\mathbb{N}}\) is a martingale bounded by two sets of predictable processes with respect to \(\mathbb{F}\), \(\mathbb{A}=\{A_{i}\}_{i\in\mathbb{N}}\) and \(\mathbb{B}=\{B_{i}\}_{i\in\mathbb{N}}\), such that_ \[\begin{split} A_{i}\leq X_{i}-X_{i-1}\leq B_{i},\\ B_{i}-A_{i}\leq c_{i},\end{split} \tag{25}\] \begin{table} \begin{tabular}{c c c} \hline \hline Original data & Estimation via homodyne tomography & Final estimation with decoy states \\ \hline \multirow{3}{*}{\(Y^{Z}_{\mu_{a},(\vec{\varphi}_{b},\vec{q})}\)} & \(Y^{Z}_{\mu,|00\rangle}\) & \(Q_{\star,0}\) \\ & \(Y^{Z}_{\mu_{a},|01\rangle},Y^{Z}_{\mu_{a},|10\rangle}\) & \(Q^{L}_{1,1},Q^{U}_{1,1}\) \\ & \(Y^{Z}_{\mu_{a},|02\rangle},Y^{Z}_{\mu_{a},|20\rangle},Y^{Z}_{\mu_{a},|11 \rangle}\) & \(Q^{L}_{2,2},Q^{U}_{2,2}\) \\ \hline \multirow{2}{*}{\(Y^{\varphi}_{\mu_{a},(\vec{\varphi}_{b},\vec{q})}\)} & \(Y^{\varphi=\pi}_{\mu_{a},|\Psi_{1}^{+}\rangle},Y^{\varphi=\pi}_{\mu_{a},|\Psi_{1}^{-}\rangle},Y^{\varphi=0}_{\mu_{a},|\Psi_{1}^{+}\rangle},Y^{\varphi=0}_{\mu_{a},|\Psi_{1}^{-}\rangle}\) & \(c^{X,L}_{1,1},c^{X,U}_{1,1}\) \\ & \(Y^{\varphi=\pi/2}_{\mu_{a},|\Psi_{2}^{+}\rangle},Y^{\varphi=\pi/2}_{\mu_{a},|\Psi_{2}^{-}\rangle},Y^{\varphi=\pi/2}_{\mu_{a},|11\rangle}\) & \(c^{X,L}_{2,2},c^{X,U}_{2,2}\) \\ \(Y^{Z}_{\mu_{a},(\vec{\varphi}_{b},\vec{q})}\) & \(Y^{Z}_{\mu_{a},|\Psi_{2}^{+}\rangle},Y^{Z}_{\mu_{a},|\Psi_{2}^{-}\rangle},Y^{Z}_{\mu_{a},|11\rangle}\) & \\ \hline \hline \end{tabular} \end{table} Table 6: Parameter estimation with homodyne tomography and decoy states. In our work, the light intensity is chosen from the set \(\mu_{a}\in\{\mu,\nu_{1},\nu_{2},0\}\). For simplicity, we denote the photon-number operator, \(\ket{n_{1}n_{2}}\bra{n_{1}n_{2}}\), as \(\ket{n_{1}n_{2}}\) in the subscripts of the yields, and similarly for \(\ket{\Psi^{\pm}_{n}}\). We denote the lower and upper bounds with additional superscripts of \(L\) and \(U\), respectively. Note that one can directly estimate \(Q_{\star,0}\) in the rounds of \(\mu_{a}=\mu\). The estimation of \(e^{X}_{2,2}\) involves statistics in the rounds that Alice chooses the \(Z\)-basis and \(X\)-basis with \(\varphi=k\pi/2\). _where \(c_{i}\in[0,\infty)\) are constants.
Then \(\forall\delta>0\) and \(\forall n\in\mathbb{N}^{+}\),_ \[\begin{split}\Pr(X_{n}-X_{0}\geq\delta)&\leq\exp \left(-\frac{2\delta^{2}}{\sum_{i=1}^{n}c_{i}^{2}}\right),\\ \Pr(X_{n}-X_{0}\leq-\delta)&\leq\exp\left(-\frac{2 \delta^{2}}{\sum_{i=1}^{n}c_{i}^{2}}\right).\end{split} \tag{10}\] To explain why martingale theory allows us to tackle the most general attack, we use the estimation of \(Q_{1,1}\) in the key-rate formula as an example of the parameter estimation method. One can estimate the other quantities similarly. Note that in the finite-data-size analysis under a coherent attack, \(Q_{1,1}\) should be understood as a frequency, and \(N_{\mu}^{zz}Q_{1,1}\) is the number of key generation rounds in the virtual experiment where Alice sends a single-photon state and Bob accepts a single-photon state. Here, we denote the number of key generation rounds as \(N_{\mu}^{zz}\) in accordance with the basis choice and light intensity. In particular, the statistics over the rounds can be correlated. Consider the following random variable in the \(i\)'th round, \[\zeta_{\mu,(1,1)}^{(i)}=\left\{\begin{aligned} & 1,\,\text{if}\,\,\mu_{a}^{(i)}=\mu \wedge ZZ\wedge n^{(i)}=1\wedge m^{(i)}=1\wedge(q_{1}^{(i)},q_{2}^{(i)})\in \mathbf{R_{0}}\cup\mathbf{R_{1}}\\ & 0,\,\text{otherwise},\end{aligned}\right. \tag{11}\] where we write \(ZZ\) to denote the basis choices, with the former denoting Alice's choice and the latter denoting Bob's, and specify the round number in the superscript, \((i)\). We call \(\zeta_{\mu,(1,1)}^{(i)}\) a counter variable, which equals one in the virtual experiment when the light intensity is \(\mu\), both Alice and Bob choose the \(Z\)-basis, Alice sends a single-photon state, and Bob receives a single-photon state and accepts the signal in the post-selection. Conditioned on the history before the \(i\)'th round, \(\mathcal{F}_{i-1}\), the expected value of \(\zeta_{\mu,(1,1)}^{(i)}\) is given by \[\mathbb{E}\left[\zeta_{\mu,(1,1)}^{(i)}|\mathcal{F}_{i-1}\right]=p_{\mu}p_{zz }Q_{1,1}^{(i)}, \tag{12}\] where \(p_{\mu}\) is the probability that the light intensity is \(\mu\), \(p_{zz}\) is the probability that both users choose the \(Z\)-basis, and \(Q_{1,1}^{(i)}\) is the probability of sending a single-photon state and accepting a single-photon state in the \(i\)'th round given the experimental setting, or the gain of \((n,m)=(1,1)\) in the \(i\)'th round conditioned on the history, which is \(\mathcal{F}_{i-1}\)-measurable. Then, the following random variables form a martingale, \[\Delta_{\mu,(1,1)}^{(t)}=\left\{\begin{aligned} & 0,&\text{if}\,\,t=0,\\ &\sum_{i=1}^{t}\left\{\zeta_{\mu,(1,1)}^{(i)}-\mathbb{E}\left[\zeta_{\mu,( 1,1)}^{(i)}|\mathcal{F}_{i-1}\right]\right\}\!,&\text{if}\,\,t=1, \cdots,N,\end{aligned}\right. \tag{13}\] where \(N\) is the total number of rounds in the experiment. In addition, the martingale has a bounded difference, \[-\mathbb{E}\left[\zeta_{\mu,(1,1)}^{(i)}|\mathcal{F}_{i-1}\right]\leq\Delta_{\mu,(1,1)}^{(t)}-\Delta_{\mu,(1,1)}^{(t-1)}\leq 1-\mathbb{E}\left[\zeta_{\mu,(1,1)}^{(i)}| \mathcal{F}_{i-1}\right]. \tag{14}\] We denote \(c_{\mu,(1,1)}:=1\). In a virtual experiment where Alice and Bob perform the photon-number measurements, the value of \(\Delta_{\mu,(1,1)}^{(n)}\) can be bounded on both sides by applying Azuma's inequality; hence the value of \(\sum_{i=1}^{n}\zeta_{\mu,(1,1)}^{(i)}\) can be bounded with respect to the expected values in Eq. (12).
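For a rough sense of the statistical penalty implied by Theorem 3, the following sketch (with illustrative numbers, not values from this work) inverts Azuma's inequality for a uniform difference bound \(c_{i}=c\):

```python
import numpy as np

def azuma_deviation(N, c, eps):
    """Smallest delta with exp(-2 delta^2 / (N c^2)) <= eps, i.e. the one-sided
    deviation allowed by Azuma's inequality with failure probability eps."""
    return np.sqrt(N * c**2 * np.log(1.0 / eps) / 2.0)

# The counter variable zeta_{mu,(1,1)} has difference bound c = 1.
N, eps = 1e10, 1e-10      # illustrative round number and failure probability
delta = azuma_deviation(N, 1.0, eps)
print(delta, delta / N)   # absolute and per-round deviation of the frequency
```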
Notwithstanding, the expected value, \(\mathbb{E}[\zeta_{\mu,(1,1)}^{(i)}|\mathcal{F}_{i-1}]\), is not directly accessible. For this purpose, we link it with the parameter estimation measurements. Consider the following random variables for the \(i\)'th round in the experiment, \[\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}=\left\{\begin{aligned} &\mathcal{R}_{\eta}[|n_{0}n_{1}\rangle\,\langle n_{0}n_{1}|]( \vec{q},\vec{\varphi}_{b}),&\text{if}\,\,\mu_{a}^{(i)}=\mu_{a} \wedge ZX\wedge(q_{1}^{(i)},q_{2}^{(i)})=\vec{q},(\varphi_{b}^{1(i)},\varphi_{ b}^{2(i)})=\vec{\varphi}_{b},\\ & 0,&\text{otherwise},\end{aligned}\right. \tag{15}\] which relate to the parameter estimation measurements when the light intensity is \(\mu_{a}\) and Alice chooses the \(Z\)-basis. Here, we overload the notation, \(\mu_{a}\), with the meaning of a specific light intensity in \(\{\mu,\nu_{1},\nu_{2},0\}\) and distinguish it from Alice's choice in a round, \(\mu_{a}^{(i)}\). For the estimation of \(Q_{1,1}\), we use homodyne tomography to estimate the photon-number measurements of \(|n_{0}n_{1}\rangle=|01\rangle\) and \(|10\rangle\). Here, \((\vec{q},\vec{\varphi}_{b})\) represents the homodyne measurement result, where the LO phases are \(\vec{\varphi}_{b}=(\varphi_{1},\varphi_{2})\) and the quadratures are \(\vec{q}=(q_{1},q_{2})\) on the two modes. The value of \(\mathcal{R}_{\eta}[|n_{0}n_{1}\rangle\,\langle n_{0}n_{1}|](\vec{q},\vec{ \varphi}_{b})\) comes from the kernel function in homodyne tomography by using homodyne detectors with efficiency \(\eta\), of which the expected value is \(\left\langle n_{0}n_{1}\right|\mathcal{N}_{E}(\hat{\rho}_{\mu_{a}}^{Z})\left|n_{0}n_{1}\right\rangle\). We also call \(\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}\) counter variables. Similar to Eq. (101), the following random variables form a martingale, \[\Delta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(t)}=\left\{\begin{array}{ll}0,& \text{if }t=0,\\ \sum_{i=1}^{t}\left\{\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}-\mathbb{ E}\left[\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}|\mathcal{F}_{i-1} \right]\right\},&\text{if }t=1,\cdots,n.\end{array}\right. \tag{102}\] For \(\eta>1/2\), the above martingale has a bounded difference. One can numerically evaluate the maximum absolute value of \(\mathcal{R}_{\eta}[\left|n_{0}n_{1}\right\rangle\left\langle n_{0}n_{1}\right| ](\vec{q},\vec{\varphi}_{b})\), which we denote as \(r_{\eta,|n_{0}n_{1}\rangle}\). Then, the martingale has a bounded difference, \[-r_{\eta,|n_{0}n_{1}\rangle}-\mathbb{E}\left[\zeta_{(\mu_{a},Z,|n_{0}n_{1} \rangle)}^{(i)}|\mathcal{F}_{i-1}\right]\leq\Delta_{(\mu_{a},Z,|n_{0}n_{ 1}\rangle)}^{(t)}-\Delta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(t-1)}\leq r_{\eta,|n_{0}n _{1}\rangle}-\mathbb{E}\left[\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}| \mathcal{F}_{i-1}\right]. \tag{103}\] We denote \(c_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}:=2r_{\eta,|n_{0}n_{1}\rangle}\). By applying Azuma's inequality, we can link the observed statistics, \(\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}\), with their expected values. In defining the counter variables, \(\zeta_{\mu,(1,1)}^{(i)}\) and \(\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}\), we embed the random choices by the users and the probabilistic events in quantum measurements into their definition, which guarantees that \(\zeta_{\mu,(1,1)}^{(i)}\) and \(\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}\) are defined on the same space.
Using the single-round analysis, we can thus express \(\mathbb{E}\left[\zeta_{\mu,(1,1)}^{(i)}|\mathcal{F}_{i-1}\right]\) in terms of \(\mathbb{E}[\zeta_{(\mu_{a},Z,|n_{0}n_{1}\rangle)}^{(i)}|\mathcal{F}_{i-1}]\). Therefore, by further employing Azuma's inequality to each martingale, we finally relate the parameter estimation statistics with the estimation of the average gain over the rounds, \[N_{\mu}^{zz}Q_{1,1}=\sum_{i=1}^{N}\zeta_{\mu,(1,1)}^{(i)}\geq\sum_{i=1}^{N}\mathbb{E}\left[\zeta_{\mu,(1,1)}^{(i)}|\mathcal{F}_{i-1}\right]-\delta_{\mu,(1,1)},\] which holds except with probability at most \(\exp[-2\delta_{\mu,(1,1)}^{2}/(Nc_{\mu,(1,1)}^{2})]\), together with the analogous upper bound. The conditional expectations on the right-hand side are in turn bounded via the counter variables of the parameter estimation rounds and their observed
Then, for each setting and observable, the following series of random variables form a martingale, \[\Delta^{(t)}_{\text{setting},\hat{O}}=\left\{\begin{aligned} 0,& \text{if }t=0,\\ \sum_{i=1}^{t}\left\{\zeta^{(i)}_{\text{setting},\hat{O}}-\mathbb{E} \left[\zeta^{(i)}_{\text{setting},\hat{O}}|\mathcal{F}_{i-1}\right]\right\}& \text{if }t=1,\cdots,N.\end{aligned}\right. \tag{49}\] As we embed all the random choices in the experiment in defining these observables, all the counter variables in the \(i\)'th round, \(\zeta^{(i)}_{\text{setting},\hat{O}}\), are defined over the same filtration. Therefore, we can link their expected values via the single-round analysis and hence the parameter estimation measurement statistics with the quantities we are interested in. ## Appendix C Simulation formulae under thermal-noise channel We present the simulation formulae of the asymptotic time-bin CV QKD under a thermal-noise channel with excess noise \(\xi\) from the output. A thermal noise channel is characterized as a Gaussian completely positive map transforming the first and second moment \((\bar{r},V)\), representing the mean vector and covariance matrix of the quadrature operators, of the input state as [34]: \[\bar{r}\mapsto\sqrt{\eta}\bar{r}, \tag{50}\] \[V\mapsto\eta V+(1-\eta)\mathbb{I}+\xi\mathbb{I},\] where \(\eta\) is the channel transmittance. Two thermal channels with transmittance \(\eta\) and \(\eta^{\prime}\) and excess noise \(\xi\) and \(\xi^{\prime}\) concatenate to another thermal channel with transmittance \(\eta\eta^{\prime}\) and excess noise \((\eta^{\prime}\xi+\xi^{\prime})\) since \[\bar{r}\mapsto\sqrt{\eta^{\prime}\eta\bar{r}}, \tag{51}\] \[V\mapsto\eta^{\prime}(\eta V+(1-\eta)\mathbb{I}+\xi\mathbb{I})+( 1-\eta^{\prime})\mathbb{I}+\xi^{\prime}\mathbb{I}\] \[=\eta^{\prime}\eta V+(1-\eta^{\prime}\eta)\mathbb{I}+(\eta^{ \prime}\xi+\xi^{\prime})\mathbb{I}.\] On the bit-error side, the thermal noise can be seen as adding \(\xi\) to the unity variance of the coherent states. Hence, if Alice transmits a coherent state \(|\sqrt{\mu}e^{i\theta}\rangle\) through a thermal-noise channel with transmittance \(\eta\) and excess noise \(\xi\), and Bob applies homodyne detection with LO phase \(\varphi\), the detection result \(q\) will follow a distribution \[\Pr(q|\mu,\theta-\varphi)=\frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{[q-2\sqrt{ \eta\mu}\cos(\theta-\varphi)]^{2}}{2(1+\xi)}\right\}. \tag{52}\] Figure 5: Flowchart of martingale-based parameter estimation procedures. Here, we take the estimation of \(Q_{1,1}\) as an example. The users start with the observed statistics in the test rounds and aim to estimate \(Q_{1,1}\). Azuma’s inequality is applied twice in the estimation. The single-round analysis in terms of conditional probabilities links the two martingales and hence relates the observed statistics in the test rounds with the target value in the key generation rounds. Since both the signal states and the receiver LO are uniformly phase randomized, \((\theta-\varphi)\) is also uniformly randomized with \([0,2\pi)\) in a cyclic manner. The bit error rate \(e_{\mu}^{Z}\) and the \(Z\)-basis gain \(Q_{\mu}^{Z}\) can thus be calculated according to the post-selection threshold \(\tau\), uniformly randomizing over \([0,2\pi)\). The calculation of the vacuum gain \(Q_{*,0}\), according to Eq. (27), requires the probability of sending the \(Z\)-basis state whilst receiving vacuum. 
This can be calculated via the Wigner function for Gaussian states; specifically, \[\mathrm{Tr}\left[\hat{P}_{0}^{B_{1}B_{2}}\mathcal{N}_{E}^{A_{1}A_{2}\to B_{1}B _{2}}\left(\hat{\rho}^{Z}\right)\hat{P}_{0}^{B_{1}B_{2}}\right]=\left(\frac{2} {2+\xi}\right)^{2}\exp\left(-\frac{2\eta\mu}{2+\xi}\right). \tag{100}\] The single- and two-photon gains, \(Q_{1,1}\) and \(Q_{2,2}\), and phase-error rates, \(e_{1,1}^{X}\) and \(e_{2,2}^{X}\), are more involved to calculate. In the infinite-decoy setup, we calculate the photon gains directly. We decompose the thermal noise \(\hat{\rho}_{\mathrm{th}}\) into Fock states, \[\hat{\rho}_{\mathrm{th}}=\sum_{k=0}^{\infty}\frac{\bar{k}^{k}}{(\bar{k}+1)^{k+ 1}}\left|k\right\rangle\left\langle k\right|, \tag{101}\] where \(\bar{k}=\xi/[2(1-\eta)]\) is the average photon number of the thermal noise. The optical mode from Alice can be seen as mixing with the thermal noise through an \(\eta\)-transmittance beam splitter. We calculate the effect of the thermal noise in an ensemble manner; that is, we calculate the case where the channel injects \(k\) and \(l\) noise photons into the two consecutive optical modes respectively, and mix the results according to the noise photon-number distribution in Eq. (101). We set a cutoff photon number at \(N_{c}=3\) since the thermal noise is relatively low. Simulation shows that higher cutoffs have negligible effects on the key rate. We also account for the effects of the misalignment angle \(\delta\), which introduces a \(\sin^{2}(m\delta/2)\) error to the \(m\)-photon phase error rate. We ignore the correlation between the misalignment and the thermal noise photons as a second-order small quantity. The calculations of the quantities of interest are listed below. The notation \((k,l)\) denotes the case where the thermal sources emit \(k\) and \(l\) photons respectively into the two optical modes: 1. The probability of sending \((\left|01\right\rangle\left\langle 01\right|+\left|10\right\rangle\left\langle 10 \right|)/2\) whilst accepting one photon in total (Eq. (23)): \[Q_{1,1}(k,l)=c_{1}\mathrm{Pr}(1)\eta^{k+l-1}\left\{\left[(k+1)\eta-k\right]^{ 2}+l(k+1)(1-\eta)^{2}\right\},\] (102) \[Q_{1,1}=\sum_{k=0,l=0}^{N_{c}}\mathrm{P}_{\mathrm{th}}(k)\mathrm{P}_{\mathrm{ th}}(l)Q_{1,1}(k,l), \tag{103}\] Figure 6: The counter variables to set up martingales in parameter estimation. For each experimental setting and observable that is involved in parameter estimation, we set up a corresponding counter random variable in each round, \(\zeta_{\mathrm{setting},\hat{O}}^{(i)}\). We list the values that should be given to these random variables when the associated path is taken in the experiment. In other circumstances, the counter variables take the value \(0\). where \[\mathrm{P}_{\mathrm{th}}(k)=\frac{\bar{k}^{k}}{(\bar{k}+1)^{k+1}}\text{ with }\bar{k}=\frac{\xi}{2(1-\eta)}. \tag{101}\] 2. The probability of sending \((\ket{01}\pm\ket{10})/\sqrt{2}\) whilst receiving \((\ket{01}\mp\ket{10})/\sqrt{2}\) (Eq. (21)): \[\frac{e_{1,1}^{X}(k,l)Q_{1,1}}{\mathrm{Pr}(1)}=\frac{c_{1}}{4}\eta^{k+l-1}(1- \eta)^{2}(k^{2}+l^{2}+k+l)+c_{1}\sin^{2}\left(\frac{\delta}{2}\right),\] (102) \[e_{1,1}^{X}=\sum_{k=0,l=0}^{N_{c}}\mathrm{P}_{\mathrm{th}}(k)\mathrm{P}_{ \mathrm{th}}(l)e_{1,1}^{X}(k,l). \tag{103}\] 3. The probability of sending \((\ket{02}\bra{02}+\ket{20}\bra{20})/2\) whilst accepting within the \((\ket{02}\bra{02}+\ket{20}\bra{20})\) and \(\ket{11}\bra{11}\) subspace (Eq.
(26)): \[Q_{2,2}^{02}(k,l)=\frac{1}{2}c_{2}^{02}\Pr(2)\eta^{k+l-2}\left\{\left[\eta^{2 }-2k\eta(1-\eta)+\frac{1}{2}k(k-1)(1-\eta)^{2}\right]^{2}+\frac{1}{4}l^{2}(l- 1)^{2}(1-\eta)^{4}\right\}+\{k\leftrightarrow l\},\] (104) \[Q_{2,2}^{11}(k,l)=\frac{1}{2}c_{2}^{11}\Pr(2)\eta^{k+l-2}\left[\sqrt{2(k+1)l} \eta(1-\eta)-\sqrt{\frac{1}{2}kl(k+1)}(1-\eta)^{2}\right]^{2}+\{k \leftrightarrow l\}, \tag{105}\] \[Q_{2,2}=\sum_{k=0,l=0}^{N_{c}}\mathrm{P}_{\mathrm{th}}(k)\mathrm{P}_{\mathrm{ th}}(l)\left[Q_{2,2}^{02}(k,l)+Q_{2,2}^{11}(k,l)\right], \tag{106}\] where the expression \(\{k\leftrightarrow l\}\) denotes exchanging the \(k\)'s and \(l\)'s in the term ahead. 4. The probability of sending \((\ket{02}\pm\ket{20})/\sqrt{2}\) whilst receiving \((\ket{02}\mp\ket{20})/\sqrt{2}\) and \(\ket{11}\) (Eq. (24)). \[\frac{e_{2,2}^{02,X}(k,l)Q_{2,2}}{\mathrm{Pr}(2)}=\frac{c_{2}^{02}}{4}\eta^{k+ l-2}\left[2(k-l)(1-\eta)\eta+(k^{2}-k-l^{2}-l)(1-\eta)^{2}\right]^{2}+c_{2}^{02 }\sin^{2}(\delta),\] (107) \[\frac{e_{2,2}^{11,X}(k,l)Q_{2,2}}{\mathrm{Pr}(2)}=c_{2}^{11}\eta^{k+l-2}(1- \eta)^{2}\left\{l(k+1)\left[\eta-\frac{1}{2}k(1-\eta)\right]^{2}+k(l+1)\left[ \eta-\frac{1}{2}l(1-\eta)\right]^{2}\right\}, \tag{108}\] \[\frac{e_{2,2}^{X}Q_{2,2}}{\mathrm{Pr}(2)}=\sum_{k=0,l=0}^{N_{c}}\mathrm{P}_{ \mathrm{th}}(k)\mathrm{P}_{\mathrm{th}}(l)\left[e_{2,2}^{02,X}(k,l)+e_{2,2}^{1 1,X}(k,l)\right]. \tag{109}\] In the finite-decoy setup, the estimations of photon gains are derived from the statistics of coherent-state gains. We need to calculate the probability of transmitting certain coherent states whilst receiving certain photon states. This can also be done via the Gaussian-state Wigner function. Let \(\kappa=2/(2+\xi)\). Denote the output of a thermal noise channel when transmitting the coherent state \(\ket{\alpha}\) as \(\rho_{\alpha}\). Its Fock-basis matrix elements are: \[\bra{0}\rho_{\alpha}\ket{0}=\kappa\exp(-\kappa|\alpha|^{2}), \tag{110}\] \[\bra{1}\rho_{\alpha}\ket{1}=\kappa(\kappa^{2}|\alpha|^{2}+1-\kappa)\exp(- \kappa|\alpha|^{2}), \tag{111}\] \[\bra{0}\rho_{\alpha}\ket{1}=-\kappa^{2}\alpha^{*}\exp(-\kappa|\alpha|^{2}), \tag{112}\] \[\left\langle 2|\,\rho_{\alpha}\,|2\right\rangle=\kappa\left(\frac{1}{2}\kappa^{4}| \alpha|^{4}+2(\kappa^{2}-\kappa^{3})|\alpha|^{2}+(1-\kappa)^{2}\right)\exp(-\kappa| \alpha|^{2}), \tag{102}\] \[\left\langle 0|\,\rho_{\alpha}\,|2\right\rangle=\frac{1}{\sqrt{2}}\kappa^{3}( \alpha^{*})^{2}\exp(-\kappa|\alpha|^{2}). \tag{103}\] The statistics required by the decoy method are all based on the gains of separable coherent states. For example, the probability of sending \(|\alpha\rangle\otimes|\beta\rangle\) whilst receiving \((|02\rangle+|20\rangle)/\sqrt{2}\) can be computed by \[\begin{split}&\frac{1}{2}(\left\langle 02|+\langle 20| \right)\rho_{\alpha}\otimes\rho_{\beta}(|02\rangle+|20\rangle)\\ =&\frac{1}{2}\left(\left\langle 0\right|\rho_{ \alpha}\,|0\rangle\,\langle 2|\,\rho_{\beta}\,|2\rangle+\langle 2|\,\rho_{\alpha}\,|2\rangle\,\langle 0|\,\rho_{\beta}\,|0\rangle+ \langle 0|\,\rho_{\alpha}\,|2\rangle\,\langle 2|\,\rho_{\beta}\,|0\rangle+\langle 2|\,\rho_{\alpha}\,|0\rangle\,\langle 0|\, \rho_{\beta}\,|2\rangle\right).\end{split} \tag{104}\]
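As a cross-check of these formulae, the following Monte-Carlo sketch (a simplified illustration; the parameters are copied from Table 4 for the two-photon protocol at 0 km, and the channel is taken noiseless) samples the quadrature distribution of Eq. (52) with the decoding rule of Table 3 to estimate the \(Z\)-basis gain and bit error rate:

```python
import numpy as np

rng = np.random.default_rng(1)

def z_basis_stats(mu, eta, xi, tau, n=10**6):
    """Monte-Carlo Z-basis gain Q^Z and bit error e^Z for the signal
    |0>|sqrt(mu) e^{i phi}> (key bit 0); the other key bit is symmetric."""
    rel = rng.uniform(0.0, 2 * np.pi, n)         # theta - varphi, uniform
    sd = np.sqrt(1.0 + xi)                        # quadrature standard deviation
    q1 = rng.normal(0.0, sd, n)                   # vacuum time bin
    q2 = rng.normal(2 * np.sqrt(eta * mu) * np.cos(rel), sd, n)  # signal bin
    dec0 = (np.abs(q1) < tau) & (np.abs(q2) > tau)   # decoded correctly as 0
    dec1 = (np.abs(q1) > tau) & (np.abs(q2) < tau)   # decoded wrongly as 1
    Q_Z = np.mean(dec0 | dec1)                    # post-selection passes
    return Q_Z, np.mean(dec1) / Q_Z

print(z_basis_stats(mu=1.487, eta=1.0, xi=0.0, tau=1.641))
# The estimated e^Z should be comparable to the 10.52% quoted in Table 4.
```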
2309.10093
A Minimal left ideal description of Geometric structures in dimensions $6$, $7$, and $8$
In this paper we relate minimal left ideals on Clifford algebras with special geometric structures in dimensions $6,7,$ and $8$.
Ricardo Suarez
2023-09-18T19:06:14Z
http://arxiv.org/abs/2309.10093v1
# A minimal left ideal description of geometric structures in dimensions \(6,7,\) and \(8\) ###### Abstract. In this paper we relate minimal left ideals on Clifford algebras with special geometric structures in dimensions \(6,7,\) and \(8\). ## 1. Introduction Spinorial descriptions of special geometric structures in dimensions \(6,7,\) and \(8\) have recently been explored in Riemannian geometry. The spaces of spinors that are often used are \(\Delta=\mathbb{R}^{m}\), where \(m=8,8,16\), for \(SU(3)\), \(G_{2}\), and \(Spin(7)\) structures respectively. The goal of this paper is to canonically identify special geometric structures in dimensions \(6,7,\) and \(8\) with the left Clifford modules generated by primitive idempotents, i.e. minimal left ideals, in the appropriate dimensions. For instance, in dimension \(6\), the \(SU(3)\) structure \(\psi_{+},\psi_{-},\omega_{0}\) defines the minimal left ideal \(\mathbb{R}_{0,6}\cdot\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}( \psi_{+})+4\star q^{*}(\omega_{0}))\), where \(\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}(\psi_{+})+4\star q^{*}( \omega_{0}))\) is the unit element in the induced minimal left ideal. In dimension \(7\), the fundamental \(3\)-form \(\phi\) that defines the \(G_{2}\) structure induces the minimal left ideal \(\mathbb{R}_{0,7}\cdot\frac{1}{112}(\star q^{*}(\phi\wedge\star\phi)+7q^{*}( \phi)-7q^{*}(\star\phi)-q^{*}(\phi\wedge\star\phi))\), where \(\frac{1}{112}(\star q^{*}(\phi\wedge\star\phi)+7q^{*}(\phi)-7q^{*}(\star\phi)- q^{*}(\phi\wedge\star\phi))\) defines the unit element in this minimal left ideal. Lastly, in dimension \(8\), the \(Spin(7)\) structure induced by the \(4\)-form \(\Omega_{0}\) defines the minimal left ideal \(\mathbb{R}_{0,8}\cdot\frac{1}{128}(\star q^{*}(\Omega_{0}\wedge\Omega_{0})-8q^{*}( \Omega_{0})+q^{*}(\Omega_{0}\wedge\Omega_{0}))\). ## 2. Geometric structures in dimensions \(6,7,\) and \(8\) We begin with the definition of a \(G\)-structure on a differentiable manifold \(M\). **Definition 2.1**.: _Let \(G\) be a closed Lie subgroup of \(GL(n,\mathbb{R})\). A \(G\)**-structure** on \(M\) is a reduction of the structure group of the frame bundle \(\mathcal{F}(M)\), \(GL(n,\mathbb{R})\), to \(G\). That is, a \(G\)-structure is a principal sub-bundle \(Q\) of \(\mathcal{F}(M)\) with fibre \(G\). If \(G\) is one of the groups that appear in the Berger classification theorem, then the \(G\)-structure is called a **special geometric structure**._ Let \((M,g)\) be an oriented Riemannian spin manifold; that is, \((M,g)\) is a Riemannian manifold with vanishing first and second Stiefel-Whitney classes. These manifolds admit at least two tensor fields, the Riemannian metric \(g\) and the volume form \(dV_{g}\), whose stabilizer group at any given point is the special orthogonal group \(SO(n)\). As is well known, for Riemannian manifolds the holonomy group \(Hol(g)\) is strictly smaller than \(SO(n)\) if and only if there exist additional nontrivial parallel tensor fields on \(M\) with respect to the Levi-Civita connection \(\nabla^{g}\). Thus \(G\)-structures in general can be viewed in terms of the additional tensor fields assigned to our oriented Riemannian manifolds. In 1955 Marcel Berger classified the possible Lie subgroups of \(O(n)\) for \(n\)-dimensional, simply connected manifolds with an irreducible, non-symmetric Riemannian metric (see [14]). 
**Theorem 2.2**.: _Suppose that \(M\) is a simply connected manifold of dimension \(n\) and \(g\) is an irreducible, non-symmetric Riemannian metric on \(M\). Then exactly one of the following seven cases holds._ 1. \(Hol(g)=SO(n)\)_._ 2. \(n=2m\) _with_ \(m\geq 2\)_, and_ \(Hol(g)=U(m)\) _in_ \(SO(2m)\)_._ 3. \(n=2m\) _with_ \(m\geq 2\)_, and_ \(Hol(g)=SU(m)\) _in_ \(SO(2m)\)_._ 4. \(n=4m\) _with_ \(m\geq 2\)_, and_ \(Hol(g)=Sp(m)\) _in_ \(SO(4m)\)_._ 5. \(n=4m\) _with_ \(m\geq 2\)_, and_ \(Hol(g)=Sp(m)Sp(1)\) _in_ \(SO(4m)\)_._ 6. \(n=7\) _and_ \(Hol(g)=G_{2}\) _in_ \(SO(7)\)_._ 7. \(n=8\) _and_ \(Hol(g)=Spin(7)\) _in_ \(SO(8)\)_._ In dimension \(6\), an \(SU(3)\) structure can be defined by a \(2\)-form and a stable \(3\)-form which can locally be expressed as \(\omega=e^{12}+e^{34}+e^{56}\) and \(\psi_{+}=e^{135}-e^{146}-e^{236}-e^{245}\). Moreover, on a \(6\)-dimensional oriented Riemannian manifold, we have two additional tensors, the almost complex structure \(J\) and a complex volume form \(\Psi\), which can be expressed entirely in terms of \(J\) and \(\psi_{+}\): \(\Psi=\psi_{+}+i\psi_{-}\), where \(\psi_{-}=J\psi_{+}\). Moreover, the Riemannian metric can be expressed in terms of \(\omega\) and \(J\), given by the tensor equation \(g=\omega(\cdot,J\cdot)\). In dimension \(7\), a \(G_{2}\) structure on a \(7\)-dimensional Riemannian manifold is defined by a stable \(3\)-form \(\phi\) which can be locally (point-wise) defined as \(\phi=e^{123}+e^{145}-e^{257}-e^{347}+e^{167}-e^{356}+e^{246}\) with respect to the local frame \(e^{1},\ldots,e^{7}\) of \(T_{p}^{*}M\). For \(G_{2}\) manifolds, the Riemannian metric and volume form are expressed in terms of the fundamental \(3\)-form via the equation \(g(X,Y)dV=\frac{1}{6}(\iota_{X}\phi)\wedge(\iota_{Y}\phi)\wedge\phi\) for any pair of vector fields \(X,Y\). On \(8\)-dimensional Riemannian manifolds, a \(Spin(7)\) structure is defined as an admissible \(4\)-form that is locally expressed as: \[\Omega=e^{1234}+e^{1256}+e^{1278}+e^{1357}-e^{1368}-e^{1458}\] \[-e^{1467}-e^{2358}-e^{2367}-e^{2457}+e^{2468}+e^{3456}+e^{3478}+e^{5678}.\] ## 3. Real Clifford algebras and minimal left ideals generated by primitive idempotents Let \(V\) be a finite dimensional \(\mathbb{R}\) vector space with quadratic form \(q\), and \(V^{\otimes}=\bigoplus_{k}V^{\otimes k}\) its tensor algebra. We define the **Clifford algebra** of the pair \((V,q)\) as the quotient of the tensor algebra by the two-sided ideal \(I_{q}=\langle v\otimes v+q(v)1_{V^{\otimes}}\rangle\); that is, \(C_{q}(V)=V^{\otimes}/I_{q}\). The Clifford algebra \(C_{q}(V)\) carries a natural \(\mathbb{Z}_{2}\) grading, where \(C_{q}^{0}(V)\) denotes the elements of even degree, while \(C_{q}^{1}(V)\) denotes the elements of odd degree. Choosing a basis for \((V,q)\), say \(e_{1},\ldots,e_{n}\), we get the canonical basis for the Clifford algebra given by the following \(2^{n}\) monomials: \[\{1_{V}\}\cup\{e_{i_{1}}\cdots e_{i_{k}}:i_{1}<i_{2}<\cdots<i_{k},k=1,\ldots,n\}. \tag{1}\] As is well known, for any quadratic space over \(\mathbb{R}\) we have an isomorphism with a quadratic space of the form \(\mathbb{R}^{p,q}\), with signature \((p,q)\), where \(p\) is the number of positive definite generators and \(q\) the number of negative definite generators. For the quadratic spaces \(\mathbb{R}^{p,q}\) and \(\mathbb{R}^{n}\), we denote the associated Clifford algebras by \(\mathbb{R}_{p,q}\) and \(\mathbb{R}_{0,n}\) respectively. 
\(\mathbb{R}_{0,n}\) is canonically isomorphic to \(\bigwedge\mathbb{R}^{n}\) as \(\mathbb{R}\) vector spaces. This isomorphism is achieved via the assignment of the **quantization map** \(q:\bigwedge\mathbb{R}^{n}\to\mathbb{R}_{0,n}\), where \(e_{i_{1}}\wedge\cdots\wedge e_{i_{k}}\mapsto e_{i_{1}}\cdots e_{i_{k}}\). The natural grading of both algebras is preserved via this assignment. The inverse of this isomorphism is what is called the **symbol map**, which we denote \(\sigma:\mathbb{R}_{0,n}\to\bigwedge\mathbb{R}^{n}\), given by \(\sigma(e_{i_{1}}\cdots e_{i_{k}})=e_{i_{1}}\wedge\cdots\wedge e_{i_{k}}\). These maps have natural extensions onto the spaces of exterior forms on \(\mathbb{R}^{n}\). We denote these extensions by \(q^{*}:\bigwedge^{*}(\mathbb{R}^{n})^{*}\to\mathbb{R}_{0,n}\) where \(q^{*}(e^{i_{1}}\wedge\cdots\wedge e^{i_{k}})=e_{i_{1}}\cdots e_{i_{k}}\), and \(\sigma^{*}:\mathbb{R}_{0,n}\to\bigwedge^{*}(\mathbb{R}^{n})^{*}\) where \(\sigma^{*}(e_{i_{1}}\cdots e_{i_{k}})=e^{i_{1}}\wedge\cdots\wedge e^{i_{k}}\), where \(e^{1},\ldots,e^{n}\) is the dual basis to the canonical basis \(e_{1},\ldots,e_{n}\); that is, \(e^{i}(e_{j})=\delta_{ij}\). It is well known that Clifford algebras are isomorphic to matrix algebras over the division algebras \(\mathbb{R},\mathbb{C}\), and \(\mathbb{H}\). This classification is given in the following theorem (see [7]). **Theorem 3.1**.: _The Clifford algebra \(\mathbb{R}_{p,q}\), where \(p+q=n\), has the following minimal representations over \(\mathbb{R}\), \(\mathbb{C}\), and \(\mathbb{H}\):_ * \(\mathbb{R}_{p,q}\cong M_{2^{\frac{n}{2}}}(\mathbb{R})\) _if_ \(q-p=0,6\mod 8\)_._ * \(\mathbb{R}_{p,q}\cong M_{2^{\frac{n-1}{2}}}(\mathbb{C})\) _if_ \(q-p=1,5\mod 8\)_._ * \(\mathbb{R}_{p,q}\cong M_{2^{\frac{n-2}{2}}}(\mathbb{H})\) _if_ \(q-p=2,4\mod 8\)_._ * \(\mathbb{R}_{p,q}\cong M_{2^{\frac{n-3}{2}}}(\mathbb{H})\oplus M_{2^{\frac{ n-3}{2}}}(\mathbb{H})\) _if_ \(q-p=3\mod 8\)_._ * \(\mathbb{R}_{p,q}\cong M_{2^{\frac{n-1}{2}}}(\mathbb{R})\oplus M_{2^{\frac{n-1 }{2}}}(\mathbb{R})\) _if_ \(q-p=7\mod 8\)_._ For any semisimple algebra \(A\), a minimal left ideal is of type \(A\cdot e\), where \(\cdot\) is multiplication in the algebra and \(e\in A\) is a **primitive idempotent**. An element \(e\) is primitive if it cannot be written as a sum of orthogonal idempotents (\(e=f+g\) with \(f^{2}=f\), \(g^{2}=g\), \(f\cdot g=g\cdot f=0\); see [17]). An idempotent is called **minimal** if it is minimal with respect to the partial ordering \(f\leq e\), which holds if and only if \(ef=f=fe\) (see [17]). A **minimal left ideal** \(I\subset A\) is a left ideal that does not contain any other nonzero left ideals. It is known that if \(I\) is a minimal left ideal of our algebra \(A\) then either \(I^{2}=0\) or \(I=A\cdot e\) for some idempotent \(e\in A\) (see [17]). For our Clifford algebras, any minimal left ideal is of the form \(\mathbb{R}_{p,q}\cdot f\) where \(f\) is a primitive idempotent. \(\mathbb{R}_{p,q}\cdot f\) is clearly a left \(\mathbb{R}_{p,q}\) module, where module multiplication is given by \(\mathbb{R}_{p,q}\times\mathbb{R}_{p,q}\cdot f\rightarrow\mathbb{R}_{p,q}\cdot f\), via \((\phi,\psi\cdot f)\mapsto(\phi\cdot\psi)\cdot f\), for all \(\phi\in\mathbb{R}_{p,q}\) and \(\psi\cdot f\in\mathbb{R}_{p,q}\cdot f\). 
**Definition 3.2**.: _The minimal left ideals of the Clifford algebra \(\mathbb{R}_{p,q}\) are called spinor spaces, and the elements of the minimal left ideals are called algebraic spinors._ As is consistent with spin geometry, these spinor spaces generate our spinor representations of the appropriate dimension. The following theorem gives us the construction and classification of minimal left ideals in \(\mathbb{R}_{p,q}\) (see [17]). **Theorem 3.3**.: _A minimal left ideal of \(\mathbb{R}_{p,q}\) is of type \(\mathbb{R}_{p,q}\cdot f\), where \(f=\dfrac{1+e_{t_{1}}}{2}\cdots\dfrac{1+e_{t_{k}}}{2}\) is a primitive idempotent in \(\mathbb{R}_{p,q}\) and \(e_{t_{1}},\ldots,e_{t_{k}}\) is a set of commuting elements of the canonical basis such that \(e_{t_{i}}^{2}=1\) for all \(i=1,\ldots,k\), with \(k=q-r_{q-p}\); moreover, the generators form a multiplicative group of order \(2^{q-r_{q-p}}\). The numbers \(r_{i}\) are called the Radon-Hurwitz numbers, given by a recurrence relation subject to the conditions: \(r_{0}=0,\ r_{1}=1,\ r_{2}=2,\ r_{3}=2,\ r_{j}=3\) where \(4\leq j\leq 7\), \(r_{i+8}=r_{i}+4\) for \(i\geq 0\), \(r_{-1}=-1\), and \(r_{-i}=1-i+r_{i-2}\) for \(i\geq 2\)._ The \(k\) commuting elements generate \(2^{k}\) different idempotents which yield the decomposition of the Clifford algebra given by: \[\mathbb{R}_{p,q}=\bigoplus_{\text{all}\ \pm\ \text{combinations}}\mathbb{R}_{p,q}\cdot \prod_{\alpha}\dfrac{1\pm e_{\alpha}}{2},\] where each minimal left ideal \(\mathbb{R}_{p,q}\cdot\prod_{\alpha}\dfrac{1\pm e_{\alpha}}{2}\) is of real dimension \(2^{p+q-k}\). The algebra of endomorphisms is isomorphic to the real matrix algebra of dimensions matching those of the above theorem; that is, \(\mathbb{R}_{p,q}\cong End(\mathbb{R}_{p,q}\cdot f)\cong M_{2^{p+q-k}}(\mathbb{ R})\). Restricting our representations to the spin sub-groups \(\rho:Spin(p,q)\to Aut(\mathbb{R}_{p,q}\cdot f)\) gives us the usual spinor representations. In the next three sections, we properly recover geometric structures in dimensions \(6,7,\) and \(8\) in terms of these minimal left ideals. ## 4. Recovering \(SU(3)\) structures from algebraic spinors in dimension \(6\) In this section we make the association between the local description of the \(SU(3)\) structure in dimension six and its associated minimal left ideal of \(\mathbb{R}_{0,6}\). ### An \(SU(3)\) structure recovered from a minimal left ideal in dimension \(6\) An \(SU(3)\) structure in dimension \(6\) is given by tensors \(\omega_{0},\psi_{\pm},J_{0}\) such that their stabilizer in \(\mathbb{R}^{6}\) is the group \(SU(3)\). For the Clifford algebra \(\mathbb{R}_{0,6}\) we define the primitive idempotent \(f=\left(\dfrac{1+e_{135}}{2}\right)\left(\dfrac{1-e_{146}}{2}\right)\left( \dfrac{1-e_{236}}{2}\right)=\frac{1}{8}(1+e_{135}-e_{146}-e_{236}-e_{245}-e_{3 456}-e_{1234}-e_{1256})\), where \(\mathbb{R}_{0,6}\cdot f\) is the associated minimal left ideal. Now we normalize the idempotent to get unit coefficients, which we denote \(W=8f\) in \(\mathbb{R}_{0,6}\). 
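Before extracting the structure forms from \(W\), it is straightforward to verify this factorization numerically. The following self-contained Python sketch (an illustration, not part of the original text) implements the Clifford product on the canonical basis of \(\mathbb{R}_{0,n}\) via bitmasks (bit \(i-1\) set if and only if the generator \(e_{i}\) is present, with every \(e_{i}\) squaring to \(-1\)) and checks that \(f\) is idempotent and expands into the eight stated terms:

```python
def blade_mul(a, b):
    """Product of canonical basis blades of R_{0,n}, encoded as bitmasks."""
    s, t = 0, a >> 1
    while t:  # count transpositions needed to reach canonical order
        s += bin(t & b).count('1')
        t >>= 1
    sign = -1 if s & 1 else 1
    sign *= (-1) ** bin(a & b).count('1')  # shared generators contribute e_i^2 = -1
    return sign, a ^ b

def mul(x, y):
    """Multiply multivectors stored as {bitmask: coefficient} dictionaries."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def e(*idx):  # e(1, 3, 5) -> the basis blade e_135
    m = 0
    for i in idx:
        m |= 1 << (i - 1)
    return m

def idem(blade, sgn):  # the idempotent factor (1 + sgn * e_blade)/2
    return {0: 0.5, blade: 0.5 * sgn}

f = mul(mul(idem(e(1, 3, 5), +1), idem(e(1, 4, 6), -1)), idem(e(2, 3, 6), -1))
ff = mul(f, f)
assert set(ff) == set(f) and all(abs(ff[k] - f[k]) < 1e-12 for k in f)  # f^2 = f
print({bin(k): 8 * v for k, v in f.items()})  # the eight terms of W = 8f
```

The same routine, run in \(\mathbb{R}_{0,7}\) or \(\mathbb{R}_{0,8}\), verifies the idempotents used in the later sections.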
Utilizing the quantization and symbol maps, \(q^{*}:\bigwedge(\mathbb{R}^{6})^{*}\rightarrow\mathbb{R}_{0,6}\) and \(\sigma^{*}:\mathbb{R}_{0,6}\rightarrow\bigwedge(\mathbb{R}^{6})^{*}\), as well as the grading projection map \(\pi_{\alpha}:\mathbb{R}_{0,6}\rightarrow\mathbb{R}_{0,6}^{\alpha}\), where \(\pi_{\alpha}(x)=\langle x\rangle_{\alpha}\) is the \(\alpha\) graded component, we have \(\langle W\rangle_{0}=1\), \(\langle W\rangle_{3}=e_{135}-e_{146}-e_{236}-e_{245}\), and \(\langle W\rangle_{4}=-e_{3456}-e_{1234}-e_{1256}\). This gives us the graded decomposition \(W=\langle W\rangle_{0}+\langle W\rangle_{3}+\langle W\rangle_{4}\). Using the Clifford Hodge dual, we have the relationships \(\star\langle W\rangle_{3}=e_{246}-e_{235}-e_{145}-e_{136}\) and \(\star\langle W\rangle_{4}=-(e_{12}+e_{56}+e_{34})\). With these relations, we recover the \(SU(3)\) structure given by \(\sigma^{*}(\langle W\rangle_{3})=\psi_{+}\in\bigwedge^{3}(\mathbb{R}^{6})^{*}\), \(-\sigma^{*}(\star\langle W\rangle_{3})=\psi_{-}\in\bigwedge^{3}(\mathbb{R}^{6} )^{*}\), and \(-\sigma^{*}(\star\langle W\rangle_{4})=\omega_{0}\in\bigwedge^{2}(\mathbb{R}^{ 6})^{*}\). **Proposition 4.1**.: _Fix the primitive idempotent \(f=\frac{1}{8}(1+e_{135}-e_{146}-e_{236}-e_{245}-e_{3456}-e_{1234}-e_{1256})\) in \(\mathbb{R}_{0,6}\) that defines the minimal left ideal \(\mathbb{R}_{0,6}\cdot f\). Normalize the left ideal so that we have unit coefficients in the sense \(W=8f\). From this we recover the tensor fields that define an \(SU(3)\) structure in dimension \(6\). That is, \(\sigma^{*}(\langle W\rangle_{3})=\psi_{+}\), \(-\sigma^{*}(\star\langle W\rangle_{ 3})=\psi_{-}\), and \(-\sigma^{*}(\star\langle W\rangle_{4})=\omega_{0}\)._ ### Minimal left ideal induced from an \(SU(3)\) structure For the converse, we fix an \(SU(3)\) structure in \(\mathbb{R}^{6}\) defined by the local tensors \(\omega_{0}=e^{12}+e^{34}+e^{56}\), \(\psi_{+}=e^{135}-e^{146}-e^{236}-e^{245}\), and \(\psi_{-}=J\cdot\psi_{+}\), where \(J\) is the complex structure on \(\mathbb{R}^{6}\). It is easy to see that \(q^{*}(\psi_{+})=e_{135}-e_{146}-e_{236}-e_{245}\), \(q^{*}(\psi_{-})=e_{136}+e_{145}+e_{235}-e_{246}\), and \(q^{*}(\omega_{0})=e_{12}+e_{56}+e_{34}\). The volume form can be expressed in terms of \(\psi_{+}\) and \(\psi_{-}\) using the formula \(\psi_{+}\wedge\psi_{-}=4e^{123456}\); hence the volume element of the Clifford algebra \(\mathbb{R}_{0,6}\) can be expressed by the formula \(\frac{1}{4}q^{*}(\psi_{+}\wedge\psi_{-})=e_{123456}\). Now via the Clifford Hodge product we have \(\frac{1}{4}\star q^{*}(\psi_{+}\wedge\psi_{-})=1\). Hence we have the formula \(\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}(\psi_{+})+4\star q^{*} (\omega_{0}))=\frac{1}{8}(1+e_{135}-e_{146}-e_{236}-e_{245}-e_{3456}-e_{1234}-e_{ 1256})=\left(\frac{1+e_{135}}{2}\right)\left(\frac{1-e_{146}}{2}\right)\left( \frac{1-e_{236}}{2}\right)\), where \(e_{135},e_{146},e_{236}\) are commuting positive definite elements in the canonical basis. 
Hence we have a primitive idempotent, establishing the following: **Proposition 4.2**.: _The \(SU(3)\) structure \(\omega_{0},\psi_{+},\psi_{-}\) in \(\mathbb{R}^{6}\) determines the primitive idempotent \(f=\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}(\psi_{+})+4\star q^{ *}(\omega_{0}))\) in the Clifford algebra \(\mathbb{R}_{0,6}\), and the resulting minimal left ideal is \(\mathbb{R}_{0,6}\cdot\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}( \psi_{+})+4\star q^{*}(\omega_{0}))\)._ By definition, elements of our minimal left ideal \(\sigma\in\mathbb{R}_{0,6}\cdot\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-}) +4q^{*}(\psi_{+})+4\star q^{*}(\omega_{0}))\) are algebraic spinors generated by \(SU(3)\) structures. Although it is enough to have this canonical identification, from a well-defined \(SU(3)\) structure \(\psi_{+},\psi_{-},\omega_{0}\) we can always generate a primitive idempotent and hence a minimal left ideal in this canonical manner. Now the primitive idempotent, \(f=\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}(\psi_{+})+4\star q^{ *}(\omega_{0}))\), represents the identity class in the canonical projection \(\phi\mapsto\phi\cdot f\). Moreover, we have the \(\mathbb{R}\) basis \(f,e_{2}f,e_{3}f,e_{5}f,e_{23}f,e_{25}f,e_{35}f,e_{235}f\) in \(\mathbb{R}_{0,6}\cdot f\), where each basis element is a unit spinor in the module. ## 5. Algebraic spinors in dimension \(7\) In dimension \(7\), a \(G_{2}\) structure is a positive \(3\)-form whose local tensor in \(\mathbb{R}^{7}\) is given by \(\phi_{0}=e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}\in\bigwedge^{3}( \mathbb{R}^{7})^{*}\), and whose fundamental \(4\)-form with respect to the Hodge star operator is given by \(\star\phi_{0}=e^{4567}+e^{2367}+e^{2345}+e^{1357}-e^{1346}-e^{1256}-e^{1247}\in \bigwedge^{4}(\mathbb{R}^{7})^{*}\). ### \(G_{2}\) structure recovered from a minimal left ideal We begin by fixing the primitive idempotent \(f=\frac{1}{16}(1+e_{123})(1+e_{145})(1-e_{257})(1+e_{167})\), which generates the minimal left ideal \(\mathbb{R}_{0,7}\cdot f\). We normalize the idempotent to obtain unit scalars via \(W=16f=1+e_{123}+e_{145}-e_{2345}-e_{257}-e_{1357}+e_{1247}-e_{347}+e_{167}-e_{ 2367}-e_{4567}-e_{1234567}+e_{1256}-e_{356}+e_{246}+e_{1346}\). Using the symbol and projection maps, we have \(\langle W\rangle_{3}=e_{123}+e_{145}+e_{167}+e_{246}-e_{257}-e_{347}-e_{356}\), and \(\langle W\rangle_{4}=-e_{2367}-e_{4567}+e_{1346}+e_{1256}-e_{2345}-e_{1357}+e_{1 247}\). In terms of the projection decomposition, we write \(W=1+\langle W\rangle_{3}+\langle W\rangle_{4}-e_{1234567}\), where \(e_{1234567}\) is the volume element in \(\mathbb{R}_{0,7}\). Using the Clifford Hodge dual, we have \(\star\langle W\rangle_{3}=\langle W\rangle_{4}\) and \(\star e_{1234567}=1\), and thus we have \(W=\star e_{1234567}+\langle W\rangle_{3}+\star\langle W\rangle_{3}-e_{1234567}\). Hence, using the extended symbol map, we have \(\sigma^{*}(\langle W\rangle_{3})=\phi_{0}\in\bigwedge^{3}(\mathbb{R}^{7})^{*}\), which is the desired \(G_{2}\) structure in \(\mathbb{R}^{7}\). The associated \(4\)-form that comes with a \(G_{2}\) structure is given by \(-\sigma^{*}(\langle W\rangle_{4})=\star\phi_{0}\). From this we have the following. **Proposition 5.1**.: _Fix the primitive idempotent \(f=\frac{1}{16}(1+e_{123})(1+e_{145})(1-e_{257})(1+e_{167})\), with \(\mathbb{R}_{0,7}\cdot f\) the associated minimal left ideal. 
We can then obtain a \(G_{2}\) structure in \(\mathbb{R}^{7}\) generated from the normalized tensor \(W\) via \(\sigma^{*}(\langle W\rangle_{3})=\phi_{0}\), and the associated \(4\)-form is then given by \(-\sigma^{*}(\langle W\rangle_{4})=\star\phi_{0}\)._ ### Algebraic spinors generated by \(G_{2}\) structures in dimension \(7\) Fix a \(G_{2}\) structure \(\phi=e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}\in\bigwedge^{3}( \mathbb{R}^{7})^{*}\). Using the quantization map, we have \(q^{*}(\phi)=e_{123}+e_{145}+e_{167}+e_{246}-e_{257}-e_{347}-e_{356}\) and \(q^{*}(\star\phi)=e_{2367}+e_{4567}-e_{1346}-e_{1256}+e_{2345}+e_{1357}-e_{1247}\). The volume form in \(\mathbb{R}^{7}\) expressed in terms of the \(G_{2}\) structure is given by the formula \(\phi\wedge\star\phi=7e^{1234567}\), and thus we have in \(\mathbb{R}_{0,7}\) the equation \(q^{*}(\phi\wedge\star\phi)=7e_{1234567}\). Using the Clifford Hodge star operator, we have \(\star q^{*}(\phi\wedge\star\phi)=7\). Putting this all together, we get the following primitive idempotent element in \(\mathbb{R}_{0,7}\) induced by the \(G_{2}\) structure: \[f_{\phi}=\frac{1}{112}(\star q^{*}(\phi\wedge\star\phi)+7q^{*}(\phi)-7q^{*}( \star\phi)-q^{*}(\phi\wedge\star\phi))=\frac{1}{16}(1+e_{123})(1+e_{145})(1-e_{ 257})(1+e_{167}).\] **Proposition 5.2**.: _Fix the local \(G_{2}\) structure \(\phi=e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}\) in \(\mathbb{R}^{7}\). The \(3\)-form \(\phi\) then determines the primitive idempotent \(f_{\phi}=\frac{1}{112}(\star q^{*}(\phi\wedge\star\phi)+7q^{*}(\phi)-7q^{*}( \star\phi)-q^{*}(\phi\wedge\star\phi))\) in the Clifford algebra \(\mathbb{R}_{0,7}\), with the resulting minimal left ideal \(\mathbb{R}_{0,7}\cdot\frac{1}{112}(\star q^{*}(\phi\wedge\star\phi)+7q^{*}(\phi) -7q^{*}(\star\phi)-q^{*}(\phi\wedge\star\phi))=:\mathbb{R}_{0,7}\cdot f_{\phi}\)._ Elements of the minimal left ideal \(\mathbb{R}_{0,7}\cdot f_{\phi}\) are algebraic spinors generated by the \(G_{2}\) structure. Although it is enough to have this canonical identification, from a well-defined \(G_{2}\) structure \(\phi\) we can always generate a primitive idempotent and hence a minimal left ideal in this canonical manner. As vector spaces we have \(\mathbb{R}_{0,7}\cdot f_{\phi}\cong\mathbb{R}^{8}\), with the basis of equivalence classes given by \(f_{\phi},e_{1}f_{\phi},e_{2}f_{\phi},e_{3}f_{\phi},e_{4}f_{\phi},e_{5}f_{\phi}, e_{6}f_{\phi},e_{7}f_{\phi}\). ## 6. Algebraic spinors in dimension 8 In dimension 8, a \(Spin(7)\) structure is defined by the model tensor \(\Omega_{0}=e^{1234}+e^{1256}+e^{1278}+e^{1357}-e^{1368}-e^{1458}-e^{1467}-e^{2358 }-e^{2367}-e^{2457}+e^{2468}+e^{3456}+e^{3478}+e^{5678}\in\wedge^{4}(\mathbb{R} ^{8})^{*}\). Using the quantization map, we have the following: \(q^{*}(\Omega_{0})=e_{1234}+e_{1256}+e_{1278}+e_{1357}-e_{1368}-e_{1458}-e_{1467} -e_{2358}-e_{2367}-e_{2457}+e_{2468}+e_{3456}+e_{3478}+e_{5678}\), \(q^{*}(\Omega_{0}\wedge\star\Omega_{0})=8e_{12345678}\), and \(\star q^{*}(\Omega_{0}\wedge\star\Omega_{0})=8\). Now the formula \[f_{\Omega}=\frac{1}{128}(\star q^{*}(\Omega_{0}\wedge\Omega_{0})-8q^{*}(\Omega_{0})+q^{*}( \Omega_{0}\wedge\Omega_{0}))\] is a primitive idempotent in \(\mathbb{R}_{0,8}\), as it factors as \(f_{\Omega}=\frac{1}{16}(1-e_{1234})(1-e_{1256})(1-e_{1278})(1-e_{1357})\). Thus \(\mathbb{R}_{0,8}\cdot\frac{1}{128}(\star q^{*}(\Omega_{0}\wedge\Omega_{0})-8q^{*}(\Omega _{0})+q^{*}(\Omega_{0}\wedge\Omega_{0}))\) is the minimal left ideal induced by our \(Spin(7)\) structure. 
Moreover, \(\mathbb{R}_{0,8}\cdot\frac{1}{128}(\star q^{*}(\Omega_{0}\wedge\Omega_{0})-8q^{*}(\Omega _{0})+q^{*}(\Omega_{0}\wedge\Omega_{0}))\cong\Delta_{8}=\mathbb{R}^{16}\). We summarize this with the following proposition. **Proposition 6.1**.: _The \(Spin(7)\) structure \(\Omega_{0}\) in \(\mathbb{R}^{8}\) determines the primitive idempotent \(f_{\Omega}=\frac{1}{128}(\star q^{*}(\Omega_{0}\wedge\Omega_{0})-8q^{*}(\Omega_{0})+q^{*}( \Omega_{0}\wedge\Omega_{0}))\) in the Clifford algebra \(\mathbb{R}_{0,8}\), and the resulting minimal left ideal is \(\mathbb{R}_{0,8}\cdot\frac{1}{128}(\star q^{*}(\Omega_{0}\wedge\Omega_{0})-8q^{*}(\Omega _{0})+q^{*}(\Omega_{0}\wedge\Omega_{0}))\)._ Conversely, for \(\mathbb{R}_{0,8}\) we have four commuting generators from our canonical basis, \(e_{1234},e_{1256},e_{1278},e_{1357}\), for which the idempotent \(f=\frac{1}{16}(1-e_{1234})(1-e_{1256})(1-e_{1278})(1-e_{1357})\) defines the spinor space \(\mathbb{R}_{0,8}\cdot f\) isomorphic to \(\Delta_{8}=\mathbb{R}^{16}\). The normalized tensor, \(W=16f=1-(e_{1234}+e_{1256}+e_{1278}+e_{1357}-e_{1368}-e_{1458}-e_{1467}-e_{2358} -e_{2367}-e_{2457}+e_{2468}+e_{3456}+e_{3478}+e_{5678})+e_{12345678}\), decomposes as \(W=\star e_{12345678}-\langle W\rangle_{4}+e_{12345678}\). Hence we define \(\sigma^{*}(\langle W\rangle_{4})=\Omega_{0}\), and \(\sigma^{*}(\star\langle W\rangle_{4})=\star\Omega_{0}=\Omega_{0}\). This results in the following proposition: **Proposition 6.2**.: _Fix the primitive idempotent \(f=\frac{1}{16}(1-e_{1234})(1-e_{1256})(1-e_{1278})(1-e_{1357})\) such that \(\mathbb{R}_{0,8}\cdot f\) is the minimal left ideal. We can associate a \(Spin(7)\) structure in \(\mathbb{R}^{8}\) generated from the normalized tensor \(W\) via \(\sigma^{*}(\langle W\rangle_{4})=\sigma^{*}(\star\langle W\rangle_{4})=\Omega_ {0}\)._ ## 7. Primitive idempotents in dimension 6 to \(G_{2}\) structures in dimension 7 We conclude this paper by relating the constructions in dimension 6 and dimension 7 by viewing \(\mathbb{R}^{7}=\mathbb{R}^{6}\oplus\mathbb{R}\), where the orthogonal dimension is given by the basis vector \(e_{7}\). We define a generic \(SU(3)\) structure given in the \(\mathbb{R}^{6}\) component by \(\omega_{0}=e^{12}+e^{34}+e^{56}\), \(\psi_{+}=e^{135}-e^{146}-e^{236}-e^{245}\), and \(\psi_{-}=e^{136}+e^{145}+e^{235}-e^{246}\). For the associated Clifford algebra \(\mathbb{R}_{0,6}\) we have the associated primitive idempotent \(f=\left(\dfrac{1+e_{135}}{2}\right)\left(\dfrac{1-e_{146}}{2}\right)\left(\dfrac{ 1-e_{236}}{2}\right)=\frac{1}{8}(1+e_{135}-e_{146}-e_{236}-e_{245}-e_{3456}-e_{12 34}-e_{1256})\), with \(\mathbb{R}_{0,6}\cdot f\) being the associated minimal left ideal. As we saw above from the normalized idempotent \(W\) in dimension 6, we recover the \(SU(3)\) structure via \(\sigma^{*}(\langle W\rangle_{3})=\psi_{+}\), \(-\sigma^{*}(\star\langle W\rangle_{3 })=\psi_{-}\), and \(-\sigma^{*}(\star\langle W\rangle_{4})=\omega_{0}\). Now from the induced \(SU(3)\) structure in dimension 6, we can recover a \(G_{2}\) structure in dimension 7 via \(\phi_{0}=\sigma^{*}(\star\langle W\rangle_{4})\wedge e^{7}+\sigma^{*}(\langle W \rangle_{3})\). Conversely, starting with the \(SU(3)\) structure we have the primitive idempotent \(\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}(\psi_{+})+4\star q^{*} (\omega_{0}))\), which gives us the minimal left ideal \(\mathbb{R}_{0,6}\cdot\frac{1}{32}(\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*} (\psi_{+})+4\star q^{*}(\omega_{0}))\). 
Now the normalized idempotent with unit scalars \(W\) in this formulation is given by \(W=\star q^{*}(\psi_{+}\wedge\psi_{-})+4q^{*}(\psi_{+})+4\star q^{*}(\omega_{0})\). Writing \(\phi_{0}=\sigma^{*}(\star\langle W\rangle_{4})\wedge e^{7}+\sigma^{*}(\langle W\rangle_{3})\) for the induced \(G_{2}\) structure, we define the primitive idempotent in dimension 7 by substituting \(\phi_{0}\) into the formula of Proposition 5.2: \[f_{\phi_{0}}=\frac{1}{112}\left(\star q^{*}(\phi_{0}\wedge\star\phi_{0})+7q^{*}(\phi_{0})-7q^{*}(\star\phi_{0})-q^{*}(\phi_{0}\wedge\star\phi_{0})\right).\] Thus \(\mathbb{R}_{0,7}\cdot f_{\phi_{0}}\) is the spinor module induced from the \(SU(3)\) structures recovered from the primitive idempotents in dimension 6. ## 8. Future research With the correspondences established above, we can construct algebraic spinor bundles from the induced spinor spaces and provide classification equations for \(G_{2}\), \(SU(3)\), and \(Spin(7)\) manifolds in terms of algebraic spinors. ## 9. Acknowledgments I would like to thank my advisor, Dr. Ivona Grzegorczyk, and my supervisor at UNITO, Dr. Anna Fino, as well as UNITO for financing my research and providing me guidance in this topic.
2306.06074
Improved flood mapping for efficient policy design by fusion of Sentinel-1, Sentinel-2, and Landsat-9 imagery to identify population and infrastructure exposed to floods
A reliable yet inexpensive tool for the estimation of flood water spread is conducive for efficient disaster management. The application of optical and SAR imagery in tandem provides a means of extended availability and enhanced reliability of flood mapping. We propose a methodology to merge these two types of imagery into a common data space and demonstrate its use in the identification of affected populations and infrastructure for the 2022 floods in Pakistan. The merging of optical and SAR data provides us with improved observations in cloud-prone regions; that is then used to gain additional insights into flood mapping applications. The use of open source datasets from WorldPop and OSM for population and roads respectively makes the exercise globally replicable. The integration of flood maps with spatial data on population and infrastructure facilitates informed policy design. We have shown that within the top five flood-affected districts in Sindh province, Pakistan, the affected population accounts for 31 %, while the length of affected roads measures 1410.25 km out of a total of 7537.96 km.
Usman Nazir, Muhammad Ahmad Waseem, Falak Sher Khan, Rabia Saeed, Syed Muhammad Hasan, Momin Uppal, Zubair Khalid
2023-05-31T20:46:06Z
http://arxiv.org/abs/2306.06074v1
Improved Flood Mapping for Efficient Policy Design by Fusion of Sentinel-1, Sentinel-2 and Landsat-9 Imagery to Identify Population and Infrastructure exposed to Floods ###### Abstract A reliable yet inexpensive tool for the estimation of flood water spread is conducive for efficient disaster management. The application of optical and SAR imagery in tandem provides a means of extended availability and enhanced reliability of flood mapping. We propose a methodology to merge these two types of imagery into a common data space and demonstrate its use in the identification of affected populations and infrastructure for the \(2022\) floods in Pakistan. The merging of optical and SAR data provides us with improved observations in cloud-prone regions; that is then used to gain additional insights into flood mapping applications. The use of open source datasets from WorldPop\({}^{1}\) and OSM\({}^{2}\) for population and roads respectively makes the exercise globally replicable. The integration of flood maps with spatial data on population and infrastructure facilitates informed policy design. We have shown that within the top five flood-affected districts in Sindh province, Pakistan, the affected population accounts for \(31\%\), while the length of affected roads measures \(1410.25\) km out of a total of \(7537.96\) km. Footnote 1: [https://www.worldpop.org/](https://www.worldpop.org/) Footnote 2: [https://www.openstreetmap.org/](https://www.openstreetmap.org/) U. Nazir\({}^{\star}\), M. A. Waseem\({}^{\star}\), F. S. Khan\({}^{\dagger}\), R. Saeed\({}^{\dagger}\), S. M. Hasan\({}^{\dagger}\), M. Uppal\({}^{\star}\), Z. Khalid\({}^{\star}\) \(\star\)Department of Electrical Engineering, Syed Babar Ali School of Science and Engineering \(\dagger\)Department of Economics, Mushtaq Ahmad Gurmani School of Humanities and Social Sciences Lahore University of Management Sciences (LUMS), Lahore, Pakistan {usman.nazir, m_waseem, falak.khan, rabia.saeed, syed.hasan, momin.uppal, zubair.khalid}@lums.edu.pk Optical imagery, SAR imagery, OSM, WorldPop, Flood mapping ## 1 Introduction The availability of reliable data providing the spatial and temporal extent of a flooding event is a prerequisite for effective and efficient policy design and implementation. The damage, loss and needs assessment following a flood occurrence and the planning and execution of relief, rehabilitation and reconstruction measures are all contingent on flood information: period, extent, and depth of inundation. Precipitation and inundation information collected through weather stations or aerial photography is often limited in time and space and hence needs to be supplemented with more periodic and wide-ranging satellite imagery. The policy relevance of this data is significant overall and, besides its usefulness for relief and reconstruction, it can be helpful in the identification of disaster-prone infrastructure and in assessing adverse impacts on the health and education outcomes of affected communities. The design of river flood defense requires the estimation of potential flood levels, extent, and period [1]. 
In the literature, a range of methodologies has focused on flood mapping using optical sensors (e.g. Landsat, Sentinel-2, and VIIRS) [2, 3, 4] and SAR sensors (e.g. Radarsat and Sentinel-1) [5, 6]. Flood mapping using optical sensors is hindered by clouds that obscure surface observations, resulting in data gaps. SAR data, on the other hand, can fill in gaps in optical data because SAR can penetrate cloud cover, operate in any weather conditions, and provide timely and crucial information about one of the most frequent and devastating natural disasters: flooding. Research has shown that when flooding occurs, smooth water surfaces replace rough surfaces and reflect the radar signal in the specular direction away from the antenna, resulting in low back-scattering and showing as dark areas in SAR images [7]. Few studies have explored the application of data fusion for flood mapping using both optical and SAR data [8, 9, 10, 11]. In [11], a statistical water index-based thresholding algorithm is used to detect and monitor mega river floods. In this paper, we propose a novel tool for mapping the extent of floods. We utilize Google Earth Engine, a cloud-based platform for processing remote sensing data. This platform offers enhanced computational speed, as the processing is outsourced to Google's servers, eliminating the need to download raw imagery beforehand. The primary focus of the paper is the integration of the generated flood maps with spatial data on population and infrastructure. Our proposed approach to flood mapping will empower decision-makers with the necessary information for effective disaster response and policy development.

## 2 Methodology for Mapping Floods

To ensure adequate satellite image coverage for the desired area of interest, we establish specific pre- and post-flood time periods for radar (Sentinel-1) and optical (Sentinel-2 and Landsat-9) satellites. For optical satellites, flood mapping involves applying NDWI (Normalized Difference Water Index) thresholding to the difference layer between dry and wet images, utilizing water's spectral characteristics to identify flooded areas. Meanwhile, for radar satellites, the flood mapping process includes speckle filtering to remove noise, computing the difference between pre- and post-flood mosaics, and applying post-processing filtering techniques to refine the results. To enhance the reliability of flood mapping, a fusion approach is employed, where the intersection of the flood mapping results from optical satellites and the flood map generated from Sentinel-1 radar data is taken on a pixel-wise basis. This fusion process leverages the strengths of both optical and radar-based observations, resulting in a more accurate and comprehensive flood mapping output (see Fig. 2).
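As an illustration of this pipeline, the following Google Earth Engine (Python API) sketch mirrors the described steps: NDWI differencing on Sentinel-2, backscatter differencing with a focal median standing in for speckle filtering on Sentinel-1, and a pixel-wise intersection of the two masks. The area of interest, date windows, and threshold values are illustrative assumptions rather than the exact parameters used in this work:

```python
import ee

ee.Initialize()

# Rough bounding box for Sindh province (an assumption for illustration).
aoi = ee.Geometry.Rectangle([66.7, 23.7, 71.1, 28.5])
dry = ('2022-01-01', '2022-03-31')   # pre-flood window (assumption)
wet = ('2022-08-01', '2022-09-15')   # post-flood window (assumption)

# Optical branch: NDWI difference between dry and wet Sentinel-2 composites.
def ndwi(img):
    return img.normalizedDifference(['B3', 'B8']).rename('NDWI')

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(aoi)
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))
optical_flood = (s2.filterDate(*wet).map(ndwi).median()
                   .subtract(s2.filterDate(*dry).map(ndwi).median())
                   .gt(0.2))          # NDWI-difference threshold (assumption)

# Radar branch: backscatter drop between pre- and post-flood Sentinel-1 mosaics.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
        .filterBounds(aoi)
        .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
        .select('VH'))
pre = s1.filterDate(*dry).median().focal_median(50, 'circle', 'meters')
post = s1.filterDate(*wet).median().focal_median(50, 'circle', 'meters')
sar_flood = pre.subtract(post).gt(3)  # ~3 dB backscatter drop (assumption)

# Fusion: pixel-wise intersection of the optical and radar flood masks.
flood = optical_flood.And(sar_flood).selfMask()
```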
## 3 Evaluation Results

### Study Region

For the purpose of identifying the population and roads vulnerable to flooding, we choose the entire province of Sindh, Pakistan as our study area and make use of open source datasets provided by WorldPop and OpenStreetMap (OSM) to obtain population and road information respectively. To identify schools impacted by floods, we chose seven districts in the Sindh province of Pakistan as our study area: Hyderabad, Jamshoro, Malir, Malir Cantonment, Sajawal, Thatta, and T. M Khan (see Fig. 3).

### Datasets

#### 3.2.1 Population Data

WorldPop is a research group dedicated to the provision of open access spatial demographic datasets. They utilize remote sensing and geospatial analysis to estimate and map population distributions at fine scales across the globe. The datasets offered by WorldPop have diverse applications, including urban planning and disaster management.

#### 3.2.2 Roads Network

The roads network used is sourced from OpenStreetMap (OSM). OSM is an open collaborative mapping project that provides a wealth of geographic information, including roads, buildings, landmarks, and more.

#### 3.2.3 School Locations

To determine the geo-locations (geo-coordinates) of the schools in the study area, we utilize a classical computer vision method, namely 'Template Matching'. First, we download high resolution (zoom level 21) Google Maps imagery for our region of interest. Google Maps provides precise locations of different points of interest (POIs) (education, food, park, health, etc.), each of which can be identified using a unique icon. So, we simply extract the icon-image of education POIs, and we apply template matching between this icon-image and the Google Maps images. For template matching, we use the normalized correlation coefficient method and find the locations where this correlation coefficient value is greater than \(0.95\). Since the template matching is applied on simple images, we obtain pixel locations after applying template matching. Hence, we interpolate the geo-spatial extents of the Google Maps images to convert these pixel locations into geo-coordinates. We also add a post-processing step to ensure that all the extracted points are at least \(10\) meters apart, as it is practically impossible to have more than one educational building in such a small radius. Using template matching with Google Maps images provides a practical approach to efficiently identify school buildings (see Fig. 3(a)); a sketch of this procedure is given below.
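A minimal OpenCV sketch of this template-matching step follows; the file names and the pixel equivalent of the \(10\)-meter separation are illustrative assumptions:

```python
import cv2
import numpy as np

# Hypothetical file names; a zoom-level-21 Google Maps tile and the extracted
# education-POI icon are assumed to be available locally.
scene = cv2.imread('map_tile_z21.png')
icon = cv2.imread('education_icon.png')

# Normalized correlation coefficient matching, thresholded at 0.95 per the text.
res = cv2.matchTemplate(scene, icon, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.95)

# Greedy de-duplication: keep detections at least `min_px` pixels apart,
# standing in for the 10 m minimum separation applied after geo-interpolation.
min_px = 40  # pixel equivalent of ~10 m at zoom level 21 (assumption)
kept = []
for y, x in sorted(zip(ys, xs), key=lambda p: -res[p]):
    if all((y - ky) ** 2 + (x - kx) ** 2 >= min_px ** 2 for ky, kx in kept):
        kept.append((y, x))
print(f'{len(kept)} candidate school locations')
```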
### Affected Population and Roads

To identify the population and infrastructure exposed to floods, we take the pixel-wise intersection of the final flood map (see Fig. 1(a) and Fig. 2(d)) with the population map from WorldPop (see Fig. 1(b)) and the roads network from OSM (see Fig. 1(d)) respectively. We get the affected population in Fig. 1(c) and affected roads in Fig. 1(e). We also achieve increased availability of flood mapping by computing the flood areas in different districts using different satellites, as optical and radar based satellites provide imagery for different dates (see Fig. 2(e)). Table 1 shows the estimated number of the affected population and the length of damaged roads (in km) in the _top-\(5\)_ flood affected districts (out of \(35\)) of Sindh province, Pakistan.

Figure 1: (a) Flood mapping in Sindh province, Pakistan using Sentinel-1, Sentinel-2 and Landsat-9 (radar and optical) satellites; (b) Population dataset from WorldPop; (c) Affected population (in red); (d) Roads network from OSM; (e) Affected roads (in red).

Figure 2: _Flood mapping_ in Sindh province, Pakistan using (a) Sentinel-1, (b) Sentinel-2, (c) Landsat-9, (d) Intersection and majority voting; (e) Availability of flood mapping (continuous monitoring) is increased using multiple satellites for the month of August 2022 with union operation as Sentinel-1 and Sentinel-2 images are available for different dates (in red lines).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Study area (districts)** & **Actual population** & **Affected population** & **Actual road length (km)** & **Affected road length (km)** \\ \hline Jakobabad & 1382896 & **401370** & 3137.98 & **580.83** \\ \hline Mastung & 235830 & 112551 & 1102.04 & 316.72 \\ \hline Sibi & 233671 & **114117** & 626.82 & 84.77 \\ \hline Jafarabad & 504570 & **115279** & 926.50 & 117.93 \\ \hline Kalat & 290610 & 75822 & 1163.79 & **310** \\ \hline \end{tabular} \end{table} Table 1: Quantitative analysis of population and infrastructure exposed to floods in Sindh province, Pakistan. Top-3 ranking flood affected districts are in bold and, in particular, red (1st), violet (2nd) and black (3rd).

### Affected Schools

We employed the intersection of school locations and vectorized flood maps to identify the affected schools in our study region. By comparing the location data from both maps, we identified \(24\) schools that have been impacted. These affected schools are depicted as red dots in Fig. 3(c). Table 2 shows that the highest number of flood-affected schools can be observed in the Malir and Thatta districts.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Study area (districts)** & **Total schools** & **Affected schools** \\ \hline Jamshoro & 366 & **5** \\ \hline Malir & 1242 & **7** \\ \hline Malir Cantonment & 88 & 0 \\ \hline Thatta & 384 & 6 \\ \hline Sujjawal & 246 & 1 \\ \hline Hyderabad & 625 & **5** \\ \hline T. M Khan & 33 & 0 \\ \hline \end{tabular} \end{table} Table 2: Affected schools in study area of Sindh province. Top-3 ranking flood affected districts are in bold and, in particular, red (1st), violet (2nd) and black (3rd).

## 4 Flooding and Education Outcomes: Present Situation and Updates

A field survey conducted from \(24\)th to \(27\)th November 2022 assessed the flooding levels and impacts on education in various locations in Sindh, Pakistan. The survey covered eight areas: Hyderabad, Jamshoro, Matyari, Sehwan, Manchar lake, Mehar, Nawabshah, and Bhit Shah. While it was not feasible to conduct full-length surveys to assess learning losses in the affected districts given the time and budget constraints, we plan to do the same in the future for \(35\) districts in Sindh province. Nonetheless, we used nationally representative data from the Pakistan Social and Living Standards Measurement Survey (2018-19 round) to develop a sense of existing disparities in the affected districts. We find that the districts that are overall most affected by the incidence of flooding, i.e. Jacobabad and Sibi, as well as the ones in which we observe the maximum number of affected schools, i.e. Thatta and Malir, already lag behind in certain literacy and education outcomes. For instance, the literacy rate (those who can read and write and solve basic numeracy questions) is \(23\)% compared to \(38\)% in other districts of the same province. Similarly, in the districts where most schools are affected, \(64\)% of the population has never attended school compared to \(43.7\)% in other districts. The survey revealed three types of flooding scenarios. Firstly, there was an increase in surface water flowing from the mountainous regions of Balochistan towards the flat fields of Sindh. Secondly, the Indus river overflowed into the floodplain. Lastly, prolonged intense rainfall on flat terrain resulted in flooding. The water levels varied across these regions. In the croplands within the floodplain, the water has mostly dried out naturally or has been pumped out. However, in areas like Mehar and beyond, the water levels still remain around 2 feet deep, causing displacement of people and hindering the possibility of sowing crops. 
## 5 Conclusion The proposed methodology for flood mapping has certain advantages: (a) useful in spatially identifying flood-affected households and disaster-prone infrastructure such as roads, schools etc. to aid in informed and data driven policy design, (b) uses publicly available satellite imagery and cloud-based processing (c) requires no specific hardware. Our proposed methodology will help us target areas for surveys in a reliable fashion to compare education (and more socioeconomic) outcomes in areas differently affected by floods.
2309.06197
360$^\circ$ from a Single Camera: A Few-Shot Approach for LiDAR Segmentation
Deep learning applications on LiDAR data suffer from a strong domain gap when applied to different sensors or tasks. In order for these methods to obtain similar accuracy on different data in comparison to values reported on public benchmarks, a large scale annotated dataset is necessary. However, in practical applications labeled data is costly and time consuming to obtain. Such factors have triggered various research in label-efficient methods, but a large gap remains to their fully-supervised counterparts. Thus, we propose ImageTo360, an effective and streamlined few-shot approach to label-efficient LiDAR segmentation. Our method utilizes an image teacher network to generate semantic predictions for LiDAR data within a single camera view. The teacher is used to pretrain the LiDAR segmentation student network, prior to optional fine-tuning on 360$^\circ$ data. Our method is implemented in a modular manner on the point level and as such is generalizable to different architectures. We improve over the current state-of-the-art results for label-efficient methods and even surpass some traditional fully-supervised segmentation networks.
Laurenz Reichardt, Nikolas Ebert, Oliver Wasenmüller
2023-09-12T13:04:41Z
http://arxiv.org/abs/2309.06197v1
# 360\({}^{\circ}\) from a Single Camera: A Few-Shot Approach for LiDAR Segmentation ###### Abstract Deep learning applications on LiDAR data suffer from a strong domain gap when applied to different sensors or tasks. In order for these methods to obtain similar accuracy on different data in comparison to values reported on public benchmarks, a large scale annotated dataset is necessary. However, in practical applications labeled data is costly and time consuming to obtain. Such factors have triggered various research in label-efficient methods, but a large gap remains to their fully-supervised counterparts. Thus, we propose ImageTo360, an effective and streamlined few-shot approach to label-efficient LiDAR segmentation. Our method utilizes an image teacher network to generate semantic predictions for LiDAR data within a single camera view. The teacher is used to pretrain the LiDAR segmentation student network, prior to optional fine-tuning on 360\({}^{\circ}\) data. Our method is implemented in a modular manner on the point level and as such is generalizable to different architectures. We improve over the current state-of-the-art results for label-efficient methods and even surpass some traditional fully-supervised segmentation networks. ## 1 Introduction ### Label Efficient LiDAR Segmentation Recent advancements in the application of deep learning methods for LiDAR perception have yielded impressive results on public benchmarks. It is desirable for these methods to perform consistently across different devices and specifications. However, in practice the heterogeneous characteristics of LiDAR sensors (field of view, number of beams, rotational frequency, etc.) result in substantial variations in data. This leads to a decline in performance when deep learning methods are applied to different sensors or tasks. The severity of this sensor domain problem is unique to 3D pointclouds and has prevented the widespread adoption of pretrained feature extraction backbone networks, even between different tasks of the same dataset. As a result, the practical application of these methods requires large annotated datasets in order for them to perform on par with their public benchmark counterparts [46]. This has led to research in efficient labeling [48, 40] and label-efficient training as natural fits to counteract the extensive efforts and costs necessary to obtain such datasets. While recent improvements in label-efficient training show promise, this field still requires more research, as most methods under-perform compared to their fully-supervised counterparts. Building on the readily available camera images in autonomous driving and robotics, we propose ImageTo360 for LiDAR semantic segmentation.

Figure 1: Our few-shot method ImageTo360. A frozen image teacher is used to pretrain the LiDAR student on single camera predictions (“Slice” data). Later fine-tuning and inference are performed using the entire 360\({}^{\circ}\) of LiDAR data, without image data. During pretraining / fine-tuning the loss of the respective LiDAR student method is used, following the original implementations.

In this paper we present: * A streamlined, yet effective and practically viable few-shot learning approach for 360\({}^{\circ}\) LiDAR semantic segmentation from a single camera view. Image data is not required at inference time 
* A comprehensive analysis in the field of label-efficient LiDAR semantic segmentation, on the SemanticKITTI [2] benchmark, showing the effectiveness of our approach. We achieve state-of-the-art results with only 1% data, over comparable label-efficient methods. ## 2 Related Work The continuous and geometrically complex nature of LiDAR data makes the labeling process especially time-consuming and costly. This is exacerbated by drift issues and memory requirements, limiting labeling on reconstructed scenes [2]. Such factors have led to research on training in a weakly supervised manner. A common approach is the integration of region proposals, e.g. through clustering, during the training process [41, 27]. ScribbleKITTI [47] directly trains with weak scribble annotations, combining various methods such as self-training, multi-scale voxel class distribution information and the mean teacher framework [43]. Besides its susceptibility to various hyperparameters, ScribbleKITTI requires a large computational budget. Furthermore, generalization to different network architectures requires individual adaptions. ScribbleKITTI scales from weakly annotated data, which requires additional human effort. LaserMix [24] combines compositional mixing of unlabeled and fully labeled data in a semi-supervised approach. However, the angular partitioning of their composition approach is tuned to the hardware of the KITTI dataset, which may impact performance with different LiDAR sensors. LiDAL [21] integrates selective pseudo-labeling, based on the uncertainty between multiple augmented versions of sequential LiDAR scans. Due to the use of multiple scans, their method has a high memory footprint and requires pose information. HybridCR [26] combines weak supervision with consistency loss and learned augmentation. Genova _et al_. [14] take an image pseudo-labeling approach at urban mapping for temporally assembled LiDAR pointclouds. Their method utilizes a series of filters which remove up to 99% of points, including those of moving entities. While similar in concept, our solution fundamentally differs in that it can function on single-scan data and moving classes such as cars and pedestrians are considered. Their closed-source method was tested on a non-public dataset, making the necessary implementations for comparison infeasible. ### LiDAR Domain Adaption Domain adaption (DA) has been a natural candidate for tackling the LiDAR cross-sensor gap. While DA is fundamentally different from label-efficient methods, the shared goal is to reduce or entirely eliminate the need for target domain data. This field is mainly split into Real\(\rightarrow\)Real and Synthetic\(\rightarrow\)Real domain adaption. Some compose intermediate representations of source domain data, e.g. through voxel completion [58] or mesh creation [17], prior to sampling target domain data or using the dense data itself for training. Multi-modal methods such as xMUDA [23] and ADAS [13] bridge LiDAR characteristics with image information. CosMix [37] uses a synthetically trained teacher network with compositional mixing similar to LaserMix, while SynLiDAR's [52] PCT module learns to reconstruct synthetic pointclouds in the appearance of the target domain. A recurring issue of DA methods is the evaluation on a limited number of overlapping classes between different public datasets [23, 32, 13, 38, 58, 17, 36]. 
There is no consensus on mapping overlapping labels, limiting the expressiveness of results when compared to label-efficient or fully-supervised methods. Synthetic methods have the benefit that annotations can be generated to match those of the target dataset for evaluation [37, 52]. ### Pointcloud Pretraining Advances in image self-supervised pretraining [18, 8] have kindled research in adapting these concepts to pointclouds. Variants of masked pretraining use reconstruction [59, 31], occlusion completion [49], or occupancy prediction [30], but are specialized towards specific network architectures or limited to data generated from 3D models. Other methods of pointcloud pretraining include contrastive methods [54, 62]. However, the application of these methods has been limited to indoor pointclouds, and their impact has been shown for fully-supervised training, not in the context of label-efficient training. Pointcloud pretraining also includes multi-modal strategies. Janda _et al_. [22] add image depth estimation features to their contrastive method. Other multi-modal methods such as the SLidR series [39, 28] and CLIP2Scene [7] utilize knowledge distillation with pretrained image and image/text backbones for 3D semantic segmentation, followed by fine-tuning with a low amount of annotated data. However, the teacher backbones in these methods lack knowledge specific to the segmentation of street scenes, and perform on par with LiDAR-only methods. ## 3 Method We made three main observations based on the current state-of-the-art. Methods with weak annotations [27, 47] or semi-supervised training [37, 21] intricately combine a variety of techniques dependent on a large number of hyperparameters. Others include image information in the training process, but constrain 3D networks into learning the 2D features of discrete pixel grids [22] instead of features in continuous space, or use a general 2D backbone [7, 39, 28]. Thirdly, most pretraining methods are specific to certain architectures or representations (voxels, range-projections, etc.). These observations inspire us to follow a "back to the basics" approach for ImageTo360. We assume that camera data together with registered LiDAR data is readily available in autonomous driving and robotics. Our method leverages this image information in a 2D supervised teacher network pretraining of a 3D student network. Since the teacher predictions are only available within the camera field of view, we follow up with optional fine-tuning on 360\({}^{\circ}\) data. Our proposed few-shot method is visualized in Fig. 1. Pretrained image segmentation networks are readily available [10] and can be used as "off-the-shelf" teacher networks. From a practical perspective, image segmentation is a logical choice to introduce specific knowledge into label-efficient LiDAR segmentation. In order to not bind the 3D student network to 2D features, our method uses high quality pseudo-labels as a general representation of segmentation knowledge. We implement our method on the point level, as LiDAR data is first represented as pointclouds prior to transformation into other representations. As a result, our method generalizes across sensors and architectures. ### 2D Supervision We leverage the Cityscapes [11] dataset due to street scenes similar to those of SemanticKITTI [2]. In order to fully evaluate our method against others, it is necessary to include all SemanticKITTI classes in our analysis, rather than just reporting class-overlap scores. 
For this reason we fine-tune the network on a subset of the KITTI dataset; however, this step is not necessary for the actual implementation of our method. The 2D predictions are limited to the single camera field of view. When projected into 3D space, using the calibration parameters between camera and LiDAR, these predictions cover a "slice" of the 360\({}^{\circ}\) pointcloud (see Fig. 1). This projection inherently has errors, caused by discrepancies in the calibration and synchronization between camera and LiDAR sensors. On top of that, neural network predictions tend to be unfocused at object borders. These discontinuous predictions cause the "Flying Pixels" bleeding effect [44, 34], exemplified in Fig. 2. While the corresponding pixels of these prediction errors are direct neighbors in 2D grids, this does not necessarily hold true when projected into 3D space. We take inspiration from post-processing methods in range-projected LiDAR segmentation [12, 29] and propose 3D neighborhood refinement as a method to reintroduce continuous-space information into 2D predictions. For our method, we use K-Nearest-Neighbors (KNN), but other neighborhood algorithms are also suitable. First we consider neighborhood majority voting. However, K-nearest neighborhoods have varying distances and volumes. Simple majority voting would equally weight points regardless of distance. For this reason, we also consider distance-weighted voting (\(1-\text{softmax}(\vec{v}_{d})\) with \(\vec{v}_{d}\) as the distance vector). Priority is put on labels closer to the query point. Thirdly, we compare both methods to neighborhood confidence averaging.

### Pretraining from 2D predictions

Pseudo-label confidence thresholding [25] has been an established method for removing noisy labels caused by incorrect predictions. In a multiclass context, networks tend to bias towards majority classes [5, 64], and simple static thresholding can remove minority classes. Class imbalance inherently affects LiDAR street scenario data (e.g. buildings have more points than a person). We follow established methods [47, 38] and integrate class-balanced adaptive thresholding with the goal of further improving pseudo-label quality. Formally, the class-balanced thresholds \(\vec{\tau}_{c}(i)\) of our method can be expressed as \[\begin{split}\vec{\tau}_{c}(i)=&\frac{\sum[labels=i ]}{max\left\{\forall_{i\in c}\sum[labels=i]\right\}}\times\\ &(\tau_{max}-\tau_{min})+\tau_{min}\end{split} \tag{1}\] with all dataset classes \(c\), single class \(i\), \(\tau_{min}\) as the minimum threshold, and \(\tau_{max}\) as the maximum threshold value. In short, threshold values are scaled within a confidence range (set through \(\tau_{min}\) and \(\tau_{max}\)) for each class, depending on how many times this class occurs in comparison to the majority class. Predictions below the adaptive threshold are not used as pseudo-labels.

Figure 2: Image segmentation predictions projected into 3D space. Uncertainty at object edges causes “Flying Pixels” [44]; in this example (left image) casting car labels onto the building and vice-versa. Neighborhood refinement counteracts this issue.

For pretraining, the pointclouds are cut into the fields of view of the camera image, significantly reducing training time and memory requirements. Our method can be scaled to include multiple cameras. Optionally, fine-tuning is performed using the annotated 360\({}^{\circ}\) LiDAR data, without image data.
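The point-level pipeline described above, neighborhood confidence averaging followed by the class-balanced thresholding of Eq. (1), can be sketched as follows; the array shapes and the KD-tree neighbor search are implementation assumptions, with `conf` denoting the per-point class-confidence vectors projected from the 2D teacher:

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_confidences(points, conf, k=19):
    """Neighborhood confidence averaging: average the per-class confidence
    vectors over each point's K nearest neighbors in 3D."""
    _, idx = cKDTree(points).query(points, k=k)
    return conf[idx].mean(axis=1)  # (N, k, C) -> (N, C)

def class_balanced_thresholds(labels, num_classes, tau_min=0.8, tau_max=0.95):
    """Eq. (1): scale per-class thresholds by class frequency, so the majority
    class receives tau_max and rare classes thresholds closer to tau_min."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts / counts.max() * (tau_max - tau_min) + tau_min

def pseudo_labels(points, conf, num_classes, ignore=255):
    """Refined, thresholded pseudo-labels; rejected points are marked ignore."""
    conf = refine_confidences(points, conf)
    labels = conf.argmax(axis=1)
    tau = class_balanced_thresholds(labels, num_classes)
    return np.where(conf.max(axis=1) >= tau[labels], labels, ignore)
```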
## 4 Evaluation

We evaluate our method ImageTo360 against related works from the spaces of fully-supervised, label-efficient and pretraining methods, as well as against domain adaptation. The intention is to show the differences in performance when considering practical application and labeling efforts. Our ablation studies and evaluation are performed using the SemanticKITTI [2] dataset in a single-scan context. Results are reported on evaluation sequence 08, either on the full 360\({}^{\circ}\) data or the "slice" of data within the left camera field of view (covering 90\({}^{\circ}\)). For image teacher networks we use the OpenMMLab [10] implementations of Segformer [53] and DeepLabV3+ [6] in combination with the left image data of the dataset. For our 3D student networks we use SPVCNN [42] and Cylinder3D [63]. For 3D student fine-tuning, we randomly sample a small percentage of the training split and use the entire 360\({}^{\circ}\) pointclouds and corresponding ground-truth annotations. Fine-tuning is performed according to the original methods' hyperparameters, except with \(0.2\times\) learning rate and \(0.5\times\) epochs. Additionally, we use translation, pointcloud mixing (similar to CutMix [60] and LaserMix [24]), squeezing, and flipping as augmentation for both networks. Using this augmentation schedule, we reproduced the paper results of the original authors. We do not include test data or validation data in any training steps of our method. Unless otherwise stated, we do not use Test Time Augmentation (TTA) or ensembling, in order to provide a fair comparison with research which forgoes such methods. In the cases where we do use these methods, we utilize Greedy Soup [50] ensembling and TTA over 12 variants of the data (original, three flips, eight rotations).

### Ablation Study

#### 4.1.1 Impact of Image Segmentation Network

Firstly, we study the impact of an image teacher by pretraining the 3D student network on the predictions within the left camera field of view (\(\sim\)90\({}^{\circ}\)). We follow by fine-tuning on a low label budget using 360\({}^{\circ}\) annotated pointclouds. The baseline comparisons train on the same fine-tuning data, without pretraining. Improvements in image segmentation should, in theory, help in pretraining the 3D student. To test this assumption we compare Segformer, a modern transformer architecture, with the traditional convolutional DeepLabV3+ as teacher networks. Segformer achieves 58.24 mIOU% compared to the 50.88 mIOU% of DeepLab, an improvement of +14.47% on the left camera field of view predictions. Results are depicted in Table 2. Our pretraining method significantly improves over the baseline for nearly all reduced-label scenarios; however, the magnitude of improvement decreases with an increasing ratio of labels. The improvements of Segformer over DeepLabV3+ carry over to the low label amounts with the SPVCNN architecture. With Cylinder3D as the student, results are comparable across both teacher networks. These results reveal noteworthy observations. Without fine-tuning, networks that have only ever seen slices of LiDAR data produce considerable results when predicting on 360\({}^{\circ}\) data. While developments in image segmentation can have an impact, results are heavily influenced by the choice of 3D student architecture. Finally, image pretraining improved fully-supervised training with both image teachers for SPVCNN, showing potential for future research. We continue with Segformer as our teacher network for further ablation and evaluation.
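As a concrete illustration of the TTA protocol from the evaluation setup above (12 variants: the original cloud, three flips, and eight rotations), a possible sketch follows; the exact flip axes and rotation angles are our assumptions, as they are not specified here.

```python
import numpy as np

def tta_variants(xyz):
    # xyz: (N, 3) point coordinates; other features are left untouched.
    variants = [xyz]
    # Three flips of the horizontal coordinates.
    for sx, sy in [(-1, 1), (1, -1), (-1, -1)]:
        v = xyz.copy()
        v[:, 0] *= sx
        v[:, 1] *= sy
        variants.append(v)
    # Eight rotations about the vertical axis (assumed equally spaced).
    for a in np.linspace(0.0, 2.0 * np.pi, num=9, endpoint=False)[1:]:
        c, s = np.cos(a), np.sin(a)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        variants.append(xyz @ rot.T)
    return variants  # network outputs over the 12 variants are averaged
```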
#### 4.1.2 3D Neighborhood Refinement of 2D Predictions

We study the effects of utilizing neighborhood refinement as a technique for reintroducing 3D information into the 2D predictions of Segformer. We examine neighborhood majority label voting, distance-weighted label voting, and neighborhood confidence averaging in Table 1. All three approaches demonstrate comparable improvements over the baseline. The use of confidence averaging offers the added benefit of avoiding one-hot labels and is as such compatible with pseudo-label thresholding. For this reason, we continue with confidence averaging (\(K=19\)) for further ablation and evaluation, improving pseudo-label quality by +4.86%.

#### 4.1.3 Class-Balanced Pseudo-Label Thresholds

We compare static and class-balanced adaptive thresholding (see Equation 1) after neighborhood confidence averaging. The results are depicted in Table 3. We use \(\tau_{max}=0.95\) as an upper limit for all adaptive threshold ranges. Threshold parameters are chosen to have a comparable percentage of points removed. Class-balanced adaptive thresholding shows a significant improvement over static thresholds, in total improving pseudo-label quality by +20.15% over the neighborhood-refined baseline (61.07 mIoU%), while setting 23% of unreliable labels as unlabeled. More information is kept compared to the best static threshold, and as a result we continue with adaptive thresholding (\(\tau_{min}=0.8\)) for further evaluation.

### Comparison with Label-Efficient Methods

We compare our method ImageTo360 with research from the spaces of weakly-supervised, few-shot, and semi-supervised training, as well as self-supervised pretraining. All compared methods in this evaluation rubric share the goal of label-efficient training, fine-tuning on a subset of annotated KITTI data. This rubric excludes efficient annotation methods, which focus on reducing the labeling effort instead of the data needed. It is common in this field of research to evaluate on the validation set, without ranking on the test benchmark. Also, there is no agreed-upon percentage of labels for evaluation, so we report our results (see Table 4) using a variety of percentages. As the most commonly reported metric, 1% labels can be considered the most indicative of performance. Comparing amongst the same 3D backbone, ImageTo360 improves +17.58% over LaserMix. We apply TTA and ensembling to our best 1% network to show the full potential of our method, improving over the next-best HybridCR by +20.26%. We even surpass ScribbleKITTI with its 8% labeled points. Taken to the extreme by fine-tuning on just 57 annotated samples (0.3%), we outperform all comparable methods using 1% of data, and achieve similar results to those using 5% of data. Current image backbone pretraining methods usually transfer 2D features to 3D networks. Considering the gap in performance to label-efficient LiDAR-only methods, this multi-modal knowledge is not necessarily beneficial. Our method differs by introducing the specific knowledge of an off-the-shelf segmentation backbone, outperforming all other multi-modal pretraining methods, even without any fine-tuning. From an application standpoint, segmentation backbones are readily available and lead to a considerable performance increase.

### Comparison with domain adaptation methods

From a data perspective, comparing our ImageTo360 to LiDAR domain adaptation (DA) methods is biased.
Even with teacher networks trained on a source domain, cross-sensor DA without target domain annotations remains a challenge. Although our method uses a modest amount of annotated data, we believe that this comparison is worthwhile when considering practical applications, even more so for safety-critical applications where accuracy is of great importance. Most DA methods evaluate on the class-overlap between public benchmarks. Currently, there is no consensus amongst authors on the class-mapping. This makes the comparison of DA methods to each other challenging. However, the performance margins of each DA method to our method as a reference can give an indication of which strategy performs best. We reevaluate our best 1% method according to the class-mapping of each compared method. The results are depicted in Table 5. The margins of improvement point to the benefits of image information in DA. On A2D2 [15] \(\rightarrow\) KITTI our method performs +41.10% better. The margin of improvement is much greater when considering the best method without image data (SynLiDAR \(\rightarrow\) KITTI with +95.34%). Both results put into view the trade-off between the effort of annotating a limited amount of data and choosing domain-adaptation approaches.

### Comparison with fully-supervised methods

For our comparison with fully-supervised semantic segmentation methods we use our best 1% network (Segformer / SPVCNN + TTA, ensembling). We follow standard practice and compare to methods using 100% human-annotated data on the KITTI test benchmark. Our method is at a natural disadvantage, even more so considering we do not fine-tune with any validation data or use semi-supervised training on the test split. The results on the KITTI benchmark are depicted in Table 6 and can be interpreted two-fold. While ImageTo360 outperforms comparable methods, few-shot training is still an open field of research.

\begin{table}
\begin{tabular}{|l|c c c c c c c c c c c|}
\hline
\multicolumn{1}{|c|}{**Method**} & \multicolumn{11}{c|}{**Neighbors**} \\
 & \(3\) & \(5\) & \(7\) & \(9\) & \(11\) & \(13\) & \(15\) & \(17\) & \(19\) & \(21\) & \(23\) \\
\hline
None (Segformer baseline [53]) & \multicolumn{11}{c|}{58.2} \\
KNN Majority Voting & 59.21 & 59.79 & 60.28 & 60.63 & 60.89 & 61.08 & 61.18 & 61.23 & 61.24 & 61.24 & 61.21 \\
KNN Distance Normalized Voting & 59.21 & 59.79 & 60.28 & 60.64 & 60.90 & 61.09 & 61.21 & 61.27 & **61.30** & 61.29 & 61.26 \\
KNN Confidence Averaging & 59.20 & 59.78 & 60.23 & 60.55 & 60.78 & 60.94 & 61.02 & 61.07 & 61.07 & 61.06 & 61.02 \\
\hline
\end{tabular}
\end{table}
Table 1: Evaluation of 3D neighborhood refinement of 2D predictions, using the KITTI validation mIOU in the left camera field of view.

\begin{table}
\begin{tabular}{|l|c c c c|}
\hline
\multicolumn{5}{|c|}{_Static threshold_} \\
\hline
**Threshold \(\tau\)** & 0.80 & 0.85 & 0.90 & 0.95 \\
**Point reduction \%** & 16.83 & 20.04 & 23.63 & 28.58 \\
**mIOU \%** & 70.95 & 71.69 & 72.19 & 72.82 \\
\hline
\multicolumn{5}{|c|}{_Class-balanced threshold (\(\tau_{max}=0.95\))_} \\
\hline
**Min. threshold \(\tau_{min}\)** & 0.5 & 0.6 & 0.7 & 0.8 \\
**Point reduction \%** & 15.83 & 18.31 & 20.78 & 23.41 \\
**mIOU \%** & 68.75 & 70.95 & 72.45 & **73.38** \\
\hline
\end{tabular}
\end{table}
Table 3: Ablation of pseudo-label thresholding, applied after neighborhood confidence averaging, using the KITTI validation mIOU in the left camera field of view.
A gap remains to current state-of-the-art fully-supervised networks, with 2DPASS [57] performing +26.34% better than our method. But with considerably less human effort and annotation cost, our few-shot method outperforms traditional fully-supervised methods such as SqueezeSegV3 [55]. We encourage future label-efficient research to also upload their results to the public benchmark.

## 5 Conclusion

In this paper, we present ImageTo360, a streamlined approach to few-shot LiDAR semantic segmentation using an image teacher network for pretraining. Our method is designed in a modular manner and at the point level, meaning it can be applied across different 3D network architectures and that components can be exchanged with further developments in deep learning. We evaluated against other label-efficient methods, producing state-of-the-art results. With practical applications in mind, we expanded our evaluation against domain adaptation and fully-supervised methods, showing the large performance gaps between the different fields. The results show that image data from a comparatively low-cost camera is sufficient for 3D networks to generate remarkable results on 360\({}^{\circ}\) LiDAR data, even outperforming traditional fully-supervised methods.

\begin{table}
\begin{tabular}{|l c c|}
\hline
**Method** & **Image training** & **mIOU \%** \\
\hline
2DPASS [57] & yes & 72.9 \\
Point-Voxel KD [20] & & 71.2 \\
Cylinder3D [63] & & 67.8 \\
SPVNAS [42] & & 66.4 \\
JS3C-Net [56] & & 66.0 \\
SalsaNext [12] & & 59.5 \\
KPConv [45] & & 58.8 \\
**ImageTo360 1\% (ours)** & yes & 57.7 \\
SCSSnet [35] & & 57.6 \\
SqueezeSegV3 [55] & & 55.9 \\
3D-MiniNet [1] & & 55.8 \\
RangeNet53++ [29] & & 52.2 \\
\hline
\end{tabular}
\end{table}
Table 6: Evaluation of our method compared to fully-supervised methods on the KITTI single-scan semantic segmentation benchmark.

\begin{table}
\begin{tabular}{|l l c|c c c c c c c c|}
\hline
**Method** & **Backbone Networks** & **Use of images** & _0\%_ & _0.3\%_ & _1\%_ & _5\%_ & _8\%_ & _10\%_ & _20\%_ & _50\%_ \\
\hline
\multicolumn{11}{|l|}{_Label Efficient_} \\
\hline
LiDAR [21] & SPVCNN [42] & no & - & - & 48.8 & 59.5 & - & - & - & - \\
ReDAL [51] & SPVCNN & no & - & - & 41.8 & 59.8 & - & - & - & - \\
HybridCR [26] & PSD [61] & no & - & - & 52.3 & - & - & - & - & - \\
LaserMix [24] & Cylinder3D [63] & no & - & - & 50.6 & - & - & 60.0 & 61.9 & 62.3 \\
**ImageTo360 (ours)** & Segformer [53] / SPVCNN & yes & 52.1 & 57.3 & 59.5 & **61.7** & **62.1** & **62.4** & **64.2** & **66.1** \\
 & + TTA, Greedy Soup & yes & - & - & **62.9** & - & - & - & - & - \\
**ImageTo360 (ours)** & Segformer / Cylinder3D & yes & 50.8 & 50.7 & 54.1 & 59.2 & 59.6 & 60.0 & 62.2 & 65.0 \\
\hline
\multicolumn{11}{|l|}{_Weakly Annotated_} \\
\hline
ScribbleKITTI [47] & SPVCNN & no & - & - & - & - & 61.3 & - & - & - \\
ScribbleKITTI & Cylinder3D & no & - & - & - & - & 60.8 & - & - & - \\
\hline
\multicolumn{11}{|l|}{_Self-supervised Pretraining_} \\
\hline
CLIP2Scene [7] & CLIP [33] / SPVCNN & yes & - & - & 42.6 & - & - & - & - & - \\
SLidR [39] & Minkowski [9] & yes & - & - & 44.6 & - & - & - & - & - \\
ST-SLidR [28] & SwAV [4] / Minkowski & yes & - & - & 44.9 & - & - & - & - & - \\
Contrastive Image [22] & ResNet18 [19] / Minkowski & yes & \multicolumn{8}{c|}{42.0 (percentage not stated)} \\
\hline
\end{tabular}
\end{table}
Table 4: Evaluation mIOU on the 360\({}^{\circ}\) data of the KITTI validation split. Percentages of labels signify the amount of training split data that was used during fine-tuning. The training split data within the camera field of view was used for image teacher pretraining. We compare to the values reported by the authors. Empty “-” values signify that the original papers did not evaluate on that percentage of annotations.
\begin{table}
\begin{tabular}{|l l c c|}
\hline
**Method** & **Backbones** & **Use of images** & **mIOU \%** \\
\hline
\multicolumn{4}{|l|}{_A2D2 [15] \(\rightarrow\) KITTI (10 classes)_} \\
\hline
xMUDA [23] & SparseVoxel [16] + ResNet34 [19] & yes & 49.1 \\
xMUDA + PL [23] & SparseVoxel + ResNet34 & yes & 52.4 \\
**ImageTo360 1\% (ours)** & SPVCNN [42] + Segformer [53] & yes & **76.62** \\
\hline
\multicolumn{4}{|l|}{_SynLiDAR (synthetic) \(\rightarrow\) KITTI (7 classes)_} \\
\hline
GIPSO [38] & Minkowski [9] & no & 40.24 \\
**ImageTo360 1\% (ours)** & SPVCNN + Segformer & yes & **83.25** \\
\hline
\multicolumn{4}{|l|}{_nuScenes \(\rightarrow\) KITTI (10 classes)_} \\
\hline
Complete \& Label & SparseVoxel & no & 33.7 \\
**ImageTo360 1\% (ours)** & SPVCNN + Segformer & yes & **79.03** \\
\hline
\multicolumn{4}{|l|}{_nuScenes \(\rightarrow\) KITTI (11 classes)_} \\
\hline
Fake it, Mix it [17] & Cylinder3D [63] & no & 34.3 \\
**ImageTo360 1\% (ours)** & SPVCNN + Segformer & yes & **77.62** \\
\hline
\multicolumn{4}{|l|}{_nuScenes \(\rightarrow\) KITTI (11 classes), different mapping_} \\
\hline
Gated Adapters [46] & SalsaNext [12] & no & 23.5 \\
**ImageTo360 1\% (ours)** & SPVCNN + Segformer & yes & **72.23** \\
\hline
\multicolumn{4}{|l|}{_SynLiDAR \(\rightarrow\) KITTI_} \\
\hline
PCT & Minkowski & no & 27.0 \\
CoSMix [37] & Minkowski & no & 32.2 \\
**ImageTo360 1\% (ours)** & SPVCNN + Segformer & yes & **62.9** \\
\hline
\end{tabular}
\end{table}
Table 5: Evaluation of our best 1\% method compared to domain adaptation methods, reevaluated according to the class-mapping of each compared method.

## Acknowledgement

This work was funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) under the grant AuReSi (KK5335501JR1).
2309.16384
Multi-Swap $k$-Means++
The $k$-means++ algorithm of Arthur and Vassilvitskii (SODA 2007) is often the practitioners' choice algorithm for optimizing the popular $k$-means clustering objective and is known to give an $O(\log k)$-approximation in expectation. To obtain higher quality solutions, Lattanzi and Sohler (ICML 2019) proposed augmenting $k$-means++ with $O(k \log \log k)$ local search steps obtained through the $k$-means++ sampling distribution to yield a $c$-approximation to the $k$-means clustering problem, where $c$ is a large absolute constant. Here we generalize and extend their local search algorithm by considering larger and more sophisticated local search neighborhoods hence allowing to swap multiple centers at the same time. Our algorithm achieves a $9 + \varepsilon$ approximation ratio, which is the best possible for local search. Importantly we show that our approach yields substantial practical improvements, we show significant quality improvements over the approach of Lattanzi and Sohler (ICML 2019) on several datasets.
Lorenzo Beretta, Vincent Cohen-Addad, Silvio Lattanzi, Nikos Parotsidis
2023-09-28T12:31:35Z
http://arxiv.org/abs/2309.16384v1
# Multi-Swap \(k\)-Means++

###### Abstract

The \(k\)-means++ algorithm of Arthur and Vassilvitskii (SODA 2007) is often the practitioners' choice algorithm for optimizing the popular \(k\)-means clustering objective and is known to give an \(O(\log k)\)-approximation in expectation. To obtain higher quality solutions, Lattanzi and Sohler (ICML 2019) proposed augmenting \(k\)-means++ with \(O(k\log\log k)\) local search steps obtained through the \(k\)-means++ sampling distribution to yield a \(c\)-approximation to the \(k\)-means clustering problem, where \(c\) is a large absolute constant. Here we generalize and extend their local search algorithm by considering larger and more sophisticated local search neighborhoods, hence allowing multiple centers to be swapped at the same time. Our algorithm achieves a \(9+\varepsilon\) approximation ratio, which is the best possible for local search. Importantly, we show that our approach yields substantial practical improvements: we show significant quality improvements over the approach of Lattanzi and Sohler (ICML 2019) on several datasets.

## 1 Introduction

Clustering is a central problem in unsupervised learning. In clustering one is interested in grouping together "similar" objects and separating "dissimilar" ones. Thanks to its popularity, many notions of clustering have been proposed over time. In this paper, we focus on metric clustering and on one of the most studied problems in the area: the Euclidean \(k\)-means problem. In the Euclidean \(k\)-means problem one is given as input a set of points \(P\) in \(\mathbb{R}^{d}\). The goal of the problem is to find a set of \(k\) centers so that the sum of the squared distances to the centers is minimized. More formally, we are interested in finding a set \(C\) of \(k\) points in \(\mathbb{R}^{d}\) minimizing \(\sum_{p\in P}\min_{c\in C}\left|\left|p-c\right|\right|^{2}\), where with \(\left|\left|p-c\right|\right|\) we denote the Euclidean distance between \(p\) and \(c\).

The \(k\)-means problem has a long history in statistics and operations research. For Euclidean \(k\)-means with running time polynomial in both \(n,k\) and \(d\), a \(5.912\)-approximation was recently shown in Cohen-Addad et al. (2022a), improving upon Kanungo et al. (2004), Ahmadian et al. (2019), Grandoni et al. (2022) by leveraging the properties of the Euclidean metric. In terms of lower bounds, the first to show that the high-dimensional \(k\)-means problem is APX-hard were Guruswami and Indyk (2003), and later Awasthi et al. (2015) showed that the APX-hardness holds even if the centers can be placed arbitrarily in \(\mathbb{R}^{d}\). The inapproximability bound was later slightly improved by Lee et al. (2017) until the recent best known bounds of Cohen-Addad and Karthik C. S. (2019), Cohen-Addad et al. (2022d), which showed that it is NP-hard to achieve a better than 1.06-approximation and hard to approximate it better than 1.36 assuming a stronger conjecture. From a more practical point of view, Arthur and Vassilvitskii (2009) showed that the widely-used heuristic of Lloyd (1957) can lead to solutions with arbitrarily bad approximation guarantees, but can be improved by a simple seeding strategy, called \(k\)-means++, so as to guarantee that the output is within an \(O(\log k)\) factor of the optimum Arthur and Vassilvitskii (2007). Thanks to its simplicity, \(k\)-means++ is widely adopted in practice. In an effort to improve its performance, Lattanzi and Sohler (2019); Choo et al.
(2020) combine \(k\)-means++ and local search to efficiently obtain a constant-factor approximation algorithm with good practical performance. These two studies show that one can use the \(k\)-means++ distribution in combination with a local search algorithm to get the best of both worlds: a practical algorithm with constant approximation guarantees. However, the constant obtained in Lattanzi and Sohler (2019); Choo et al. (2020) is very large (several thousands in theory) and the question of whether one could obtain a practical algorithm that would efficiently match the \(9+\varepsilon\)-approximation obtained by the \(n^{O(d/\varepsilon)}\) algorithm of Kanungo et al. (2004) has remained open. Bridging the gap between the theoretical approach of Kanungo et al. (2004) and \(k\)-means++ has thus been a long-standing goal.

Our Contributions. We make significant progress on the above line of work.

* We adapt techniques from the analysis of Kanungo et al. (2004) to obtain a tighter analysis of the algorithm in Lattanzi and Sohler (2019). In particular, in Corollary 4, we show that their algorithm achieves an approximation ratio of \(\approx 26.64\).
* We extend this approach to multi-swaps, where we allow swapping more than one center at each iteration of local search, significantly improving the approximation to \(\approx 10.48\) in time \(O(nd\cdot poly(k))\).
* Leveraging ideas from Cohen-Addad et al. (2021), we design a better local search swap that improves the approximation further to \(9+\varepsilon\) (see Theorem 12). This new algorithm matches the \(9+\varepsilon\)-approximation achieved by the local search algorithm in Kanungo et al. (2004), but it is significantly more efficient. Notice that \(9\) is the best approximation achievable through local search algorithms, as proved in Kanungo et al. (2004).
* We provide experiments where we compare against \(k\)-means++ and Lattanzi and Sohler (2019). We study a variant of our algorithm that performs very competitively with our theoretically sound algorithm. The variant is very efficient and still outperforms previous work in terms of solution quality, even after the standard postprocessing using Lloyd's algorithm.

Additional Related Work. We start by reviewing the approach of Kanungo et al. (2004) and a possible adaptation to our setting. The bound of \(9+\varepsilon\) on the approximation guarantee shown by Kanungo et al. (2004) is for the following algorithm: Given a set \(S\) of \(k\) centers, if there is a set \(S^{+}\) of at most \(2/\varepsilon\) points in \(\mathbb{R}^{d}\) together with a set \(S^{-}\) of \(|S^{+}|\) points in \(S\) such that \(S\setminus S^{-}\cup S^{+}\) achieves a better \(k\)-means cost than \(S\), then set \(S:=S\setminus S^{-}\cup S^{+}\) and repeat until convergence. The main drawback of the algorithm is that it asks whether there exists a set \(S^{+}\) of points in \(\mathbb{R}^{d}\) that could be swapped with elements of \(S\) to improve the cost. Identifying such a set, even of constant size, is already non-trivial. The best way of doing so is through the following path: First, compute a coreset using the state-of-the-art coreset construction of Cohen-Addad et al. (2022b) and apply the dimensionality reduction of Becchetti et al. (2019); Makarychev et al. (2019), hence obtaining a set of \(\tilde{O}(k/\varepsilon^{4})\) points in dimension \(O(\log k/\varepsilon^{2})\).
Then, compute grids using the discretization framework of Matousek (2000) to identify a set of \(\varepsilon^{-O(d)}\sim k^{O(\varepsilon^{-2}\log(1/\varepsilon))}\) grid points that contains nearly-optimum centers. Now, run the local search algorithm where the sets \(S^{+}\) are chosen from the grid points by brute-force enumeration over all possible subsets of grid points of size at most, say, \(s\). The running time of the whole algorithm with swaps of magnitude \(s\), i.e.: \(|S^{+}|\leq s\), hence becomes \(k^{O(s\cdot\varepsilon^{-2}\log(1/\varepsilon))}\) for an approximation of \((1+\varepsilon)(9+2/s)\), meaning a dependency in \(k\) of \(k^{O(\varepsilon^{-3}\log(1/\varepsilon))}\) to achieve a \(9+\varepsilon\)-approximation. Our result improves upon this approach in two ways: (1) it improves over the above theoretical bound and (2) does so through an efficient and implementable, i.e.: practical, algorithm.

Recently, Grunau et al. (2023) looked at how much applying a greedy rule on top of the \(k\)-means++ heuristic improves its performance. The heuristic is that at each step, the algorithm samples \(\ell\) centers and only keeps the one that gives the best improvement in cost. Interestingly, the authors prove that from a theoretical standpoint this heuristic does not improve the quality of the output. Local search algorithms for \(k\)-median and \(k\)-means have also been studied by Gupta and Tangwongsan (2008), who drastically simplified the analysis of Arya et al. (2004). Cohen-Addad and Schwiegelshohn (2017) demonstrated the power of local search for stable instances. Friggstad et al. (2019); Cohen-Addad et al. (2019) showed that local search yields a PTAS for Euclidean inputs of bounded dimension (and doubling metrics) and minor-free metrics. Cohen-Addad (2018) showed how to speed up the local search algorithm using \(kd\)-trees (i.e.: for low-dimensional inputs). For fixed \(k\), there are several known approximation schemes, typically using small coresets Becchetti et al. (2019); Feldman and Langberg (2011); Kumar et al. (2010). The state-of-the-art approaches are due to Bhattacharya et al. (2020); Jaiswal et al. (2014). The best known coreset construction remains Cohen-Addad et al. (2022c,b). If the constraint on the number of output centers is relaxed, then we talk about bicriteria approximations, which have been widely studied for \(k\)-means Bandyapadhyay and Varadarajan (2016); Charikar and Guha (2005); Cohen-Addad and Mathieu (2015); Korupolu et al. (2000); Makarychev et al. (2016).

## 2 Preliminaries

Notation. We denote with \(P\subseteq\mathbb{R}^{d}\) the set of input points and let \(n=|P|\). Given a point set \(Q\subseteq P\) we use \(\mu(Q)\) to denote the mean of points in \(Q\). Given a point \(p\in P\) and a set of centers \(A\) we denote with \(A[p]\) the closest center in \(A\) to \(p\) (ties are broken arbitrarily). We denote with \(\mathcal{C}\) the set of centers currently found by our algorithm and with \(\mathcal{O}^{*}\) an optimal set of centers. Therefore, given \(p\in P\), we denote with \(\mathcal{C}[p]\) and \(\mathcal{O}^{*}[p]\) its closest ALG-center and OPT-center respectively. We denote by \(\mathtt{cost}(Q,A)\) the cost of points in \(Q\subseteq P\) w.r.t. the centers in \(A\), namely

\[\mathtt{cost}(Q,A)=\sum_{q\in Q}\min_{c\in A}||q-c||^{2}\,.\]

We use ALG and OPT as a shorthand for \(\mathtt{cost}(P,\mathcal{C})\) and \(\mathtt{cost}(P,\mathcal{O}^{*})\) respectively.
When we sample points proportionally to their current cost (namely, sample \(q\) with probability \(\mathtt{cost}(q,\mathcal{C})\,/\mathtt{cost}(P,\mathcal{C})\)) we call this the \(D^{2}\) distribution. When using \(O_{\varepsilon}(\cdot)\) and \(\Omega_{\varepsilon}(\cdot)\) we mean that \(\varepsilon\) is considered constant. We use \(\widetilde{O}(f)\) to hide polylogarithmic factors in \(f\). The following lemma is folklore.

**Lemma 1**.: _Given a point set \(Q\subseteq P\) and a point \(p\in P\) we have_

\[\mathtt{cost}(Q,p)=\mathtt{cost}(Q,\mu(Q))+|Q|\cdot||p-\mu(Q)||^{2}\,.\]

Let \(O_{i}^{*}\) be an optimal cluster; we define the _radius_ of \(O_{i}^{*}\) as the value \(\rho_{i}\) such that \(\rho_{i}^{2}\cdot|O_{i}^{*}|=\mathtt{cost}(O_{i}^{*},o_{i})\), where \(o_{i}=\mu(O_{i}^{*})\). We define the \(\delta\)_-core_ of the optimal cluster \(O_{i}^{*}\) as the set of points \(p\in O_{i}^{*}\) that lie in a ball of radius \((1+\delta)\rho_{i}\) centered in \(o_{i}\). In symbols, \(\mathtt{core}(O_{i}^{*})=P\cap B(o_{i},(1+\delta)\rho_{i})\). Throughout the paper, \(\delta\) is always a small constant fixed upfront, hence we omit it.

**Lemma 2**.: _Let \(O_{i}^{*}\) be an optimal cluster and sample \(q\in O_{i}^{*}\) according to the \(D^{2}\)-distribution restricted to \(O_{i}^{*}\). If \(\mathtt{cost}(O_{i}^{*},\mathcal{C})>(2+3\delta)\cdot\mathtt{cost}(O_{i}^{*},o_{i})\) then \(\Pr[q\in\mathtt{core}(O_{i}^{*})]=\Omega_{\delta}(1)\)._

Proof.: Define \(\alpha:=\mathtt{cost}(O_{i}^{*},\mathcal{C})\,/\mathtt{cost}(O_{i}^{*},o_{i})>2+3\delta\). Thanks to Lemma 1, for each \(c\in\mathcal{C}\) we have \(\left\|c-o_{i}\right\|^{2}\geq(\alpha-1)\rho_{i}^{2}\). Therefore, for each \(y\in\mathtt{core}(O_{i}^{*})\) and every \(c\in\mathcal{C}\) we have

\[\mathtt{cost}(y,c)=||y-c||^{2}\geq\left(\sqrt{\alpha-1}-(1+\delta)\right)^{2}\cdot\rho_{i}^{2}=\Omega_{\delta}(\alpha\rho_{i}^{2}).\]

Moreover, by a Markov's inequality argument we have \(|O_{i}^{*}\setminus\mathtt{core}(O_{i}^{*})|\leq\frac{1}{1+\delta}\cdot|O_{i}^{*}|\) and thus \(|\mathtt{core}(O_{i}^{*})|\geq\Omega_{\delta}(|O_{i}^{*}|)\). Combining everything we get

\[\mathtt{cost}(\mathtt{core}(O_{i}^{*})\,,\mathcal{C})\geq|\mathtt{core}(O_{i}^{*})\,|\cdot\min_{\begin{subarray}{c}c\in\mathcal{C}\\ y\in\mathtt{core}(O_{i}^{*})\end{subarray}}\mathtt{cost}(y,c)=\Omega_{\delta}(|O_{i}^{*}|)\cdot\Omega_{\delta}(\alpha\rho_{i}^{2})\]

and \(|O_{i}^{*}|\cdot\alpha\rho_{i}^{2}=\mathtt{cost}(O_{i}^{*},\mathcal{C})\), hence \(\mathtt{cost}(\mathtt{core}(O_{i}^{*})\,,\mathcal{C})=\Omega_{\delta}(\mathtt{cost}(O_{i}^{*},\mathcal{C}))\).

## 3 Multi-Swap \(k\)-Means++

The single-swap local search (SSLS) \(k\)-means++ algorithm in Lattanzi and Sohler (2019) works as follows. First, \(k\) centers are sampled using \(k\)-means++ (namely, they are sampled one by one according to the \(D^{2}\) distribution, updated for every new center). Then, \(O(k\log\log k)\) steps of local search follow. In each local search step a point \(q\in P\) is \(D^{2}\)-sampled; then, let \(c\) be the center among the current centers \(\mathcal{C}\) such that \(\texttt{cost}(P,(\mathcal{C}\setminus\{c\})\cup\{q\})\) is minimum. If \(\texttt{cost}(P,(\mathcal{C}\setminus\{c\})\cup\{q\})<\texttt{cost}(P,\mathcal{C})\) then we swap \(c\) and \(q\), or more formally we set \(\mathcal{C}\leftarrow(\mathcal{C}\setminus\{c\})\cup\{q\}\). We extend the SSLS so that multiple centers can be swapped simultaneously, and call this algorithm multi-swap local search (MSLS) \(k\)-means++.
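Before moving on to MSLS, the following is a minimal NumPy sketch (our illustration, not the authors' implementation) of one SSLS step as just described; MSLS generalizes it by \(D^{2}\)-sampling \(p\) points and searching over all size-\(p\) subsets of \(\mathcal{C}\cup In\).

```python
import numpy as np

def ssls_step(X, centers, rng):
    # X: (n, d) input points; centers: (k, d) current centers, k >= 2.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, k)
    point_cost = d2.min(axis=1)
    cur_cost = point_cost.sum()

    # D^2-sample a candidate q proportionally to its current cost.
    q = X[rng.choice(len(X), p=point_cost / cur_cost)]
    dq = ((X - q) ** 2).sum(-1)

    # Try swapping q with each current center and keep the best swap,
    # but only if it strictly improves the k-means cost. Brute-force
    # recomputation is used here for clarity.
    best_cost, best_i = cur_cost, None
    for i in range(len(centers)):
        cost_i = np.minimum(np.delete(d2, i, axis=1).min(axis=1), dq).sum()
        if cost_i < best_cost:
            best_cost, best_i = cost_i, i
    if best_i is not None:
        centers = centers.copy()
        centers[best_i] = q
    return centers, best_cost
```

A full run would seed `centers` with \(k\)-means++ (e.g., `rng = np.random.default_rng()`) and repeat this step \(O(k\log\log k)\) times.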
Swapping multiple centers at the same time achieves a lower approximation ratio, in exchange for a higher time complexity. In this section, we present and analyse the \(p\)-swap local search (LS) algorithm for a generic number \(p\) of centers swapped at each step. For any constant \(\delta>0\), we obtain an approximation ratio \(\text{ALG}/\text{OPT}=\eta^{2}+\delta\) where

\[\eta^{2}-(2+2/p)\eta-(4+2/p)=0. \tag{1}\]

The Algorithm. First, we initialize our set of centers using \(k\)-means++. Then, we run \(O(k^{p}\log\log k)\) local search steps, where a local search step works as follows. We \(D^{2}\)-sample a set \(In=\{q_{1}\ldots q_{p}\}\) of points from \(P\) (without updating costs). Then, we iterate over all possible sets \(Out=\{c_{1}\ldots c_{p}\}\) of \(p\) distinct elements in \(\mathcal{C}\cup In\) and select the set \(Out\) such that performing the swap \((In,Out)\) maximally improves the cost2. If this choice of \(Out\) improves the cost, then we perform the swap \((In,Out)\); otherwise we do not perform any swap in this step.

Footnote 2: If \(In\cap Out\neq\emptyset\) then we are actually performing the swap \((In\setminus Out,Out\setminus In)\) of size \(<p\).

**Theorem 3**.: _For any \(\delta>0\), the \(p\)-swap local search algorithm above runs in \(\widetilde{O}(ndk^{2p})\) time and, with constant probability, finds an \((\eta^{2}+\delta)\)-approximation of \(k\)-means, where \(\eta\) satisfies Equation (1)._

Notice that the SSLS algorithm of Lattanzi and Sohler (2019) is exactly the \(p\)-swap LS algorithm above for \(p=1\).

**Corollary 4**.: _The single-swap local search in Lattanzi and Sohler (2019), Choo et al. (2020) achieves an approximation ratio \(<26.64\)._

**Corollary 5**.: _For \(p=O(1)\) large enough, multi-swap local search achieves an approximation ratio \(<10.48\) in time \(O(nd\cdot poly(k))\)._

### Analysis of Multi-Swap \(k\)-means++

In this section we prove Theorem 3. Our main stepping stone is the following lemma.

**Lemma 6**.: _Let ALG denote the cost at some point in the execution of MSLS. As long as \(\text{ALG}/\text{OPT}>\eta^{2}+\delta\), a local search step improves the cost by a factor \(1-\Omega(1/k)\) with probability \(\Omega(1/k^{p-1})\)._

Proof of Theorem 3.: First, we show that \(O(k^{p}\log\log k)\) local steps suffice to obtain the desired approximation ratio, with constant probability. Notice that a local search step can only improve the cost function, so it is sufficient to show that the approximation ratio is achieved at some point in time. We initialize our centers using \(k\)-means++, which gives an \(O(\log k)\)-approximation in expectation. Thus, using Markov's inequality, the approximation guarantee \(O(\log k)\) holds with arbitrarily high constant probability. We say that a local-search step is _successful_ if it improves the cost by a factor of at least \(1-\Omega(1/k)\). Thanks to Lemma 6, we know that unless the algorithm has already achieved the desired approximation ratio, a local-search step is successful with probability \(\Omega(1/k^{p-1})\). To go from \(O(\log k)\) to \(\eta^{2}+\delta\) we need \(O(k\log\log k)\) successful local search steps. Standard concentration bounds on the value of a Negative Binomial random variable show that, with high probability, the number of trials needed to obtain \(O(k\log\log k)\) successful local-search steps is \(O(k^{p}\log\log k)\). Therefore, after \(O(k^{p}\log\log k)\) local-search steps we obtain an approximation ratio of \(\eta^{2}+\delta\).
To prove the running time bound it is sufficient to show that a local search step can be performed in time \(\widetilde{O}(ndk^{p-1})\). This is possible if we maintain, for each point \(x\in P\), a dynamic sorted dictionary3 storing the pairs \(\left(\texttt{cost}(x,c_{i})\,,c_{i}\right)\) for each \(c_{i}\in\mathcal{C}\). Then we can combine the exhaustive search over all possible size-\(p\) subsets of \(\mathcal{C}\cup In\) and the computation of the new cost function using time \(O(ndk^{p-1}\log k)\). To do so, we iterate over all possible size-\((p-1)\) subsets \(Z\) of \(\mathcal{C}\cup In\) and update all costs as if these centers were removed; then for each point \(x\in P\) we compute how much its cost increases if we remove its closest center \(c_{x}\) in \((\mathcal{C}\cup In)\setminus Z\) and charge that amount to \(c_{x}\). In the end, we consider \(Out=Z\cup\{c\}\) where \(c\) is the cheapest-to-remove center found in this way.

Footnote 3: Also known as a dynamic predecessor search data structure.

The rest of this section is devoted to proving Lemma 6. For convenience, we prove that Lemma 6 holds whenever \(\text{ALG}/\text{OPT}>\eta^{2}+O(\delta)\), which is wlog by rescaling \(\delta\). Recall that we now focus on a given step of the algorithm, and when we say current cost, current centers and current clusters we refer to the state of these objects at the end of the last local-search step before the current one. Let \(O_{1}^{*}\ldots O_{k}^{*}\) be an optimal clustering of \(P\) and let \(\mathcal{O}^{*}=\{o_{i}=\mu(O_{i}^{*})\mid\text{ for }i=1\ldots k\}\) be the set of optimal centers of these clusters. We denote with \(C_{1}\ldots C_{k}\) the current set of clusters at that stage of the local search and with \(\mathcal{C}=\{c_{1}\ldots c_{k}\}\) the set of their respective current centers. We say that \(c_{i}\)_captures_\(o_{j}\) if \(c_{i}\) is the closest current center to \(o_{j}\), namely \(c_{i}=\mathcal{C}[o_{j}]\). We say that \(c_{i}\) is _busy_ if it captures more than \(p\) optimal centers, and we say it is _lonely_ if it captures no optimal center. Let \(\widetilde{\mathcal{O}}=\{o_{i}\mid\mathtt{cost}(O_{i}^{*},\mathcal{C})>\delta\cdot\text{ALG}/k\}\) and \(\widetilde{\mathcal{C}}=\mathcal{C}\setminus\{\mathcal{C}[o_{i}]\mid o_{i}\in\mathcal{O}^{*}\setminus\widetilde{\mathcal{O}}\}\). For ease of notation, we simply assume that \(\widetilde{\mathcal{O}}=\{o_{1}\ldots o_{h}\}\) and \(\widetilde{\mathcal{C}}=\{c_{1}\ldots c_{h^{\prime}}\}\). Notice that \(h^{\prime}\geq h\).

Weighted ideal multi-swaps. Given \(In\subseteq P\) and \(Out\subseteq\widetilde{\mathcal{C}}\) of the same size, we say that the swap \((In,Out)\) is an _ideal_ swap if \(In\subseteq\widetilde{\mathcal{O}}\). We now build a set of _weighted_ ideal multi-swaps \(\mathcal{S}\). First, suppose wlog that \(\{c_{1}\ldots c_{t}\}\) is the set of current centers in \(\widetilde{\mathcal{C}}\) that are neither lonely nor busy. Let \(\mathcal{L}\) be the set of lonely centers in \(\widetilde{\mathcal{C}}\). For each \(i=1\ldots t\), we do the following. Let \(In\) be the set of optimal centers in \(\widetilde{\mathcal{O}}\) captured by \(c_{i}\). Choose a set \(\mathcal{L}_{i}\) of \(|In|-1\) centers from \(\mathcal{L}\), set \(\mathcal{L}\leftarrow\mathcal{L}\setminus\mathcal{L}_{i}\) and define \(Out=\mathcal{L}_{i}\cup\{c_{i}\}\). Assign weight \(1\) to \((In,Out)\) and add it to \(\mathcal{S}\).
For each busy center \(c_{i}\in\{c_{t+1}\ldots c_{h^{\prime}}\}\) let \(A\) be the set of optimal centers in \(\widetilde{\mathcal{O}}\) captured by \(c_{i}\), and pick a set \(\mathcal{L}_{i}\) of \(|A|-1\) lonely current centers from \(\mathcal{L}\) (a counting argument shows that this is always possible). Set \(\mathcal{L}\leftarrow\mathcal{L}\setminus\mathcal{L}_{i}\). For each \(o_{j}\in A\) and \(c_{\ell}\in\mathcal{L}_{i}\) assign weight \(1/(|A|-1)\) to \((o_{j},c_{\ell})\) and add it to \(\mathcal{S}\). Suppose we are left with \(\ell\) centers \(o_{1}^{\prime}\ldots o_{\ell}^{\prime}\in\widetilde{\mathcal{O}}\) such that \(\mathcal{C}[o_{i}^{\prime}]\not\in\widetilde{\mathcal{C}}\). Note that we have not yet included any \(o_{i}^{\prime}\) in a swap. However, since \(|\widetilde{\mathcal{C}}|\geq|\widetilde{\mathcal{O}}|\), we are left with at least \(\ell^{\prime}\geq\ell\) lonely centers \(c_{1}^{\prime}\ldots c_{\ell^{\prime}}^{\prime}\in\widetilde{\mathcal{C}}\). For each \(i=1\ldots\ell\) we assign weight \(1\) to \((o_{i}^{\prime},c_{i}^{\prime})\) and add it to \(\mathcal{S}\).

**Observation 7**.: _The process above generates a set of weighted ideal multi-swaps such that: (i) Every swap has size at most \(p\); (ii) The combined weight of swaps involving an optimal center \(o_{i}\in\widetilde{\mathcal{O}}\) is \(1\); (iii) The combined weight of swaps involving a current center \(c_{i}\) is at most \(1+1/p\)._

Consider an ideal swap \((In,Out)\). Let \(O_{In}^{*}=\bigcup_{o_{i}\in In}O_{i}^{*}\) and \(C_{Out}=\bigcup_{c_{j}\in Out}C_{j}\). Define the reassignment cost \(\mathtt{Reassign}(In,Out)\) as the increase in cost of reassigning points in \(C_{Out}\setminus O_{In}^{*}\) to centers in \(\mathcal{C}\setminus Out\). Namely,

\[\mathtt{Reassign}(In,Out)=\mathtt{cost}(C_{Out}\setminus O_{In}^{*},\mathcal{C}\setminus Out)-\mathtt{cost}(C_{Out}\setminus O_{In}^{*},\mathcal{C})\,.\]

We take the increase in cost of the following reassignment as an upper bound on the reassignment cost. For each \(p\in C_{Out}\setminus O_{In}^{*}\) we consider its closest optimal center \(\mathcal{O}^{*}[p]\) and reassign \(p\) to the current center that is closest to \(\mathcal{O}^{*}[p]\), namely \(\mathcal{C}[\mathcal{O}^{*}[p]]\). In formulas, we have

\[\mathtt{Reassign}(In,Out) \leq\sum_{p\in C_{Out}\setminus O_{In}^{*}}\mathtt{cost}(p,\mathcal{C}[\mathcal{O}^{*}[p]])-\mathtt{cost}(p,\mathcal{C}[p])\] \[\leq\sum_{p\in C_{Out}}\mathtt{cost}(p,\mathcal{C}[\mathcal{O}^{*}[p]])-\mathtt{cost}(p,\mathcal{C}[p])\,.\]

Indeed, by the way we defined our ideal swaps we have \(\mathcal{C}[\mathcal{O}^{*}[p]]\not\in Out\) for each \(p\not\in O_{In}^{*}\) and this reassignment is valid. Notice that the right-hand side in the equation above does not depend on \(In\).

**Lemma 8**.: \(\sum_{p\in P}\mathtt{cost}(p,\mathcal{C}[\mathcal{O}^{*}[p]])\leq 2\mathit{OPT}+\mathit{ALG}+2\sqrt{\mathit{ALG}}\sqrt{\mathit{OPT}}\)_._

Proof.: Deferred to the supplementary material.

**Lemma 9**.: _The combined weighted reassignment cost of all ideal multi-swaps in \(\mathcal{S}\) is at most \((2+2/p)\cdot(\mathit{OPT}+\sqrt{\mathit{ALG}}\sqrt{\mathit{OPT}})\)._

Proof.: Denote by \(w(In,Out)\) the weight associated with the swap \((In,Out)\).
\[\sum_{(In,Out)\in\mathcal{S}}w(In,Out)\cdot\texttt{Reassign}(In,Out)\leq\] \[\sum_{(In,Out)\in\mathcal{S}}w(In,Out)\cdot\sum_{p\in C_{Out}}\texttt{cost}(p,\mathcal{C}[\mathcal{O}^{*}[p]])-\texttt{cost}(p,\mathcal{C}[p])\leq\] \[(1+1/p)\cdot\sum_{c_{j}\in\mathcal{C}}\sum_{p\in C_{j}}\texttt{cost}(p,\mathcal{C}[\mathcal{O}^{*}[p]])-\texttt{cost}(p,\mathcal{C}[p])\leq\] \[(1+1/p)\cdot\left(\sum_{p\in P}\texttt{cost}(p,\mathcal{C}[\mathcal{O}^{*}[p]])-\texttt{ALG}\right).\]

The second inequality uses \((iii)\) from Observation 7. Applying Lemma 8 completes the proof.

Recall the notions of radius and core of an optimal cluster introduced in Section 2. We say that a swap \((In,Out)\) is _strongly improving_ if \(\texttt{cost}(P,(\mathcal{C}\cup In)\setminus Out)\leq(1-\delta/k)\cdot\texttt{cost}(P,\mathcal{C})\). Let \(In=\{o_{1}\ldots o_{s}\}\subseteq\widetilde{\mathcal{O}}\) and \(Out=\{c_{1}\ldots c_{s}\}\subseteq\widetilde{\mathcal{C}}\); we say that an ideal swap \((In,Out)\) is _good_ if for every \(q_{1}\in\texttt{core}(o_{1})\ldots q_{s}\in\texttt{core}(o_{s})\) the swap \((\mathcal{Q},Out)\) is strongly improving, where \(\mathcal{Q}=\{q_{1}\ldots q_{s}\}\). We call an ideal swap _bad_ otherwise. We say that an optimal center \(o_{i}\in\widetilde{\mathcal{O}}\) is good if that is the case for at least one of the ideal swaps it belongs to; otherwise we say that it is bad. Notice that each optimal center in \(\widetilde{\mathcal{O}}\) is assigned to a swap in \(\mathcal{S}\), so it is either good or bad. Denote with \(G\) the union of cores of good optimal centers in \(\widetilde{\mathcal{O}}\).

**Lemma 10**.: _If an ideal swap \((In,Out)\) is bad, then we have_

\[\texttt{cost}(O_{In}^{*},\mathcal{C})\leq(2+\delta)\texttt{cost}(O_{In}^{*},\mathcal{O}^{*})+\texttt{Reassign}(In,Out)+\delta\texttt{ALG}/k. \tag{2}\]

Proof.: Let \(In=\{o_{1}\ldots o_{s}\}\), \(\mathcal{Q}=\{q_{1}\ldots q_{s}\}\) such that \(q_{1}\in\texttt{core}(o_{1})\ldots q_{s}\in\texttt{core}(o_{s})\). Then, by Lemma 1, \(\texttt{cost}(O_{In}^{*},\mathcal{Q})\leq(2+\delta)\texttt{cost}(O_{In}^{*},\mathcal{O}^{*})\). Moreover, \(\texttt{Reassign}(In,Out)=\texttt{cost}(P\setminus O_{In}^{*},\mathcal{C}\setminus Out)-\texttt{cost}(P\setminus O_{In}^{*},\mathcal{C})\) because points in \(P\setminus C_{Out}\) are not affected by the swap. Therefore, \(\texttt{cost}(P,(\mathcal{C}\cup\mathcal{Q})\setminus Out)\leq(2+\delta)\texttt{cost}(O_{In}^{*},\mathcal{O}^{*})+\texttt{Reassign}(In,Out)+\texttt{cost}(P\setminus O_{In}^{*},\mathcal{C})\). Suppose by contradiction that Equation (2) does not hold; then

\[\texttt{cost}(P,\mathcal{C})-\texttt{cost}(P,(\mathcal{C}\cup\mathcal{Q})\setminus Out)=\] \[\texttt{cost}(P\setminus O_{In}^{*},\mathcal{C})+\texttt{cost}(O_{In}^{*},\mathcal{C})-\texttt{cost}(P,(\mathcal{C}\cup\mathcal{Q})\setminus Out)\geq\delta\texttt{ALG}/k.\]

Hence, \((\mathcal{Q},Out)\) is strongly improving and this holds for any choice of \(\mathcal{Q}\), a contradiction.

**Lemma 11**.: _If ALG/OPT \(>\eta^{2}+\delta\) then \(\texttt{cost}(G,\mathcal{C})=\Omega_{\delta}(\texttt{cost}(P,\mathcal{C}))\). Thus, if we \(D^{2}\)-sample \(q\) we have \(P[q\in G]=\Omega_{\delta}(1)\)._

Proof.: First, we observe that the combined current cost of all optimal clusters in \(\mathcal{O}^{*}\setminus\widetilde{\mathcal{O}}\) is at most \(k\cdot\delta\texttt{ALG}/k=\delta\texttt{ALG}\). Now, we prove that the combined current cost of all \(O_{i}^{*}\) such that \(o_{i}\) is bad is \(\leq(1-2\delta)\texttt{ALG}\).
Suppose, by contradiction, that this is not the case; then we have:

\[(1-2\delta)\texttt{ALG}<\sum_{\texttt{Bad}\,o_{i}\in\widetilde{\mathcal{O}}}\texttt{cost}(O_{i}^{*},\mathcal{C})\leq\sum_{\texttt{Bad}\,(In,Out)\in\mathcal{S}}w(In,Out)\cdot\texttt{cost}(O_{In}^{*},\mathcal{C})\leq\] \[\sum_{\texttt{Bad}\,(In,Out)}w(In,Out)\cdot((2+\delta)\texttt{cost}(O_{In}^{*},\mathcal{O}^{*})+\texttt{Reassign}(In,Out)+\delta\texttt{ALG}/k)\leq\] \[(2+\delta)\texttt{OPT}+(2+2/p)\texttt{OPT}+(2+2/p)\sqrt{\texttt{ALG}}\sqrt{\texttt{OPT}}+\delta\texttt{ALG}.\]

The second and last inequalities make use of Observation 7. The third inequality uses Lemma 10. Setting \(\eta^{2}=\texttt{ALG}/\texttt{OPT}\) we obtain the inequality \(\eta^{2}-(2+2/p\pm O(\delta))\eta-(4+2/p\pm O(\delta))\leq 0\). Hence, we obtain a contradiction in the previous argument as long as \(\eta^{2}-(2+2/p\pm O(\delta))\eta-(4+2/p\pm O(\delta))>0\). A contradiction there implies that at least a \(\delta\)-fraction of the current cost is due to points in \(\bigcup_{\texttt{Good}\,o_{i}\in\widetilde{\mathcal{O}}}O_{i}^{*}\). We combine this with Lemma 2 and conclude that the total current cost of \(G=\bigcup_{\texttt{Good}\,o_{i}\in\widetilde{\mathcal{O}}}\texttt{core}(O_{i}^{*})\) is \(\Omega_{\delta}(\texttt{cost}(P,\mathcal{C}))\).

Finally, we prove Lemma 6. Whenever \(q_{1}\in G\) we have that \(q_{1}\in\texttt{core}(o_{1})\) for some good \(o_{1}\). Then, for some \(s\leq p\) we can complete \(o_{1}\) with \(o_{2}\ldots o_{s}\) such that \(In=\{o_{1}\ldots o_{s}\}\) belongs to a good swap. Concretely, there exists \(Out\subseteq\mathcal{C}\) such that \((In,Out)\) is a good swap. Since \(In\subset\widetilde{\mathcal{O}}\) we have \(\texttt{cost}(O_{i}^{*},\mathcal{C})>\delta\textsc{OPT}/k\) for all \(o_{i}\in In\), which combined with Lemma 2 gives that for \(i=2\ldots s\)\(P[q_{i}\in\texttt{core}(o_{i})]\geq\Omega_{\delta}(1/k)\). Hence, we have \(P[q_{i}\in\texttt{core}(o_{i})\) for \(i=1\ldots s]\geq\Omega_{\delta,p}(1/k^{p-1})\). Whenever we sample \(q_{1}\ldots q_{s}\) from \(\texttt{core}(o_{1})\ldots\texttt{core}(o_{s})\), we have that \((\mathcal{Q},Out)\) is strongly improving. Notice, however, that \((\mathcal{Q},Out)\) is an \(s\)-swap and we may have \(s<p\). Nevertheless, whenever we sample \(q_{1}\ldots q_{s}\) followed by any sequence \(q_{s+1}\ldots q_{p}\), it is enough to choose \(Out^{\prime}=Out\cup\{q_{s+1}\ldots q_{p}\}\) to obtain that \((\{q_{1}\ldots q_{p}\},Out^{\prime})\) is an improving \(p\)-swap.

## 4 A Faster \((9+\varepsilon)\)-Approximation Local Search Algorithm

The MSLS algorithm from Section 3 achieves an approximation ratio of \(\eta^{2}+\varepsilon\), where \(\eta^{2}-(2+2/p)\eta-(4+2/p)=0\) and \(\varepsilon>0\) is an arbitrarily small constant. For large \(p\) we have \(\eta^{2}\approx 10.48\). On the other hand, employing \(p\) simultaneous swaps, Kanungo et al. (2004) achieve an approximation factor of \(\xi^{2}+\varepsilon\) where \(\xi^{2}-(2+2/p)\xi-(3+2/p)=0\). If we set \(p\approx 1/\varepsilon\) this yields a \((9+O(\varepsilon))\)-approximation. In the same paper, they prove that a \(9\)-approximation is indeed the best possible for \(p\)-swap local search, if \(p\) is constant (see Theorem \(3.1\) in Kanungo et al. (2004)). They showed that \(9\) is the right locality gap for local search, but they matched it with a very slow algorithm.
To achieve a \((9+\varepsilon)\)-approximation, they discretize the space, reducing to \(O(n\varepsilon^{-d})\) candidate centers, and perform an exhaustive search over all size-\((1/\varepsilon)\) subsets of candidates at every step. As we saw in the related work section, it is possible to combine techniques from coresets and dimensionality reduction to reduce the number of points to \(n^{\prime}=k\cdot poly(\varepsilon^{-1})\) and the number of dimensions to \(d^{\prime}=\log k\cdot\varepsilon^{-2}\). This reduces the complexity of Kanungo et al. (2004) to \(k^{O(\varepsilon^{-3}\log\varepsilon^{-1})}\). In this section, we leverage techniques from Cohen-Addad et al. (2021) to achieve a \((9+\varepsilon)\)-approximation faster4. In particular, we obtain the following.

Footnote 4: The complexity in Theorem 12 can be improved by applying the same preprocessing techniques using coresets and dimensionality reduction, similar to what can be used to speed up the approach of Kanungo et al. (2004). Our algorithm hence becomes asymptotically faster.

**Theorem 12**.: _Given a set of \(n\) points in \(\mathbb{R}^{d}\) with aspect ratio \(\Delta\), there exists an algorithm that computes a \(9+\varepsilon\)-approximation to \(k\)-means in time \(ndk^{O(\varepsilon^{-2})}\log^{O(\varepsilon^{-1})}(\Delta)\cdot 2^{poly(\varepsilon^{-1})}\)._

Notice that, besides being asymptotically slower, the pipeline obtained by combining known techniques is highly impractical and thus does not make for an experimental test-bed. Moreover, it is not obvious how to simplify such an ensemble of complex techniques to obtain a practical algorithm.

Limitations of MSLS. The barrier we need to overcome in order to match the bound in Kanungo et al. (2004) is that, while we only consider points in \(P\) as candidate centers, the discretization they employ also considers points in \(\mathbb{R}^{d}\setminus P\). In the analysis of MSLS we show that we sample each point \(q_{i}\) from \(\texttt{core}(O_{i}^{*})\), or equivalently that \(q_{i}\in B(o_{i},(1+\epsilon)\rho_{i})\), where \(\rho_{i}\) is such that \(O_{i}^{*}\) would have the same cost w.r.t. \(o_{i}\) if all its points were moved onto a sphere of radius \(\rho_{i}\) centered in \(o_{i}\). This allows us to use a Markov's-inequality kind of argument and conclude that there must be \(\Omega_{\epsilon}(|O_{i}^{*}|)\) points in \(O_{i}^{*}\cap B(o_{i},(1+\epsilon)\rho_{i})\). However, we have no guarantee that there is any point at all in \(O_{i}^{*}\cap B(o_{i},(1-\varepsilon)\rho_{i})\). Indeed, all points in \(O_{i}^{*}\) might lie on \(\partial B(o_{i},\rho_{i})\). The fact that potentially all our candidate centers \(q\) are at distance at least \(\rho_{i}\) from \(o_{i}\) yields (by Lemma 1) \(\texttt{cost}(O_{i}^{*},q)\geq 2\,\texttt{cost}(O_{i}^{*},o_{i})\), which causes the zero-degree term in \(\xi^{2}-(2+2/p)\xi-(3+2/p)=0\) from Kanungo et al. (2004) to become a \(4\) in our analysis.

Improving MSLS by taking averages. First, we notice that, in order to achieve a \((9+\varepsilon)\)-approximation, we need to set \(p=\Theta(1/\varepsilon)\). The main hurdle to achieving a \((9+\varepsilon)\)-approximation is that we need to replace the \(q_{i}\) in MSLS with a better approximation of \(o_{i}\). We design a subroutine that computes, with constant probability, an \(\varepsilon\)-approximation \(\hat{o}_{i}\) of \(o_{i}\) (namely, \(\texttt{cost}(O_{i}^{*},\hat{o}_{i})\leq(1+\varepsilon)\texttt{cost}(O_{i}^{*},o_{i})\)).
The key idea is that, if we sample uniformly \(O(1/\varepsilon)\) points from \(O_{i}^{*}\) and define \(\hat{o}_{i}\) to be the average of our samples, then \(\texttt{cost}(O_{i}^{*},\hat{o}_{i})\leq(1+\varepsilon)\texttt{cost}(O_{i}^{*},o_{i})\). However, we do not know \(O_{i}^{*}\), so sampling uniformly from it is non-trivial. To achieve that, for each \(q_{i}\) we identify a set \(N\) of _nice_ candidate points in \(P\) such that a \(poly(\varepsilon)/k\) fraction of them are from \(O_{i}^{*}\). We sample \(O(1/\varepsilon)\) points uniformly from \(N\) and thus with probability \((\varepsilon/k)^{O(1/\varepsilon)}\) we sample only points from \(O_{i}^{*}\). Thus far, we sampled \(O(1/\varepsilon)\) points uniformly from \(N\cap O_{i}^{*}\). What about the points in \(O_{i}^{*}\setminus N\)? We can define \(N\) so that all points in \(O_{i}^{*}\setminus N\) are either very close to some of the \((q_{j})_{j}\) or very far from \(q_{i}\). The points that are very close to points \((q_{j})_{j}\) are easy to treat. Indeed, we can approximately locate them and we just need to guess their mass, which matters only when it is \(\geq poly(\varepsilon)\)ALG, and so we pay only a \(\log^{O(1/\varepsilon)}(1/\varepsilon)\) multiplicative overhead to guess the mass close to \(q_{j}\) for \(j=1\ldots p=\Theta(1/\varepsilon)\). As for a point \(f\) that is very far from \(q_{i}\) (say, \(||f-q_{i}||\gg\rho_{i}\)), we notice that, although \(f\)'s contribution to \(\mathtt{cost}(O_{i}^{*},o_{i})\) may be large, we have \(\mathtt{cost}(f,o)\approx\mathtt{cost}(f,o_{i})\) for each \(o\in B(q_{i},\rho_{i})\subseteq B(o_{i},(2+\varepsilon)\rho_{i})\), assuming \(q_{i}\in\mathtt{core}(o_{i})\).

## 5 Experiments

In this section, we show that our new algorithm using multi-swap local search can be employed to design an efficient seeding algorithm for Lloyd's which outperforms both the classical \(k\)-means++ seeding and the single-swap local search from Lattanzi and Sohler (2019).

Algorithms. The multi-swap local search algorithm that we analyzed above performs very well in terms of solution quality. This empirically verifies the improved approximation factor of our algorithm, compared to the single-swap local search of Lattanzi and Sohler (2019). Motivated by practical considerations, we heuristically adapt our algorithm to make it very competitive with SSLS in terms of running time while remaining very close, in terms of solution quality, to the theoretically superior algorithm that we analyzed. The adaptation replaces the phase where the algorithm selects the \(p\) centers to swap out by performing an exhaustive search over \(\binom{k+p}{p}\) subsets of centers. Instead, we use an efficient heuristic procedure that greedily selects, one by one, the centers to swap out. Specifically, we select the first center to be the cheapest one to remove (namely, the one that increases the cost by the least amount once the points in its cluster are reassigned to the remaining centers). Then, we update all costs and select the next center iteratively. After \(p\) repetitions we are done. We perform an experimental evaluation of the "greedy" variant of our algorithm compared to the theoretically-sound algorithm from Section 3 and show that employing the greedy heuristic does not measurably impact performance.
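For concreteness, here is a minimal sketch of the greedy swap-out selection just described (our illustration; an efficient implementation would maintain the costs incrementally rather than recomputing them):

```python
import numpy as np

def greedy_swap_out(d2, p):
    # d2: (n, k) squared distances from each point to each current
    # center, with k > p. Returns the indices of the p centers whose
    # greedy removal increases the k-means cost the least.
    active = list(range(d2.shape[1]))
    removed = []
    for _ in range(p):
        base = d2[:, active].min(axis=1).sum()
        best, best_inc = None, np.inf
        for c in active:
            rest = [j for j in active if j != c]
            inc = d2[:, rest].min(axis=1).sum() - base  # reassignment cost
            if inc < best_inc:
                best, best_inc = c, inc
        active.remove(best)   # greedily drop the cheapest-to-remove center
        removed.append(best)
    return removed
```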
The four algorithms that we evaluate are the following: 1) **KM++:** the \(k\)-means++ from Arthur and Vassilvitskii (2007), 2) **SSLS:** the single-swap local search method from Lattanzi and Sohler (2019), 3) **MSLS:** the multi-swap local search from Section 3, and 4) **MSLS-G:** the greedy variant of multi-swap local search as described above. We use MSLS-G-\(p=x\) and MSLS-\(p=x\) to denote MSLS-G and MSLS with \(p=x\), respectively. Notice that MSLS-G-\(p=1\) is exactly SSLS. Our experimental evaluation explores the effect of \(p\)-swap LS, for \(p>1\), in terms of solution cost and running time.

Datasets. We consider the three datasets used in Lattanzi and Sohler (2019) to evaluate the performance of SSLS: 1) KDD-PHY - \(100,000\) points with \(78\) features representing a quantum physics task kdd (2004), 2) RNA - \(488,565\) points with \(8\) features representing RNA input sequence pairs Uzilov et al. (2006), and 3) KDD-BIO - \(145,751\) points with \(74\) features measuring the match between a protein and a native sequence kdd (2004). We discuss the results for two of our datasets, namely KDD-BIO and RNA. We defer the results on KDD-PHY to the appendix and note that the results are very similar to the results on RNA. We performed a preprocessing step to clean up the datasets. We observed that the standard deviation of some features was disproportionately high. This causes all costs to be concentrated in a few dimensions, making the problem, in some sense, lower-dimensional. Thus, we apply min-max scaling to all datasets and observe that this makes all our features' standard deviations comparable.

Experimental setting. All our code is written in Python. The code will be made available upon publication of this work. We did not make use of parallelization techniques. To run our experiments, we used a personal computer with \(8\) cores, a \(1.8\) GHz processor, and \(15.9\) GiB of main memory. We run all experiments 5 times and report the mean and standard deviation in our plots. All our plots report the progression of the cost either w.r.t. local search steps, or Lloyd's iterations. We run experiments on all our datasets for \(k=10,25,50\). The main body of the paper reports the results for \(k=25\), while the rest can be found in the appendix. We note that the conclusions of the experiments for \(k=10,50\) are similar to those of \(k=25\).

Removing centers greedily. First, we compare MSLS-G with MSLS. To perform our experiment, we initialize \(k=25\) centers using \(k\)-means++ and then run \(50\) iterations of local search for both algorithms, for \(p=3\) swaps. Due to the higher running time of MSLS, we perform this experiment on a 1% uniform sample of each of our datasets. We find that the performance of the two algorithms is comparable on all our instances, while both improve by roughly 15%-27% over the \(k\)-means++ solution at convergence. Figure 1 shows the aggregate results, over 5 repetitions of our experiment. It may happen that MSLS, which considers all possible swaps of size \(p\) at each LS iteration, performs worse than MSLS-G, as a sub-optimal swap at intermediate iterations may still lead to a better local optimum by coincidence. Given that MSLS-G performs very comparably to MSLS, while it is much faster in practice, we use MSLS-G for the rest of our experiments where we compare to baselines. This allows us to consider higher values of \(p\), without compromising much the running time.
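As a side note, the per-feature min-max scaling used in the preprocessing described above can be sketched as follows (our illustration):

```python
import numpy as np

def min_max_scale(X):
    # Per-feature min-max scaling; constant features are left unchanged.
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)
```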
Results: Evaluating the quality and performance of the algorithms. In our first experiment, we run KM++ followed by \(50\) iterations of MSLS-G with \(p=1,4,7,10\) and plot the relative cost w.r.t. KM++ at each iteration, for \(k=25\). The first row of Figure 2 plots the results. Our experiment shows that, after \(50\) iterations, MSLS-G for \(p=4,7,10\) achieves improvements of roughly \(10\%\) compared to MSLS-G-\(p=1\) and of the order of \(20\%-30\%\) compared to KM++. We also report the time per iteration that each algorithm takes. For comparison, we report the running time of a single iteration of Lloyd's next to the dataset's name. It is important to notice that, although MSLS-G-\(p=1\) is faster, running more iterations of MSLS-G-\(p=1\) is not sufficient to compete with MSLS-G when \(p>1\).

Results: Evaluating the quality after postprocessing using Lloyd's. In our second experiment, we use KM++ and MSLS-G as seeding algorithms for Lloyd's and measure how much of the performance improvement measured in the first experiment is retained after running Lloyd's. First, we initialize our centers using KM++ and then run \(15\) iterations of MSLS-G for \(p=1,4,7\). We measure the cost achieved by running \(10\) iterations of Lloyd's starting from the solutions found by MSLS-G as well as KM++. In Figure 2 (second row) we plot the results. Notice that, according to the running times from the first experiment, \(15\) iterations of MSLS-G take less than \(10\) iterations of Lloyd's for \(p=4,7\) (and also for \(p=10\), except on RNA). We observe that MSLS-G for \(p>1\) performs at least as well as SSLS from Lattanzi and Sohler (2019) and in some cases maintains non-trivial improvements.

Results: Evaluating the quality and performance of the algorithms against a fixed deadline. In this experiment, we run KM++ followed by MSLS-G with \(p=1,4,7,10\), for a set of fixed amounts of time. This setting allows the versions of MSLS-G with smaller swap size to perform more iterations compared to the versions of the algorithm with a larger swap size, as a smaller swap size leads to a lower running time per iteration. Let \(\tau\) be the average time that Lloyd's algorithm requires to complete a single iteration on a specific instance. We plot the cost of the solution produced by each algorithm after running for \(\lambda\times\tau\), for each \(\lambda\in\{1,\cdots,20\}\), in Figure 3. Our experiment shows that MSLS-G for \(p=4,7,10\) achieves improvements of more than \(5\%\) compared to MSLS-G-\(p=1\) even when compared against a fixed running time, and of the order of \(20\%-30\%\) compared to KM++.

A very interesting open question is to improve our local search procedure by avoiding the exhaustive search over all possible size-\(p\) subsets of centers to swap out, concretely obtaining an algorithm with running time \(\tilde{O}(2^{poly(1/\varepsilon)}ndk)\).
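The seeding pipeline of the second experiment can be sketched as follows, with scikit-learn providing KM++ and Lloyd's; `msls_g` is a placeholder name of ours for the greedy local-search routine, not a function from the paper's code:

```python
from sklearn.cluster import KMeans, kmeans_plusplus

def seed_and_refine(X, k=25, p=4, ls_iters=15, lloyd_iters=10, seed=0):
    # KM++ seeding
    centers, _ = kmeans_plusplus(X, n_clusters=k, random_state=seed)
    # refine the seeding with greedy multi-swap local search (MSLS-G);
    # `msls_g` is assumed to repeatedly apply the greedy swap-out step above
    centers = msls_g(X, centers, p=p, iterations=ls_iters)
    # postprocess with a bounded number of Lloyd's iterations
    km = KMeans(n_clusters=k, init=centers, n_init=1, max_iter=lloyd_iters).fit(X)
    return km.cluster_centers_, km.inertia_
```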
2309.07289
User Training with Error Augmentation for Electromyogram-based Gesture Classification
We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wrist-band configuration. sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time. After an initial model calibration, participants were presented with one of three types of feedback during a human-learning stage: veridical feedback, in which predicted probabilities from the gesture classification algorithm were displayed without alteration, modified feedback, in which we applied a hidden augmentation of error to these probabilities, and no feedback. User performance was then evaluated in a series of minigames, in which subjects were required to use eight gestures to manipulate their game avatar to complete a task. Experimental results indicated that, relative to baseline, the modified feedback condition led to significantly improved accuracy and improved gesture class separation. These findings suggest that real-time feedback in a gamified user interface with manipulation of feedback may enable intuitive, rapid, and accurate task acquisition for sEMG-based gesture recognition applications.
Yunus Bicer, Niklas Smedemark-Margulies, Basak Celik, Elifnur Sunger, Ryan Orendorff, Stephanie Naufel, Tales Imbiriba, Deniz Erdoğmuş, Eugene Tunik, Mathew Yarossi
2023-09-13T20:15:25Z
http://arxiv.org/abs/2309.07289v3
# User Training with Error Augmentation for Electromyogram-based Gesture Classification

###### Abstract

We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wrist-band configuration. sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time. After an initial model calibration, participants were presented with one of three types of feedback during a human-learning stage: veridical feedback, in which predicted probabilities from the gesture classification algorithm were displayed without alteration; modified feedback, in which we applied a hidden augmentation of error to these probabilities; and no feedback. User performance was then evaluated in a series of minigames, in which subjects were required to use eight gestures to manipulate their game avatar to complete a task. Experimental results indicated that, relative to baseline, the modified feedback condition led to significantly improved accuracy and improved gesture class separation. These findings suggest that real-time feedback in a gamified user interface with manipulation of feedback may enable intuitive, rapid, and accurate task acquisition for sEMG-based gesture recognition applications.

Myoelectric control, Gesture recognition, Human-computer interaction, Error augmentation, Co-adaptation, Surface Electromyography (sEMG)

## I Introduction

Surface electromyography (sEMG) provides a convenient sensor modality for human-computer interaction (HCI) applications [1]. In the past two decades, research efforts have sought to translate the electrical activity associated with muscle contraction into control commands for general-use computing, prosthetic control, and motor rehabilitation [2, 3]. Traditional approaches to EMG-based gesture recognition assumed stationarity of the muscle activation to gesture mapping, and did not consider the user's ability to adapt their behavior to feedback about the performance of the algorithm used for gesture classification. The emergence of co-adaptive learning algorithms in the past decade represented a marked shift, acknowledging the human and machine learning as parts of an integrated system [4, 5, 6, 7, 8]. One key finding from these approaches is that when the human receives continuous feedback about the mapping of muscle activation to gesture, they can increase classification performance [9] through behavioral adaptation to increase class separability [10] or increase movement repeatability [11]. However, to date, little attention has been paid to how feedback about classifier performance impacts the behavioral adaptations of the human learner. The ability to shape human behavioral adaptation and motor skill learning through the use of augmented feedback is well established. Strategies such as error augmentation [12, 13, 14] and reward manipulation [15, 16] have been shown to affect the rate and retention of learning as well as behavioral variability. Yet, to our knowledge, the use of augmented feedback has not been tested for co-adaptation approaches to EMG-based gesture recognition. In this study, subjects were given real-time feedback on predicted gesture probabilities from a machine learning model under three conditions: no feedback, veridical feedback, and modified feedback via error augmentation.
Participants were asked to freely explore gesture positions while viewing feedback in order to find optimal muscle activation patterns for producing a desired gesture using a fixed model. Modified feedback was produced by a hidden softening of the model probabilities toward a uniform distribution, to guide participants to increase the class separability of their muscle activation patterns. We hypothesized that users who explored hand positions under the more challenging modified feedback condition would learn to create more distinct gestures, thereby improving future classification performance.

## II Experimental Design

All protocols were approved by the Northeastern University Institutional Review Board (IRB number 15-10-22) in conformance with the Declaration of Helsinki.

### _Subjects_

Forty-four right-handed subjects (21 male / 23 female, mean age \(\pm\) 1 standard deviation: \(20.9\pm 4.3\) years) participated after providing IRB-approved written informed consent. Subjects were free of orthopedic or neurological disease that could interfere with the task and had normal or corrected-to-normal vision.

### _Experimental Setup_

Subjects viewed a computer display while seated at a table with their right arm positioned comfortably in an armrest trough. Surface electromyography (sEMG) (Trigno, Delsys Inc., sampling frequency: 1926 Hz) was collected from the muscles of the right forearm. Eight sEMG electrodes were positioned at equidistant positions around the circumference of the forearm, at a four finger-width distance from the ulnar styloid (the subject's left hand was wrapped around the right forearm at the ulnar styloid to determine the sEMG placement). The first electrode was placed mid-line on the dorsal aspect of the forearm, and the other electrodes were then equally spaced (see Figure 1).

### _Data Acquisition_

Subjects were randomly assigned to one of three groups and performed a series of tasks as described below. Subjects who were unable to complete all tasks were excluded from further analysis. Each subject group was assigned a different feedback condition: no feedback ("Control", N=\(14\)), veridical feedback ("Veridical", N=\(14\)), or modified feedback ("Modified", N=\(16\)) (see Section II-C4 for details).

#### II-C1 Gesture Timing

Subjects performed a series of tasks composed of one or more gesture trials to move an avatar dice (see details of the user interface below). Prior to the start of a trial, the subject's forearm and wrist rested in a pronated position on the trough with the wrist neutral. In each trial, subjects were required to rest or to produce one of eight active gestures (label and action provided in brackets): index-thumb pinch ["Pinch", decrease number on avatar dice], index-thumb key press ["Thumb", increase number on avatar dice], closed fist ["Fist", decrease size of avatar dice], full finger extension ["Open", increase size of avatar dice], wrist extension ["Up", move up], wrist flexion ["Down", move down], wrist ulnar deviation ["Left", move left], wrist radial deviation ["Right", move right]. Each trial began with a 'prompting' epoch (3 sec) cued by a yellow bounding box on the participant's display and a picture of the instructed gesture (Calibration and Instructed blocks only, see below), followed by a 'gesture production' epoch (2 sec) cued by a green bounding box, and a 'recovery' epoch (3 sec) cued by a red bounding box. See Figure 2 for example timing. Each session was divided into four blocks.
#### II-C2 Block One: Calibration

Subjects from all groups were instructed to perform five consecutive repetitions of each active gesture and eight repetitions of a rest gesture ["Rest"], in which they were asked to relax the hand. This consecutive structure was chosen to help keep the task simple while the participant initially learned the set of available gestures. A classification model was trained on this small dataset before continuing to the next experimental block.

#### II-C3 Block Two: Instructed Games

Subjects from all groups engaged in four practice mini-games. In each mini-game, subjects were instructed to perform a sequence of six gestures to bring an avatar that was shown on the computer screen from a starting position to a desired goal state (e.g. see Figure 3). The trial timing epochs (prompting, gesture production, and rest) were as described above. In this block, the classifier model's predicted probabilities were displayed as post-hoc feedback to the user, but were not used to modify the avatar position or state; the avatar always moved one step closer to the goal after each trial, so that each game lasted exactly six moves. These games were structured so that the \(24\) total gestures (4 games with \(6\) moves each) were evenly distributed among the \(8\) active gestures. After this block, the classification model was retrained using the labeled data from blocks one and two. This training set comprised \(8\) examples for each of the \(9\) classes (\(8\) active gestures and "Rest").

#### II-C4 Block Three: Live Feedback

Only subjects in the veridical feedback and modified feedback groups participated in this block. Subjects engaged in mini-games while receiving different types of real-time feedback (with a \(30\) sec 'gesture production' epoch in which to produce active gestures). Subjects were asked to freely explore their hand posture in order to maximize the predicted probability of the current gesture class, visible as real-time output of the trained model. For the veridical feedback group, predicted class probabilities were displayed without modification, whereas the modified feedback group was shown probabilities that were softened. This softening procedure is described in detail in Section III-C, and serves to adjust the model's predicted probabilities towards a uniform distribution, with the hope that this encourages participants to compensate by performing more precise gestures. Subjects in the modified feedback group were not informed about this softening procedure.

Fig. 1: Electrode Placement. sEMG data is collected using \(8\) Delsys Trigno sEMG sensors uniformly spaced around the right forearm.

Fig. 2: Gesture Trial Timing. In the yellow 'prompting' epoch, the subject sees an instruction. In the green 'gesture production' epoch, the subject performs the gesture. In the red 'recovery' epoch, the subject returns to the rest position. Features for classification are extracted from the last \(500\) ms of gesture production to help ensure that steady-state features are collected.

#### II-C5 Block Four: Free Games

All subjects were instructed to perform a series of \(12\) mini-games. The mini-games had the same structure as in block two, with each game requiring a minimum of six moves to bring the avatar from its starting position to a desired goal state. However, unlike the practice mini-games of block two, subjects were tasked with bringing the avatar to its goal state by planning and producing a gesture sequence of their choice.
Critically, the avatar only changed its state when the classifier assigned one class a predicted probability above a decision threshold of \(0.5\). The experimenter manually recorded each attempted gesture to serve as labels for subsequent analysis, and the participant's hand movements were also recorded on video to cross-check these labels.

## III Signal Modeling

### _Feature Extraction_

As described in Section II-C1, we extracted raw data for classification from the final \(500\) ms of the active gesture production period of each gesture trial. From each of the \(8\) sensor channels of raw sEMG, we computed the Root-Mean-Square (RMS) and the median frequency after Fourier transform, resulting in \(16\)-dimensional features. Given a data vector \(x\), RMS is defined as: \[\text{RMS}(x)=\sqrt{\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2}}\enspace. \tag{1}\] The Median Power Frequency is defined as the frequency value \(f_{\textsc{med}}\) that divides the Power Spectral Density (PSD) into two regions with equal power [17]: \[\int_{0}^{f_{\textsc{med}}}\text{PSD}(f)df=\int_{f_{\textsc{med}}}^{\infty} \text{PSD}(f)df=\frac{1}{2}\int_{0}^{\infty}\text{PSD}(f)df\enspace. \tag{2}\]

### _Classification Model_

Given extracted features, we used a two-stage classification pipeline to predict among \(9\) possible gestures: Up, Thumb, Right, Pinch, Down, Fist, Left, Open, Rest. The classification model consisted of an encoder formed from Support Vector Machine (SVM) models that produced a latent representation, and a logistic regression classifier that produced predicted class probabilities. In the encoder portion of the model, we trained a one-vs-one (OVO) SVM classifier [18] for each of the \(\binom{9}{2}=36\) pairs of gestures. Each of these OVO-SVM models produced a scalar output (representing the probability of assigning to the first of its two classes); these \(36\) scalars were stacked into a latent vector and passed to the logistic regression model. Given a supervised training dataset, we first fit the one-vs-one SVM models using linear programming with the CVXPY Python library [19]. The linear programming objective we used was based on the semi-supervised SVM formulation of [20], to allow future semi-supervised extensions. Specifically, the SVM parameters were trained according to the following optimization problem: \[\min_{w,b,\eta}C\sum_{i=1}^{l}\eta_{i}+\frac{1}{2}\|w\|^{2} \tag{3}\] \[\text{s.t.}\;\;y_{i}(wx_{i}-b)+\eta_{i}\geq 1,\;\;\eta_{i}\geq 0,\;\;i=1,\ldots,l\] where \(w,b\) were the parameters to be optimized, \(\eta_{i}\) were slack variables allowing misclassification of individual points, and \(C>0\) is a fixed penalty parameter controlling the margin's strictness. We implemented the logistic regression classifier with the PyTorch Python library [21] using a single linear layer and a SoftMax function. After the SVM encoder portion of the model was trained, it was held fixed while the logistic regression classifier model was trained by stochastic gradient descent to minimize the cross-entropy loss. We trained the classifier model for \(1000\) epochs with a batch size of \(20\) and the AdamW [22] optimizer. See Algorithm 1 for a summary of our classifier training procedure.
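As a concrete illustration of the feature extraction in Eqs. (1) and (2), here is a minimal NumPy sketch for one \(500\) ms window of \(8\)-channel sEMG; the function name and interface are ours, not taken from the paper's code:

```python
import numpy as np

def extract_features(window, fs=1926.0):
    """RMS (Eq. 1) and median power frequency (Eq. 2) for each channel.

    `window` has shape (n_samples, n_channels); returns a vector of
    2 * n_channels features (8 channels -> 16 features).
    """
    rms = np.sqrt(np.mean(window ** 2, axis=0))          # Eq. (1), per channel
    psd = np.abs(np.fft.rfft(window, axis=0)) ** 2       # one-sided power spectrum
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / fs)
    cum = np.cumsum(psd, axis=0)
    half = cum[-1] / 2.0                                 # half of the total power
    med_idx = [np.searchsorted(cum[:, c], half[c]) for c in range(window.shape[1])]
    f_med = freqs[med_idx]                               # Eq. (2), per channel
    return np.concatenate([rms, f_med])
```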
Smoothing. As noted, participants in the veridical feedback and modified feedback groups were shown real-time output from the model. Due to the high sampling frequency of the sEMG sensors used, and the relatively computationally simple prediction model, the system was capable of making very fast adjustments to the predicted output, which can result in unwanted jitter due to slight fluctuations in the raw signal or hand positioning. Therefore, we used an exponential moving average (EMA) to smooth the model's predictions in time. At time-step \(t\), the model produces a raw probability vector \(\tilde{P}^{(t)}\), which is then mixed with the previous probability vector using a momentum parameter \(\lambda\) to produce a smoothed vector \(P^{(t)}_{\textsc{EMA}}\): \[P^{(t)}_{\textsc{EMA}}=\lambda P^{(t-1)}_{\textsc{EMA}}+(1-\lambda)\tilde{P}^{(t)}. \tag{4}\] For values of \(\lambda\) close to \(1\), this causes the probability vector to update more slowly.

### _Modified Feedback_

As mentioned above, subjects in the modified feedback group were shown modified real-time output from the trained classifier during Block Three of the experiment. Specifically, the vector of predicted probabilities from the model was modified according to the following formula: \[P_{\text{MODEL}}=\frac{[P_{\text{EMA}}]^{m}}{\sum\limits_{c\in C}[P_{\text{EMA}}]^{m}}, \tag{5}\] where the modification exponent \(m\) was set to \(0.75\), and \(C\) represents the \(8\) classes used.

Fig. 3: Example mini game. The blue player avatar must be moved to match the gray target avatar. The minimal path includes moving right, down twice, decreasing the die number (using a pinch gesture) and reducing size (using a fist gesture).

### _User Interface and Software Design_

Figure 4 shows the user interface (UI) displayed to participants. All components of the UI were implemented using the PyQt [23] Python package. On the top left, the UI displayed an instructed gesture via image and text during blocks one and two (see Sections II-C2 and II-C3). On the bottom left, the UI showed post-hoc predicted probabilities for each gesture as a radial plot. The length of each line was scaled according to the value; the outer circle represented a value of \(1\), and the inner circle represented a value of \(0.5\) (i.e. the model's decision threshold). The opacity of the gesture images around the radial plot was also scaled according to the value. The outer edge of the UI was colored yellow, green, or red to indicate the gesture timing epoch as described in Section II-C1. On the right of the UI was the task window in which the mini-games were played during blocks two and four (see Sections II-C3 and II-C5). As described previously, participants used one of \(8\) active gestures to move their avatar (the blue die). The goal of each mini-game in blocks two and four was to use these gestures to match the blue die to the gray target die.

Error Augmentation in Live Feedback. During block three (see Section II-C4), participants who received real-time feedback were presented with a different display, as shown in Figure 5. Here, the probability of each class was displayed using a bar plot that was updated in real-time. The participant's goal during this block of the experiment was to explore hand positions in order to maximize the predicted probability of the current gesture class.
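A minimal sketch of the smoothing and flattening steps in Eqs. (4) and (5); \(m=0.75\) is as stated above, while the momentum value here is illustrative, since the paper does not report the \(\lambda\) that was used:

```python
def smooth_and_modify(p_raw, p_prev, lam=0.9, m=0.75):
    """EMA smoothing (Eq. 4) followed by the hidden flattening (Eq. 5)."""
    p_ema = lam * p_prev + (1.0 - lam) * p_raw   # Eq. (4): slow updates for lam near 1
    p_mod = p_ema ** m                           # exponent m < 1 pulls values together
    return p_ema, p_mod / p_mod.sum()            # Eq. (5): renormalize to a distribution
```

The veridical group would be shown \(P_{\textsc{EMA}}\) directly, while the modified group would be shown the flattened, renormalized vector.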
Data Streaming with LabGraph. Our experimental platform was built using the LabGraph [24] Python package. The software was implemented as a collection of LabGraph nodes to separately manage raw data collection, preprocessing, feature extraction, classification, and UI. These nodes were connected using a declarative syntax in a directed acyclic graph. At runtime, LabGraph spawns processes to run each node and automates both the delivery and logging of messages.

### _Classifier Metrics_

As mentioned in Section II-C5, the experimenter recorded each intended gesture made by the participant, so that model accuracy could be evaluated after the fact. Accuracy was defined as the fraction of correctly classified items. In addition to the \(8\) active gestures and the "rest" class, the decision threshold of \(0.5\) that was used resulted in another possible outcome for gesture trials, in which no gesture rose above the decision threshold; we refer to this outcome as "NoClass". Gesture trials in which the subject was not prepared to make a gesture during the "gesture production" epoch were recorded as having a true label of "rest".

### _Feature-Space Class Structure_

To evaluate how feedback affects human learning, we analyzed the feature-space distribution of trials from different gestures performed in block four of the experiment. This feature-space representation does not depend on the model, since these features are obtained using simple, deterministic transformations of the raw data (RMS and median frequency after Fourier transform). The differences in feature-space class structure across treatment groups can therefore give information about human learning.

Fig. 4: The participant User Interface. Top left: instructed gesture. Bottom left: predicted gesture probabilities. Right: Task window including subject's avatar and target. Outer edge: gesture epoch indicator.

Fig. 5: Top: Real-time probability feedback window. The horizontal line at \(0.5\) shows the decision threshold. Bottom: Example of probability values without modification ("Veridical") and with modification ("Modified") as described in Sec. III-C.

Kernel Similarities. We base our analysis of feature-space structure on a Radial Basis Function (RBF) kernel similarity measure. The RBF kernel computes a similarity measure which corresponds to an implicit infinite-dimensional vector space. For two feature vectors \(x,x^{\prime}\) belonging to a dataset \(X\) and a length scale parameter \(\gamma\in\mathbb{R}\), the RBF kernel similarity is computed as: \[RBF(x,x^{\prime},\gamma)=\exp\left(-\gamma\|x-x^{\prime}\|^{2}\right). \tag{6}\] The length scale \(\gamma\) is an important hyperparameter which determines the rate at which similarities decay as two points are moved farther apart. We follow the so-called "median heuristic" [25], in which \(\gamma\) is set based on the median length scale of a dataset \(X\): \[\gamma_{\text{MED}}=1/\text{med}(\|x-x^{\prime}\|^{2},\ \forall\ (x,x^{\prime})\in\{X\times X\}). \tag{7}\] We set \(\gamma_{\text{MED}}\) individually for each subject, based on all of their pooled gesture trials.

Class Similarity Matrices. We use this notion of kernel similarity to construct a class similarity matrix for each subject. For classes \(C_{1},\ldots,C_{\mathcal{C}}\), we build a square, symmetric matrix \(D\in\mathbb{R}^{(\mathcal{C}\times\mathcal{C})}\) such that the entry at position \((i,j)\) describes the average RBF kernel similarity between items in classes \(C_{i}\) and \(C_{j}\): \[D_{ij}=\frac{1}{|C_{i}||C_{j}|}\sum_{x\in C_{i}}\sum_{x^{\prime}\in C_{j}}RBF(x,x^{\prime},\gamma_{\text{MED}}). \tag{8}\] After computing the entries in a similarity matrix, we normalize the entries to the range \([0,1]\) so that these matrices may be easily compared across subjects and groups.
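The construction in Eqs. (6)-(8), together with the scalar separation measure \(d_{\text{SEP}}\) defined in the next subsection, can be sketched as follows (function names are ours, not from the paper's code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def class_similarity_matrix(X, y):
    """Normalized class similarity matrix D (Eq. 8) with the RBF kernel
    (Eq. 6) and the median-heuristic length scale (Eq. 7)."""
    sq = cdist(X, X, metric="sqeuclidean")
    gamma = 1.0 / np.median(sq)                        # Eq. (7), over all pairs
    K = np.exp(-gamma * sq)                            # Eq. (6)
    classes = np.unique(y)
    D = np.zeros((len(classes), len(classes)))
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            D[i, j] = K[np.ix_(y == ci, y == cj)].mean()   # Eq. (8)
    D = (D - D.min()) / (D.max() - D.min())            # normalize entries to [0, 1]
    return D, classes

def class_separation(D):
    """Scalar separation d_SEP (Eq. 9): mean within-class similarity
    divided by mean between-class similarity."""
    n = D.shape[0]
    return np.mean(np.diag(D)) / np.mean(D[np.triu_indices(n, k=1)])
```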
Classes which are closer together in feature space will have a higher average similarity and therefore a larger entry in this similarity matrix. A subject whose gestures are easily classifiable may tend to have precise gestures that are also well-separated from each other. This would result in having high average similarity between trials in the same gesture class (diagonal entries of the class similarity matrix) and low average similarity between trials of different classes (off-diagonal entries). See Section IV-D for class similarity matrices from each experimental group, and see Figure 6 for didactic examples of the similarity matrix \(D\).

Scalar Class Separation Measure. In order to look for trends in the feature-space distribution over time and to identify global trends across groups, we also summarize these normalized class similarity matrices using a scalar class separation measure, \(d_{\text{SEP}}\), which we define as the average within-class similarity divided by the average between-class similarity. Given a normalized similarity matrix \(D\) as described above, \[d_{\text{SEP}}=\left(\frac{1}{N}\sum_{i=1}^{N}D_{ii}\right)/\left(\frac{2}{N(N-1)}\sum_{i=2}^{N}\sum_{j=1}^{i-1}D_{ij}\right). \tag{9}\] As indicated above, larger within-class similarities indicate that trials from the same gesture are precise and repeated with high fidelity, while smaller between-class similarities indicate that trials from different gestures are easily distinguished. Thus, a dataset with a larger value of \(d_{\text{SEP}}\) may contain gestures that will be more easily classified. In Figure 6, we show examples of the class similarity matrix \(D\) and the scalar similarity measure \(d_{\text{SEP}}\). To produce an example that can be easily visualized, we select a subject from the "Modified" condition that showed a large improvement in feature-space separation. For this subject, we select three gestures ("Left", "Down", and "Right") and three features (RMS value from electrodes 1, 4, and 7). In the top row, we show metrics for this subject's data during the "Calibration" and "Instructed" blocks, and in the bottom row, we show metrics from the "Free" block; recall that the subject experiences live feedback training after the "Instructed" block. We observe that the features of each class become more distinct after the user performs live feedback training; this is captured as an increase in the similarities on the diagonal of \(D\) and a decrease in similarities off-diagonal. These changes in \(D\) are also summarized in \(d_{\text{SEP}}\), which increases from \(2.8\) to \(3.55\).

Fig. 6: Didactic example for class similarity matrices \(D\) and scalar class separation measure \(d_{\text{SEP}}\). For a chosen subject from the Modified condition, we analyze \(3\) of the original \(16\) features (RMS value from electrodes 1, 4, and 7) and a subset of gestures ("Left", "Down", and "Right"). Top row: features from calibration and instructed blocks. Bottom row: features from free games. Left: Scatter plot of \(3\)-dimensional features, and scalar class separation value. Right: The corresponding class similarity matrix.

### _Within-Subject Normalization_

The focus of this work is to measure the effect of the proposed veridical and modified feedback interventions on user performance. We note that overall subject performance may be influenced by a relatively large number of factors of variation, such as factors affecting dexterity and motor precision, subject motor learning speed, and subject-intrinsic factors affecting raw sEMG signal-to-noise ratio.
Thus, a prohibitively large sample size may be required to natively account for this variation. We instead adopt a within-subject normalization strategy, obtaining baseline statistics for each subject using only data measured _before_ our interventions. For each subject, we measure baseline accuracy by training a model from scratch using that subject's block one data (calibration, Section II-C2), and testing this model's classification accuracy on the subject's block two data (instructed games, Section II-C3). We obtain baselines for class similarity matrices in the same manner. Within each subject, we collect all gesture trials from the first two experimental blocks, and compute a normalized class similarity matrix. This is subtracted from the matrix computed using data from block four (free games, Section II-C5) to visualize the difference in similarity for each class. Note that due to the short experimental design, we have relatively few samples per class with which to construct each matrix, and therefore this representation may be somewhat noisy. We transform the normalized similarity matrix describing blocks one and two into the scalar class separation measure \(d_{\text{SEP}}\), and likewise transform the similarity matrix describing block four. This results in a baseline-subtracted class separation measure. Overall, we measure changes from baseline as follows: \[\Delta\text{Acc} =\text{Acc}_{\text{FREE}}-\text{Acc}_{\text{BASELINE}} \tag{10}\] \[\Delta D =D_{\text{FREE}}-D_{\text{BASELINE}}\] \[\Delta d_{\text{SEP}} =d_{\text{SEP, FREE}}-d_{\text{SEP, BASELINE}}\]

### _Statistical Analysis_

We performed several pre-planned statistical analyses to determine the effect of feedback on classification accuracy and feature-space class separation. Differences between Feedback Groups at baseline (\(\text{Acc}_{\text{BASELINE}}\), \(D_{\text{BASELINE}}\)) were analyzed using one-way ANOVAs. Likewise, the effect of Feedback Group on change scores (\(\Delta\text{Acc}\), \(\Delta D\)) was analyzed with one-way ANOVAs. The alpha level was set at \(0.05\). Significant findings were further analyzed using post-hoc paired comparisons with Bonferroni correction for multiple comparisons. One-sided one-sample t-tests with Bonferroni correction for multiple comparisons (\(\alpha=0.0167\)) were used on change scores to test whether each Feedback Group significantly increased accuracy and class separation.

## IV Results

All participants were able to successfully complete the experiment, with no reported adverse events.

### _Group Baselines_

A one-way ANOVA indicated no significant differences in baseline accuracy (\(F(2,43)=1.15\), \(P=0.326\)) or class separation (\(F(2,43)=0.86\), \(P=0.443\)) between Feedback Groups. Figure 7 shows a group-level summary of the baseline accuracy and class separation measure. Though no significant differences were found, mean baseline accuracy and class separation scores were greatest in the Control Group and smallest in the Modified Group.

### _Effects of Feedback_

Individual one-sided one-sample t-tests were used to test for significant improvement in Free block performance from baseline (Bonferroni corrected for 3 comparisons, \(\alpha=0.0167\)). For accuracy, only the Modified Group showed significant improvement (\(t(13)=2.566\), \(P=.012\)). No group showed a significant improvement in class separation.
One-way ANOVAs indicated no significant between-group differences in \(\Delta\text{Acc}\) (\(F(2,43)=0.413\), \(P=0.665\)) or \(\Delta D\) (\(F(2,43)=1.309\), \(P=0.281\)). Figure 8 shows the average change from baseline performance in each experimental group, as measured by accuracy of gesture classification (left panel) and the feature-space class separation measure (right panel). These data demonstrate that, on average, the increase in performance over the course of the experiment was greatest for subjects in the modified feedback group. Note that the variation between subjects is relatively high, resulting in overlapping estimates of mean performance.

Fig. 7: Baseline Performance. Left: Accuracy. Right: Scalar class separation measure \(d_{\text{SEP}}\). Boxplots show median and quartiles; dotted lines show mean. Note the relative difference in subject baseline task performance, visible as a gap in baseline accuracy. This discrepancy (due to random group assignment and low subject number) indicates the need for within-subject normalization, as described in Section III-G. See Section IV-A for statistical analysis.

### _Class Confusion_

Figure 9 shows the group average confusion matrices of gesture trials during block four (free games) for each group. Rows represent the classification of the attempted gesture, normalized to \(1\). There are notable similarities across the groups, indicating several gestures that are intrinsically difficult and gesture pairs that are inherently close. In particular, the "thumb", "pinch", and "fist" gestures all have a large fraction (about \(25\%\)) of gestures which fall below the decision threshold. Similarly, there was an overall trend that these three gestures tended to be confused, resulting in non-zero values for the off-diagonal entries (fist, thumb), (fist, pinch), (thumb, pinch), etc.

### _Class Feature Space Similarity_

Figure 10 shows the average normalized class similarity matrix of each group. As described previously, a "desirable" pattern for easy downstream classification (in which the subject produced consistent and well-separated gestures) would consist of larger entries on the diagonal and smaller entries off-diagonal. Each group demonstrated a consistent pattern in which the brightest entries were along the diagonal, indicating that the gestures were generally separable, and a consistent pattern of bright off-diagonal cells, indicating overlap between three specific gestures: "pinch", "fist", and "thumb".

## V Discussion and Future Work

This study tested the potential of modified continuous feedback of model performance in a gamified user interface for rapid user training on a sEMG-based gesture recognition system for controlling actions on a computer display. We hypothesized that we could use manipulation of feedback about the gesture class probabilities in a short (4 minute) online learning session to shape user behavior in a manner that would increase the separation between muscle activation patterns of different gestures and increase the accuracy of model performance on future attempts. Overall, our results demonstrate that a short user training session using modified feedback has the potential to increase post-calibration performance (accuracy and class separation) relative to veridical feedback and a no-feedback control.
### _User Calibration_

Despite the emergence of research into methods for co-adaptive learning for sEMG-based gesture recognition, there have been few investigations specifically testing the effect of user training as a means of rapid calibration. Numerous studies have shown that extended user training on a sEMG-based controller results in significant gains in performance [26, 11, 10]. The majority of these studies have found that increased model performance was accompanied by changes in muscle activation patterns that are theoretically favorable to better classification (i.e. increased class separability). In contrast, a recent investigation found low correlation between improved real-time performance and class separability, suggesting that feature-space metrics of muscle activation patterns and classification performance may be unrelated [27]. Krasoulis et al. first demonstrated that short-term adaptation through biofeedback user training could positively impact prosthetic finger control using sEMG-based decoding [9]. Our results demonstrate that both performance and class separability increase after live feedback training, compared to the no-feedback control, and that this increase is greatest when using error-augmented feedback.

### _Influence of Feedback Manipulation on User Behavior_

The Modified feedback group showed the largest change in classification accuracy and class separability. Flattening of the class probabilities, as was done in this investigation, can be considered a form of error augmentation, as subjects were led to believe the separation between classes was smaller than it actually was. As this made the task more difficult, it is most closely related to feedback involving "error amplification", which has been studied extensively. Feedback of performance outcomes that are worse than actual performance, i.e. error amplification, has previously been found to expedite motor adaptations to novel task constraints compared to accurate feedback [28, 29], and amplification of task errors has shown promise as an approach to facilitate motor recovery in patients with neurological disorders [30, 31]. Faster or more complete learning with error amplification has been attributed to brain processes associated with greater attention to execution of the motor task [32, 33, 34] and to reduction of sensorimotor noise [14]. We speculate that the increased improvement in classification accuracy with Modified feedback in this study may be a product of mechanisms similar to those previously credited with performance improvements under error amplification.

Fig. 8: Overall Changes from Baseline Performance. Left: Change in accuracy. Right: Change in scalar class separation measure \(d_{\text{SEP}}\). Boxplots show median and quartiles; dotted lines show mean. For each subject, we perform baseline subtraction as described in Section III-G. Change in accuracy for the modified group was significantly greater than zero; see Section IV-B for statistical analysis.

### _Selected Gestures_

We selected gestures that mimicked manipulation of commonplace items such as remote controls and cellphones. No subject commented that the gestures were unfamiliar or difficult to perform. Directional gestures using wrist movements ("Up", "Down", "Left", "Right") were generally separable and yielded higher classification accuracy compared to gestures using grasping movements ("Pinch", "Thumb", "Open", "Fist").
These gestures recruit similar extrinsic hand muscle groups (where electrodes were placed), and creating separation in the muscle activation patterns used to perform these gestures may not be intuitive. Thus, the feature-space similarity that we observed for these gestures is somewhat expected. Importantly, the use of modified feedback appeared to influence the class separation of these gestures specifically, compared to the control and veridical feedback conditions.

### _Limitations_

There were several limitations of the current work that may have affected the results and interpretations. Only a single classification model was used. Several machine learning methods, including artificial neural networks (ANN), linear discriminant analysis (LDA), support vector machines (SVM), and Gaussian mixture models, have previously been used for sEMG-based control. The choice to use a model based on SVM and Logistic Regression was due to its simplicity and the popularity of SVM for this application. It is likely that the choice of classifier model affects not only calibration accuracy, but also the way the user explores the mapping of muscle activation to gestures. Nevertheless, the user training scheme employed here likely has general benefit for the use and understanding of human co-adaptive behavior. There are a number of possible changes in the signal processing pipeline that may yield improvements in overall model performance. The active window for feature extraction may be tuned, and additional features such as time-frequency domain features or higher-dimensional feature vectors may be extracted. The selected features (RMS and median frequency) were chosen based on their common use for sEMG-based gesture classification and initial pilot testing. Future work should evaluate how sEMG feature selection affects user training. Only a single type of feedback manipulation was tested. We used a feedback manipulation that essentially flattened probabilities across classes, making it more difficult to achieve a correct classification. This approach was selected as it was expected that participants would respond by increasing the separation between muscle activation patterns for different gestures. While we find this to be the case, the manipulation was not directly optimized for this purpose. Future research should explore the optimization of feedback manipulation for shaping user behavior during co-adaptive sEMG gesture recognition. Adaptive feedback manipulation based on user and model performance characteristics, targeting specific class confusions, is an attractive future direction.
2309.15855
Temporally-Evolving Generalised Networks and their Reproducing Kernels
This paper considers generalised networks, intended as networks where (a) the edges connecting the nodes are nonlinear, and (b) stochastic processes are continuously indexed over both vertices and edges. Such topological structures are normally represented through special classes of graphs, termed graphs with Euclidean edges. We build generalised networks in which the topology changes over time instants. That is, vertices and edges can disappear at subsequent time instants and edges may change in shape and length. We consider both cases of linear or circular time. For the second case, the generalised network exhibits a periodic structure. Our findings allow us to illustrate the pros and cons of each setting. Generalised networks become semi-metric spaces whenever equipped with a proper semi-metric. Our approach allows us to build proper semi-metrics for the temporally-evolving topological structures of the networks. Our final effort is then devoted to guiding the reader through appropriate choices of classes of functions that allow us to build proper reproducing kernels when composed with the semi-metrics of the temporally-evolving topological structures.
Tobia Filosi, Claudio Agostinelli, Emilio Porcu
2023-09-22T11:47:27Z
http://arxiv.org/abs/2309.15855v1
# Temporally-Evolving Generalised Networks and their Reproducing Kernels

###### Abstract

This paper considers generalised networks, intended as networks where (a) the edges connecting the nodes are nonlinear, and (b) stochastic processes are continuously indexed over both vertices and edges. Such topological structures are normally represented through special classes of graphs, termed graphs with Euclidean edges. We build generalised networks in which the topology changes over time instants. That is, vertices and edges can disappear at subsequent time instants and edges may change in shape and length. We consider both cases of linear or circular time. For the second case, the generalised network exhibits a periodic structure. Our findings allow us to illustrate the pros and cons of each setting. Generalised networks become semi-metric spaces whenever equipped with a proper semi-metric. Our approach allows us to build proper semi-metrics for the temporally-evolving topological structures of the networks. Our final effort is then devoted to guiding the reader through appropriate choices of classes of functions that allow us to build proper reproducing kernels when composed with the semi-metrics of the temporally-evolving topological structures.

_Keywords_ -- Generalised networks, time-evolving graphs, reproducing kernels, semi-metric spaces.

## 1 Introduction

### Context

Data complexity is certainly one of the main aspects to address in the framework of the Data Science revolution. In particular, we call _3D_ complexities those aspects related to Data structure, Data dimension and Data domain. The present paper concerns the last of these aspects and delves into the problem of graphs whose nodes and edges evolve dynamically over time. Data analysis on graphs has become ubiquitous in both the Statistics and Machine Learning communities. For the former, recent contributions as in Anderes et al. (2020), Moradi and Mateu (2020), Baddeley et al. (2021) and Rakshit et al. (2017) witness the importance of graph structures for georeferenced data, being realisations of either geostatistical or point processes. As for the machine learning community, the amount of literature is huge. The Open Graph Benchmark (OGB) is a comprehensive set of challenging and realistic benchmark datasets with the aim of facilitating scalable, robust and reproducible graph machine learning (ML) research (Hu et al., 2020). An overview of ML methodological approaches on graphs is provided by Chami et al. (2022). Topological complexities under the framework of data analytics are discussed in Stankovic et al. (2020). Excellent surveys about ML on graphs are oriented to angles as different as large scale challenges (Hu et al., 2021), automated ML (Zhang et al., 2021), representation learning (Hamilton et al., 2017), relational ML for knowledge graphs (Nickel et al., 2015) and higher order learning (Agarwal et al., 2006), to mention just a few. In the great majority of contributions, the process is assumed to be defined exclusively over the vertices of the graph. The extension to processes that are continuously defined over both vertices and edges requires substantial mathematical work. This paper focuses on graphs with Euclidean edges (Anderes et al., 2020), an ingenious topological structure that allows one to generalise linear networks to nonlinear edges. Further, the process defined over such structures can have realisations at any point along the edges, and not only at the nodes.
Roughly, these are graphs where each edge is associated with an abstract set in bijective correspondence with a segment of the real line. This provides each edge with a Cartesian coordinate system to measure distances between any two points on that edge. There has been increasing attention on generalised networks in ML (Al sheikh et al., 2014, Georgopoulos and Hasler, 2014, Hamilton et al., 2017, Pinder et al., 2021, Borovitskiy et al., 2022), spatial data (Cressie et al., 2006, Gardner et al., 2003, Ver Hoef et al., 2006, Peterson et al., 2013, 2007, Montembeault et al., 2012) and point processes (Xiao et al., 2017, Perry and Wolfe, 2013, Deng et al., 2014, Baddeley et al., 2017). For all these contributions, _time_ is not part of the game. Reproducing kernel Hilbert space (RKHS) methods (Hofmann et al., 2008) have had considerable success in both the ML and Statistics communities, and the reader is referred to Hofmann et al. (2006), Kung (2014) and to Pillonetto et al. (2014) for excellent overviews. RKHS methods require _kernels_, which are positive semidefinite functions defined over a suitable input space. A customary assumption for such kernels is that of isotropy, _i.e._ the kernel depends only on the distance between any pair of points belonging to the input space. There is a rich literature at hand for the case of the input space being a \(d\)-dimensional Euclidean space (see the celebrated work of Schoenberg, 1942), and the reader is referred to the recent review by Porcu et al. (2023a). Non-Euclidean domains have a more recent literature, and we mention Porcu et al. (2016) as well as Borovitskiy et al. (2023) and Borovitskiy et al. (2021) for recent contributions. For such cases, the _distance_ between the points is no longer the Euclidean distance, but the geodesic distance. The _tour de force_ by Anderes et al. (2020) has made it possible to define isotropic reproducing kernels for generalised networks, by working with two metrics: the geodesic and the resistance metric. Elegant isometric embedding arguments therein allow one to provide sufficient conditions for given classes of functions to generate a legitimate reproducing kernel through composition with either of the two metrics. See the discussion in Section 2.

### Graphs cross time and temporally-evolving graphs

Data on generalised networks are usually repeatedly observed over time. For such a case, it is customary to consider the input space, \(X\), as a product (semi-)metric space, with two separate metrics: the geodesic (or the resistance) metric for the graph, and the temporal separation for time. This approach has clear advantages, as it simplifies the mathematical architecture considerably. Unfortunately, under such a setting, the graph topology is invariant over time. This fact implies that nodes cannot disappear (nor can new nodes appear at arbitrary future time instants), and the shape and length of the edges do not evolve over time. Reproducing kernels for such a case have been recently proposed by Porcu et al. (2023) and by Tang and Zimmerman (2020). When the input space is a product space equipped with separate metrics, the kernel is component-wise isotropic when it depends, on the one hand, on a suitable distance over the graph and, on the other hand, on temporal separation. For details, the reader is referred to Porcu et al. (2023). For the case of static metric graphs, we mention the impressive approach in Bolin and Lindgren (2011).
Outside the reproducing kernel framework, scientists have mainly focused on the topological structure of temporally dynamical networks (Hanneke et al., 2010). This fact has given rise to a wealth of related approaches, ranging from community detection methods (Mankad and Michailidis, 2013; Cherifi et al., 2019) to link prediction (Lim et al., 2019; Divakaran and Mohan, 2020) to structural change detection (Rossi et al., 2013). The common feature of the above contributions is that they consider the stochastic evolution of the structure (nodes and edges) of the graphs as the basis for the inference, whilst they do not usually allow for _processes_ defined over those graphs.

### Linear or circular time? Might graphs be periodic?

This paper considers time-evolving graphs. While the choice of linear time does not need any argument--this is what most of the literature does--this paper argues that periodically-evolving graphs have a reason to exist. Our first argument in favour of a periodic construction is that it is perfectly suited to several real-world phenomena, where both linear-time evolution (_e.g._ long-term trends) and cyclic oscillations (_e.g._ seasonal components) might happen. As an example, consider temperatures in a given geographical area: there might be strong correlations between (i) contiguous spatial points at a given time, which are represented by means of spatial edges; (ii) the same points considered at contiguous times, which are represented by means of temporal edges between temporal layers; and (iii) the same points considered at the same periods of the year, which are accounted for in the model as they are exactly the same point in the temporally evolving graph. In many real-world applications, the network underlying a system is only partially observable. As a consequence, it could be hard or impossible to specify the whole time-evolving (not periodic) network in cases of long time series. The periodic assumption, when reasonable, may be of great help in this circumstance as well.

### Our contribution

This paper provides the following contributions.

1. We provide a mathematical construction for graphs with Euclidean edges having a topology that evolves over time. That is, the number of vertices can change over time, as well as the shape and length of the edges. Remarkably, our construction allows for stochastic processes that are continuously indexed over both vertices and edges.
2. We start by considering the case of linear time. After providing the structure for a temporally evolving graph, we devote substantial mathematical effort to building a suitable semi-metric over it. This is achieved at the expense of a sophisticated construction through a Gaussian bridge that is interpolated through the edges in the spatio-temporal domain.
3. As an implication of the previous points, we obtain suitable second-order properties (hence, the reproducing kernel) associated with such a process.
4. The previous steps are then repeated for a periodic time-evolving graph.
5. We prove that the construction with linear time might have a counterintuitive property: adding temporal layers can change the distances between points in the graph. This might be a problem in terms of statistical inference, as carefully explained throughout the paper. We show that the periodic construction does not present such an inconvenience.
6. Our findings culminate in guiding the reader through handy constructions for kernels defined over these graphs.
We should mention that our contribution differs from earlier literature in several directions. In particular:

1. We allow the topology of the graph to evolve over time (whether linear or circular). Previous contributions where graphs with Euclidean edges are considered use either a static graph (Anderes et al., 2020, Bolin et al., 2022) or a graph having a topology that is invariant with respect to time (Porcu et al., 2023; Tang and Zimmerman, 2020). Hence, we provide a very flexible framework in comparison with earlier literature.
2. We allow stochastic processes to be continuously defined over the graph. This is a substantial innovation with respect to a massive literature from both Statistics and ML, where the graph topology can vary according to some probability law associated with the nodes, but processes over the nodes are not considered.

The structure of the paper is the following. Section 2 recalls the main mathematical objects that will be used. Section 3 builds the skeleton of our construction, _i.e._ time-evolving graphs, which are exploited in Sections 4 and 5, where time-evolving graphs with Euclidean edges are defined for the linear-time and circular-time cases, respectively. Section 6 illustrates how it is possible to build kernels on such a structure and presents some examples. Finally, Section 7 concludes the paper. In addition, in Appendix A we recall some mathematical definitions used throughout. In Appendix B, we recall and then extend some significant results by Anderes et al. (2020) about the definition of kernels on arbitrary domains that are used in this manuscript, and present them under a general and easy-to-handle perspective. As proofs are rather technical, we defer them to Appendix C for a neater exposition of the main text.

## 2 Mathematical background

This material is largely expository and provides the necessary mathematical background needed to understand the concepts illustrated in the main text. For the unfamiliar reader, Appendix A provides basic definitions and concepts used in network theory.

### Gaussian random fields over semi-metric spaces

Let us begin with a brief introduction to Gaussian random fields (Stein, 1999). Let \(X\) be a non-empty set and let \(k:X\times X\to\mathbb{R}\). Then \(k\) is a _positive semi-definite_ function (or a _kernel_, or a _covariance function_) if and only if, for all \(n\in\mathbb{N}^{+}\), \(x_{1},\ldots,x_{n}\in X\) and \(a_{1},\ldots,a_{n}\in\mathbb{R}\), \[\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i}a_{j}k(x_{i},x_{j})\geq 0. \tag{1}\] If, in addition, whenever the above relation is an equality, then necessarily \(a_{1}=\cdots=a_{n}=0\), \(k\) is _(strictly) positive definite_. For \(X\) as above, we denote by \(Z\) a real-valued random field, _videlicet_: for each \(x\in X\), \(Z(x)\) is a real-valued random variable. Then \(Z\) is called _Gaussian_ if, for all \(n\in\mathbb{N}^{+}\) and \(x_{1},\ldots,x_{n}\in X\), the random vector \(\mathbf{Z}:=(Z(x_{1}),\ldots,Z(x_{n}))^{\top}\), with \(\top\) denoting the transpose operator, follows an \(n\)-variate Gaussian distribution.
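As a small numerical illustration of condition (1), one can form the Gram matrix of a candidate kernel on a finite set of points and verify that no eigenvalue is negative; the sketch below is ours, not part of the paper:

```python
import numpy as np

def is_positive_semidefinite(K, tol=1e-10):
    """Finite-set check of condition (1): a Gram matrix is positive
    semi-definite iff its smallest eigenvalue is non-negative."""
    eig = np.linalg.eigvalsh((K + K.T) / 2.0)  # symmetrize for numerical stability
    return bool(eig.min() >= -tol)

# Example: k(x, y) = exp(-|x - y|) on random points of the real line,
# an isotropic kernel for the metric d(x, y) = |x - y|
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=25)
K = np.exp(-np.abs(x[:, None] - x[None, :]))
assert is_positive_semidefinite(K)
```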
A Gaussian random field \(Z\) on \(X\) is completely determined by its first two moments: the mean function \[\mu_{Z}:X \to\mathbb{R}\] \[x \mapsto\mathbb{E}\left(Z(x)\right),\] with \(\mathbb{E}\) denoting stochastic expectation, and the covariance function (kernel) \[k_{Z}:X\times X \to\mathbb{R}\] \[(x_{1},x_{2}) \mapsto\mathbb{C}\mathrm{ov}\left(Z(x_{1}),Z(x_{2})\right).\] A necessary and sufficient condition for a function \(k_{Z}\) to be the covariance function (the kernel) of some random field \(Z\) is that it be positive semi-definite. For \(X\) as above, we define a mapping \(d:X\times X\to\mathbb{R}\). Then \((X,d)\) is called a _semi-metric space_ (or, equivalently, \(d\) is called a _semi-metric_ on \(X\)) if the following conditions hold for each \(x,y\in X\): 1. \(d(x,y)\geq 0\), 2. \(d(x,y)=0\Longleftrightarrow x=y\), 3. \(d(x,y)=d(y,x)\). In addition, \((X,d)\) is called a _metric space_ (or, equivalently, \(d\) is called a _metric_ on \(X\)) if it is a semi-metric space and the triangle inequality holds, namely, for all \(x,y,z\in X\): \[d(x,y)+d(y,z)\geq d(x,z).\] The covariance function \(k_{Z}\) is called isotropic for the semi-metric space \((X,d)\) if there exists a mapping \(\psi:D_{X}^{d}\to\mathbb{R}\) such that \(k_{Z}(x,y)=\psi(d(x,y))\), for \(x,y\in X\). Here, \(D_{X}^{d}:=\{d(x_{1},x_{2}):x_{1},x_{2}\in X\}\) is the diameter of \(X\). See Appendix B on how to construct isotropic kernels on arbitrary domains. For a Gaussian random field \(Z\) on \(X\), we define its _variogram_ \(\gamma_{Z}:X\times X\to\mathbb{R}\) through \[\gamma_{Z}(u_{1},u_{2}):=\mathbb{V}\mathrm{ar}\left(Z(u_{1})-Z(u_{2})\right),\qquad u_{1},u_{2}\in X, \tag{2}\] with \(\mathbb{V}\mathrm{ar}\) denoting _variance_. The celebrated work of Schoenberg (1942) proves that \(\gamma_{Z}\) is a variogram if and only if the mapping \(\exp(-\gamma_{Z}(\cdot,\cdot))\) is positive definite on \(X\times X\). Let \((X_{1},d_{1})\) and \((X_{2},d_{2})\) be two semi-metric spaces. Then, the triple \((X_{1}\times X_{2},d_{1},d_{2})\) is called a _product semi-metric space_. Menegatto et al. (2020) define isotropy over a product semi-metric space through continuous functions \(\psi:D_{X_{1}}^{d_{1}}\times D_{X_{2}}^{d_{2}}\to\mathbb{R}\) such that, for \((x_{1},x_{2}),(x_{1}^{\prime},x_{2}^{\prime})\in X_{1}\times X_{2}\), \[((x_{1},x_{2}),(x_{1}^{\prime},x_{2}^{\prime}))\mapsto\psi(d_{1}(x_{1},x_{1}^{\prime}),d_{2}(x_{2},x_{2}^{\prime})), \tag{3}\] is positive definite. The above definition naturally arises from spatio-temporal settings: suppose we have a _static_ semi-metric space \((X,d)\) that represents some spatial structure and \((T,d_{T})\) representing time, where \(T\subseteq\mathbb{R}\) and, usually, \(d_{T}(t_{1},t_{2})=|t_{1}-t_{2}|\). In such a case, Equation (3) can be re-adapted to define kernels. This is the setting adopted by Porcu et al. (2023) and by Tang and Zimmerman (2020). _Remark 1_.: We deviate from earlier literature and instead consider a metric space \(X_{t}\) that evolves over time, \(t\in T\). Hence, our domain is written as \[\left\{(x_{t},t)\,:\,x_{t}\in X_{t},\,t\in T\right\},\] where \(t\) describes _time_, and the graph coordinate \(x_{t}\) is constrained on the space \(X_{t}\). Such a framework entails a way more sophisticated construction to equip such a space with a proper metric.

### Graphs with Euclidean edges

We start with a formal definition of graphs with Euclidean edges. We slightly deviate from the definition provided by Anderes et al.
(2020), for reasons that will be clarified subsequently. For the definition of a graph, see Definition 5 in the Appendix.

**Definition 1** (Graph with Euclidean edges).: Consider a simple, connected and weighted graph \(G=(V,E,w)\), where \(w:E\to\mathbb{R}^{+}\) represents the weight mapping. Then, \(G\) is called a _graph with Euclidean edges_ provided that the following conditions hold.

1. Edge sets: Each edge \(e\in E\) is associated with the compact segment (also denoted by \(e\)) \([0,\ell(e)]\), where \(\ell(e):=w(e)^{-1}\) may be interpreted as the _length_ of the edge \(e\).
2. Linear edge coordinates: Each point \(u\in e=(\underline{u},\overline{u})\) is uniquely determined by the endpoints \(\underline{u}\) and \(\overline{u}\) of \(e\) and its relative distance \(\delta_{e}(u):=\frac{u}{\ell(e)}=u\,w(e)\) from \(\underline{u}\), that is \(u=(\underline{u},\overline{u},\delta_{e}(u))\), so that \(\underline{u}=(\underline{u},\overline{u},0)=(\overline{u},\underline{u},1)\) and \(\overline{u}=(\underline{u},\overline{u},1)=(\overline{u},\underline{u},0)\).

Henceforth, we shall assume the existence of a total order relation on the set of vertices \(V\) and that every edge is represented through the ordered pair \((v_{1},v_{2})\), where \(v_{1}<v_{2}\). In particular, for each \(u\in e\), the endpoints \(\underline{u}\) and \(\overline{u}\) of \(e\) satisfy the relation \(\underline{u}<\overline{u}\). A relevant fact is that our setting deviates from Anderes et al. (2020). In particular, our Definition 1 does not require any _distance consistency_, as opposed to Anderes et al. (2020, Definition 1, (d)). The reason is that our setting does not need a bridge between geodesic and resistance metrics. A second relevant fact is that we have restricted the space of possible bijections from each edge onto closed intervals with orientation. We restrict to linear bijections: the main reason is that the focus of this paper is not to explore isometric embeddings, but to provide suitable topological structures evolving over time, and to attach stochastic processes to them. As a final remark, we stress that the framework introduced through Definition 1 is considerably more general than that of linear networks, for at least two reasons: (a) in our framework, the weight of each edge can be chosen independently of the others, and (b) our framework needs no restriction on the network structure, _e.g._, as shown in Figure 1 (right), edges may cross without sharing the crossing point.

### Graph laplacian and resistance metric

The resistance metric has been widely used in graph analysis, as it is more natural than the shortest-path metric when considering flows or transport networks, where multiple roads between two given points may share the total flow. We briefly recall the definition and mathematical construction of the classic effective resistance distance for an undirected and connected graph. Let \(G=(V,E,w)\) be a simple, weighted and connected graph (see Definition 5 in the Appendix) and let \(W\) be its adjacency matrix, that is: \(W(v_{1},v_{2})=w((v_{1},v_{2}))\), where we set \(w((v_{1},v_{2}))=0\) whenever \(v_{1}\not\sim v_{2}\). In addition, for each node \(v\in V\), we define its _degree_ as the sum of the weights of the edges adjacent to it. Let \(D\) be the degree matrix of \(G\), i.e. the diagonal matrix where each diagonal element is the degree of the corresponding vertex.
Then, the _laplacian matrix_ (or simply _laplacian_) of \(G\) is the matrix \(L:=D-W\), namely the matrix \(L:V\times V\to\mathbb{R}\) having entries \[L(v_{1},v_{2})=\begin{cases}-w((v_{1},v_{2}))&\text{if }v_{1}\neq v_{2}\\ \sum_{u\in V}w((v_{1},u))&\text{if }v_{1}=v_{2}.\end{cases} \tag{4}\] Laplacian matrices enjoy several properties (see, for instance, Devriendt (2022)): they are symmetric, diagonally dominant, positive semidefinite and singular with exactly one null eigenvalue, corresponding to the eigenvector \(\mathbf{1}_{n}\). Furthermore, they have non-positive off-diagonal entries and positive main-diagonal entries.

Figure 1: Left: a linear network. Right: a graph with Euclidean edges, where the bijections between the edges \(e_{1}\) and \(e_{2}\) and their respective real segments \([0,\ell(e_{1})]\) and \([0,\ell(e_{2})]\) are stressed.

A graph \(G=(V,E)\) is called a _resistor graph_ if the edges \(e\in E\) represent electrical resistors and the nodes represent contact points. Given a resistor graph, the _effective resistance distance_ \(R\) between two vertices is defined as the voltage drop between them when injecting one Ampere of current into one and withdrawing one Ampere from the other. Several mathematical formulations of this concept have been provided; the reader is referred, among many others, to Jorgensen and Pearse (2010, Subsection 2.1). Throughout, we follow Ghosh et al. (2008). Let \(G=(V,E)\) be a resistor graph. For each \(v_{1}\sim v_{2}\in V\), let \(r(v_{1},v_{2})\in\mathbb{R}^{+}\) denote the resistance of the resistor that connects \(v_{1}\) and \(v_{2}\). In addition, for each \(v_{1},v_{2}\in V\), define the weight (which plays the role of the physical conductance) \[w((v_{1},v_{2})):=\begin{cases}\frac{1}{r((v_{1},v_{2}))}&\text{if }v_{1}\sim v_{2}\\ 0&\text{if }v_{1}\not\sim v_{2}.\end{cases}\] Let \(L\) be the laplacian matrix of \(G\) with the above-defined weights, and \(L^{+}\) its Moore-Penrose generalised inverse (see Definition 6 in the Appendix). Finally, let \(e_{v_{i}}\) denote the vector with all zeroes, except a one at position \(v_{i}\). Then the effective resistance distance \(R\) between two nodes \(v_{1}\) and \(v_{2}\) enjoys the following expression: \[R(v_{1},v_{2})=(e_{v_{1}}-e_{v_{2}})^{\top}L^{+}(e_{v_{1}}-e_{v_{2}}). \tag{5}\]

## 3 Time-evolving graphs with Euclidean edges

Defining a time-evolving graph with Euclidean edges requires some mathematical formalism. While keeping such formalism below, we shall provide some narrative, in concert with some graphical representations, to give an intuition of how these graphs work. Even though the underlying rationale of our construction is quite natural and intuitive, the mathematical description of such an object is quite involved, as it requires several steps and a substantial formalism. Hence, we provide a sketch of our procedure to help the reader in the Box below.

1. Define a time-evolving graph as a properly defined sequence of graphs indexed by discrete time instants;
2. Define _connected equivalent simple_ time-evolving graphs by completing a time-evolving graph through a set of edges that connect the same nodes at different time instants;
3. Over the connected equivalent simple graph, we can now define, for every time \(t\), a graph with Euclidean edges, \(G_{t}\);
4. Define a time-evolving Markov graph to exploit computational advantages.

Some comments are in order.
Step 1 is completely general and does not require any topological structure on any _marginal_ graph \(G_{t}\), for a given time \(t\). Yet, having graphs with Euclidean edges that evolve over time requires some more work, and this fact justifies Step 2, which allows for connectivity, a _sine qua non_ property of a graph with Euclidean edges. Step 4 is not mathematically necessary to guarantee the validity of the structure, but it is justified by computational and intuitive reasons, as explained throughout. Step 1 starts with a formal definition.

**Definition 2** (Time-evolving graph).: Let \(T=\{0,...,m-1\}\) be a (finite) collection of time instants. To every time instant \(t\in T\) we associate a simple undirected and weighted graph \(G_{t}=(V_{t},E_{t},w_{t})\), with \(V_{t}\cap V_{t^{\prime}}=\emptyset\) whenever \(t\neq t^{\prime}\). For an edge \(e_{t}\in E_{t}\), the corresponding weight is denoted \(w(e_{t}):=w_{t}(e_{t})\). We use \(n_{t}:=|V_{t}|\) for the number of vertices at time \(t\). Let \(G=\{G_{0},...,G_{m-1}\}\) be the associated finite collection of these graphs. Call \(V:=\bigcup_{t}V_{t}\) the set of vertices, \(n:=|V|\) the total number of vertices, and \(E_{S}:=\bigcup_{t}E_{t}\) the set of _spatial_ edges. Finally, if \(v\in V\), whenever convenient we shall write \(t(v)\) for the unique value \(t\) such that \(v\in V_{t}\). Let \(s:V\to S\) be a mapping from \(V\), where \(S\) is a set of labels, such that \(s(v_{1})\neq s(v_{2})\) whenever \(v_{1}\) and \(v_{2}\) are two distinct vertices belonging to the same graph \(G_{t}\), \(t\in T\). Two vertices \(v_{1}\neq v_{2}\in V\) are considered the same vertex at different times if \(s(v_{1})=s(v_{2})\). We call the triple \(\boldsymbol{G}=(T,G,s)\) a _time-evolving graph_.

While Definition 2 provides a flexible framework to manage graphs that evolve over time, we are going to merge its underlying idea with that of a graph with Euclidean edges presented in Subsection 2.2. Step 2 of our routine intends to _complete_ the time-evolving graph of Definition 2 so as to ensure spatio-temporal connectivity. Let \(\mathbf{G}\) be a time-evolving graph. We define its _equivalent simple_ time-evolving graph, \(\widetilde{\mathbf{G}}=(V,\widetilde{E})\), as the graph with edges \(\widetilde{E}:=E_{S}\cup E_{T}\), with \(E_{T}\) a set of additional edges (called _temporal_ edges throughout) that connect the same nodes at different time instants. More precisely, \(E_{T}\) is a subset of \(\left\{(v_{1},v_{2})\in V\times V\,:\,s(v_{1})=s(v_{2}),\,t(v_{1})\neq t(v_{2})\right\}\). To each new edge \(e=(v_{1},v_{2})\in E_{T}\) a weight \(w(e)>0\) is assigned, while all the other weights remain unchanged. One possibility is to choose \(w(e):=\alpha\left|t(v_{1})-t(v_{2})\right|^{-1}\), with \(\alpha>0\) a given scale factor. Although we assume this particular expression in all the following examples, we stress that any choice leads to a valid model as long as \(w(e)>0\). The intuitive idea behind the construction of an equivalent simple time-evolving graph is to consider \(m\) layers, each representing a different temporal instant (namely a graph \(G_{t}\)), and to connect them by means of additional temporal edges, which account for the time-dependency of the graphs. Figure 2 depicts an example of a time-evolving graph and the resulting equivalent simple graph; a code sketch of the construction is given below. Henceforth, we will consider each connected component of the equivalent simple graph separately.
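The following sketch (Python/NumPy; all names are ours, and for simplicity every layer is assumed to share the same node set, which the paper does not require) assembles the adjacency matrix of an equivalent simple graph with the Markovian choice of temporal edges, and evaluates the effective resistance (5) on it via the laplacian (4).

```python
import numpy as np

def laplacian(W):
    """Graph laplacian L = D - W of a weighted adjacency matrix, as in (4)."""
    return np.diag(W.sum(axis=1)) - W

def effective_resistance(W, i, j):
    """Effective resistance distance (5) between vertices i and j,
    via the Moore-Penrose pseudoinverse of the laplacian."""
    Lplus = np.linalg.pinv(laplacian(W))
    e = np.zeros(len(W)); e[i], e[j] = 1.0, -1.0
    return float(e @ Lplus @ e)

def equivalent_simple_adjacency(layers, alpha=1.0):
    """Stack per-time adjacency matrices (all n x n here, one per layer)
    and connect each node only to its copy at the adjacent time with a
    temporal edge of weight alpha (the Markovian choice formalised below)."""
    m, n = len(layers), len(layers[0])
    W = np.zeros((m * n, m * n))
    for t, Wt in enumerate(layers):
        W[t * n:(t + 1) * n, t * n:(t + 1) * n] = Wt
        if t + 1 < m:
            for i in range(n):
                W[t * n + i, (t + 1) * n + i] = alpha
                W[(t + 1) * n + i, t * n + i] = alpha
    return W

# Example: three layers, each a triangle with unit conductances.
tri = np.ones((3, 3)) - np.eye(3)
W = equivalent_simple_adjacency([tri, tri, tri], alpha=2.0)
print(effective_resistance(tri, 0, 1))   # single layer: 2/3
print(effective_resistance(W, 0, 0 + 3)) # node A at t=0 vs its copy at t=1
```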
Step 2 makes it feasible to attach, for each temporal label \(t\in T\), a graph with Euclidean edges to the equivalent simple graph associated with a given time-evolving graph (Step 3). We note that the choice of the temporal edges needs care. Indeed, the set of possible temporal edges for a fixed label \(s\in S\) may grow quadratically in the number of considered temporal instants \(m\). Our proposal is to connect every node that exists at adjacent times, so that there will be no temporal edge connecting non-adjacent times. Hence, we propose a temporally Markovian structure for the graph (Step 4). This is formalised below.

**Definition 3** (Time-evolving Markov graph).: A _time-evolving Markov graph_ is a time-evolving graph \(\mathbf{G}=(V,E)\), where \[E_{T}\subseteq\left\{(v_{1},v_{2})\in V\times V\,:\,s(v_{1})=s(v_{2}),\,|t(v_ {1})-t(v_{2})|=1\right\}. \tag{6}\]

We stress that temporal Markovianity is not needed to prove the mathematical results that follow, which hold even in the case of non-adjacent layers.

Figure 2: An example of an equivalent simple graph (bottom-right), with \(m=3\), \(S=\{A,B,C,D,E,F\}\), \(n_{1}=5\), \(n_{2}=n_{3}=6\). The coloured edges belong to \(E_{S}\), whilst the black ones belong to \(E_{T}\). The temporal _slices_ at time instants \(t=0,1,2\) are reported, respectively, in quadrants (a), (b) and (c).

Yet, Markovianity simplifies our job considerably. In fact, it allows for a plain representation of an equivalent simple graph. Furthermore, there are non-negligible computational reasons. Indeed, for large networks, Markovianity allows for sparse laplacian matrices (block tridiagonal) of the associated equivalent simple graph. This entails huge computational savings in terms of both storage and computation. Finally, allowing edges between non-adjacent layers could lead to a huge number of weights \(\alpha\). In particular, under the assumption of temporally homogeneous weights (the weight of a temporal edge depends only on the temporal distance between the layers it connects), we would have \(m-1\) possible weights if Markovianity is not assumed. For a large collection of time instants, this can become computationally unfeasible. We call a time-evolving Markov graph \(\mathbf{G}\) _temporally complete_ when the set \(E_{T}\) is identically equal to the set on the right-hand side of (6). Albeit such a property is not required to prove our theoretical results, it is operationally useful, as it allows one to avoid removing or adding temporal edges.

## 4 Resistance metrics for linear time

We start this section by noting that defining the classical resistance metric between the nodes of a temporally evolving graph is not an issue. Yet, we are dealing with a graph where distances should be computed between any pair of points lying continuously over the edges. This is a major challenge that requires some work, as follows. For the case of static graphs with Euclidean edges, Anderes et al. (2020) provide an ingenious construction that allows for a suitable continuously-defined metric on the basis of Brownian bridges and their variograms. The idea is to follow a similar path, by defining a Gaussian process that is continuously indexed over the edges of an equivalent simple connected graph associated with a given time-evolving graph. Before going into technical details, we present a brief outline. Following Anderes et al. (2020), we are going to define a distance on all the points of the graph, namely its vertices and the points on its edges.
To this aim, we define a Gaussian process \(Z\) on every point of the time-evolving equivalent simple Markovian graph and then _define_ the distance between two points as the variogram of this process, i.e., for each \(u_{1},u_{2}\in\widetilde{\mathbf{G}}\): \[d(u_{1},u_{2}):=\gamma_{Z}(u_{1},u_{2}), \tag{7}\] with \(\gamma_{Z}\) as defined through (2). In this way, we can directly apply Theorem 1 stated in Appendix B to obtain kernels. Here, \(Z:=Z_{V}+Z_{E}\) is the sum of two independent Gaussian processes defined on the equivalent simple graph \(\widetilde{\mathbf{G}}\). The process \(Z_{V}\) accounts for the structure of the graph (namely its vertices and the weights of its edges) and plays the role of the major source of variability (i.e., of the distance), whilst \(Z_{E}\) adds some variability on the edges and accounts for the temporal relationship between the same edge at different times.

### Formal construction of \(Z_{V}\) and \(Z_{E}\)

We start by defining the process \(Z_{V}\) through \[Z_{V}(u):=\left(1-\delta_{e}(u)\right)Z_{V}(\underline{u})+\delta_{e}(u)Z_{V}( \overline{u}), \tag{8}\] where \(u=(\underline{u},\overline{u},\delta_{e}(u))\), with \(e=(\underline{u},\overline{u})\) and \(\delta_{e}(u)\) as in Definition 1. Further, at the vertices of \(\widetilde{\mathbf{G}}\), \(Z_{V}\) is defined as a multivariate normal random variable, denoted \(Z_{V}\big{|}_{V}\sim\mathcal{N}\left(0,\,L^{+}\right)\), where \(L\) is the laplacian matrix associated with the graph \(\widetilde{\mathbf{G}}\). The intuitive interpretation is that outside the vertices the process \(Z_{V}\) is obtained through simple linear interpolation. The construction of \(Z_{E}\) is a bit more complex, as \(Z_{E}\) is defined piecewise on a suitable partition of \(E\). A formalisation of this concept follows. For each \(e=(v_{1},v_{2})\in E_{S}\), we define the _lifespan_ of \(e\), written \(\mathrm{ls}(e)\), as the maximal connected set of time instants, \(t\), for which the edge \(e\) exists. More formally, \(\mathrm{ls}(e)\) is defined as the maximal (with respect to the inclusion partial order) subset of \(T\) such that:

* \(t(v_{1})\in\mathrm{ls}(e)\);
* \(\mathrm{ls}(e)\) is connected, that is, \(\forall t_{1}<t_{2}\in\mathrm{ls}(e)\), \(\{t_{1},\ldots,t_{2}\}\subseteq\mathrm{ls}(e)\);
* \(\forall t\in\mathrm{ls}(e)\), there exists \((v^{\prime}_{1},v^{\prime}_{2})\in E_{S}\) such that \(s(v^{\prime}_{1})=s(v_{1})\), \(s(v^{\prime}_{2})=s(v_{2})\) and \(t(v^{\prime}_{1})=t(v^{\prime}_{2})=t\).

Figure 2 helps visualise the situation. The lifespan of the edge \((A,B)\) at time \(t=0\) is \(\{0,1,2\}\); the lifespan of \((C,E)\) at time \(t=1\) is \(\{0,1\}\); and that of \((B,C)\) at time \(t=2\) is \(\{2\}\). We now define the _life_ of \(e\) (denoted \(\mathrm{lf}(e)\)) as the set of edges that represent \(e\) at different times and have the same lifespan \(\mathrm{ls}(e)\). Formally, we have \[\mathrm{lf}(e):=\left\{(v_{1}^{\prime},v_{2}^{\prime})\in E_{S}:s(v_{1}^{\prime} )=s(v_{1}),\,s(v_{2}^{\prime})=s(v_{2}),\,t(v_{1}^{\prime})=t(v_{2}^{\prime}) \in\mathrm{ls}(e)\right\}.\] For convenience, we define the life of temporal edges as well: if \(e\in E_{T}\), \(\mathrm{lf}(e):=\{e\}\). It is clear that the set \(\left\{\mathrm{lf}(e)\,:\,e\in E_{S}\right\}\) forms a partition of all the spatial edges \(E_{S}\), and that \(\left\{\mathrm{lf}(e)\,:\,e\in E_{T}\right\}\) is a partition of \(E_{T}\).
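In code, the lifespan of a spatial edge amounts to a simple scan over the layers. The sketch below (our own toy encoding of \(E_{S}\) as triples of a time and two ordered labels, which is an assumption and not the paper's notation) reproduces the Figure 2 examples.

```python
def lifespan(spatial_edges, t0, s1, s2, m):
    """ls(e) for the edge with labels {s1, s2} existing at time t0: the
    maximal run of consecutive times in {0, ..., m-1} containing t0 at
    which an edge with the same pair of labels exists."""
    lo = t0
    while lo - 1 >= 0 and (lo - 1, s1, s2) in spatial_edges:
        lo -= 1
    hi = t0
    while hi + 1 < m and (hi + 1, s1, s2) in spatial_edges:
        hi += 1
    return set(range(lo, hi + 1))

# The examples read off Figure 2:
E_S = {(0, 'A', 'B'), (1, 'A', 'B'), (2, 'A', 'B'),
       (0, 'C', 'E'), (1, 'C', 'E'), (2, 'B', 'C')}
print(lifespan(E_S, 0, 'A', 'B', 3))  # {0, 1, 2}
print(lifespan(E_S, 1, 'C', 'E', 3))  # {0, 1}
print(lifespan(E_S, 2, 'B', 'C', 3))  # {2}
```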
The main idea is to consider the life of each spatial and temporal edge and to define a suitable process on it, independent of the others. Let us consider a spatial edge \(e\in E_{S}\) and its lifespan \(\mathrm{ls}(e)\). Consider now the set \(\mathrm{ls}(e)\times[0,1]\) and define on it a zero-mean Gaussian process \(B(t,\delta)\) whose covariance function is given by \[k_{B}\left((t_{1},\delta_{1}),(t_{2},\delta_{2})\right):=k_{T}(|t_{1}-t_{2}|) \,k_{BB}(\delta_{1},\delta_{2}), \tag{9}\] with \(t_{1},t_{2}\in\mathrm{ls}(e)\) and \(\delta_{1},\delta_{2}\in[0,1]\). Here \(k_{T}\) is a temporal kernel defined on \(\mathbb{N}\) such that \(k_{T}(0)=1\), and \(k_{BB}(\delta_{1},\delta_{2}):=\min(\delta_{1},\delta_{2})-\delta_{1}\delta_{2}\) is the kernel of the standard Brownian bridge on \([0,1]\). Notice that the spatial marginals of the process \(B\) are standard Brownian bridges. We stress that the process \(B(t,\delta)\) is only needed for the definition of the process \(Z_{E}\) on \(\mathrm{lf}(e)\), as explained below. Now, we define the process \(Z_{E}\) on \(\mathrm{lf}(e)\), denoted \(Z_{E}\big{|}_{\mathrm{lf}(e)}\), as follows: given an edge \(e^{\prime}=(\underline{u},\overline{u})\in\mathrm{lf}(e)\) and given a point \(u=(\underline{u},\overline{u},\delta)\) on it, \[Z_{E}\big{|}_{\mathrm{lf}(e)}(u):=\sqrt{\ell(e^{\prime})}\,B(t(\underline{u} ),\delta). \tag{10}\] Finally, for each temporal edge \(e=[0,\ell(e)]\in E_{T}\), we define the process \(Z_{E}\) on it as a Brownian bridge on \([0,\ell(e)]\), independent of both \(Z_{V}\) and of \(Z_{E}\) on \(E_{S}\), having covariance function given by \[\mathbb{C}\mathrm{ov}\left(Z_{E}\big{|}_{e}(\delta_{1}),Z_{E}\big{|}_{e}( \delta_{2})\right)=\ell(e)\left(\min(\delta_{1},\delta_{2})-\delta_{1}\delta_{ 2}\right).\] This concludes the construction of the process on the whole set of edges.

### Mathematical properties of the construction

We remind the reader that the process \(Z\) is Gaussian, being the sum of two independent Gaussian processes. Hence, the finite-dimensional distributions of \(Z\) are completely specified through the second-order properties, namely the covariance function. The following result provides an analytical expression for the covariance function associated with \(Z\).

**Proposition 1**.: _Let \(u_{1},u_{2}\in\widetilde{\mathbf{G}}\), with \(u_{i}=(\underline{u}_{i},\overline{u}_{i},\delta_{i})\) lying on the edge \(e_{i}=(\underline{u}_{i},\overline{u}_{i})\), \(i=1,2\). Let \(Z=Z_{V}+Z_{E}\), with \(Z_{V}\) as defined through (8) and \(Z_{E}\) as defined through (10). Then,_ \[\begin{split} k_{Z}(u_{1},u_{2})&=\mathbf{\delta}_{1}^ {\top}\,L^{+}\left[(\underline{u}_{1},\overline{u}_{1}),(\underline{u}_{2}, \overline{u}_{2})\right]\,\mathbf{\delta}_{2}\\ &+\mathbb{1}_{\mathrm{lf}(e_{1})=\mathrm{lf}(e_{2})}\sqrt{\ell(e _{1})\ell(e_{2})}\,k_{T}\left(|t(\underline{u}_{1})-t(\underline{u}_{2})| \right)\left(\min\left(\delta_{1},\delta_{2}\right)-\delta_{1}\delta_{2} \right),\end{split} \tag{11}\] _where \(\mathbf{\delta}_{i}:=(1-\delta_{i},\delta_{i})^{\top}\), \(i=1,2\), and \(L^{+}\left[(\underline{u}_{1},\overline{u}_{1}),(\underline{u}_{2},\overline{ u}_{2})\right]\) represents the \(2\times 2\) submatrix of \(L^{+}\) with rows \((\underline{u}_{1},\overline{u}_{1})\) and columns \((\underline{u}_{2},\overline{u}_{2})\)._

While noting that this construction is completely general, we also point out that Markovianity properties, whenever desired, can be achieved through a proper choice of the temporal kernel \(k_{T}\).
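Proposition 1 translates directly into code. In the minimal sketch below (Python/NumPy; the dictionary encoding of points and all function names are ours, and the caller is assumed to supply the pseudoinverse laplacian, a vertex index map, a temporal kernel, and the indicator \(\mathrm{lf}(e_{1})=\mathrm{lf}(e_{2})\)), the variogram distance (7) then comes for free from the kernel.

```python
import numpy as np

def kernel_Z(u1, u2, Lplus, idx, kT, same_life):
    """Covariance (11).  Each point u is a dict with keys 'lo', 'hi'
    (ordered endpoint labels), 'delta' (edge coordinate), 't' (time of
    the layer) and 'length' (edge length l(e) = 1/w(e))."""
    d1 = np.array([1 - u1['delta'], u1['delta']])
    d2 = np.array([1 - u2['delta'], u2['delta']])
    rows = [idx[u1['lo']], idx[u1['hi']]]
    cols = [idx[u2['lo']], idx[u2['hi']]]
    kV = d1 @ Lplus[np.ix_(rows, cols)] @ d2
    kE = 0.0
    if same_life:  # the indicator lf(e1) == lf(e2) in (11)
        bb = min(u1['delta'], u2['delta']) - u1['delta'] * u2['delta']
        kE = np.sqrt(u1['length'] * u2['length']) * kT(abs(u1['t'] - u2['t'])) * bb
    return float(kV + kE)

def distance(u1, u2, Lplus, idx, kT, same_life):
    """The variogram distance (7).  Note that lf(e) == lf(e) trivially,
    so the two self-covariance terms always keep the edge contribution."""
    return (kernel_Z(u1, u1, Lplus, idx, kT, True)
            + kernel_Z(u2, u2, Lplus, idx, kT, True)
            - 2.0 * kernel_Z(u1, u2, Lplus, idx, kT, same_life))
```

Here `kT` can be any temporal kernel with \(k_{T}(0)=1\); the autoregressive choice (12) discussed next corresponds to `kT = lambda h: lam ** abs(h)`.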
A reasonable choice for \(k_{T}\) is the correlation function of an autoregressive process of order one, which is given by \[k_{T}(h)=\lambda^{|h|},\qquad h\in\mathbb{Z}, \tag{12}\] where \(\lambda\in(-1,1)\) is a free parameter and \(h\in\mathbb{Z}\) is the lag. Notice that the special case \(\lambda=0\), for which \(k_{T}(h)=\mathbb{1}_{h=0}\), corresponds to the static resistance metric provided by Anderes et al. (2020). Figure 3 depicts some realisations of the process \(Z_{E}\) over an edge for different values of the parameter \(\lambda\). The parameter \(\lambda\) plays a role for the edges similar to that played by the parameter \(\alpha\) for the nodes: they are both closely related to the inter-dependency of the process at different times. Indeed, \(\lambda\) measures how much the process \(Z_{E}\) is correlated between two times \(t_{1},t_{2}\in\mathrm{ls}(e)\). Analogously, the value \(\alpha\), as the weight of the temporal edge \(e\in E_{T}\), is related to the partial correlation of the endpoints of \(e\) given everything else. As a consequence, it is natural to choose a high value of \(\lambda\) for high values of \(\alpha\) and vice-versa. We notice that it is natural to choose non-negative values of \(\lambda\), as we usually expect a non-negative correlation between the values of \(Z_{E}\) at close times. Using equation (7), we get the following expression for the distance between any two points \(u_{1},u_{2}\in\widetilde{\mathbf{G}}\): \[d(u_{1},u_{2})=k_{Z}(u_{1},u_{1})+k_{Z}(u_{2},u_{2})-2k_{Z}(u_{1},u_{2}). \tag{13}\]

Figure 3: Draws from the process \(Z_{E}\) on an edge with lifespan \(\{0,1,2,3\}\), for several values of the parameter \(\lambda\).

The formal statement below provides a complete description of the space \((\widetilde{\mathbf{G}},d)\), that is, the time-evolving graph \(\widetilde{\mathbf{G}}\) equipped with the metric \(d\).

**Proposition 2**.: _Let \(d\) be the mapping defined at (13). Then, the pair \((\widetilde{\mathbf{G}},d)\) is a semi-metric space._

One might ask whether a stronger assertion holds for the pair \((\widetilde{\mathbf{G}},d)\) as defined above. The next statement provides a negative answer. A counterexample is given by the graph depicted in Figure 4, where the length of the top edge \((A_{2},B_{2})\) becomes vanishingly small, while the length of the bottom edge \((A_{0},B_{0})\) grows to infinity. For such a graph, we have \(d(P,Q)+d(Q,R)<d(P,R)\) (see the proof of Proposition 3 in the Appendix for more details).

**Proposition 3**.: _Let \(d\) be the mapping defined at (13). Then, the pair \((\widetilde{\mathbf{G}},d)\) is not a metric space._

_Remark 2_.: Although in general our extension of the classic resistance distance is not a metric, it retains some of its properties. 1. \((V,d\big{|}_{V})\) is a metric space. 2. For all \(t\), \((G_{t},d\big{|}_{G_{t}})\) coincides with the restriction to \(G_{t}\) of the resistance metric of Anderes et al. (2020) computed on the whole graph \(\widetilde{\mathbf{G}}\); hence it is a metric and it is invariant to splitting edges and to merging edges at degree-2 vertices (Anderes et al., 2020, Propositions 2 and 3).

Figure 4: An example of an equivalent simple graph for which the semi-distance defined at (13) does not satisfy the triangle inequality.

### A special case: time-evolving linear networks

Anderes et al. (2020) defined graphs with Euclidean edges as a generalisation of linear networks and of Euclidean trees with a given number of leaves. In both cases, edges are linear.
This case is not especially exciting for the framework proposed in this paper. The reason is that a simple isometric embedding argument, as in Tang and Zimmerman (2020), proves that one can embed a time-evolving linear network in \(\mathbb{R}\times\mathbb{R}^{2}=\mathbb{R}^{3}\), where the first component indicates time. As a consequence, it is immediate to build a vast class of covariance functions on a time-evolving linear network by simply restricting a given covariance function defined on \(\mathbb{R}^{3}\). However, such a method does not take into account the _structure_ of the graph: two points that are close in \(\mathbb{R}^{3}\) but far apart in the time-evolving graph could have a high correlation. Section 6 illustrates how to build kernels over the special topologies proposed in this paper. Admittedly, the choices are more restrictive than those available for the case of linear networks, but they ensure that the spatio-temporal structure is taken into account.

## 5 Circular time and periodic graphs

Perhaps the main drawback of using the resistance distance in the layer graphs that express the spatio-temporal variability is that, when adding one or more new layers, the distances between the points of the previous layers may change (more specifically, they may decrease). Indeed, whenever new paths between a couple of points are added, the effective distance between such points decreases, as the current meets less resistance. This presents a critical interpretation problem: for a given time series, let new data be added on a daily basis. Then, inference routines may provide different results when compared to the results of the same inference techniques applied to the updated time series. Indeed, as the distances may vary, the covariances between the same space-time points may vary as well. Here, we consider the alternative of time-evolving _periodic_ networks, _i.e._ time-evolving networks whose evolution repeats after a fixed number of time instants (number of layers). Not only does this construction solve the above-mentioned issue, but it also suits many phenomena whose evolution presents both linear and periodic components.

**Definition 4** (Time-evolving periodic graph).: Let \(\mathbf{G}=\{G_{0},G_{1},\ldots\}\) be a countable sequence of graphs. Then, \(\boldsymbol{G}\) is a _time-evolving periodic graph_ if there exists a natural number \(m\geq 3\) such that, for all \(t\in\mathbb{N}\), \(G_{t}=G_{t+m}\). Its equivalent simple Markovian periodic graph \(\widetilde{\boldsymbol{G}}\) is built by connecting \(G_{0},\ldots,G_{m-1}\) through the set of edges \[E_{T}=\left\{(v_{1},v_{2})\in V\times V\,:\,s(v_{1})=s(v_{2}),\,|t(v_{1})-t(v_ {2})|\equiv\pm 1\pmod{m}\right\}. \tag{14}\]

Each point in the resulting space-time is denoted by its true time \(t\in\mathbb{R}_{0}^{+}\), by the endpoints of the edge \(e\) it lies on, and by the relative distance \(\delta_{e}(u)\) from the first one: we write \(u=(t,\underline{u},\overline{u},\delta_{e}(u))\), where \(e=(\underline{u},\overline{u})\). Notice that \(t\in\mathbb{N}\) whenever \(u\) belongs to a temporal layer, while \(t\) is not an integer if \(u\) belongs to the inner part of a temporal edge \(e\in E_{T}\). Given a point \(u\in V\cup\bigcup E_{S}\), we sometimes write \(\tau(u)\) for the unique layer \(\tau\in\left\{0,\ldots,m-1\right\}\) that contains \(u\). Clearly \(\tau(u)\equiv t(u)\pmod{m}\). We start by noting that the previously-mentioned issue with linear time-evolving graphs is overcome by this construction.
Indeed, once the full periodic structure has been established, the laplacian matrix needs to be computed only once, regardless of how many new time points are added. A second remark concerns the metric construction, which necessarily needs to be adapted to a periodic process. Otherwise, some counter-intuitive properties can arise. Suppose the distance \(d(u_{1},u_{2})\) is defined as in (7). Then, for any couple of points \(u_{1}=(t_{1},\underline{u}_{1},\overline{u}_{1},\delta_{e}(u_{1}))\) and \(u_{2}=(t_{2},\underline{u}_{2},\overline{u}_{2},\delta_{e}(u_{2}))\) with \(\underline{u}_{1}=\underline{u}_{2}\), \(\overline{u}_{1}=\overline{u}_{2}\) and \(\delta_{e}(u_{1})=\delta_{e}(u_{2})\), even when \(t_{1}\neq t_{2}\), the distance would be identically equal to zero. Hence, a different definition of the process \(Z\) is necessary. For a point \(u=(t,\underline{u},\overline{u},\delta_{e}(u))\in\widetilde{\boldsymbol{G}}\), we define the process \(Z\) for the periodic graph as follows: \[Z(u):=Z_{V}(u)+Z_{E}(u)+\beta W(t), \tag{15}\] where \(\beta>0\) is a given parameter, \(W\) is a standard Wiener process, \(Z_{V}\) is the same process as in the linear-time case, while \(Z_{E}\), albeit similar, presents some differences with respect to the construction given in Subsection 4.1, aimed at capturing the time structure of the periodic graph. Let \(e=(v_{1},v_{2})\in E_{S}\): we define the _lifespan_ of \(e\) as the maximal subset of \(\left\{0,\ldots,m-1\right\}\) such that:

* \(\tau(v_{1})\in\mathrm{ls}(e)\),
* \(\mathrm{ls}(e)\) is connected, i.e. if \(\tau_{1}<\tau_{2}\in\mathrm{ls}(e)\), then \(\{\tau_{1},\ldots,\tau_{2}\}\subseteq\mathrm{ls}(e)\) or \(\{\tau_{2},\ldots,m-1,0,\ldots,\tau_{1}\}\subseteq\mathrm{ls}(e)\),
* \(\forall\tau\in\mathrm{ls}(e)\), \(\exists(v_{1}^{\prime},v_{2}^{\prime})\in E_{S}\) such that \(s(v_{1}^{\prime})=s(v_{1})\), \(s(v_{2}^{\prime})=s(v_{2})\) and \(\tau(v_{1}^{\prime})=\tau(v_{2}^{\prime})=\tau\).

Figure 5 depicts this situation. Here, the lifespan of the edge \((A,B)\) at time \(\tau=0\) is \(\{0,1,2,3\}\); the lifespan of \((A,D)\) at time \(\tau=2\) is \(\{2\}\); the lifespan of \((C,D)\) at time \(\tau=3\) is \(\{0,1,3\}\). The definition of the life of any edge \(e\in E\) remains unchanged: if \(e\in E_{S}\), \[\mathrm{lf}(e):=\{(v_{1}^{\prime},v_{2}^{\prime})\in E_{S}:s(v_{1}^{\prime})= s(v_{1}),\,s(v_{2}^{\prime})=s(v_{2}),\,\tau(v_{1}^{\prime})=\tau(v_{2}^{ \prime})\in\mathrm{ls}(e)\}\] while, if \(e\in E_{T}\), \(\mathrm{lf}(e):=\{e\}\). The definition of \(Z_{E}\) is now identical to that of the linear-time graph, except for the choice of the temporal kernel \(k_{T}\). Indeed, we ought to consider that time is now cyclic in the dependence structure of the temporal layers \(\tau\in\{0,...,m-1\}\). It is reasonable to model the process underlying the temporal kernel \(k_{T}\) by means of a graphical model, as it embodies the idea of conditional independence.

Figure 5: An example of an equivalent simple graph for a periodic time-evolving graph with \(m=4\) and \(S=\{A,B,C,D\}\). The coloured edges belong to \(E_{S}\), whilst the black ones belong to \(E_{T}\).

For a given spatial edge \(e\in E_{S}\), we distinguish two cases: whether the lifespan of \(e\) is the whole temporal set \(T=\{0,...,m-1\}\) or not.

### The lifespan coincides with \(T\)

In this case, we define the covariance matrix of a zero-mean Gaussian random vector \(Z_{T}:\{0,...,m-1\}\rightarrow\mathbb{R}\) via its precision matrix.
More precisely, let \(G_{T}\) be the circulant graph with \(m\) nodes (labelled by \(\tau\in\{0,...,m-1\}\)) and \(m\) edges between adjacent nodes, as shown in Figure 6. We associate each edge with a given weight \(\rho\in\left[0,\frac{1}{2}\right)\), which represents the partial correlation between consecutive times. As a consequence, the precision matrix \(\Theta_{Z_{T}}\) is the following circulant matrix: \[\Theta_{Z_{T}}=\kappa\begin{bmatrix}1&-\rho&0&\dots&0&-\rho\\ -\rho&1&-\rho&\dots&0&0\\ 0&-\rho&1&\dots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\dots&1&-\rho\\ -\rho&0&0&\dots&-\rho&1\end{bmatrix}\]

Figure 6: Conditional dependence structure of the process \(Z_{T}\) for \(m=8\).

Here, \(\kappa>0\) is a normalising constant whose role is to make the covariance matrix \(\Sigma_{Z_{T}}:=\left(\Theta_{Z_{T}}\right)^{-1}\) a correlation matrix (namely, every entry of \(Z_{T}\) should have variance 1). Notice that the matrix \(\Sigma_{Z_{T}}\) is a symmetric circulant matrix: as a consequence, it is possible to store only its first column, which will be denoted by \(\sigma_{Z_{T}}\in\mathbb{R}^{m}\). In Figure 7, the values of the vector \(\sigma_{Z_{T}}\) are plotted for some values of \(m\) and \(\rho\).

### The lifespan does not coincide with \(T\)

In this case, the life of the edge \(e\) is interrupted. Thus, it is reasonable to consider the different parts of the life of \(e\) as independent. To this aim, we consider the subgraph of \(G_{T}\) that represents the evolution of the edge \(e\). More precisely, we remove from \(G_{T}\) all the nodes \(\tau\) for which the edge \(e\) does not exist, and we remove from \(G_{T}\) all the edges with at least one eliminated endpoint. Next, we consider all the connected components of the resulting graph and define an autoregressive model on each of them, independently of the others (similarly to the linear case of Section 4.2). More precisely, we define the covariance matrix of the process \(Z_{T}\) as a block diagonal matrix whose diagonal blocks are of the form \[\left[\begin{array}{cccc}1&\lambda&\dots&\lambda^{j-1}\\ \lambda&1&\dots&\lambda^{j-2}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda^{j-1}&\lambda^{j-2}&\dots&1\end{array}\right],\] where \(\lambda\in(-1,1)\) is the lag-1 correlation and \(j\) is the number of times \(\tau\) belonging to \(\mathrm{ls}(e)\), _i.e._ \(j:=|\mathrm{ls}(e)|\).
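Both covariance constructions take a few lines of NumPy. The sketch below is our own illustration: the normalising constant \(\kappa\) is obtained implicitly by rescaling the inverted precision matrix to unit diagonal, which is legitimate since circulant symmetry makes all diagonal entries equal.

```python
import numpy as np

def circulant_correlation(m, rho):
    """Sigma_{Z_T} for a full lifespan: invert the circulant precision
    matrix with unit diagonal and -rho on the two cyclic off-diagonals
    (rho in [0, 1/2) keeps it diagonally dominant, hence invertible),
    then rescale so that every variance equals 1."""
    Theta = np.eye(m)
    for t in range(m):
        Theta[t, (t + 1) % m] = Theta[t, (t - 1) % m] = -rho
    Sigma = np.linalg.inv(Theta)
    return Sigma / Sigma[0, 0]  # circulant symmetry: constant diagonal

def ar1_block(j, lam):
    """One diagonal block for an interrupted lifespan of size j:
    the AR(1) correlation matrix with entries lam ** |i - k|."""
    idx = np.arange(j)
    return lam ** np.abs(idx[:, None] - idx[None, :])

sigma = circulant_correlation(8, 0.45)[:, 0]  # the stored first column
print(np.round(sigma, 3))   # decays, then rises back: sigma[h] = sigma[m - h]
print(ar1_block(3, 0.7))
```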
### Second-order properties of \(Z\) in the circular case

The following result provides the analytic expression for the covariance function associated with \(Z\) in the construction of the metric associated with \(\widetilde{\mathbf{G}}\) in the periodic case.

**Proposition 4**.: _Let \(u_{1}=(t_{1},\underline{u}_{1},\overline{u}_{1},\delta_{e}(u_{1}))\in\widetilde{\mathbf{G}}\) and \(u_{2}=(t_{2},\underline{u}_{2},\overline{u}_{2},\delta_{e}(u_{2}))\in\widetilde{\mathbf{G}}\). Then the kernel of the process \(Z\) defined on \(\widetilde{\mathbf{G}}\) enjoys the following representation:_ \[k_{Z}(u_{1},u_{2}) =\mathbf{\delta}_{1}^{\top}\,L^{+}\left[(\underline{u}_{1},\overline{u}_{1}),(\underline{u}_{2},\overline{u}_{2})\right]\,\mathbf{\delta}_{2}\] \[\quad+\mathbb{1}_{\mathrm{lf}(e_{1})=\mathrm{lf}(e_{2})}\sqrt{\ell(e_{1})\ell(e_{2})}\,k_{T}\left(\tau(\underline{u}_{1}),\tau(\underline{u}_{2})\right)(\min\left(\delta_{1},\delta_{2}\right)-\delta_{1}\delta_{2})\] \[\quad+\beta^{2}\min(t_{1},t_{2}) \tag{16}\] _where \(\mathbf{\delta}_{i}:=(1-\delta_{i},\delta_{i})^{\top}\) and \(L^{+}\left[(\underline{u}_{1},\overline{u}_{1}),(\underline{u}_{2},\overline{u}_{2})\right]\) represents the \(2\times 2\) submatrix of \(L^{+}\) with rows \((\underline{u}_{1},\overline{u}_{1})\) and columns \((\underline{u}_{2},\overline{u}_{2})\)._

Notice that the only differences from expression (11) are the different choice of the temporal kernel \(k_{T}\) and the additional term \(\beta^{2}\min(t_{1},t_{2})\). The latter ensures that the same point at different times has a strictly positive distance: combining equations (13) and (16) for such points \(u_{1}\) and \(u_{2}\), we get \(d(u_{1},u_{2})=\beta^{2}\left|t_{1}-t_{2}\right|\). We conclude this section with a formal assertion regarding the mapping \(d\) introduced for the case of a periodic graph \(\widetilde{\mathbf{G}}\).

**Proposition 5**.: \((\widetilde{\mathbf{G}},d)\) _is a semi-metric space._

## 6 Reproducing kernels

### General construction principles

Variograms can be composed with certain classes of functions to create reproducing kernels associated with semi-metric spaces. A function \(\psi:[0,+\infty)\to\mathbb{R}\) is called _completely monotone_ if it is continuous on \([0,+\infty)\), infinitely differentiable on \((0,+\infty)\), and for each \(i\in\mathbb{N}\) it holds that \[(-1)^{i}\,\psi^{(i)}(x)\geq 0,\] where \(\psi^{(i)}\) denotes the \(i^{\text{th}}\) derivative of \(\psi\) and \(\psi^{(0)}:=\psi\). By Bernstein's celebrated theorem (Bernstein, 1929), completely monotone functions are the Laplace transforms of positive and bounded measures. Some examples of parametric families of completely monotone functions are listed in Table 1. The result below follows directly from arguments similar to those in Anderes et al. (2020), reported as Theorem 1 in Appendix B.

**Proposition 6**.: _Let \(\psi:[0,\infty)\to\mathbb{R}\) be continuous, completely monotone on the positive real line, and with \(\psi(0)<\infty\). Let \(d:\widetilde{\boldsymbol{G}}\times\widetilde{\boldsymbol{G}}\to\mathbb{R}\) be the mapping defined at (13). Then, the function_ \[k_{Z}(u_{1},u_{2})=\psi\left(d(u_{1},u_{2})\right),\qquad u_{1},u_{2}\in \widetilde{\boldsymbol{G}}\] _is a strictly positive definite function._

\begin{table} \begin{tabular}{|c|c|c|} \hline Type & \(\psi(x)\) & Parameter range \\ \hline Power exponential & \(e^{-\beta x^{\alpha}}\) & \(0<\alpha\leq 1\), \(\beta>0\) \\ Matérn & \(\dfrac{2^{1-\alpha}}{\Gamma(\alpha)}(\beta x)^{\alpha}K_{\alpha}(\beta x)\) & \(0<\alpha\leq\frac{1}{2}\), \(\beta>0\) \\ Generalised Cauchy & \((\beta x^{\alpha}+1)^{-\xi/\alpha}\) & \(0<\alpha\leq 1\), \(\beta>0\), \(\xi>0\) \\ Dagum & \(1-\left(\dfrac{\beta x^{\alpha}}{1+\beta x^{\alpha}}\right)^{\xi/\alpha}\) & \(0<\alpha\leq 1\), \(0<\xi\leq 1\), \(\beta>0\) \\ \hline \end{tabular} \end{table} Table 1: Examples of completely monotone functions \(0\leq x\mapsto\psi(x)\) such that \(\psi(0)=1\). Here, \(K_{\alpha}\) denotes the modified Bessel function of the second kind.
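As an illustration of Proposition 6, the following sketch (Python/NumPy; all names are ours) implements the families of Table 1, except the Matérn, which requires a Bessel function, and composes any of them with a semi-metric to produce a kernel. The `d` argument is meant to be the distance (13), e.g. the `distance` function sketched in Section 4.

```python
import numpy as np

def power_exponential(x, alpha=1.0, beta=1.0):
    """psi(x) = exp(-beta * x**alpha), 0 < alpha <= 1, beta > 0."""
    return np.exp(-beta * x ** alpha)

def generalised_cauchy(x, alpha=1.0, beta=1.0, xi=1.0):
    """psi(x) = (beta * x**alpha + 1)**(-xi / alpha)."""
    return (beta * x ** alpha + 1.0) ** (-xi / alpha)

def dagum(x, alpha=1.0, beta=1.0, xi=1.0):
    """psi(x) = 1 - (beta x^alpha / (1 + beta x^alpha))**(xi / alpha)."""
    y = beta * x ** alpha
    return 1.0 - (y / (1.0 + y)) ** (xi / alpha)

def kernel_from_distance(psi, d, **psi_params):
    """Proposition 6: compose a completely monotone psi with the
    semi-metric d to obtain a strictly positive definite kernel."""
    return lambda u1, u2, **d_args: psi(d(u1, u2, **d_args), **psi_params)
```

For instance, up to our naming, the generalised Cauchy covariance of the right panel of Figure 10 corresponds to `kernel_from_distance(generalised_cauchy, distance, alpha=1, beta=5, xi=0.5)`.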
Proposition 6 provides a very easy recipe to build kernels over time-evolving graphs, whatever the temporal structure (linear or periodic). Any element of Table 1 is a good candidate for such a composition. We do not report the corresponding algebraic forms, for the sake of brevity. Instead, we concentrate on illustrating how these covariances work through two practical examples. We believe that the free parameters and the large number of analytically tractable completely monotone functions provide a wide range of models that could fit several real-world frameworks.

### Linear time

We start by considering the graph in Figure 8. Here, we have \(m=3\) time instants. Further, \[V=\left\{A_{0},B_{0},C_{0},D_{0},A_{1},B_{1},C_{1},D_{1},A_{2},C_{2},D_{2} \right\}.\] We focus on the distances as well as the covariances between the points \(A_{0}=(A_{0},B_{0},\delta_{A_{0}}=0)\), \(P:=(C_{0},D_{0},\delta_{P}=0.8)\) and \(Q:=(C_{2},D_{2},\delta_{Q}=0.5)\). All the spatial edges \(E_{S}\) have weight \(1\), whilst the temporal edges have weight \(\alpha>0\). Finally, we use \(k_{T}\) as in (12), with \(\lambda\) a free parameter.

Figure 8: Equivalent simple graph taken as an example for the linear-time case.

The adjacency matrix follows (here \(\cdot\) stands for \(0\)). \[\begin{bmatrix}\cdot&1&\cdot&1&\alpha&\cdot&\cdot&\cdot&\cdot&\cdot\\ 1&\cdot&1&\cdot&\cdot&\alpha&\cdot&\cdot&\cdot&\cdot\\ \cdot&1&\cdot&1&\cdot&\cdot&\alpha&\cdot&\cdot&\cdot\\ 1&\cdot&1&\cdot&\cdot&\cdot&\alpha&\cdot&\cdot&\cdot\\ \alpha&\cdot&\cdot&\cdot&1&\cdot&\cdot&\alpha&\cdot&\cdot\\ \cdot&\alpha&\cdot&\cdot&1&\cdot&1&\cdot&\cdot&\cdot\\ \cdot&\alpha&\cdot&\cdot&1&\cdot&1&\cdot&\alpha&\cdot\\ \cdot&\cdot&\cdot&\alpha&\cdot&\cdot&1&\cdot&\cdot&\alpha\\ \cdot&\cdot&\cdot&\alpha&\cdot&\cdot&\cdot&\cdot&\cdot&1\\ \cdot&\cdot&\cdot&\cdot&\cdot&\alpha&\cdot&\cdot&\cdot&1\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\alpha&1&1&\cdot\end{bmatrix}\] Figure 9 clearly shows the effect the temporal edge parameter \(\alpha\) has on the distances: while it has a considerable impact on the distances \(d(A_{0},Q)\) and \(d(P,Q)\), it shows a negligible effect on \(d(A_{0},P)\). This is reasonable given the graph structure: \(A_{0}\) and \(P\) belong to the same layer (\(t=0\)) and they are connected by both the paths \(A_{0},D_{0},P\) and \(A_{0},B_{0},C_{0},P\), which lie entirely in the layer \(t=0\) (and therefore do not change with \(\alpha\)). On the other hand, \(Q\) can be connected to \(A_{0}\) and \(P\) only via paths that include temporal edges. As a consequence, if \(\alpha\to 0^{+}\), both \(d(A_{0},Q)\) and \(d(P,Q)\) go to infinity. The plot on the right of Figure 9 shows the effect of the correlation parameter \(\lambda\) as well. Whilst it does not influence the distances involving \(A_{0}\) (since it is a vertex), as it increases it reduces the distance between \(P\) and \(Q\). Clearly, the effect is more significant for large values of the parameter \(\alpha\). Indeed, when \(\alpha\) is small, the distances between nodes at different time instants are large. As a consequence, the second line of equation (11) becomes negligible when compared to the first one. Figure 10 shows the resulting effect of the parameter \(\alpha\) on the correlations between \(A_{0}\), \(P\) and \(Q\), generated by composing two completely monotone functions taken from Table 1 with the distances shown in Figure 9.
Figure 10: Generated covariances between the points \(A_{0}\), \(P\) and \(Q\). Left: exponential kernel with parameters (\(\alpha=1,\beta=1\)) (see Table 1). Right: generalised Cauchy kernel with parameters (\(\alpha=1,\beta=5,\xi=0.5\)) (see Table 1).

Figure 9: Distances between the points \(A_{0}\), \(P\) and \(Q\). Notice that while \(d(A_{0},P)\) and \(d(A_{0},Q)\) (left) do not depend on \(\lambda\), the distance \(d(P,Q)\) (right) decreases as \(\lambda\) increases.

### Circular time

We are going to analyse the time-evolving periodic graph depicted in Figure 11. In this case, we have \(m=8\) and \[V=\left\{A_{0},B_{0},A_{1},B_{1},\ldots,A_{7},B_{7}\right\}.\] We will compare the distances and the covariances between the points \(P_{0}:=(0,A_{0},B_{0},\delta_{P_{0}}=0.5)\) and \(P_{t}:=(t,A_{\tau},B_{\tau},\delta_{P_{t}}=0.5)\), where \(t\in\mathbb{N}\) and \(\tau\equiv t\pmod{m}\). Here, all the spatial edges have weight \(1\), whilst the temporal ones have weight \(\alpha>0\). We use the temporal kernel \(k_{T}\) as described in Subsection 5.1, with \(\rho\) and \(\beta\) free parameters.

Figure 11: Equivalent simple graph taken as an example for the circular-time case.

The distances and covariances in Figure 12 show the effect of the parameter \(\beta\) on our construction. It adds a linear component \(\beta^{2}t\), which allows one to calibrate the distance (and, as a result, the covariance) between the points at different time instants. In this way, the effect of the periodicity is strengthened by setting a low \(\beta\) and becomes negligible when \(\beta\) grows. Furthermore, notice the spikes shown by the covariance functions: they perfectly embody the periodic setting of a process, as introduced in Subsection 1.3.

Figure 12: Distances (left) and covariances (right) for the graph in Figure 11 between the points \(P_{0}\) and \(P_{t}\) for \(\rho=0.45\) and \(\alpha=1\). Covariances have been generated via the exponential kernel with parameters \(\alpha=0.5\) and \(\beta=0.5\) (see Table 1).

Figure 13: Distances (left) and covariances (right) for the graph in Figure 11 between the points \(P_{0}\) and \(P_{t}\) for \(\alpha=10\) and \(\beta=0.3\). Covariances have been generated via the Dagum kernel with parameters \(\alpha=1\), \(\beta=2\) and \(\xi=0.5\) (see Table 1).

It is worth remarking that, although isotropic covariances are decreasing functions of the spatial distance, Figures 12 and 13 show valid covariance functions: the distance in our setting is completely different from the Euclidean distance on \(\mathbb{R}^{n}\), as it takes into account the spatio-temporal structure of the time-evolving graph. In Figure 13, it is possible to visualise the effect of the partial correlation parameter \(\rho\in[0,\frac{1}{2})\). First, notice that its role is particularly significant when the weight \(\alpha\) is high. Indeed, for low values of \(\alpha\), the covariance structure of the vertices given by the inverse laplacian matrix is dominant. Yet, when \(\alpha\) is high, the nodes at different time instants are considered close to each other and the resulting distances given by the first line of (16) alone are low. Thus, the parameter \(\rho\) (which enters the second line of (16)) has a greater influence. Clearly, the greater \(\rho\), the lower the distance, as it correlates different Brownian bridge realisations.
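To close the section, the periodic covariance (16) can be assembled from the pieces sketched earlier (the `kernel_Z` of Section 4 and the `circulant_correlation` of Subsection 5.1); the encoding of points and all names remain our own illustrative assumptions, with `u['tau']` carrying the layer label and `u['t']` the true time.

```python
def kT_circulant(Sigma):
    """Temporal kernel on layer lags: by circulant symmetry,
    Sigma[tau1, tau2] only depends on |tau1 - tau2| mod m."""
    m = len(Sigma)
    return lambda h: Sigma[0, int(h) % m]

def kernel_Z_periodic(u1, u2, Lplus, idx, kT, same_life, beta):
    """Covariance (16): the linear-time kernel evaluated with layer
    labels instead of times, plus the Wiener component
    beta**2 * min(t1, t2) driven by the true times."""
    v1, v2 = dict(u1, t=u1['tau']), dict(u2, t=u2['tau'])
    return (kernel_Z(v1, v2, Lplus, idx, kT, same_life)
            + beta ** 2 * min(u1['t'], u2['t']))
```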
## 7 Conclusion: impact of this research

The research presented in this paper provides the foundations for a strand of the literature that had considered network evolution only to a very limited extent. The impact of this research is multifarious:

1. Temporal evolution of networks occupies many scientists in both theoretical and applied disciplines. In network design problems (NDP), the optimal selection of subgraphs is a major issue. Several applications of generalised NDP arise in the fields of telecommunications, transportation and biology, and the reader is referred to Feremans et al. (2003) and the references therein. Dealing with these kinds of problems under the framework presented here will be a major task.
2. Most of the approaches to generalised networks are computer-based, and the attention has largely been put on algorithmic complexity rather than on mathematical and statistical accuracy; see Harrington and Mautz (1976) and Glover et al. (1978), for instance. Here, we have provided an analytic effort, and the computational burden has been taken care of thanks to the assumption of temporal Markovianity.
3. Several emerging fields are increasingly working over generalised networks. Yet, temporal evolution has been considered under overly re
2309.14593
Short Second Moment Bound for GL(2) $L$-functions in $q$-Aspect
We prove a Lindel\"{o}f-on-average upper bound for the second moment of the $L$-functions associated to a level 1 holomorphic cusp form, twisted along a coset of a subgroup of the characters modulo $q^{2/3}$ (where $q = p^3$ for some odd prime $p$). This result should be seen as a $q$-aspect analogue of Anton Good's (1982) result on upper bounds of the second moment of cusp forms in short intervals.
Agniva Dasgupta
2023-09-26T00:58:50Z
http://arxiv.org/abs/2309.14593v2
# Short second moment bound for GL(2) \(L\)-functions in the level aspect

###### Abstract.

We prove a Lindelöf-on-average upper bound for the second moment of the \(L\)-functions associated to a level \(1\) holomorphic cusp form, twisted along a coset of a subgroup of the characters modulo \(q^{2/3}\) (where \(q=p^{3}\) for some odd prime \(p\)). This result should be seen as a \(q\)-aspect analogue of Anton Good's (1982) result on upper bounds of the second moment of cusp forms in short intervals.

## 1. Introduction

### Statement of Results

The study of moments of \(L\)-functions at the central point, \(s=\frac{1}{2}\), has been an important area of research in analytic number theory. While originally this interest grew due to connections with the Lindelöf Hypothesis, the estimation of similar moments for a family of \(L\)-functions has now become an interesting point of study in its own right. One key result in this area is the following proposition, due to Iwaniec in 1978 (Theorem 3 in [12]).

**Proposition 1.1**.: _For \(T\geq 2\) and \(\varepsilon>0\),_ \[\int_{T}^{T+T^{\frac{2}{3}}}|\zeta\big{(}\tfrac{1}{2}+it\big{)}|^{4}dt\ll_{ \varepsilon}T^{\frac{2}{3}+\varepsilon}. \tag{1.1}\]

Proposition 1.1 is one of the first examples of an upper bound on a 'short' moment of \(L\)-functions. Most of the earlier results were concerned with estimating integrals of the type \(\int_{0}^{T}\lvert\zeta\big{(}\tfrac{1}{2}+it\big{)}\rvert^{k}dt\), for some positive even integer \(k\). Note that (1.1) is consistent with the Lindelöf Hypothesis, and is an example of a Lindelöf-on-average upper bound. For level \(1\) holomorphic cusp forms, an analogous result follows from Good [10].

**Proposition 1.2**.: _Let \(f\) be a fixed level \(1\) holomorphic cusp form. For \(T\geq 2\) and \(\varepsilon>0\),_ \[\int_{T}^{T+T^{\frac{2}{3}}}|L\big{(}f,\tfrac{1}{2}+it\big{)}|^{2}dt\ll_{f, \varepsilon}T^{\frac{2}{3}+\varepsilon}. \tag{1.2}\]

Equations (1.1) and (1.2) imply a Weyl-type subconvexity bound for the respective \(L\)-functions, \(\zeta(\cdot)\) and \(L(f,\cdot)\). In fact, for holomorphic cusp forms, [10] was the first instance where such a result was obtained. In their paper on the Weyl bound for Dirichlet \(L\)-functions [13], Petrow and Young proved a \(q\)-aspect analogue of Proposition 1.1. A special case of Theorem 1.4 (with \(q=p^{3},d=p^{2}\)) in [13] can be stated as follows.

**Proposition 1.3**.: _Let \(q=p^{3}\), for an odd prime \(p\). Let \(\alpha\) be a fixed primitive character mod \(q\). For any \(\varepsilon>0\), we have_ \[\sum_{\psi\,(\mathrm{mod}\ q^{\frac{2}{3}})}|L\big{(}\alpha\cdot\psi,\tfrac{1}{2 }\big{)}|^{4}\ll_{\varepsilon}q^{\frac{2}{3}+\varepsilon}. \tag{1.3}\]

We discuss this analogy in more detail in Section 1.4. In this paper, following up on the ideas in [10], we derive a \(q\)-aspect analogue of Good's result in Proposition 1.2. We prove the following theorem.

**Theorem 1.4**.: _Let \(f\) be a level \(1\) cusp form. Let \(q=p^{3}\), for an odd prime \(p\), and let \(\alpha\) be a primitive character modulo \(q\). For any \(\varepsilon>0\),_ \[\sum_{\psi\,(\mathrm{mod}\ q^{\frac{2}{3}})}|L\big{(}f\otimes(\alpha\cdot\psi),\tfrac{1}{2}\big{)}|^{2}\ll_{f,\varepsilon}q^{\frac{2}{3}+\varepsilon}. \tag{1.4}\]

We note that, similar to Propositions 1.1, 1.2 and 1.3, this is also a Lindelöf-on-average bound. Also, even though Theorem 1.4 is stated for \(q=p^{3}\), we expect this to generalise to a short moment exactly analogous to Theorem 1.4 of [13].
While the authors of [13] needed the fully general result to prove the Weyl bound for all Dirichlet \(L\)-functions (see also [10]), we do not have any such demand. So, for the sake of simplicity, we choose to work with \(q=p^{3}\). Using a lower bound on the associated first moment, we can deduce from Theorem 1.4 the following result about the non-vanishing of \(L\)-functions within this family.

**Theorem 1.5**.: _Let \(f,p,q,\alpha\) be as in Theorem 1.4. Then for \(\varepsilon>0\),_ \[\#\{\psi\,(\mathrm{mod}\ q^{\frac{2}{3}});\ L\big{(}f\otimes(\alpha\cdot\psi),\tfrac{1}{2}\big{)}\neq 0\}\gg_{\varepsilon}p^{2-\varepsilon}. \tag{1.5}\]

Moments of a family of \(L\)-functions encode a lot of information about the individual members of the family. As one example, the strength of the second moment bound in (1.4) is enough to immediately deduce a Weyl-type subconvexity bound for the individual \(L\)-functions in this family. This result was originally proven by Munshi and Singh in 2019 (Theorem 1.1 in [11] with \(r=1,t=0\)). The authors use a completely different approach: they do not compute moments for an associated family, relying instead on a novel variant of the circle method, first introduced in [12]. We state their result as a corollary.

**Corollary 1.6**.: _Let \(f,p,q,\alpha\) be as in Theorem 1.4. Let \(\psi\) be a character modulo \(q^{\frac{2}{3}}\). Then for any \(\varepsilon>0\),_ \[L\big{(}f\otimes(\alpha\cdot\psi),\tfrac{1}{2}\big{)}\ll_{f,\varepsilon}q^{ \frac{1}{3}+\varepsilon}. \tag{1.6}\]

### Notations

We use standard conventions of analytic number theory. We use \(\varepsilon\) to denote an arbitrarily small positive constant. For brevity of notation, we allow \(\varepsilon\) to change depending on the context. The expression \(F\ll G\) means there exists some constant \(k\) for which \(|F|\leq k\cdot G\) for all the relevant \(F\) and \(G\). We use \(F\ll_{\varepsilon}G\) to emphasize that the implied constant \(k\) depends on \(\varepsilon\) (it may also depend on other parameters). For error terms, we often use the big \(O\) notation, so \(f(x)=O(g(x))\) implies that \(f(x)\ll g(x)\) for sufficiently large \(x\). We use the terms 'small' or 'very small' to refer to error terms of size \(O_{A}(p^{-A})\), for any arbitrarily large \(A\). By a dyadic interval, we mean an interval of the type \([2^{\frac{k}{2}}M,2^{\frac{k+1}{2}}M]\) for some \(k\in\mathbb{Z}\), \(M\in\mathbb{R}\). We also use \(m\asymp M_{0}\) to mean that \(m\) ranges over the dyadic interval \([M_{0},2M_{0}]\). We also use \({\sum\nolimits^{*}}_{n}\) to denote the sum over \(n\) restricted to \((n,p)=1\). Similarly, \({\sum\nolimits^{*}}_{a(\text{mod }c)}\) is used for the sum over residues \(a(\text{mod }c)\) restricted to \((a,c)=1\), and \({\sum\nolimits^{*}}_{\chi(p)}\) for the sum over primitive characters \(\chi(\text{mod }p)\). As usual, \(e(x)=e^{2\pi ix}\). Also, \(e_{p}(x)=e(\frac{x}{p})=e^{2\pi i\frac{x}{p}}\).

### Sketch of Proof

We give a brief sketch of the proof of Theorem 1.4 in this section. In this sketch, we use the symbol '\(\approx\)' between two expressions to mean that the expression on the left can be rewritten as an expression similar to the one on the right, up to an acceptable error term. Using an approximate functional equation and orthogonality of characters (see Section 3), it suffices to prove the following result on an associated shifted convolution problem.

**Theorem 1.7**.: _Let \(f,p,q,\alpha\) be as in Theorem 1.4. Let \(\lambda_{f}(\cdot)\) be the coefficients of the associated Dirichlet series.
Let_ \[S(N,\alpha)\coloneqq{\sum\nolimits_{l,n}}^{*}\lambda_{f}(n+p^{2}l)\overline{ \lambda_{f}}(n)\alpha(n+p^{2}l)\overline{\alpha(n)}w_{N}(n+p^{2}l)w_{N}(n), \tag{1.7}\] _where \(N\ll_{\varepsilon}p^{3+\varepsilon}\), and \(w_{N}(\cdot)\) is some smooth function supported on \([N,2N]\) satisfying \(w_{N}^{(j)}(x)\ll N^{-j}\). We then have_ \[S(N,\alpha)\ll_{f,\varepsilon}Np^{\varepsilon}. \tag{1.8}\]

**Remark 1.8**.: _As \(w_{N}(\cdot)\) is supported in \([N,2N]\), we have that, in (1.7), \(0<l\leq\frac{N}{p^{2}}\), and \(n\asymp N\)._

The trivial bound on \(S(N,\alpha)\) is \(\frac{N^{2}}{p^{2}}p^{\varepsilon}\ll Np^{1+\varepsilon}\). To improve on this, we first note that \(S(N,\alpha)\) displays a conductor dropping phenomenon, notably \[\alpha(n+p^{2}l)\overline{\alpha(n)}=e_{p}(a_{\alpha}l\overline{n}) \tag{1.9}\] for some non-zero \(a_{\alpha}(\text{mod }p)\). Notice that, when \(p\mid l\) in (1.7), \(S(N,\alpha)\) does not have any cancellations. However, we are saved by the fact that the number of such terms is \(O(Np^{\varepsilon})\). So for the rest of the sketch, we assume \((l,p)=1\). In order to separate the variables \(n\) and \((n+p^{2}l)\), we introduce a delta symbol (see Section 2.3). This introduces a new averaging variable \(c\), and once again we split the resulting expression according to whether \(\gcd(c,p)=1\) or not. The former case requires more work, and we focus on this term for the sketch. We have \[S(N,\alpha) \approx{\sum\nolimits_{l\leq\frac{N}{p^{2}}}}^{*}{\sum\nolimits_{ c\leq\sqrt{N}}}^{*}{\sum\nolimits_{a(\text{mod }c)}}^{*}\int_{-\infty}^{\infty}g_{c}(v)e(-p^{2}lv)\] \[\cdot\sum_{m\asymp N}\lambda_{f}(m)e\Big{(}\frac{am}{c}\Big{)}w_{ N}(m)e(mv){\sum\nolimits_{n\asymp N}}^{*}\overline{\lambda}_{f}(n)e_{p}(a_{ \alpha}l\overline{n})e\bigg{(}\frac{-an}{c}\bigg{)}w_{N}(n)e(-nv)\ dv. \tag{1.10}\] Here, \(g_{c}(v)\) is a smooth function which is small when \(v\gg\frac{1}{c\sqrt{N}}\). We can now use the Voronoi summation formula (see Sec. 2.5) for the \(m\) and \(n\) sums in (1.10). Note that, as there is an extra additive character \(e_{p}(a_{\alpha}l\overline{n})\) in the \(n\) sum, we need to use a modified Voronoi summation formula here (see Prop. 2.17). While this modification does introduce some extra terms in the final expression, these terms are easily dealt with (see Section 6.1). We get that \[S(N,\alpha)\approx\sideset{}{{}^{*}}{\sum}_{\begin{subarray}{c}l\leq\frac{N}{p^{2} }\\ m\ll p^{\varepsilon}\\ n\ll p^{2+\varepsilon}\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}} (n)}{p^{2}}\sideset{}{{}^{*}}{\sum}_{c\leq\sqrt{N}}\frac{1}{c^{2}}S(\overline {p}^{2}n-m,-p^{2}l;c)\mathrm{Kl}_{3}(-n\overline{c}^{2}a_{\alpha}l,1,1;p)I_{N} (c,l,m,n), \tag{1.11}\] with \(S(a,b;c)=\sum_{mn\equiv 1(\mathrm{mod}\ c)}e\big{(}\frac{am+bn}{c}\big{)}\), and \(\mathrm{Kl}_{3}(x,y,z;c)=\sum_{pqr\equiv 1(\mathrm{mod}\ c)}e\big{(}\frac{px+ qy+rz}{c}\big{)}\) denoting the Kloosterman and hyper-Kloosterman sums, respectively. Also, \(I_{N}(\cdot)\) is an integral of special functions (see (4.24)) which trivially satisfies \(I_{N}(c,l,m,n)\ll_{f,\varepsilon}\frac{1}{c}N^{\frac{3}{2}+\varepsilon}\). Here, we should indicate that while the original sums were of length \(N\) each (and \(N\ll p^{3+\varepsilon}\)), the dual sums are significantly shorter: one is of length \(m\ll p^{\varepsilon}\), and the other \(n\ll p^{2+\varepsilon}\). Even though this represents significant savings (by a factor of \(\frac{N^{2}}{p^{2}}\)), it is still not sufficient.
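For concreteness, the exponential sums just introduced, and the multiplicative decomposition of \(\mathrm{Kl}_{3}\) used in the next step, can be checked numerically for small moduli. The following sketch is our own illustration (pure Python, naive enumeration). With the Gauss sum convention \(\tau(\chi)=\sum_{a}\chi(a)e_{p}(a)\) used in the code, orthogonality of characters gives \(\mathrm{Kl}_{3}(r,1,1;p)=\phi(p)^{-1}\sum_{\chi}\overline{\chi}(r)\tau(\chi)^{3}\), which matches the decomposition displayed below up to the relabelling \(\chi\leftrightarrow\overline{\chi}\).

```python
import cmath
from math import gcd

def e(x):
    """e(x) = exp(2 pi i x)."""
    return cmath.exp(2j * cmath.pi * x)

def kloosterman(a, b, c):
    """S(a, b; c): sum of e((a x + b x^{-1}) / c) over x coprime to c."""
    return sum(e((a * x + b * pow(x, -1, c)) / c)
               for x in range(1, c) if gcd(x, c) == 1)

def hyper_kloosterman3(x, y, z, c):
    """Kl_3(x, y, z; c): sum of e((p1 x + p2 y + p3 z) / c) over
    p1 p2 p3 = 1 (mod c), by naive O(c^2) enumeration."""
    units = [u for u in range(1, c) if gcd(u, c) == 1]
    return sum(e((p1 * x + p2 * y + pow(p1 * p2, -1, c) * z) / c)
               for p1 in units for p2 in units)

def characters_mod_p(p):
    """All Dirichlet characters mod a prime p, as dicts on the units,
    built from a primitive root g via chi_j(g^k) = e(j k / (p - 1))."""
    g = next(a for a in range(2, p)
             if len({pow(a, k, p) for k in range(p - 1)}) == p - 1)
    dlog = {pow(g, k, p): k for k in range(p - 1)}
    return [{u: e(j * dlog[u] / (p - 1)) for u in dlog} for j in range(p - 1)]

def gauss_sum(chi, p):
    return sum(chi[a] * e(a / p) for a in range(1, p))

p, r = 7, 3
lhs = hyper_kloosterman3(r, 1, 1, p)
rhs = sum(gauss_sum(chi, p) ** 3 * chi[pow(r, -1, p)]  # chi-bar(r) = chi(r^{-1})
          for chi in characters_mod_p(p)) / (p - 1)
print(abs(lhs - rhs))                           # ~ 0 up to floating point error
print(abs(kloosterman(1, 1, 7)), 2 * 7 ** 0.5)  # Weil: |S(1, 1; 7)| <= 2 sqrt(7)
```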
If we use Weil's bound for the Kloosterman sum, and Deligne's bound for the hyper-Kloosterman sum, along with the trivial bound on \(I_{N}(\cdot)\), we get (1.11) is bounded by \(Np^{\frac{3}{4}+\varepsilon}\). While this beats the trivial bound for (1.7), it still falls short of the desired bound by a factor of \(p^{-\frac{3}{4}}\). In order to get more cancellations, we want to use a spectral decomposition for the \(c\)-sum. We do this by using the Bruggeman Kuznetsov formula (Prop. 2.18). Note that if \(\gcd(r,p)=1\), the hyper-Kloosterman sum can be decomposed as \[\mathrm{Kl}_{3}(r,1,1;p)=\frac{1}{\phi(p)}\sum_{\chi(p)}\tau(\chi)^{3}\chi(r).\] Using this we rewrite (1.11) as \[\sideset{}{{}^{*}}{\sum}_{\begin{subarray}{c}l\leq\frac{N}{p^{2}}\\ m\ll p^{e}\\ n\ll p^{2+\varepsilon}\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}} (n)}{\phi(p)p^{2}}\sum_{\chi(p)}\tau(\chi)^{3}\chi(-na_{\alpha}l)\sideset{}{ {}^{*}}{\sum}_{c}\frac{1}{c^{2}}S(\overline{p}(n-p^{2}m),-pl;c)\overline{\chi }^{2}(c)I_{N}(c,l,m,n).\] We now use the Bruggeman-Kuznetsov formula for \(\Gamma_{0}(p)\) with central character \(\overline{\chi}^{2}\) at the cusps \(\infty\) and \(0\). We use two different versions of the integral transforms, depending on whether the integral \(I_{N}(c,l,m,n)\) has oscillatory behavior, or not (see Section 2.6). We focus on the non-oscillatory case here, the other one is similar. We have \[S(N,\alpha)\approx\frac{N}{p^{2}}\sideset{}{{}^{*}}{\sum}_{\chi(p) }\chi(a_{\alpha})\frac{\tau(\chi)^{3}}{p^{3}}\sum_{t_{j}}\sum_{\pi\in\mathcal{ H}_{it_{j}}(p,\chi^{2})}L(\tfrac{1}{2},\overline{\pi}\otimes\chi)\sideset{}{{}^{*}}{ \sum}_{\begin{subarray}{c}m\ll p^{e}\\ n\ll p^{2+\varepsilon}\end{subarray}}\frac{\overline{\lambda_{f}}(n)\overline{ \lambda_{\pi}}(|p^{2}m-n|)\chi(-n)}{\sqrt{|p^{2}m-n|}}\\ +(\text{Holomorphic terms})+(\text{Eisenstein terms}). \tag{1.12}\] Here \(\mathcal{H}_{it}(n,\psi)\) denotes the set of Hecke-Maass newforms of conductor \(n\), central character \(\psi\), and spectral parameter \(it\). Analysing the associated integral transforms, it suffices to restrict (1.12) to when \(t_{j}\ll p^{\varepsilon}\). The contribution of the holomorphic and Eisenstein terms are similar to the Maass form term written down here. Using the Cauchy-Schwarz inequality, it then suffices to bound the sums \[\sum_{t_{j}\ll p^{e}}\sum_{\chi(p)}\!^{*}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^ {2})}\!\left|L(\tfrac{1}{2},\overline{\pi}\otimes\chi)\right|^{2}\,,\ \ \sum_{t_{j}\ll p^{e}}\sum_{\chi(p)}\!^{*}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^ {2})}\sum_{n\ll p^{2+e}}\left|\frac{\overline{\lambda}_{f}(n)\overline{\lambda}_ {\pi}(|p^{2}m-n|)\chi(-n)}{\sqrt{|p^{2}m-n|}}\right|^{2}.\] Using the fact that \(\overline{\pi}\otimes\chi\in\mathcal{H}_{it_{j}}(p^{2},1)\), we can get the required bounds by applying the spectral large sieve inequality on each of the two sums. The details can be seen in Section 5. If we compare our proof and the proof of Prop 1.3, we can see that the authors in [10] do not use a delta symbol for the shifted convolution sum at the beginning, instead relying on an approximate functional equation-type formula for the divisor function. A more significant difference can be observed later, by considering the shape of the terms in (1.12) and equation (1.20) in [10]. Unlike our case, the authors get a product of three \(L\)-functions that they then bound using Holder's inequality. 
This reduces their problem to bounding the fourth moment of L -functions, which is accomplished via the use of the spectral large sieve inequality. ### Short Moments Theorem 1.4 joins a growing list of results on short moments of families of \(L\)-functions. The general idea here is to work with a subfamily of \(L\)-functions that exhibit a conductor lowering phenomenon. Say, we start with a family \(\mathcal{F}\) of analytic \(L\)-functions; we choose a subfamily \(\mathcal{F}_{0}\), such that the quantity \(C(\pi_{1}\otimes\overline{\pi_{2}})\), where \(C(\pi)\) denotes the analytic conductor of \(\pi\), is of a comparatively smaller size, when \(\pi_{1},\pi_{2}\) are in \(\mathcal{F}_{0}\). (See Section 1.5 in [10] for a more detailed discussion on this). We see an archimedean version of this in Propositions 1.1 and 1.2. The full family of \(L\)-functions considered in Proposition 1.2, for example, is \(\mathcal{F}=\{L(f\otimes|\cdot|^{it},\tfrac{1}{2}),T\leq t\leq\ 2T\}.\) Here \(f\) is a level 1 cusp form. The analytic conductor of \(L(f\otimes|\cdot|^{it_{1}}\otimes\overline{f}\otimes|\cdot|^{it_{2}})\) is proportional to \(|t_{1}-t_{2}|^{4}\). Now, for arbitrary \(t_{1},t_{2}\) in \([T,2T]\), this can be as high as \(T^{4}\). However, if we restrict to the subfamily, \(\mathcal{F}_{0}=\{L(f\otimes|\cdot|^{it},\tfrac{1}{2}),T\leq t\leq\ T+T^{\tfrac {2}{3}}\}\) - as considered in (1.2), we have that \(|t_{1}-t_{2}|^{4}\leq T^{\tfrac{8}{3}}\), which is significantly smaller than \(T^{4}\). This leads to a conductor lowering analogous to (1.9). For our work, we start with the family, \(\mathcal{F}=\{L(f\otimes\chi,\tfrac{1}{2}),\chi(\text{mod }q)\}\), where \(q=p^{3}\). Using some of the results in [11], we know that the analytic conductor of \(L\Big{(}(f\otimes\chi_{1}\otimes\overline{(f\otimes\chi_{2})},\tfrac{1}{2} \Big{)}\) depends on the fourth power of the conductor of \((\chi_{1}\overline{\chi}_{2})\). Now, for arbitrary \(\chi_{1}\), and \(\chi_{2}\), this can be as high as \(q^{4}\). To get a short moment in the level aspect, we choose a subfamily of \(\mathcal{F}\) that lowers this significantly. We consider the subfamily \(\mathcal{F}_{0}=\{L(f\otimes(\alpha\cdot\psi),\tfrac{1}{2}),\psi(\text{mod }q^{\tfrac{ 2}{3}})\}\), for a fixed primitive character \(\alpha(\text{mod }q)\).Now, the analytic conductor of \(L\Big{(}(f\otimes(\alpha\cdot\psi_{1}))\otimes\overline{(f\otimes(\alpha \cdot\psi_{2}))},\tfrac{1}{2}\Big{)}\) (more accurately, of \(L\big{(}(f\otimes\overline{f}\otimes(\psi_{1}\overline{\psi}_{2}),\tfrac{1}{2} \big{)}\big{)}\)) depends on \(\big{(}\text{conductor}(\psi_{1}\overline{\psi}_{2})\big{)}^{4}\leq q^{\tfrac{8} {3}}\), which is significantly lower than \(q^{4}\). Short moments in the level aspect, in a \(GL(1)\) setting, have also been considered in [14], and in [15]. Another similar application was considered along Galois orbits in [15]. ### Acknowledgements I am deeply grateful to my doctoral advisor, Matthew P. Young, for suggesting this problem, and for engaging in extensive discussions on this subject throughout the process of working on this paper. ## 2. Preliminary Results from Analytic Number Theory In this section, we note down some of the analytic number theory preliminaries we will use later. We omit proofs in most cases. Interested readers can check the references listed next to the results. ### Approximate Functional Equation We state a version of the approximate functional equation for analytic \(L\)-functions. 
For proofs, see Theorem 5.3 and Proposition 5.4 in [13]. **Proposition 2.1**.: _Let \(X>0\), and \(L(f,s)\) be an \(L\)-function given by \(L(f,s)=\sum_{n\geq 1}\frac{\lambda_{f}(n)}{n^{s}}\) when \(\Re(s)>1\). If the completed \(L-\)function \(\Lambda(f,s)\) is entire, then we have the following approximation for \(L\left(f,s\right)\), when \(0\leq\Re(s)\leq 1\) -_ \[L\left(f,s\right)=\sum_{n}\frac{\lambda_{f}(n)}{n^{s}}V\left(\frac{n}{X\sqrt{ q}}\right)+\varepsilon\left(f,s\right)\sum_{n}\frac{\overline{\lambda_{f}}(n)}{ n^{1-s}}V\left(\frac{nX}{\sqrt{q}}\right). \tag{2.1}\] _Here, \(q\) is the conductor of \(L(f,s)\), and \(V(y)\) is a smooth function that satisfies the following bounds -_ \[V(y)\ll\left(1+\frac{y}{\sqrt{\mathfrak{q}_{\infty}}}\right)^{-A}, \tag{2.2}\] _where \(q\cdot\mathfrak{q}_{\infty}\) is the analytic conductor of \(L(f,s)\) and \(A>0\)._ As \(|\varepsilon\left(f,\frac{1}{2}\right)|=1\), we have the following immediate corollary (using \(s=\frac{1}{2},X=1\)), **Corollary 2.2**.: _Let \(L(f,s),q,\lambda\) be as above. For any fixed \(\varepsilon>0,A>0\),_ \[\left|L\left(f,\tfrac{1}{2}\right)\right|^{2}\leq 4\left|\sum_{n\leq q^{1+ \varepsilon}}\frac{\lambda_{f}(n)}{\sqrt{n}}V\left(\frac{n}{\sqrt{q}}\right) \right|^{2}+O(q^{-A}). \tag{2.3}\] ### Postnikov Formula We will need to use a particular case of the Postnikov formula stated below. **Proposition 2.3**.: _Let \(p\) be an odd prime, \(\alpha\) be a primitive character modulo \(p^{3}\) and \(l\in\mathbb{Z}_{>0}\). Then for all \(n\) with \(\text{gcd}(n,p)\neq 1,\ \alpha(n+p^{2}l)\overline{\alpha(n)}=e_{p}(a_{\alpha}l \overline{n}),\) for some \(a_{\alpha}\in(\mathbb{Z}/p\mathbb{Z})^{\times},\) independent of \(n.\)_ Proof.: Define a function \(\psi\) on \(\mathbb{Z}\) as \(\psi(n)=\alpha(1+p^{2}n)\). We have \[\psi(m+n)=\alpha(1+p^{2}(m+n))=\alpha(1+p^{2}(m+n)+p^{4}mn)=\alpha(1+p^{2}n) \cdot\alpha(1+p^{2}m)=\psi(m)\cdot\psi(n).\] Hence, \(\psi\) is an additive character modulo \(p\), and \(\psi(n)=e_{p}(a_{\alpha}n),\) for some \(a_{\alpha}\in(\mathbb{Z}/p\mathbb{Z})^{\times}\) (as \(\alpha\) is primitive, \(a_{\alpha}\) cannot be \(0\)). Now, when \(\text{gcd}(n,p)=1\), we have \(\alpha(n+p^{2}l)\overline{\alpha(n)}=\alpha(1+p^{2}l\overline{n})=\psi(l \overline{n})=e_{p}(a_{\alpha}l\overline{n}).\) ### Delta Symbol This section is a brief review of the delta-symbol method from Section 20.5 in [13]. Let \(w(u)\) be a smooth, compactly supported function on \([C,2C]\) for some \(C>0\). Additionally suppose that \(\sum_{q=1}^{\infty}w(q)=1\). Then we can rewrite the Kronecker delta function at zero as \[\delta(n)=\sum_{q|n}\biggl{(}w(q)-w\biggl{(}\frac{|n|}{q}\biggr{)}\biggr{)}. \tag{2.4}\] **Proposition 2.4**.: _Let \(\delta\) be as in (2.4). Using orthogonality of the additive characters modulo \(q\), we can rewrite (2.4) as_ \[\delta(n)=\sum_{c=1}^{\infty}S(0,n;c)\Delta_{c}(n). \tag{2.5}\] Here, \[S(0,n;c)=\!\!\!\!\!\sum_{d\,(\!\!\!\!\!\mod c)}^{*}e\bigg{(}\frac{dn}{c}\bigg{)}, \text{ and }\Delta_{c}(u)=\sum_{r=1}^{\infty}\frac{1}{cr}\bigg{(}w(cr)-w\bigg{(}\frac{|u |}{cr}\bigg{)}\bigg{)}.\] Since \(\Delta_{c}(u)\) is not compactly supported, we multiply it by a smooth function \(f\) supported on \([-2N,2N]\) (\(N>0\)) such that \(f(0)=1\). We also assume, for any \(a\in\mathbb{N}\), \(f^{(a)}(u)\ll N^{-a}\). Thus, we have \[\delta(n)=\sum_{c=1}^{\infty}S(0,n;c)\Delta_{c}(n)f(n). 
\tag{2.6}\] As \(w\) is supported on \([C,2C]\) and \(f\) is supported on \([N,2N]\), the sum on the right hand side is only upto \(c\leq 2\text{ max }\big{(}C,\frac{N}{C}\big{)}=X\). The optimal choice would thus be to take \(C=\ \sqrt{N}\), giving \(X=2C=2\sqrt{N}\). Thus, (2.6) becomes \[\delta(n)=\sum_{c\leq 2C}S(0,n;c)\Delta_{c}(n)f(n). \tag{2.7}\] We can get additional bounds on the derivatives of \(\Delta_{c}(u)\) if we assume more conditions on \(w\). **Proposition 2.5**.: _Suppose \(w(u)\) is smooth, compactly supported in the segment \(C\leq u\leq 2C\) with \(C\geq 1\). Additionally, suppose that \(\sum_{q=1}^{\infty}w(q)=1\), and \(w(u)\) has derivatives satisfying_ \[w^{(a)}(u)\ll C^{-a-1}, \tag{2.8}\] _for arbitrarily large \(a\in\mathbb{N}\). Then for any \(c\geq 1\) and \(u\in\mathbb{R}\) we have_ \[\Delta_{c}(u)\ll\frac{1}{(c+C)C}+\frac{1}{(|u|+cC)}, \tag{2.9}\] \[\Delta_{c}^{(a)}(n)\ll(cC)^{-1}(|u|+cC)^{-a}. \tag{2.10}\] Finally, it is often useful to express \(\Delta_{c}(n)f(n)\) in terms of additive characters, by considering its Fourier transform. Let \[g_{c}(v)=\int_{-\infty}^{\infty}\Delta_{c}(u)f(u)e(-uv)du, \tag{2.11}\] be the Fourier transform of \(\Delta_{c}(u)f(u)\). By the Fourier inversion formula we have \[\Delta_{c}(u)f(u)=\int_{-\infty}^{\infty}g_{c}(v)e(uv)dv. \tag{2.12}\] Using this in (2.7), we have \[\delta(n)=\sum_{c\leq 2C}S(0,n;c)\int_{-\infty}^{\infty}g_{c}(v)e(nv)dv. \tag{2.13}\] we have the following bounds on \(g_{c}(v)\): **Proposition 2.6**.: _Suppose \(g_{c}(v)\) is defined as in (2.11), and \(C=\sqrt{N}\). Then_ \[g_{c}(v)=1+O\bigg{(}\frac{1}{cC}\Big{(}\frac{c}{C}+|v|cC\Big{)}\bigg{)}, \tag{2.14}\] _and_ \[g_{c}(v)\ll(|v|cC)^{-a}. \tag{2.15}\] ### Inert Functions and Stationary Phase We mention some properties of inert functions in this section. Inert functions are special families of smooth functions characterised by certain derivative bounds. See Sections 2 and 3 in [11], and Section 4 in [11], for proofs. **Definition 2.7**.: _Let \(\mathcal{F}\) be an index set. A family \(\{w_{T}\}_{T\in\mathcal{F}}\) of smooth function supported on a product of dyadic intervals in \(\mathbb{R}^{d}_{>0}\) is called \(X\)-inert if for each \(j=(j_{1},j_{2},\cdots,j_{d})\in\mathbb{Z}^{d}_{>0}\), we have_ \[C(j_{1},j_{2},\cdots,j_{d})\coloneqq\sup_{T\in\mathcal{F}}\ \sup_{(x_{1},x_{2}, \cdots,x_{d})\in\mathbb{R}^{d}_{>0}}X^{-j_{1}-\cdots-j_{d}}\left|x_{1}^{j_{1} }\cdots x_{d}^{j_{d}}{w_{T}}^{(j_{1},\cdots,j_{d})}(x_{1},\cdots,x_{d})\right| <\infty.\] We will often denote the sequence of constants \(C(j_{1},j_{2},\ldots,j_{d})\) associated with this inert function as \(C_{\mathcal{F}}\). We note that the requirements for the functions to be supported on dyadic intervals can be easily achieved by applying a dyadic partition of unity. We give one simple example to highlight how such families can be constructed. **Example 2.8**.: _Let \(w(x_{1},\cdots,x_{d})\) be a fixed smooth function that is supported on \([1,2]^{d}\), and define,_ \[w_{X_{1},\cdots,X_{d}}(x_{1},\cdots,x_{d})=w\bigg{(}\frac{x_{1}}{X_{1}}, \cdots,\frac{x_{d}}{X_{d}}\bigg{)}. \tag{2.16}\] _Then with \(\mathcal{F}=\{T=(X_{1},\cdots,X_{d})\in\mathbb{R}^{d}_{>0}\}\), the family \(\{w_{T}\}_{T\in\mathcal{F}}\) is \(1\)-inert._ The following propositions can be checked immediately: **Proposition 2.9**.: _Let \(a,b\in\mathbb{R}\) with \(b>0\). 
If \(w_{T}(x)\) is a family of \(X\)-inert functions supported on \(x\in[N,2n]\), then the family \(W_{T}(x)\) given by, \(W_{T}(x)=w_{T}(bx^{a})\) is also \(X\)-inert, with support \(\left[\left(\frac{N}{b}\right)^{\frac{1}{a}},2\left(\frac{N}{b}\right)^{ \frac{1}{a}}\right]\)._ **Proposition 2.10**.: _Let \(a\in\mathbb{R}\). If \(w_{T}(x)\) is a family of \(X\)-inert functions supported on \(x\asymp N\), then so is the family \(W_{T}(x)\) given by, \(W_{T}(x)=(\frac{x}{N})^{-a}w_{T}(x)\)._ **Proposition 2.11**.: _If \(w\) is an \(X\)-inert function and \(v\) is a \(Y\)-inert function, then their product \(w\cdot v\) is a \(\max{(X,Y)}\)-inert function._ Suppose that \(w_{T}(x_{1},\cdots,x_{d})\) is \(X\)-inert and is supported on \(x_{i}\asymp X_{i}\). Let \[\widehat{w}_{T}(t_{1},x_{2},\cdots,x_{d})=\int_{-\infty}^{\infty}w_{T}(x_{1},\cdots,x_{d})e(-x_{1}t_{1})dx_{1}, \tag{2.17}\] denote its Fourier transform in the \(x_{1}\)-variable. We state the following result (Prop 2.6 in [11]) regarding the Fourier transform of inert functions. **Proposition 2.12**.: _Suppose that \(\{w_{T}:T\in\mathcal{F}\}\) is a family of \(X\)-inert functions such that \(x_{1}\) is supported on \(x_{1}\asymp X_{1}\), and \(\{w_{\pm Y_{1}}:Y_{1}\in(0,\infty)\}\) is a \(1\)-inert family of functions with support on \(\pm t_{1}\asymp Y_{1}\). Then the family \(\{X_{1}^{-1}w_{\pm Y_{1}}(t_{1})\widehat{w}_{T}(t_{1},x_{2},\cdots,x_{d})\ :\ (T,\pm Y_{1})\in \mathcal{F}\times\pm(0,\infty)\}\) is \(X-\)inert. Furthermore if \(Y_{1}\gg p^{\varepsilon}\frac{X}{X_{1}}\), then for any \(A>0\), we have_ \[X_{1}^{-1}w_{\pm Y_{1}}(t_{1})\widehat{w}_{T}(t_{1},x_{2},\cdots,x_{d})\ll_{ \varepsilon,A}p^{-A}. \tag{2.18}\] We can similarly describe the Mellin transform of such functions as well. We state the following result (Lemma 4.2 in [10]). **Proposition 2.13**.: _Suppose that \(\{w_{T}(x_{1},x_{2},\cdots,x_{d}):T\in\mathcal{F}\}\) is a family of \(X\)-inert functions such that \(x_{1}\) is supported on \(x_{1}\asymp X_{1}\). Let_ \[\widetilde{w}_{T}(s,x_{2},\cdots,x_{d})=\int_{0}^{\infty}w_{T}(x,x_{2},\cdots,x_{d})x^{s}\frac{dx}{x}. \tag{2.19}\] _Then we have \(\widetilde{w}_{T}(s,x_{2},\cdots,x_{d})={X_{1}}^{s}W_{T}(s,x_{2},\cdots,x_{d})\) where \(W_{T}(s,\cdots)\) is a family of \(X\)-inert functions in \(x_{2},\cdots,x_{d}\), which is entire in \(s\), and has rapid decay for \(|\Im(s)|\gg{X_{1}}^{1+\varepsilon}\)._ The following is a restatement of Lemma 3.1 in [10]. **Proposition 2.14**.: _Suppose that \(w\) is an \(X\)-inert function, with compact support on \([Z,2Z]\), so that \(w^{(j)}(t)\ll(Z/X)^{-j}\). And suppose that \(\phi\) is smooth and satisfies \(\phi^{(j)}(t)\ll\frac{Y}{Z^{j}}\) for some \(Y/X^{2}\geq R\geq 1\), and for all \(t\in[Z,2Z]\). Let_ \[I=\int_{-\infty}^{\infty}w(t)e^{i\phi(t)}dt. \tag{2.20}\] _We then have_ 1. _If_ \(|\phi^{\prime}(t)|\geq\frac{Y}{Z}\) _for all_ \(t\in[Z,2Z]\)_, then_ \(I\ll ZR^{-A}\) _for_ \(A\) _arbitrary large._ 2. _If_ \(\phi^{\prime\prime}(t)\gg\frac{Y}{Z^{2}}\) _for all_ \(t\in[Z,2Z]\)_, and there exists a (necessarily unique)_ \(t_{0}\in\mathbb{R}\) _such that_ \(\phi^{\prime}(t_{0})=0\)_, then_ (2.21) \[I=\frac{e^{i\phi(t_{0})}}{\sqrt{\phi^{\prime\prime}(t_{0})}}F(t_{0})+O_{A}(ZR^ {-A}),\] _where_ \(F\) _is an_ \(X\)_-inert function supported on_ \(t_{0}\asymp Z\)_._ The previous result has a natural generalisation (Main Theorem in [10]). 
**Proposition 2.15**.: _Suppose that \(w\) is an \(X\)-inert function in \(t_{1},\cdots,t_{d}\), supported on \(t_{1}\asymp Z\) and \(t_{i}\asymp X_{i}\) for \(i=2,3,\ldots,d\). Suppose that the smooth function \(\phi\) satisfies_ \[\frac{\partial^{a_{1}+a_{2}+\cdots+a_{d}}}{\partial t_{1}^{a_{1}}\partial t_ {2}^{a_{2}}\ldots\partial t_{d}^{a_{d}}}\phi(t_{1},t_{2},\ldots,t_{d})\ll_{C }\frac{Y}{Z^{a_{1}}}\frac{1}{X_{2}^{a_{2}}\ldots X_{d}^{a_{d}}}, \tag{2.22}\] _for all \(a_{1},a_{2},\ldots,a_{d}\in\mathbb{N}\). Now, suppose that \(\phi^{\prime}(t_{1},t_{2},\ldots,t_{d})\gg\frac{Y}{Z^{2}}\) (here \(\phi^{\prime\prime}\) and \(\phi^{\prime}\) denote the derivatives of \(\phi\) with respect to \(t_{1}\)), for all \(t_{1},t_{2},\ldots,t_{d}\) in the support of \(w\), and there exists a (necessarily unique) \(t_{0}\in\mathbb{R}\) such that \(\phi^{\prime}(t_{0})=0\). Suppose also that \(\frac{Y}{X^{2}}\geq R\geq 1\), then_ \[I=\int_{\mathbb{R}}e^{i\phi(t_{1},\ldots,t_{d})w(t_{1},\ldots,t_{d})dt_{1}}= \frac{Z}{\sqrt{Y}}e^{i\phi(t_{0},t_{2},\ldots,t_{d})}W(t_{2},\ldots,t_{d})+O_{A }(ZR^{-A}), \tag{2.23}\] _for some Xinert function \(W\), and \(A\) can be taken to be arbitrarily large. The implied constant depends on \(A\) and \(C\)._ ### Voronoi Summation Formula We will need the following two Voronoi-type summation formulas. **Proposition 2.16**.: _Let \(f\) be a cusp form of weight \(k\geq 1\), level \(1\) and trivial central character. Suppose \(c\geq 1\), and \(ad\equiv 1(\mathrm{mod}\ c)\). Then for any smooth, compactly supported function \(g\) on \(\mathbb{R}^{+}\), we have_ \[\sum_{m=1}^{\infty}\lambda_{f}(m)e\Big{(}\frac{am}{c}\Big{)}g(m)=\frac{1}{c} \sum_{n=1}^{\infty}\lambda_{f}(n)e\bigg{(}\frac{-dn}{c}\bigg{)}H(n), \tag{2.24}\] _where_ \[H(n)=2\pi i^{k}\int_{0}^{\infty}g(x)J_{k-1}\bigg{(}\frac{4\pi}{c}\sqrt{xn} \bigg{)}dx. \tag{2.25}\] We will also use a modified Voronoi summation formula, which we state and prove below. **Proposition 2.17**.: _Let \(f,g,a,c,\) and \(d\) be as in Proposition 2.16. Additionally, assume that \(f\) is a Hecke eigenform. Let \(p\) be a prime with \(\text{gcd}(c,p)=1\) and fix a constant \(r\not\equiv 0(\mathrm{mod}\ p)\). We then have_ \[\sum_{\text{gcd}(m,p)=1}\overline{\lambda_{f}}(m)e\bigg{(}\frac{ \overline{rn}}{p}\bigg{)}e\bigg{(}-\frac{am}{c}\bigg{)}g(m)=\\ \sum_{\text{gcd}(n,p)=1}\frac{\overline{\lambda_{f}}(n)H_{1}(n)} {p^{2}c}e\bigg{(}\frac{\overline{ap^{2}}n}{c}\bigg{)}Kl_{3}(-n\overline{c}^{2} r,1,1;p)+\sum_{\text{gcd}(n,p^{2})=p}\frac{\overline{\lambda_{f}}(n)H_{1}(n)}{p^{2}c }e\bigg{(}\frac{\overline{ap^{2}}n}{c}\bigg{)}\\ +\sum_{n=1}^{\infty}\frac{\overline{\lambda_{f}}(np)\overline{ \lambda_{f}}(p)H_{1}(p^{2}n)}{p^{2}c}e\bigg{(}\frac{\overline{a}n}{c}\bigg{)}- \bigg{(}1+\frac{1}{p}\bigg{)}\sum_{n=1}^{\infty}\frac{\overline{\lambda_{f}}(n )H_{1}(p^{2}n)}{pc}e\bigg{(}\frac{\overline{a}n}{c}\bigg{)}. \tag{2.26}\] Here, \[\mathrm{Kl}_{3}(x,y,x;p)=\sum_{abc\equiv 1(\mathrm{mod}\ p)}e\bigg{(}\frac{ax+ by+cz}{p}\bigg{)}, \tag{2.27}\] and \[H_{1}(y)=2\pi i^{k}\int_{o}^{\infty}J_{k-1}\bigg{(}\frac{4\pi\sqrt{xy}}{pc} \bigg{)}g(x)dx. \tag{2.28}\] Proof.: Let \[S=\sum_{\text{gcd}(m,p)=1}\overline{\lambda_{f}}(m)e\bigg{(}\frac{r\overline {m}}{p}\bigg{)}e\bigg{(}-\frac{am}{c}\bigg{)}g(m). \tag{2.29}\] From the orthogonality of characters modulo \(p\), we know that when \(\text{gcd}(m,p)=1\), \[e\bigg{(}\frac{r\overline{m}}{p}\bigg{)}=\frac{1}{p}\sum_{t\,(\mathrm{mod}\ p)}\sum_{h\,(\mathrm{mod}\ p)}^{*}e\bigg{(}\frac{r\overline{h}-ht+mt}{p}\bigg{)}. 
\tag{2.30}\] We note that if \(\text{gcd}(m,p)\neq 1\), then the right hand side of (2.30) is zero. We thus have \[S=\frac{1}{p}\sum_{t\,(\text{mod }p)}{\sum_{h\,(\text{mod }p)}}^{*}e\biggl{(} \frac{r\overline{h}-ht}{p}\biggr{)}\sum_{m=1}^{\infty}\overline{\lambda_{f}}(m) g(m)e\biggl{(}\frac{-am}{c}+\frac{tm}{p}\biggr{)}. \tag{2.31}\] Separating the \(t\equiv 0(\text{mod }p)\) term, (2.31) can be rewritten as \[S=S_{1}+S_{2}, \tag{2.32}\] where \[S_{1}=\frac{1}{p}{\sum_{h,t(\text{mod }p)}}^{*}e\biggl{(}\frac{r\overline{h}- ht}{p}\biggr{)}\sum_{m=1}^{\infty}g(m)\overline{\lambda_{f}}(m)e\biggl{(} \frac{m(-ap+tc)}{pc}\biggr{)}, \tag{2.33}\] and \[S_{2}=\frac{1}{p}{\sum_{h(\text{mod }p)}}^{*}e\biggl{(}\frac{r\overline{h}}{p} \biggr{)}\sum_{m=1}^{\infty}g(m)\overline{\lambda_{f}}(m)e\biggl{(}\frac{-am}{ c}\biggr{)}. \tag{2.34}\] We can now use Proposition 2.16 to simplify \(S_{1}\) and \(S_{2}\). Note that \(\gcd(a,c)=1\) and \(\gcd(-ap+tc,pc)=1\) if \(\gcd(t,p)=1\). Thus (2.33) becomes \[S_{1}=\frac{1}{p}{\sum_{h,t(\text{mod }p)}}^{*}e\biggl{(}\frac{r\overline{h} -ht}{p}\biggr{)}\biggl{(}\frac{1}{pc}\sum_{n=1}^{\infty}\overline{\lambda_{f}} (n)e\biggl{(}-\frac{\overline{tc-ap}}{pc}\cdot n\biggr{)}H_{1}(n)\biggr{)}, \tag{2.35}\] where \(H_{1}(y)\) is defined in (2.28). We now use the Chinese Remainder Theorem to simplify (2.35) further. \[S_{1}=\frac{1}{p^{2}c}\sum_{n=1}^{\infty}\overline{\lambda_{f}} (n)H_{1}(n)\Biggl{(}{\sum_{h,t(\text{mod }p)}}^{*}e\biggl{(}\frac{r\overline{h}-ht}{p}\biggr{)}e\biggl{(}\frac{- \overline{p}(\overline{tc-ap})n}{c}\biggr{)}e\biggl{(}\frac{-\overline{c}( \overline{tc-ap})n}{p}\biggr{)}\Biggr{)}\\ =\frac{1}{p^{2}c}\sum_{n=1}^{\infty}\overline{\lambda_{f}}(n)H_ {1}(n)e\Biggl{(}\frac{\overline{ap^{2}}n}{c}\Biggr{)}\Biggl{(}{\sum_{h,t( \text{mod }p)}}^{*}e\Biggl{(}\frac{r\overline{h}-ht-\overline{tc^{2}}n}{p}\Biggr{)} \Biggr{)}\\ =\sum_{n=1}^{\infty}\frac{\overline{\lambda_{f}}(n)H_{1}(n)}{p^{ 2}c}e\Biggl{(}\frac{\overline{ap^{2}}n}{c}\Biggr{)}\text{Kl}_{3}(-n\overline{c }^{2}r,1,1;p), \tag{2.36}\] where \(\text{Kl}_{3}(\cdot)\) is defined in (2.27). Note that if \(p\mid n\), then this hyper-Kloosterman sum can be simplified as \[\text{Kl}_{3}(-n\overline{c}^{2}r,1,1;p)=\sum_{xyz\equiv 1(\text{mod }p)}e \biggl{(}\frac{x(-n\overline{c}^{2}r)+y+z}{p}\biggr{)}={\sum_{y,z(\text{mod }p)}}^{*}e\biggl{(}\frac{y+z}{p}\biggr{)}=1. \tag{2.37}\] This allows us to rewrite (2.36) as, \[S_{1}=S_{1}^{*}+S_{1}^{p}+S_{1}^{p^{2}}. \tag{2.38}\] Here, \[S_{1}^{*}=\sum_{\gcd(n,p)=1}\frac{\overline{\lambda_{f}}(n)H_{1}(n)}{p^{2}c}e \Biggl{(}\frac{\overline{ap^{2}}n}{c}\Biggr{)}\text{Kl}_{3}(-n\overline{c}^{2}r,1,1;p), \tag{2.39}\] \[S_{1}^{p}=\sum_{\gcd(n,p^{2})=p}\frac{\overline{\lambda_{f}}(n)H_{1}(n)}{p^{2}c}e \Bigg{(}\frac{\overline{ap^{2}}n}{c}\Bigg{)}\mathrm{Kl}_{3}(-n\overline{c}^{2}r, 1,1;p)=\sum_{\gcd(n,p^{2})=p}\frac{\overline{\lambda_{f}}(n)H_{1}(n)}{p^{2}c}e \Bigg{(}\frac{\overline{ap^{2}}n}{c}\Bigg{)}, \tag{2.40}\] and \[S_{1}^{p^{2}}=\sum_{\gcd(n,p^{2})=p^{2}}\frac{\overline{\lambda_{f}}(n)H_{1}(n )}{p^{2}c}e\Bigg{(}\frac{\overline{ap^{2}}n}{c}\Bigg{)}\mathrm{Kl}_{3}(-n \overline{c}^{2}r,1,1;p)=\sum_{\gcd(n,p^{2})=p^{2}}\frac{\overline{\lambda_{f }}(n)H_{1}(n)}{p^{2}c}e\Bigg{(}\frac{\overline{ap^{2}}n}{c}\Bigg{)}. 
\tag{2.41}\] Using Proposition 2.16 again on (2.34) we get (2.42) \[S_{2}=\frac{1}{p}\sideset{}{{}^{*}}{\sum}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{{*}}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{{*}}}{{}^{{*}}}{{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{{*}}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{{}^{*}}}{{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{{*}}}{{}^{*}}{{} {}^{*}}{{}^{{*}}}{{{}^{*}}{{}^{*}}}{{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{ *}}{{}^{*}}{{}^{{*}}}{{}^{*}}{{}^{{*}}}{{{}^{*}}}{{{}^{*}}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{{*}}}{{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{{}^{*}}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{} {}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{{}^{*}}{ Here, \(J\) and \(K\) denote the usual \(J\)-Bessel and \(K\)-Bessel functions. 
We could also use Mellin inversion formula to rewrite these integral transforms as- \[\mathcal{L}^{\text{hol}}\Phi(k)=\frac{1}{2\pi i}\int_{(1)}\frac{2^{s-1}\Gamma \big{(}\frac{s+k-1}{2}\big{)}}{\Gamma\big{(}\frac{k+1-s}{2}\big{)}}\tilde{\Phi }(s+1)(4\pi\sqrt{|mn|})^{-s}ds, \tag{2.49}\] \[\mathcal{L}^{\pm}\Phi(t)=\frac{1}{2\pi i}\int_{(2)}h_{\pm}(s,t)\tilde{\Phi}(s+ 1)(4\pi\sqrt{|mn|})^{-s}ds, \tag{2.50}\] where \[h_{\pm}(s,t)=\frac{2^{s-1}}{\pi}\Gamma(\frac{s}{2}+it)\Gamma(\frac{s}{2}-it) \begin{cases}\cos(\pi s/2),&\pm=+\\ \cosh(\pi t),&\pm=-\end{cases} \tag{2.51}\] and \(\tilde{\Phi}(\cdot)\) denotes the Mellin transform of \(\Phi(\cdot)\). For any Hecke eigenform \(\pi\) define, \[\lambda_{\pi}^{(\delta)}(n)=\sum_{d|\delta}d^{\frac{1}{2}}x_{\delta}(d) \lambda_{\pi}(n/d), \tag{2.52}\] where \(\lambda_{\pi}(n/d)=0\) if \(\frac{n}{d}\) is not an integer. \(x_{\delta}^{\prime}s\) are defined in the same way as equation 6.21 in [4]. Also, let \(\mathcal{H}_{it}(m,\psi)\) denote the set of Hecke-Maass newforms of conductor \(m\), central character \(\psi\), and spectral parameter \(it\). Similarly, we can define \(\mathcal{H}_{k}(m,\psi)\) as the set of Hecke newforms of conductor \(m\), central character \(\psi\) and weight \(k\). We can also define \(\mathcal{H}_{it,\text{Eis}}(m,\psi)\) as the set of newform Eisenstein series of level \(m\) and character \(\chi\). We can now state a version of the Bruggeman-Kuznetsov formula - **Proposition 2.18**.: _Let \(\Phi\in C_{C}^{\infty}(\mathbb{R}_{>0})\). We have_ \[\sum_{gcd(c,q)=1}\overline{\chi}(c)S(\overline{q}m,n,c)\Phi(q^{\frac{1}{2}}c) =\mathcal{K}_{Maass}+\mathcal{K}_{Eis}+\mathcal{K}_{hol}. \tag{2.53}\] _Here,_ \[\mathcal{K}_{Maass}=\sum_{t_{j}}\mathcal{L}^{\pm}\Phi(t_{j})\sum_{lr=q}\sum_{ \pi\in\mathcal{H}_{it_{j}}(r,\chi)}\frac{4\pi\epsilon_{\pi}}{V(q)\mathscr{L} _{\pi}^{*}(1)}\sum_{\delta|l}\overline{\lambda}_{\pi}^{(\delta)}(|m|)\overline {\lambda}_{\pi}^{(\delta)}(|n|), \tag{2.54}\] _and_ \[\mathcal{K}_{Eis}=\frac{1}{4\pi}\int_{-\infty}^{\infty}\mathcal{L}^{\pm}\Phi (t_{j})\sum_{lr=q}\sum_{\pi\in\mathcal{H}_{it,\text{Eis}}(r,\chi)}\frac{4\pi \epsilon_{\pi}}{V(q)\mathscr{L}_{\pi}^{*}(1)}\sum_{\delta|l}\overline{\lambda} _{\pi}^{(\delta)}(|m|)\overline{\lambda}_{\pi}^{(\delta)}(|n|)dt, \tag{2.55}\] _where one takes \(\Phi+\) (resp. \(\Phi^{-}\)) if \(mn>0\) (resp. \(mn<0\)), and \(\epsilon_{\pi}\) is the finite root number of \(\pi\). Also,_ \[\mathcal{K}_{hol}=\sum_{k>0,\text{ even}}\mathcal{L}^{hol}\Phi(k)\sum_{lr=q} \sum_{\pi\in\mathcal{H}_{k}(r,\chi)}\frac{4\pi\epsilon_{\pi}}{V(q)\mathscr{L} _{\pi}^{*}(1)}\sum_{\delta|l}\overline{\lambda}_{\pi}^{(\delta)}(|m|)\overline {\lambda}_{\pi}^{(\delta)}(|n|), \tag{2.56}\] _if \(mn>0\), and \(\mathcal{K}_{hol}=0\) if \(mn<0\)._ ### Spectral Large Sieve Inequality We state the version of spectral large sieve inequality that we will use. This is a restatement of Lemma 7.4 in [10]. Let us denote by \[\int_{*\leq T}\ \ \text{any of}\ \sum_{|t_{j}|\leq T},\sum_{k\leq T},\ \text{or}\ \int_{|t|\leq T}dt,\] according to whether \(*=it_{j},k,\) or \(it,\text{Eis}.\) **Proposition 2.19**.: _For any sequence of complex numbers \(a_{n}\), we have_ \[\int_{*\leq T}\sum_{\pi\in\mathcal{H}_{*}(q)}\left|\sum_{n\leq N}a_{n}\lambda_ {\pi}(n)\right|^{2}\ll_{\varepsilon}(T^{2}q+N)(qTN)^{\varepsilon}\sum_{n\leq N }|a_{n}|^{2}. \tag{2.57}\] ## 3. Reduction of Theorem 1.4 to Theorem 1.7 In this section, we will prove Theorem 1.4, assuming Theorem 1.7 is true. 
Using Corollary 2.2 and a dyadic partition of unity, it suffices to show (via an application of Cauchy-Schwarz inequality) that for any \(\varepsilon>0\), \[S(N,\alpha)\coloneqq\sum_{\psi(\text{mod}\ p^{2})}\left|\sum_{n\asymp N}{}^{* }\lambda_{f}(n)\psi(n)\alpha(n)w_{N}(n)\right|^{2}\ll Np^{2+\varepsilon}, \tag{3.1}\] where \(w_{N}\) is a smooth function supported on \([N,2N]\) satisfying \(w_{N}^{(j)}(x)\ll N^{-j}\) and \(N\ll p^{3+\varepsilon}\). Recall that \(\sideset{}{{}^{*}}{\sum}{}_{n}\) denotes the sum is over values of \(n\) relatively prime to \(p\). Expanding the square in \(S(N,\alpha)\) and rearranging terms, we get \[S(N,\alpha)=\sideset{}{{}^{*}}{\sum}{}_{m,n}\lambda_{f}(m)\overline{\lambda_{ f}}(n)\alpha(m)\overline{\alpha(n)}w_{N}(m)w_{N}(n)\sum_{\psi(\text{mod}\ p^{2})}\psi(m\overline{n}). \tag{3.2}\] The inner sum in the right hand side of (3.2) vanishes unless \(m\equiv n(\text{mod}\ p^{2})\), in which case it is equal to \(\phi(p^{2})\). We can then rewrite \(S(N,\alpha)\) as a sum of diagonal and off-diagonal terms. We get \[S(N,\alpha)=\sideset{}{{}^{*}}{\sum}{}_{n}\phi(p^{2})\left|\lambda_{f}(n) \right|^{2}w_{n}(N)^{2}+S_{0}(N,\alpha), \tag{3.3}\] where \[S_{0}(N,\alpha)=\sideset{}{{}^{*}}{\sum}{}_{\begin{subarray}{c}m\neq n\\ m\equiv n(\text{mod}\ p^{2})\end{subarray}}\phi(p^{2})\lambda_{f}(m)\overline{ \lambda_{f}}(n)\alpha(m)\overline{\alpha(n)}w_{N}(m)w_{N}(n), \tag{3.4}\] is the off-diagonal term. Using known bounds on \(w_{N}\) and \(\lambda_{f}(n)\) the diagonal term is of the order \(O(Np^{2+\varepsilon})\). Thus (3.1) follows if we can show that off-diagonal terms (\(S_{0}(N,\alpha)\)) satisfy the same bound. Now, for \(S_{0}(N,\alpha)\), we note that it suffices to consider the sum for the terms with \(m>n\), as the sum for the terms with \(m<n\) is just the complex conjugate of the \(m>n\) sum. Letting \(l=\frac{m-n}{p^{2}}\), and considering only positive values of \(l\), we get \[S_{0}(N,\alpha)\ll\phi(p^{2})\sum_{l}\ \sideset{}{{}^{*}}{\sum}{}_{n}\lambda_{f}(n +p^{2}l)\overline{\lambda_{f}}(n)\alpha(n+p^{2}l)\overline{\alpha(n)}w_{N}(n +p^{2}l)w_{N}(n). \tag{3.5}\] We define \[S_{1}(N,\alpha)\coloneqq\,{\sum_{n,l}}^{*}\,\lambda_{f}(n+p^{2}l)\overline{ \lambda_{f}}(n)\alpha(n+p^{2}l)\overline{\alpha(n)}w_{N}(n+p^{2}l)w_{N}(n), \tag{3.6}\] and \[S_{2}(N,\alpha)\coloneqq\sum_{l\equiv 0\,(\text{mod }p)}\ \ {\sum_{n}}^{*}\, \lambda_{f}(n+p^{2}l)\overline{\lambda_{f}}(n)\alpha(n+p^{2}l)\overline{ \alpha(n)}w_{N}(n+p^{2}l)w_{N}(n). \tag{3.7}\] Note that \(S_{1}(N,\alpha)\) is precisely the left hand side in (1.7) in Theorem 1.7. Then (3.5) can be written as \[S_{0}(N,\alpha)\ll\phi(p^{2})(S_{1}(N,\alpha)+S_{2}(N,\alpha)). \tag{3.8}\] So, in order to prove Theorem 1.4, it suffices to show \[S_{1}(N,\alpha)\ll\frac{Np^{2+\varepsilon}}{\phi(p^{2})}\ll Np^{\varepsilon}, \tag{3.9}\] and \[S_{2}(N,\alpha)\ll Np^{\varepsilon}. \tag{3.10}\] The bound in (3.9) is just a restatement of Theorem 1.7. As \(N\ll p^{3+\varepsilon}\), (3.10) can be obtained by just taking trivial bounds for all the terms. Thus, Theorem 1.4 follows. ## 4. Harmonic Analysis We note that, bounding all the terms trivially, we get that \(S_{1}(N,\alpha)\ll\frac{N^{2}}{p^{2}}p^{\varepsilon}\ll Np^{1+\varepsilon}\). The proof for Theorem 1.7 starts with introducing the delta symbol, followed by using Voronoi summation to get additional cancellations on the terms that spring up. Barring a'main term', this approach proves very fruitful, and we end up getting the desired upper bounds. 
### Application of the Delta Symbol As \(\gcd(n,p)=1\) in the definition of \(S_{1}(N,\alpha)\), Proposition 2.3 guarantees the existence of a non-zero \(a_{\alpha}(\text{mod }p)\) such that \(\alpha(n+p^{2}l)\overline{\alpha(n)}=e_{p}(a_{\alpha}l\overline{n})\). We recall that from Remark 1.8, we know that \(0<l\leq\frac{N}{p^{2}}\), and \(N\leq n\leq 2N\) in (3.6). Using \(m=n+p^{2}l\), we can rewrite (3.6) as \[S_{1}(N,\alpha)=\sum_{m}\,{\sum_{l,n}}^{*}\,\lambda_{f}(n+p^{2}l)\overline{ \lambda_{f}}(n)e_{p}(a_{\alpha}l\overline{n})w_{N}(m)w_{N}(n)\delta(m-n-p^{2} l). \tag{4.1}\] We can now introduce the \(\delta\)-symbol in the \(n-\)sum and get via (2.13) (using \(C=\sqrt{N}\)), \[S_{1}(N,\alpha)=\sum_{m}\,{\sum_{l,n}}^{*}\,\lambda_{f}(n+p^{2}l)\overline{ \lambda_{f}}(n)e_{p}(a_{\alpha}l\overline{n})w_{N}(m)w_{N}(n)\\ \cdot\sum_{c\leq 2C}S(0,m-n-p^{2}l;c)\int_{-\infty}^{\infty}g_{c}(v)e \big{(}(m-n-p^{2}l)v\big{)}dv. \tag{4.2}\] Expanding out the Ramanujan sum and combining the \(m\) and \(n\) terms, we get \[S_{1}(N,\alpha)=\,{\sum_{l}}^{*}\,\sum_{c\leq 2C}V_{N,l}(c). \tag{4.3}\] Here, \[V_{N,l}(c)=\sum_{a\,(\text{mod }c)}^{*}e\bigg{(}\frac{-ap^{2}l}{c}\bigg{)}\int_{- \infty}^{\infty}g_{c}(v)e(-p^{2}lv)\cdot T_{1}(a,c,v)\cdot T_{2}(a,c,v)dv, \tag{4.4}\] \[T_{1}(a,c,v)=\sum_{m=1}^{\infty}\lambda_{f}(m)e\Big{(}\frac{am}{c}\Big{)}w_{N}( m)e(mv), \tag{4.5}\] \[T_{2}(a,c,v)=\sum_{\gcd(n,p)=1}\overline{\lambda_{f}}(n)e_{p}(a_{\alpha}l \overline{n})e\bigg{(}\frac{-an}{c}\bigg{)}w_{N}(n)e(-nv). \tag{4.6}\] We further split (4.3) based on whether \(c\) is relatively prime to \(p\), or not. We get \[S_{1}(N,\alpha)=S_{3}(N,\alpha)+S_{4}(N,\alpha), \tag{4.7}\] where \[S_{3}(N,\alpha)=\,\sum_{l}^{*}\sum_{c\leq 2C}^{*}V_{N,l}(c), \tag{4.8}\] and \[S_{4}(N,\alpha)=\,\sum_{l}^{*}\sum_{\begin{subarray}{c}c\leq 2C\\ p|c\end{subarray}}V_{N,l}(c). \tag{4.9}\] We have the following two lemmas. **Lemma 4.1**.: _Let \(\varepsilon>0\) and \(N\ll p^{3+\varepsilon}\). Let \(S_{3}(N,\alpha)\) be defined as in (4.8). Then_ \[S_{3}(N,\alpha)\ll_{\varepsilon}Np^{\varepsilon}. \tag{4.10}\] **Lemma 4.2**.: _Let \(\varepsilon>0\) and \(N\ll p^{3+\varepsilon}\). Let \(S_{4}(N,\alpha)\) be defined as in (4.9). Then_ \[S_{4}(N,\alpha)\ll_{\varepsilon}Np^{\varepsilon}. \tag{4.11}\] It is clear that Theorem 1.7 follows from these two lemmas. We proceed with the proof of Lemma 4.1 now, and delay the proof of Lemma 4.2 to Section 6. ### Voronoi Summation We recall that \(T_{1}(a,c,v)\) and \(T_{2}(a,c,v)\) are defined in (4.5) and (4.6). We use Proposition 2.16 on \(T_{1}(a,c,v)\) to get \[T_{1}(a,c,v)=\frac{1}{c}\sum_{m=1}^{\infty}\lambda_{f}(m)e\bigg{(}\frac{- \overline{a}m}{c}\bigg{)}\widetilde{w}_{c,v,N}(m). \tag{4.12}\] Here, \[\widetilde{w}_{c,v,N}(m)=2\pi i^{k}\int_{o}^{\infty}J_{k-1}\bigg{(}\frac{4\pi \sqrt{mx}}{c}\bigg{)}e(xv)w_{N}(x)dx. 
\tag{4.13}\] For \(T_{2}(a,c,v)\), (note that \(\gcd(a_{\alpha}l,p)=1\)) we can use Proposition 2.17, to get \[T_{2}(a,c,v)=D_{0}+D_{1}+D_{2}+D_{3} \tag{4.14}\] where \[D_{0}=\sum_{\gcd(n,p)=1}\frac{\overline{\lambda_{f}}(n)\widetilde{w}_{pc,-v,N}(n)}{ p^{2}c}e\Bigg{(}\frac{\overline{ap^{2}}n}{c}\Bigg{)}\mathrm{Kl}_{3}(-n \overline{c}^{2}a_{\alpha}l,1,1;p), \tag{4.15}\] \[D_{1}=\sum_{\gcd(n,p^{2})=p}\frac{\overline{\lambda_{f}}(n)\widetilde{w}_{pc,-v, N}(n)}{p^{2}c}e\Bigg{(}\frac{\overline{ap^{2}}n}{c}\Bigg{)}, \tag{4.16}\] \[D_{2}=\sum_{n=1}^{\infty}\frac{\overline{\lambda_{f}}(np)\overline{\lambda_{f} }(p)\widetilde{w}_{pc,-v,N}(p^{2}n)}{p^{2}c}e\Bigg{(}\frac{\overline{a}n}{c} \Bigg{)}, \tag{4.17}\] and \[D_{3}=-\bigg{(}1+\frac{1}{p}\bigg{)}\sum_{n=1}^{\infty}\frac{\overline{\lambda _{f}}(n)\widetilde{w}_{pc,-v,N}(p^{2}n)}{pc}e\bigg{(}\frac{\overline{a}n}{c} \Bigg{)}. \tag{4.18}\] Here, \(\widetilde{w}_{-}(\cdot)\) is the same as in (4.13). Thus, using (4.12) and (4.14), we have \[S_{3}(N,\alpha)=E_{0}+E_{1}+E_{2}+E_{3}+\text{ small error} \tag{4.19}\] where \[E_{0}=\sum_{l}^{*}\ \sum_{c\leq 2C}^{*}\ \sum_{a\,(\mathrm{mod}\ c)} ^{*}e\Bigg{(}\frac{-ap^{2}l}{c}\Bigg{)}\int_{-\infty}^{\infty}g_{c}(v)e(-p^{2} lv)\Bigg{(}\frac{1}{c}\sum_{m}\lambda_{f}(m)e\bigg{(}\frac{-\overline{a}m}{c} \bigg{)}\widetilde{w}_{c,v,N}(m)\Bigg{)}\\ \Bigg{(}\sum_{n}^{*}\frac{\overline{\lambda_{f}}(n)\widetilde{w}_ {pc,-v,N}(n)}{p^{2}c}e\Bigg{(}\frac{\overline{ap^{2}}n}{c}\Bigg{)}\mathrm{Kl}_ {3}(-n\overline{c}^{2}a_{\alpha}l,1,1;p)\Bigg{)}dv, \tag{4.20}\] is the contribution to \(S_{3}(N,\alpha)\) from \(D_{0}\). Similarly, \(E_{1},E_{2},E_{3}\) represent the contributions to \(E\) from the remaining three terms of (4.14). We have the two following lemmas. **Lemma 4.3**.: _Let \(\varepsilon>0\) and let \(N\ll p^{3+\varepsilon}\). Let \(E_{0}\) be defined as in (4.19). We have_ \[E_{0}\ll_{\varepsilon}Np^{\varepsilon}. \tag{4.21}\] **Lemma 4.4**.: _Let \(\varepsilon>0\) and let \(N\ll p^{3+\varepsilon}\). Let \(E_{1},E_{2}\) and \(E_{3}\) be defined as in (4.19). We have_ \[E_{j}\ll_{\varepsilon}N^{\frac{3}{4}}p^{\varepsilon}. \tag{4.22}\] It is clear that together, these two lemmas imply Lemma 4.1. We prove Lemma 4.3 first, and postpone the proof of Lemma 4.4 to Section 6. The main idea for proving Lemma 4.3 is via spectral analysis, using the Bruggeman-Kuznetsov formula, followed by applying spectral large sieve inequality, along with other results, to bound each of the resultant terms. To set this up, note that the \(a\)-sum in (4.20) forms a Kloosterman sum. Using this, we have \[E_{0}=\sum_{l,n}^{*}\sum_{m}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{p^{ 2}}{\sum_{c\leq 2C}}^{*}\frac{1}{c^{2}}S(\overline{p}^{2}n-m,-p^{2}l;c)\mathrm{Kl}_{ 3}(-n\overline{c}^{2}a_{\alpha}l,1,1;p)I_{N}(c,l,m,n), \tag{4.23}\] where \[I_{N}(c,l,m,n)=\int_{-\infty}^{\infty}g_{c}(v)e(-p^{2}lv)\widetilde{w}_{c,v,N}(m )\widetilde{w}_{pc,-v,N}(n)dv. \tag{4.24}\] Using the fact that,when \(\gcd(r,p)=1\) \[\operatorname{Kl}_{3}(r,1,1;p)=\frac{1}{\phi(p)}\sum_{\chi(p)}\tau(\chi)^{3} \chi(r). \tag{4.25}\] we get that, \[E_{0}=\sideset{}{{}^{*}}{\sum}_{l,n}\sum_{m}\frac{\lambda_{f}(m)\overline{ \lambda_{f}}(n)}{\phi(p)p^{2}}\sum_{\chi(p)}\tau(\chi)^{3}\chi(-na_{\alpha}l) \cdot\mathcal{K}, \tag{4.26}\] where \[\mathcal{K}=\sideset{}{{}^{*}}{\sum}_{c\leq 2C}\frac{1}{c^{2}}S(\overline{p}(n-p ^{2}m),-pl;c)\chi(\overline{c}^{2})I_{N}(c,l,m,n), \tag{4.27}\] We want to use the Bruggeman-Kuznetsov formula here. 
However, before that, we need to dyadically decompose (4.26). ### Dyadic Partition of Unity We want to modify (4.20) by applying a dyadic decomposition. We recall some definitions from Section 6.1 in [10]. We define a number \(N\) to be dyadic if \(N=2^{\frac{k}{2}}\) for some \(k\in\mathbb{Z}\). Let \(g\) be a fixed smooth function supported on the interval \([1,2]\). A dyadic partition of unity is a partition of unity of the form \(\sum_{k\in\mathbb{Z}}g(2^{-\frac{k}{2}}x)\equiv 1\), for \(x>0\). The family \(g_{N}(x)=g(\frac{x}{N})\) forms a \(1-\)inert family of functions. We want to dyadically decomposition \(E_{0}\) (as defined in (4.26)) in the \(l,m,n,c,\) and \(v\) variable. As \(l,m,n,c\) are all positive integers, this decomposition is relatively simple, but a dyadic decomposition in the \(v\)-variable is a bit more involved. We show that first. #### 4.3.1. Dyadic decomposition of \(I_{N}(c,l,m,n).\) Recall that \(g_{c}(v)\) is defined in (2.11), and satisfies bounds given by (2.14) and (2.15). Also, \(\widetilde{w}_{c,v,N}(m)\) is defined in (4.13), and \(I_{N}(c,l,m,n)\) is defined in (4.24). We define, \[H(v)\coloneqq g_{c}(v)e(-p^{2}lv)\widetilde{w}_{c,v,N}(m)\widetilde{w}_{pc,-v,N}(n). \tag{4.28}\] Note that, we trivially have \(H(v)\ll N^{2+\varepsilon}\). \[I_{N}(c,l,m,n)=\int_{-\infty}^{\infty}H(v)dv. \tag{4.29}\] The bounds in (2.14) and (2.15) immediately imply that for any \(A>0\) arbitrarily large, \[I_{N}(c,l,m,n)=\int_{|v|\ll\frac{1}{cC}}H(v)dv+O(p^{-A}). \tag{4.30}\] We now state and prove the following lemma, that allows us to apply a dyadic decomposition. **Lemma 4.5**.: _Let \(A>0\) be arbitrarily large. Then there exists \(V_{0}>0\), and a smooth function \(F(x)\) with \(|F(x)|\leq 1\) (both depending on \(A\)), such that_ \[I_{N}(c,l,m,n)=\int_{V_{0}\leq|v|\ll\frac{1}{cC}}H(v)F(v)+O(p^{-A}). \tag{4.31}\] Note that as the integral in (4.31) is defined away from zero, we can easily apply a dyadic decomposition to it. Proof.: Fix \(A>0\). For any \(V_{0}>0\), we can choose a smooth even function \(G(x)\) compactly supported on \([-2V_{0},2V_{0}]\) such that \(G(x)\leq 1\) and \(G(x)=1\) on \([-V_{0},V_{0}]\). Notice that \(1-G(\cdot)\) is supported on \(|x|>V_{0}\). Thus, for any \(V_{0}<\frac{1}{cC}\), we can rewrite (4.29) as \[I_{N}(c,l,m,n)=\int_{|v|\ll\frac{1}{cC}}H(v)G(v)dv+\int_{|v|\ll\frac{1}{cC}}H( v)(1-G(v))dv+O(p^{-A}). \tag{4.32}\] Now, as \(G\) is supported on \([-2V_{0},2V_{0}]\) and equals \(1\) on \([-V_{0},V_{0}]\), we can modify this further as, \[I_{N}(c,l,m,n)=\int_{|v|\leq 2V_{0}}H(v)G(v)dv+\int_{V_{0}\leq|v|\ll\frac{1}{cC}} H(v)(1-G)(v)dv+O(p^{-A}). \tag{4.33}\] As \(N\ll p^{3+\varepsilon}\), and \(\int_{|v|\leq 2V_{0}}H(v)G(v)dv\ll V_{0}N^{2+\varepsilon}\), if we choose \(V_{0}=p^{-A-6}\), we will get that this integral is also \(O(p^{-A})\). Choose \(F(x)=1-G(x)\), completes the proof. We can now apply a dyadic decomposition to (4.31), to get \[I_{N}(c,l,m,n)=\sum_{\begin{subarray}{c}V\text{dyadic}\\ V_{0}\leq|V|\ll\frac{1}{cC}\end{subarray}}I_{N,V}(c,l,m,n)+O(p^{-A}), \tag{4.34}\] where \[I_{N,V}(c,l,m,n)=\int_{-V}^{V}g_{c}(v)e(-p^{2}lv)\widetilde{w}_{c,v,N}(m) \widetilde{w}_{pc,-v,N}(n)F(v)g\Big{(}\frac{v}{V}\Big{)}dv. \tag{4.35}\] As \(F(v)\) and \(g\big{(}\frac{v}{V}\big{)}\) are smooth, we can absorb them in \(\widetilde{w}_{c,v,N}(m)\), and by a slight abuse of notation, we denote the new function as \(\widetilde{w}_{c,v,N}(m)\) also. #### 4.3.2. 
Dyadic Decomposition of \(E_{0}\) We can now use apply a dyadic decomposition to (4.26), to get \[E_{0}=\sum_{\begin{subarray}{c}C_{0},L_{0},M_{0},N_{0},V\\ \text{dyadic}\end{subarray}}E_{C_{0},L_{0},M_{0},N_{0},V}+O(p^{-A}), \tag{4.36}\] where \[E_{C_{0},L_{0},M_{0},N_{0},V}=\sum_{m}\sum_{l,n,c}\sum^{*}_{l,n,c}\frac{ \lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\sum_{\chi(p)}\tau( \chi)^{3}\chi(-na_{\alpha}l)\cdot\mathcal{K}_{C_{0},L_{0},M_{0},N_{0},V}, \tag{4.37}\] with \[\mathcal{K}_{C_{0},L_{0},M_{0},N_{0},V}=\,\sideset{}{{}^{*}}{\sum}_{c}\frac{1}{c^{2 }}S(\overline{p}(n-p^{2}m),-pl;c)\overline{\chi^{2}}(c)I_{N,V}(c,l,m,n)g_{T}(l,m, n,c), \tag{4.38}\] and \[g_{T}(l,m,n,c)=g\bigg{(}\frac{l}{L_{0}}\bigg{)}g\bigg{(}\frac{m}{M_{0}}\bigg{)} g\bigg{(}\frac{n}{N_{0}}\bigg{)}g\bigg{(}\frac{c}{C_{0}}\bigg{)}. \tag{4.39}\] Also, we claim the dyadic numbers satisfy \[2^{-\frac{1}{2}}\leq M_{0}\ll p^{\varepsilon},\;2^{-\frac{1}{2}}\leq N_{0}\ll p ^{2+\varepsilon},\;2^{-\frac{1}{2}}\leq\,C_{0}\leq 2\sqrt{N},\;2^{-\frac{1}{2}} \leq L_{0}\leq\frac{N}{p^{2}},\;V_{0}\leq|V|\ll\frac{1}{cC}. \tag{4.40}\] The bounds for \(C_{0},L_{0}\) and \(V_{0}\) are clear. The bounds for \(M_{0}\) and \(N_{0}\) follow as a consequence of Lemmas 4.6 and 4.8. We will use the Bruggeman-Kuznetsov formula on (4.38) in Section 5. We finish this section by stating some results regarding the behaviour of \(I_{N,V}(c,l,m,n)\). ### Analysis of \(I_{N,V}(c,l,m,n)\) We use stationary phase methods to analyse the behaviour of some of the oscillatory integrals we have obtained so far. The general scheme is to identify regions when the integral is highly oscillatory, and when it is not. #### 4.4.1. Bounds for \(\widetilde{w}_{c,v,N}(m)\) We want to use the properties of inert functions to analyse the behaviour of \(\widetilde{w}_{c,v,N}(m)\). **Lemma 4.6**.: _Let \(\widetilde{w}_{c,v,N}(m)\) be as in (4.13). Let \(l,m,n,c,v\) be in dyadic intervals as before. We have_ * _(Non-Oscillatory) If_ \(\frac{\sqrt{M_{0}N}}{C_{0}}\ll p^{\varepsilon}\)_, then_ (4.41) \[\widetilde{w}_{c,v,N}(m)=\bigg{(}\frac{\sqrt{M_{0}N}}{C_{0}}\bigg{)}^{k-1} \cdot N\cdot W_{T}(c,m,v).\] _Here,_ \(T=(V,M_{0},C_{0})\) _and_ \(W_{T}(c,m,v)\) _is a_ \(p^{\varepsilon}\)_-inert family in_ \(c,m,v\)_._ _Also,_ \(\widetilde{w}_{c,v,N}(m)\) _is small unless_ \(N|V|\ll p^{\varepsilon}\)_._ * _(Oscillatory) If_ \(\frac{\sqrt{M_{0}N}}{C_{0}}\gg p^{\varepsilon}\)_, then_ (4.42) \[\widetilde{w}_{c,v,N}(m)=\frac{NC_{0}}{\sqrt{M_{0}N}}\cdot e\Big{(}-\frac{m}{ c^{2}v}\Big{)}\cdot W_{T}(c,m,v)+O(p^{-A}).\] _Here,_ \(T=(V,M_{0},C_{0})\) _and_ \(W_{T}(c,m,v)\) _is_ \(p^{\varepsilon}\)_-inert in_ \(c,m,v\)_. A can be chosen to be arbitrarily large._ _Also,_ \(\widetilde{w}_{c,v,N}(m)\) _is small unless_ \(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\)_._ Proof.: Note that, given the conditions of the lemma, and the fact that the integral in (4.13) can be truncated up to \(N\), we have \(\frac{4\pi\sqrt{mx}}{c}\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\). * If \(\frac{\sqrt{M_{0}N}}{C_{0}}\ll p^{\varepsilon}\), the Bessel function is not oscillatory, and we can write, \(J_{k-1}(t)=t^{k-1}W(t)\), where \(t^{j}W^{(j)}(t)\ll T_{0}\) with \(T_{0}\ll p^{\varepsilon}\). This is the same derivative bound satisfied by a \(T_{0}\)-inert function (again, when \(Y\ll p^{\varepsilon}\)). Thus, we have \[\widetilde{w}_{c,v,N}(m)=2\pi i^{k}\int_{0}^{\infty}\biggl{(}\frac{4\pi\sqrt{ mx}}{c}\biggr{)}^{k-1}W\biggl{(}\frac{4\pi\sqrt{mx}}{c}\biggr{)}w_{N}(x)e(xv)dx. 
\tag{4.43}\] We can now use Proposition 2.10 and 2.12 to get (4.41). Proposition 2.12 also implies that \(\widetilde{w}_{c,v,N}(m)\) is small unless \(|V|\ll\frac{1}{N}p^{\varepsilon}\), or, \(N|V|\ll p^{\varepsilon}\). 2. We use the fact that when \(t\gg 1\), the J-Bessel Function has an oscillatory behaviour and satisfies, (4.44) \[J_{k-1}(t)=\frac{1}{\sqrt{t}}(e^{it}W_{+}(t)+e^{-it}W_{it}),\] where \(W_{+}\) and \(W_{-}\) satisfy the same derivative bounds as a 1-inert family of functions. Thus, we have (using \(t=\frac{4\pi\sqrt{mx}}{c},\ T_{0}=\frac{\sqrt{M_{0}N}}{C_{0}}\)), (4.45) \[\widetilde{w}_{c,v,N}(m)=\sum_{\pm}\int_{0}^{\infty}e(xv)w_{N}(x)\frac{e^{\pm it }}{\sqrt{t}}W_{\pm}(t)dx=\sum_{\pm}\frac{1}{\sqrt{T_{0}}}\int_{0}^{\infty}e^{i \phi_{\pm}(x)}W_{N_{\pm}}(x)dx,\] where (4.46) \[\phi_{\pm}(x)=2\pi xv\pm 4\pi\frac{\sqrt{mx}}{c},\] and (4.47) \[W_{N_{\pm}}(x)=\frac{w_{N}(x)W_{\pm}(t)\sqrt{T_{0}}}{\sqrt{t}}.\] We want to use Proposition 2.14 to analyse (4.45). Note that \(W_{N_{\pm}}(x)\) forms a \(p^{\varepsilon}\)-inert family. Notice that, as \(c\leq\sqrt{N}\), when \(V>0\), \(|\phi^{\prime}_{+}(x)|=|2\pi\Big{(}v+\frac{\sqrt{m}}{c\sqrt{x}}\Big{)}|\geq \frac{1}{N}\). So we can use Proposition 2.14 (a) to conclude that the '+' integral is small. Similarly, when \(V<0\), we can conclude that the '-' integral is small. So, it suffices to consider the '-' (resp. '+') integral only when \(V>0\) (resp. \(V<0\)). As the cases are similar, we only consider the first. So, assume \(V>0\). We can check that \(\phi^{\prime\prime}_{-}(x)=\frac{\pi\sqrt{m}}{cx\sqrt{x}}\gg\frac{1}{N^{2}}\). Also, \(\phi^{\prime}_{-}(x_{0})=0\), when \(x_{0}=\frac{m}{c^{2}v^{2}}\); and \(\phi^{\prime\prime}(x_{0})=\frac{\pi c^{2}v^{3}}{m}\). Using Proposition 2.14 (b), we can rewrite (4.45) (when \(V>0\))as (4.48) \[\widetilde{w}_{c,v,N}(m)=\frac{1}{\sqrt{T_{0}}}\frac{1}{\sqrt{\phi^{\prime \prime}(x_{0})}}e^{i\phi_{-}(x_{0})}\cdot W_{T}(x_{0})+O(p^{-A}).\] where \(T=(V,M_{0},C_{0})\) and \(W_{T}(x_{0})\) is \(p^{\varepsilon}\)-inert in \(c,m,v\), and is supported on \(x_{0}\asymp N\). The condition \(x_{0}\asymp N\) implies \(\frac{M_{0}}{NC_{0}{}^{2}}\asymp V^{2}\). This, along with the fact that \(T_{0}=\frac{\sqrt{M_{0}N}}{C_{0}}\) gives (4.42). This also implies the integral is small unless \(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\). **Remark 4.7**.: _We note that as \(|V|\leq\frac{1}{C_{0}C}=\frac{1}{C_{0}\sqrt{N}}\), and \(C_{0}\leq\sqrt{N}\), \(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\) is only possible when \(M_{0}\ll p^{\varepsilon}\). As the condition in \((a)\) automatically implies \(M_{0}\ll p^{\varepsilon}\), it suffices to only consider \(M_{0}\ll p^{\varepsilon}\) for (4.36)._ We can also get a similar result for \(w_{pc,-v,N}(n)\) by following the same arguments. We state the result here. This also allows us to only consider \(N_{0}\ll p^{2+\varepsilon}\) for (4.36). **Lemma 4.8**.: _Let \(l,m,n,c,v\) be in dyadic intervals at \(L_{0},M_{0},N_{0},C_{0},V\). We have_ 1. _(Non-Oscillatory) If_ \(\frac{\sqrt{N_{0}N}}{pC_{0}}\ll p^{\varepsilon}\)_, then_ (4.49) \[\widetilde{w}_{pc,-v,N}(n)=\left(\frac{\sqrt{N_{0}N}}{pC_{0}}\right)^{k-1}\cdot N \cdot W_{T}(c,n,v).\] _Here,_ \(T=(V,N_{0},C_{0})\) _and_ \(W_{T}(c,n,v)\) _is a_ \(p^{\varepsilon}\)_-inert family in_ \(c,n,v\)_. Also,_ \(\widetilde{w}_{pc,-v,N}(n)\) _is small unless_ \(N|V|\ll p^{\varepsilon}\)__ 2. 
_(Oscillatory) If_ \(\frac{\sqrt{N_{0}N}}{pC_{0}}\gg p^{\varepsilon}\)_, then_ (4.50) \[\widetilde{w}_{pc,-v,N}(n)=\frac{NpC_{0}}{\sqrt{N_{0}N}}\cdot e\bigg{(}\frac{n }{p^{2}c^{2}v}\bigg{)}\cdot W_{T}(c,n,v)+O(p^{-A}).\] _Here,_ \(T=(V,N_{0},C_{0})\) _and_ \(W_{T}(c,n,v)\) _is_ \(p^{\varepsilon}\)_-inert in_ \(c,n,v\)_. A can be chosen to be arbitrarily large._ _Also,_ \(\widetilde{w}_{pc,-v,N}(n)\) _is small unless_ \(N|V|\asymp\frac{\sqrt{N_{0}N}}{pC_{0}}\)_._ #### 4.4.2. Bounds on \(I_{N,V}(c,l,m,n)\) We can use Lemma 4.6 and Lemma 4.8, along with Proposition 2.14 to analyse the behaviour of \[I_{N,V}(c,l,m,n)=\int_{V}^{2V}g_{c}(v)e(-p^{2}lv)\widetilde{w}_{c,v,N}(m) \widetilde{w}_{pc,-v,N}(n)dv. \tag{4.51}\] We prove the following proposition. **Proposition 4.9**.: _Let \(I_{N,V}(c,l,m,n)\) be as in (4.51). Let Let \(l,m,n,c,v\) be in dyadic intervals at \(L_{0},M_{0},N_{0},C_{0},V\) satisfying (4.40). Then_ 1. _(Non-oscillatory) If_ \(N|V|\ll p^{\varepsilon}\)_, then_ (4.52) \[I_{N,V}(c,l,m,n)=N^{2}V\cdot W_{T}(c,l,m,n).\] _Here,_ \(T=(C_{0},L_{0},M_{0},N_{0})\) _and_ \(W_{T}(c,l,m,n)\) _is a_ \(p^{\varepsilon}\)_-inert family in_ \(c,l,m,n\)_. Also,_ \(I_{N,V}(c,l,m,n)\) _is small unless_ \(C_{0}\gg N^{\frac{1}{2}-\varepsilon}\)_._ 2. _(Oscillatory) If_ \(N|V|\gg p^{\varepsilon}\)_, then_ \(I_{N,V}(c,l,m,n)\) _is small unless_ \(p^{2}m-n>0\)_,_ **and**__\(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\asymp\frac{\sqrt{N_{0}N}}{pC_{0}}\)_._ _If_ \(p^{2}m-n>0\)_,_ **and**__\(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\asymp\frac{\sqrt{N_{0}N}}{pC_{0}}\)_,_ (4.53) \[I_{N,V}(c,l,m,n)=\frac{N}{(N|V|)^{\frac{3}{2}}}\cdot e\bigg{(}\frac{2\sqrt{l(p ^{2}m-n)}}{c}\bigg{)}\cdot W_{T}(c,l,m,n)+O(p^{-A}).\] _Here,_ \(T=(C_{0},L_{0},M_{0},N_{0})\) _and_ \(W_{T}(c,l,m,n)\) _is a_ \(p^{\varepsilon}\)_-inert family in_ \(c,l,m,n\)_. A can be chosen to be arbitrarily large._ _Note that, in both cases we have the trivial bound, \(I_{N,V}(c,l,m,n)\ll Np^{\varepsilon}\)._ Proof.: 1. We have \(N|V|\ll p^{\varepsilon}\). Now, using Lemma 4.6(b) (resp. Lemma 4.8(b)), if \(\frac{\sqrt{M_{0}N}}{C_{0}}\gg p^{\varepsilon}\) (resp., \(\frac{\sqrt{N_{0}N}}{pC_{0}}\gg p^{\varepsilon}\)), then \(I_{N,V}(c,l,m,n)\) is small, as \(N|V|\ll\frac{\sqrt{N_{0}N}}{pC_{0}}\) (resp. \(N|V|\ll\frac{\sqrt{N_{0}N}}{pC_{0}}\)). So, we must have \(\frac{\sqrt{M_{0}N}}{C_{0}}\ll p^{\varepsilon}\) and \(\frac{\sqrt{N_{0}N}}{pC_{0}}\ll p^{\varepsilon}\). Using Lemma 4.6(a) and Lemma 4.8(a), we get (4.54) \[I_{N,V}(c,l,m,n)=N^{2}\bigg{(}\frac{\sqrt{M_{0}N}}{C_{0}}\bigg{)}^{k-1}\bigg{(} \frac{\sqrt{N_{0}N}}{pC_{0}}\bigg{)}^{k-1}\int_{V}^{2V}g_{c}(v)e(-p^{2}lv) \cdot W_{T}(c,m,n,v)dv.\] Using the fact that \(g_{c}(v)\) satisfies the same bounds as a \(1\)-inert function, the integral is the Fourier transform of a \(p^{\varepsilon}\)-inert family of functions at \(p^{2}l\). As \(p^{2}l\leq N\ll\frac{1}{|V|}p^{\varepsilon}\), we can use Proposition 2.12 to complete the proof. Also, as \(M_{0}\ll p^{\varepsilon}\), \(\frac{\sqrt{M_{0}N}}{C_{0}}\ll p^{\varepsilon}\) implies \(C_{0}\gg N^{\frac{1}{2}-\varepsilon}\). 2. We have \(N|V|\gg p^{\varepsilon}\). Using Lemma 4.6(a) and. Lemma 4.8(a), if \(\frac{\sqrt{M_{0}N}}{C_{0}}\ll p^{\varepsilon}\) or if \(\frac{\sqrt{N_{0}N}}{pC_{0}}\ll p^{\varepsilon}\), then \(I_{N,V}(c,l,m,n)\) is small, as \(N|V|\gg p^{\varepsilon}\). So, we must have \(\frac{\sqrt{M_{0}N}}{C_{0}}\gg p^{\varepsilon}\) and \(\frac{\sqrt{N_{0}N}}{pC_{0}}\gg p^{\varepsilon}\). Now, using Lemma 4.6(b) (resp. 
Lemma 4.8(b)) the integral then is small, unless \(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\asymp\frac{\sqrt{N_{0}N}}{pC_{0}}\). If \(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\asymp\frac{\sqrt{N_{0}N}}{pC_{0}}\), we have (for \(A>0\) arbitrarily large) (4.55) \[I_{N,V}(c,l,m,n)=\frac{Np{C_{0}}^{2}}{\sqrt{M_{0}N_{0}}}\int_{V}^{2V}g_{c}(v)e \bigg{(}\frac{n}{p^{2}c^{2}v}-\frac{m}{c^{2}v}-p^{2}lv\bigg{)}W_{T}(c,m,n,v) dv+O(p^{-A}).\] We can then use Proposition 2.14(a) to conclude that the integral is small if \(p^{2}m-n\leq 0\). If \(p^{2}m-n>0\), we can locate the stationary point and complete the proof using Proposition 2.14(b). **Remark 4.10**.: _In (4.38), we actually need to work with the product \(I_{N,V}(c,l,m,n)\cdot g_{T}(l,m,n,c)\). However, as \(g_{T}(l,m,n,c)\) satisfies the same derivative bounds in \(l,m,n,c\) as a \(1\)-inert family of functions, we can use the results proven in Proposition 4.9 for the product \(I_{N,V}(c,l,m,n)\cdot g_{T}(l,m,n,c)\) also._ ## 5. Spectral Analysis We continue with our proof of Lemma 4.3. One simplification is immediate. If \(\chi\) is trivial, then \(\tau(\chi)=1\) and we can trivially bound all the terms in (4.37) (we can use \(I_{N,V}(c,l,m,n)\ll Np^{\varepsilon}\)) to get \(E_{C_{0},L_{0},M_{0},N_{0},V}=O(Np^{\varepsilon})\). As the number of dyadic terms is also \(O(p^{\varepsilon})\), we get the required bound for \(E_{0}\). So, it suffices to work with \(\chi\) non-trivial in (4.37). Recall \(\mathcal{K}_{C_{0},L_{0},M_{0},N_{0},V}\), as defined in (4.38). Comparing with Proposition 2.18, we see that the parameters \((q,m,n,\chi,\phi(\cdot))\) for the Bruggeman Kuznetsov formula in (2.53) takes the form \((p,n-p^{2}m,-pl,\chi^{2},I_{N,V}(\cdot))\) in our case. Thus, from (4.38) and (2.53), we have \[\mathcal{K}_{C_{0},L_{0},M_{0},N_{0},V}=\mathcal{K}_{\rm Maass}+\mathcal{K}_ {\rm hol}+\mathcal{K}_{\rm Eis}, \tag{5.1}\] where \[\mathcal{K}_{\rm Maass}=\sum_{t_{j}}\mathcal{L}^{\pm}\Phi(t_{j})\sum_{r_{1}r_{ 2}=p}\sum_{\pi\in\mathcal{H}_{it}(r_{2},\chi^{2})}\frac{4\pi\epsilon_{\pi}}{V( p)\mathscr{L}_{\pi}^{*}(1)}\sum_{\delta|r_{1}}\overline{\lambda}_{\pi}^{(\delta)}(|n-p ^{2}m|)\overline{\lambda}_{\pi}^{(\delta)}(|-pl|). \tag{5.2}\] \(\mathcal{K}_{\rm hol}\) and \(\mathcal{K}_{\rm Eis}\) are defined similarly, as in Proposition 2.18. We can now rewrite (4.36) as \[E_{C_{0},L_{0},M_{0},N_{0},V}=\mathcal{M}_{\rm Maass}+\mathcal{M}_{\rm hol}+ \mathcal{M}_{\rm Eis}, \tag{5.3}\] where \[\mathcal{M}_{\text{Maass}}=\sum_{\begin{subarray}{c}l\asymp L_{0},m\asymp M_{0}, \cdots\\ \gcd(cln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi( p)p^{2}}\sideset{}{\sum}{}_{\chi(p)}{}^{*}\tau(\chi)^{3}\chi(-na_{\alpha}l)\cdot \mathcal{K}_{\text{Maass}}, \tag{5.4}\] is the contribution from the \(\mathcal{K}_{\text{Maass}}\) term. Similarly, \(\mathcal{M}_{\text{hol}}\) and \(\mathcal{M}_{\text{Eis}}\) are the contributions from the \(\mathcal{K}_{\text{Eis}}\) and \(\mathcal{K}_{\text{hol}}\) terms respectively. We note that as the number of dyadic components in (4.36) is \(O(p^{\varepsilon})\), Lemma 4.3 follows if we can show each of these terms, \(\mathcal{M}_{\text{Maass}},\ \mathcal{M}_{\text{Eis}}\) and \(\mathcal{M}_{\text{hol}}\) are \(O(Np^{\varepsilon})\). This is what we will show. We use two different expressions for the integral transforms \(\mathcal{L}^{\text{hol}}\Phi(\cdot)\), and \(\mathcal{L}^{\pm}\Phi(\cdot)\). If \(N|V|\ll p^{\varepsilon}\) (non-oscillatory range), we use the ones given in (2.49) and (2.50). 
If \(N|V|\gg p^{\varepsilon}\) (oscillatory range), we use the ones given in (2.45) and (2.46). We first work with the Maass form term.

### Maass Form Term Analysis

As \(r_{1}r_{2}=p\) and \(\delta\mid r_{1}\) in (5.2), the right side of (5.2) can be rewritten as a sum of three terms corresponding to \((r_{1},r_{2},\delta)=(1,p,1)\), or \((p,1,1)\), or \((p,1,p)\). Also, recall that \(\lambda_{\pi}^{(\delta)}(\cdot)\) is defined in (2.52). In particular, \(\lambda_{\pi}^{(1)}(m)=\lambda_{\pi}(m)\). We can use this to split (5.4) as \[\mathcal{M}_{\text{Maass}}=\mathcal{M}_{\text{Maass}_{0}}+\mathcal{M}_{\text{Maass}_{1}}+\mathcal{M}_{\text{Maass}_{2}}, \tag{5.5}\] where \[\mathcal{M}_{\text{Maass}_{0}}=\sum_{\begin{subarray}{c}l\asymp L_{0},m\asymp M_{0},\cdots\\ \gcd(cln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\sideset{}{^{*}}{\sum}_{\chi(p)}\tau(\chi)^{3}\chi(-na_{\alpha}l)\\ \cdot\sum_{t_{j}}\mathcal{L}^{\pm}\Phi(t_{j})\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\overline{\lambda}_{\pi}^{(1)}(|n-p^{2}m|)\overline{\lambda}_{\pi}^{(1)}(|-pl|), \tag{5.6}\] is the contribution from \(\mathcal{K}_{\text{Maass}}\) when \((r_{1},r_{2},\delta)=(1,p,1)\). The other two are defined similarly: \(\mathcal{M}_{\text{Maass}_{1}}\) corresponds to \((r_{1},r_{2},\delta)=(p,1,1)\), and \(\mathcal{M}_{\text{Maass}_{2}}\) corresponds to \((r_{1},r_{2},\delta)=(p,1,p)\).

We note that, for \(\mathcal{M}_{\text{Maass}_{1}}\) and \(\mathcal{M}_{\text{Maass}_{2}}\), as \(r_{2}=1\), we must have that \(\chi^{2}\) is trivial. As \(\chi\) itself is non-trivial, \(\chi\) must be the unique quadratic character modulo \(p\) (Legendre symbol).

We show that each of the three terms in (5.5) is \(O(Np^{\varepsilon})\). We start with \(\mathcal{M}_{\text{Maass}_{0}}\). We split the analysis into two cases: the non-oscillatory range (when \(N|V|\ll p^{\varepsilon}\)) and the oscillatory range (when \(N|V|\gg p^{\varepsilon}\)).

#### 5.1.1. Non-Oscillatory Case

We begin by noting some properties of the functions \(\Phi(\cdot)\) and \(\mathcal{L}^{\pm}\Phi(\cdot)\).

**Analysis of \(\Phi(\cdot)\):** We have \[\Phi(y,\cdot)=\frac{p}{y^{2}}I_{N,V}\bigg{(}\frac{y}{\sqrt{p}},l,m,n\bigg{)}. \tag{5.7}\] The Mellin transform of \(\Phi\) is given by \[\widetilde{\Phi}(s,\cdot)=\int_{0}^{\infty}\Phi(y,\cdot)y^{s-1}dy. \tag{5.8}\] As \(N|V|\ll p^{\varepsilon}\), Proposition 4.9(a) implies that \(I_{N,V}(c,l,m,n)=N^{2}\cdot V\cdot W_{T}(c,l,m,n)\), where \(W_{T}\) is \(p^{\varepsilon}\)-inert in the variables \(c,l,m,n\). We also recall that Proposition 4.9(a) allows us to restrict to \(C_{0}\geq N^{\frac{1}{2}-\varepsilon}\), as \(I_{N,V}(\cdot)\) is small otherwise. Thus, \(\widetilde{\Phi}(s+1,\cdot)\) is the Mellin transform of a \(p^{\varepsilon}\)-inert family of functions at \(s-1\). Proposition 2.13 then implies \[\widetilde{\Phi}(s+1,\cdot)=(Np)(N|V|)(\sqrt{p}C_{0})^{s-1}W_{T}(s-1,l,m,n). \tag{5.9}\] Here, \(W_{T}(\cdot)\) is \(p^{\varepsilon}\)-inert in \(l,m,n\) and is small when \(|\text{Im}(s-1)|\gg p^{\varepsilon}\).

**Analysis of \(\mathcal{L}^{\pm}\Phi(\cdot)\):** Recall \(h_{\pm}(s,t)\) was defined in (2.51). We note that, for any fixed \(\sigma_{0}\) with \(d(\frac{\sigma_{0}}{2},\mathbb{Z}_{\leq 0})\geq\frac{1}{100}\), we have the bound \[h_{\pm}(\sigma_{0}+iv,t)\ll\Big{(}1+|t+\frac{v}{2}|\Big{)}^{\frac{\sigma_{0}-1}{2}}\Big{(}1+|t-\frac{v}{2}|\Big{)}^{\frac{\sigma_{0}-1}{2}}.
\tag{5.10}\] Also recall that \(\mathcal{L}^{\pm}\Phi(t)\) was defined in (2.50). We define \[H_{\pm}(s,t,l,m,n)=\frac{1}{2\pi i}h_{\pm}(s,t)\widetilde{\Phi}(s+1,\cdot)(4\pi)^{-s}. \tag{5.11}\] So, we can rewrite (2.50) as \[\mathcal{L}^{\pm}\Phi(t)=\int_{\Re(s)=\sigma}H_{\pm}(s,t,l,m,n)|(n-p^{2}m)(-pl)|^{-\frac{s}{2}}ds. \tag{5.12}\] We state the following lemma regarding the behaviour of \(\mathcal{L}^{\pm}\Phi(t)\).

**Lemma 5.1**.: _Let \(H_{\pm}(s,t,l,m,n)\) and \(\mathcal{L}^{\pm}\Phi(t)\) be as in (5.11) and (5.12). Let \(A>0\) be arbitrarily large._

* _If_ \(|t|\gg p^{\varepsilon}\)_, then_ (5.13) \[\mathcal{L}^{\pm}\Phi(t)\ll_{A,\varepsilon}(1+|t|)^{-A}(Np)^{-100}.\]
* _If_ \(|t|\ll p^{\varepsilon}\)_, then for_ \(\sigma=\Re(s)>1\)_,_ (5.14) \[|H_{\pm}(s,t)|\ll(Np)^{1+\varepsilon}(\sqrt{p}C_{0})^{\sigma-1}(1+|s|)^{-A}.\]

Proof.: * If we take the contour of integration in (5.12) far to the left, we encounter poles at \(\frac{s}{2}\pm it=0,-1,-2,\cdots\). This means that \(|\text{Im}(s)|\asymp|t|\gg(Np)^{\varepsilon}\). Now, by (5.9), \(\widetilde{\Phi}(\cdot)\) is very small at this height.
* This follows from repeated integration by parts on (5.8), and using Proposition 4.9(a) and (5.10).

Lemma 5.1 allows us to essentially restrict to \(|t_{j}|\ll p^{\varepsilon}\).

**Mellin Transform of \(H_{\pm}(s,t,\cdot)\):** Consider the function \(H_{+}(\cdot)\) as defined in (5.11). By the Mellin inversion theorem, we have for \(\Re(u_{j})=\sigma_{j}>0\), \(j=1,2,3\), \[H_{+}(s,t,l,m,n)=\int_{\Re(u_{j})=\sigma_{j}}l^{-u_{1}}m^{-u_{2}}n^{-u_{3}}\widetilde{H}(s,t,u_{1},u_{2},u_{3})\,du_{1}\,du_{2}\,du_{3}, \tag{5.15}\] where \(\widetilde{H}(s,t,u_{1},u_{2},u_{3})\) is the (partial) Mellin transform of \(H_{+}(\cdot)\) with respect to the variables \(l,m,n\). More specifically, \[\widetilde{H}(s,t,u_{1},u_{2},u_{3})=\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}H_{+}(s,t,l,m,n)l^{u_{1}-1}m^{u_{2}-1}n^{u_{3}-1}\,dl\,dm\,dn. \tag{5.16}\] Using the fact that \(\widetilde{\Phi}(\cdot)\) is \(p^{\varepsilon}\)-inert in \(l,m,n\), when \(\sigma=\Re(s)>1\) and \(\sigma_{j}=\Re(u_{j})>0\) for \(j=1,2,3\), we have \[\widetilde{H}(s,t,u_{1},u_{2},u_{3})\ll(\sqrt{p}C_{0})^{\sigma-1}(Np)^{1+\varepsilon}L_{0}^{\sigma_{1}}M_{0}^{\sigma_{2}}N_{0}^{\sigma_{3}}(1+|s|)^{-A}\prod_{j=1}^{3}(1+|u_{j}|)^{-A}. \tag{5.17}\] Using (5.12) and (5.15), we get that \[\mathcal{L}^{+}\Phi(t)=\int_{\Re(s)=\sigma}\int_{\Re(u_{j})=\sigma_{j}}l^{-u_{1}}m^{-u_{2}}n^{-u_{3}}\widetilde{H}(s,t,u_{1},u_{2},u_{3})\big{(}(p^{2}m-n)pl\big{)}^{-\frac{s}{2}}\,ds\,du_{1}\,du_{2}\,du_{3}. \tag{5.18}\]

**Remark 5.2**.: _We can repeat these steps for \(H_{-}(\cdot)\), and get an analogous expression to (5.18) for \(\mathcal{L}^{-}\Phi(t)\)._

**Final Steps:** We show the required bounds for \(\mathcal{M}_{\text{Maass}_{0}}\) in the non-oscillatory region, when the sign is \(+\). The proof when the sign is \(-\) is very similar. We recall that the sign in (5.6) depends on \[\operatorname{sgn}(-pl(n-p^{2}m))=\operatorname{sgn}(p^{2}m-n),\text{ as }l>0.\] So, we only consider terms in (5.6) with \(p^{2}m-n>0\), and \(N|V|\ll p^{\varepsilon}\).
Using (5.18), we can rewrite (5.6) as \[\mathcal{M}_{\text{Maass}_{0}}=\sum_{\begin{subarray}{c}l,m,n\\ \gcd(ln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\sum\nolimits_{\chi(p)}{}^{*}\tau(\chi)^{3}\chi(-na_{\alpha}l)\\ \cdot\sum_{t_{j}}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\overline{\lambda}_{\pi}(p^{2}m-n)\overline{\lambda}_{\pi}(pl)\int_{\begin{subarray}{c}\Re(s)=\sigma\\ \Re(u_{j})=\sigma_{j}\end{subarray}}\frac{\widetilde{H}(s,t,u_{1},u_{2},u_{3})}{((p^{2}m-n)pl)^{\frac{s}{2}}}\frac{ds\,\,du_{1}\,\,du_{2}\,\,du_{3}}{l^{u_{1}}m^{u_{2}}n^{u_{3}}}. \tag{5.19}\] We bound (5.19) by taking the absolute value of the integrand. Using (5.17) gives us (for arbitrarily large \(A>0\)) \[\mathcal{M}_{\text{Maass}_{0}}\ll\sum_{\begin{subarray}{c}l,m,n\\ \gcd(ln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\sum\nolimits_{\chi(p)}{}^{*}\tau(\chi)^{3}\chi(-na_{\alpha}l)\sum_{t_{j}}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\\ \cdot(\sqrt{p}C_{0})^{\sigma-1}(Np)^{1+\varepsilon}\frac{L_{0}^{\sigma_{1}}M_{0}^{\sigma_{2}}N_{0}^{\sigma_{3}}}{l^{\sigma_{1}}m^{\sigma_{2}}n^{\sigma_{3}}}\frac{\overline{\lambda}_{\pi}(p^{2}m-n)\overline{\lambda}_{\pi}(pl)}{((p^{2}m-n)pl)^{\frac{\sigma}{2}}}\int_{\begin{subarray}{c}\Re(s)=\sigma\\ \Re(u_{j})=\sigma_{j}\end{subarray}}\frac{ds\,\,du_{1}\,\,du_{2}\,\,du_{3}}{(1+|s|)^{A}\prod_{j=1}^{3}(1+|u_{j}|)^{A}}. \tag{5.20}\] We can combine the \(l\) terms in (5.20) to form an \(L\)-function. In particular, we have \[\sum_{l}\frac{\chi(l)\overline{\lambda_{\pi}}(pl)}{l^{\frac{\sigma}{2}+\sigma_{1}}}=\overline{\lambda}_{\pi}(p)\sum_{l}\frac{\chi(l)\overline{\lambda}_{\pi}(l)}{l^{\frac{\sigma}{2}+\sigma_{1}}}=\overline{\lambda}_{\pi}(p)L(\overline{\pi}\otimes\chi,\tfrac{\sigma}{2}+\sigma_{1}). \tag{5.21}\] Lemma 5.1 and Proposition 4.9(a) allow us to restrict (up to a small error) to when \(|t_{j}|\ll p^{\varepsilon}\), and \(N^{\frac{1}{2}-\varepsilon}\leq C_{0}\leq\sqrt{N}\). Also, as \(\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})\), \(|\lambda_{\pi}(p)|\leq 1\) (\(p\) divides the level). Additionally, we have the bounds \(|\tau(\chi)|=\sqrt{p},\ |V(p)|\asymp p\).
These, along with (5.21), and the fact that \(\chi(-n)=\chi(p^{2}m-n)\), give us \[\mathcal{M}_{\mathrm{Maass}_{0}}\ll\mathcal{S}\cdot\ \frac{p^{\frac{3}{2}}L_{0}^{\sigma_{1}}M_{0}^{\sigma_{2}}N_{0}^{\sigma_{3}}(Np)^{\frac{1+\sigma}{2}}}{p\cdot p^{3}\cdot p^{\frac{\sigma}{2}}}\int_{\Re(u_{j})=\sigma_{j}}\frac{ds\ du_{1}\ du_{2}\ du_{3}}{(1+|s|)^{A}\prod_{j=1}^{3}(1+|u_{j}|)^{A}}, \tag{5.22}\] where \[\mathcal{S}=\sum_{t_{j}\ll p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\chi(p)}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}L(\overline{\pi}\otimes\chi,\tfrac{\sigma}{2}+\sigma_{1})\sum_{m\asymp M_{0}}\frac{\lambda_{f}(m)}{m^{\sigma_{2}}}\sum_{n\asymp N_{0}}\frac{\overline{\lambda}_{f}(n)\overline{\lambda}_{\pi}(p^{2}m-n)\chi(p^{2}m-n)}{(p^{2}m-n)^{\frac{\sigma}{2}}n^{\sigma_{3}}}.\] Using the Cauchy-Schwarz inequality on \(\mathcal{S}\), we get \[\mathcal{S}\leq(\mathcal{S}_{1})^{\frac{1}{2}}\cdot(\mathcal{S}_{2})^{\frac{1}{2}}, \tag{5.23}\] where \[\mathcal{S}_{1}=\sum_{t_{j}\ll p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\chi(p)}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\left|L(\overline{\pi}\otimes\chi,\tfrac{\sigma}{2}+\sigma_{1})\right|^{2}, \tag{5.24}\] and \[\mathcal{S}_{2}=\sum_{t_{j}\ll p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\chi(p)}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\left|\sum_{m\asymp M_{0}}\frac{\lambda_{f}(m)}{m^{\sigma_{2}}}\sum_{n\asymp N_{0}}\frac{\overline{\lambda}_{f}(n)\overline{\lambda}_{\pi}(p^{2}m-n)\chi(p^{2}m-n)}{(p^{2}m-n)^{\frac{\sigma}{2}}n^{\sigma_{3}}}\right|^{2}. \tag{5.25}\] We now choose \(\sigma=1+\varepsilon,\ \sigma_{j}=\frac{\varepsilon}{2}\), for \(j=1,2,3\), and use the spectral large sieve inequality to get bounds on (5.24) and (5.25). We note that the map \((\pi,\chi)\to\overline{\pi}\otimes\chi\) is an at most two-to-one map, and \(\overline{\pi}\otimes\chi\in\mathcal{H}_{it_{j}}(p^{2},1)\). Now, using Proposition 2.1 (with \(s=\frac{1}{2}+\varepsilon,X=1\)) in (5.24), we get \[\mathcal{S}_{1}\ll\sum_{t_{j}\ll p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\phi\in\mathcal{H}_{it_{j}}(p^{2},1)}\left|L(\phi,\tfrac{1}{2}+\varepsilon)\right|^{2}\\ \ll\sum_{t_{j}\ll p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\phi\in\mathcal{H}_{it_{j}}(p^{2},1)}\left(\left|\sum_{n\ll p^{1+\varepsilon}}\frac{\lambda_{\phi}(n)}{n^{\frac{1}{2}+\varepsilon}}V\bigg{(}\frac{n}{p}\bigg{)}\right|^{2}+\left|\sum_{n\ll p^{1+\varepsilon}}\frac{\overline{\lambda_{\phi}}(n)}{n^{\frac{1}{2}-\varepsilon}}V\bigg{(}\frac{n}{p}\bigg{)}\right|^{2}\right). \tag{5.26}\] Using Proposition 2.19 twice in (5.26), we get that \[\mathcal{S}_{1}\ll\left(p^{2+\varepsilon}+p^{1+\varepsilon}\right)^{1+\varepsilon}\cdot\left(\sum_{n\leq p^{1+\varepsilon}}\frac{1}{n^{1+2\varepsilon}}\bigg{|}V\bigg{(}\frac{n}{p}\bigg{)}\bigg{|}^{2}+\sum_{n\leq p^{1+\varepsilon}}\frac{1}{n^{1-2\varepsilon}}\bigg{|}V\bigg{(}\frac{n}{p}\bigg{)}\bigg{|}^{2}\right)\ll p^{2+\varepsilon}. \tag{5.27}\] Bounding \(\mathcal{S}_{2}\) requires more work. We first define \[S_{2,N_{0}}(m)=\sum_{n\asymp N_{0}}\frac{\overline{\lambda}_{f}(n)\overline{\lambda}_{\pi}(p^{2}m-n)\chi(p^{2}m-n)}{(p^{2}m-n)^{\frac{1+\varepsilon}{2}}n^{\frac{\varepsilon}{2}}}. \tag{5.28}\] Now, consider the \(m\)-sum in (5.25) (with \(\sigma=1+\varepsilon,\sigma_{2}=\sigma_{3}=\frac{\varepsilon}{2}\)). We recall that \(M_{0}\ll p^{\varepsilon}\), and \(N_{0}\ll p^{2+\varepsilon}\).
Using the Cauchy-Schwarz inequality here, we see that \[\left|\sum_{m\asymp M_{0}}\frac{\lambda_{f}(m)}{m^{\sigma_{2}}}S_{2,N_{0}}(m)\right|^{2}\leq\sum_{m\asymp M_{0}}\left|\frac{\lambda_{f}(m)}{m^{\frac{\varepsilon}{2}}}\right|^{2}\cdot\sum_{m\asymp M_{0}}|S_{2,N_{0}}(m)|^{2}\ll p^{\varepsilon}\sum_{m\asymp M_{0}}|S_{2,N_{0}}(m)|^{2}. \tag{5.29}\] Now, changing variables \(n\to p^{2}m-n\), we can rewrite (5.28) as \[S_{2,N_{0}}(m)=\sum_{n\ll p^{2+\varepsilon}}\frac{\overline{\lambda}_{f}(p^{2}m-n)}{(p^{2}m-n)^{\frac{\varepsilon}{2}}}\cdot\frac{\overline{\lambda}_{\pi\otimes\chi}(n)}{n^{\frac{1+\varepsilon}{2}}}. \tag{5.30}\] Now, we can use (5.29) and (5.30) in (5.25), to get \[\mathcal{S}_{2}\ll p^{\varepsilon}\sum_{m\asymp M_{0}}\sum_{t_{j}\ll p^{\varepsilon}}\sum_{\phi\in\mathcal{H}_{it_{j}}(p^{2},1)}\left|\sum_{n\ll p^{2+\varepsilon}}\frac{\overline{\lambda}_{f}(p^{2}m-n)}{(p^{2}m-n)^{\frac{\varepsilon}{2}}}\cdot\frac{\lambda_{\phi}(n)}{n^{\frac{1+\varepsilon}{2}}}\right|^{2}. \tag{5.31}\] Using Proposition 2.19 in (5.31), we get \[\mathcal{S}_{2}\ll\left(p^{2+\varepsilon}+p^{2+\varepsilon}\right)^{1+\varepsilon}\sum_{m\asymp M_{0}}\sum_{n\ll p^{2+\varepsilon}}\frac{|\lambda_{f}(p^{2}m-n)|^{2}}{n^{1+\varepsilon}}\ll p^{2+\varepsilon}. \tag{5.32}\] Here, the last inequality follows from Deligne's bound for the Fourier coefficients of holomorphic cusp forms. Using (5.23), (5.27) and (5.32) in (5.22), we have (with \(\sigma=1+\varepsilon,\ \sigma_{j}=\frac{\varepsilon}{2}\), for \(j=1,2,3\)) \[\mathcal{M}_{\mathrm{Maass}_{0}}\ll p^{2+\varepsilon}\cdot\ \frac{p^{\frac{3}{2}}(L_{0}M_{0}N_{0})^{\frac{\varepsilon}{2}}(Np)^{1+\frac{\varepsilon}{2}}}{p\cdot p^{3}\cdot p^{\frac{1}{2}}}\ll Np^{\varepsilon}. \tag{5.33}\] (The powers of \(p\) cancel exactly: \(2+\frac{3}{2}+1=1+3+\frac{1}{2}\).)

#### 5.1.2. Oscillatory Case

In this range, \(N|V|\gg p^{\varepsilon}\). The analysis is similar to the non-oscillatory case, although we use the Bessel integral form of the Bruggeman-Kuznetsov formula. Once again, we begin by noting some properties of the functions \(\Phi(\cdot)\) and \(\mathcal{L}^{\pm}\Phi(\cdot)\).

**Analysis of \(\Phi(\cdot)\):** Recall \(\Phi(y,\cdot)\), as defined in (5.7). As \(N|V|\gg p^{\varepsilon}\), Proposition 4.9(b) implies that \(I_{N,V}(c,l,m,n)\) is small unless \(p^{2}m-n>0\) and \(N|V|\asymp\frac{\sqrt{M_{0}N}}{C_{0}}\asymp\frac{\sqrt{N_{0}N}}{pC_{0}}\), in which case we have (restating (4.53)) \[I_{N,V}(c,l,m,n)=\frac{N}{(NV)^{\frac{3}{2}}}\cdot e\Bigg{(}\frac{2\sqrt{l(p^{2}m-n)}}{c}\Bigg{)}\cdot W_{T}(l,m,n,c)+O(p^{-A}). \tag{5.34}\] We recall that the sign in (5.6) is \(\mathrm{sgn}((n-p^{2}m)(-pl))=\mathrm{sgn}(p^{2}m-n)\). Now, as \(I_{N,V}(c,l,m,n)\) is small if \(p^{2}m-n<0\), we only need to consider the case when the sign is \(+\).

**Analysis of \(\mathcal{L}^{+}\Phi(\cdot)\):** Recall that \(\mathcal{L}^{+}\Phi(\cdot)\) is defined as in (2.46). Let \(z=2\sqrt{pl(p^{2}m-n)}\). As \(p^{2}m-n>0\), this is well defined. Note that \(z\asymp\left(\frac{NN_{0}}{p}\right)^{\frac{1}{2}}\). Now, \[\frac{J_{2ir}(x)-J_{-2ir}(x)}{\sinh(\pi r)}=\frac{2}{\pi i}\int_{-\infty}^{\infty}\cos(x\cosh y)e\Big{(}\frac{ry}{\pi}\Big{)}dy. \tag{5.35}\] We can use (4.53) and (5.35) to rewrite \(\mathcal{L}^{+}\Phi(\cdot)\) (up to a small error term) as \[\mathcal{L}^{+}\Phi(t_{j})=\frac{Np}{\pi(NV)^{\frac{3}{2}}}\int_{0}^{\infty}\int_{-\infty}^{\infty}\cos\biggl{(}\frac{2\pi z}{x}\cosh y\biggr{)}e\biggl{(}\frac{t_{j}y}{\pi}\biggr{)}e\Bigl{(}\frac{z}{x}\Bigr{)}W_{T}(\frac{x}{\sqrt{p}},\cdot)\frac{dy\ dx}{x^{2}}.
\tag{5.36}\] Changing variables \(x\to\frac{C_{0}\sqrt{p}}{x}\), this becomes \[\mathcal{L}^{+}\Phi(t_{j})=\frac{N\sqrt{p}}{\pi C_{0}(NV)^{\frac{3}{2}}}\int_{0}^{\infty}\int_{-\infty}^{\infty}\cos\biggl{(}\frac{2\pi zx}{C_{0}\sqrt{p}}\cosh y\biggr{)}e\biggl{(}\frac{t_{j}y}{\pi}+\frac{zx}{C_{0}\sqrt{p}}\biggr{)}W_{T^{\prime}}(x,\cdot)dy\ dx. \tag{5.37}\] Here, \(W_{T^{\prime}}\) is another \(p^{\varepsilon}\)-inert family, supported on \(x\asymp 1\). We can extend the \(x\)-integral to all of \(\mathbb{R}\) as \(W_{T^{\prime}}(\cdot)\) vanishes on the negative reals. Also, as \(\cos u=\frac{1}{2}\bigl{(}e^{iu}+e^{-iu}\bigr{)}\), we can use Fubini's theorem once to get \[\mathcal{L}^{+}\Phi(t_{j})=\frac{N\sqrt{p}}{\pi C_{0}(NV)^{\frac{3}{2}}}\int_{-\infty}^{\infty}e\biggl{(}\frac{t_{j}y}{\pi}\biggr{)}\widehat{W_{T^{\prime}}}\biggl{(}-\frac{z}{C_{0}\sqrt{p}}(1\pm\cosh y)\biggr{)}dy. \tag{5.38}\] Here, \[\widehat{W_{T^{\prime}}}(t)=\int_{-\infty}^{\infty}e(-xt)W_{T^{\prime}}(x,\cdot)dx. \tag{5.39}\] Using Proposition 2.12, we can show that the transform appearing in (5.38) is small unless \(\frac{z}{C_{0}\sqrt{p}}(1\pm\cosh y)\ll p^{\varepsilon}\). So, for the integral to not be small, the sign should be \(-\). As \(1-\cosh y=-(\frac{y^{2}}{2!}+\frac{y^{4}}{4!}+\dots)\), we should have \[y\ll\biggl{(}\frac{C_{0}\sqrt{p}}{z}\biggr{)}^{\frac{1}{2}}p^{\varepsilon}\asymp\biggl{(}\frac{C_{0}p}{\sqrt{NN_{0}}}\biggr{)}^{\frac{1}{2}}p^{\varepsilon}\asymp\biggl{(}\frac{1}{NV}\biggr{)}^{\frac{1}{2}}p^{\varepsilon}. \tag{5.40}\] We can use a Taylor series expansion of \(\widehat{W_{T^{\prime}}}(\cdot)\), with the leading term being \(\widehat{W_{T^{\prime}}}\Bigl{(}-\frac{z}{C_{0}\sqrt{p}}\frac{y^{2}}{2}\Bigr{)}\), to rewrite (5.38) as \[\mathcal{L}^{+}\Phi(t_{j})=\frac{N\sqrt{p}}{\pi C_{0}(NV)^{\frac{3}{2}}}\int_{-\infty}^{\infty}e\biggl{(}\frac{t_{j}y}{\pi}\biggr{)}\widehat{W_{T^{\prime}}}\biggl{(}\frac{z}{C_{0}\sqrt{p}}\frac{y^{2}}{2},\cdot\biggr{)}dy\ +O(p^{-A}). \tag{5.41}\] We define \[Q\coloneqq\frac{z}{2C_{0}\sqrt{p}}\asymp\frac{\sqrt{NN_{0}}}{C_{0}p}\asymp NV. \tag{5.42}\] Using a change of variables, \(y\to\sqrt{Q}y\) in (5.41), we get \[\mathcal{L}^{+}\Phi(t_{j})=\frac{N\sqrt{p}}{\pi C_{0}(NV)^{\frac{3}{2}}\sqrt{Q}}\int_{-\infty}^{\infty}e\biggl{(}\frac{t_{j}y}{\pi\sqrt{Q}}\biggr{)}\widehat{W_{T^{\prime}}}\bigl{(}y^{2},\cdot\bigr{)}dy. \tag{5.43}\] Now, we can use Proposition 2.12 once again on (5.43), to get \[\mathcal{L}^{+}\Phi(t_{j})=\frac{N\sqrt{p}}{\pi C_{0}(NV)^{\frac{3}{2}}\sqrt{Q}}G\biggl{(}\frac{t_{j}}{\sqrt{Q}},\cdot\biggr{)}, \tag{5.44}\] where \(G(t,l,m,n)\) satisfies the same derivative bounds as a \(p^{\varepsilon}\)-inert family in the first variable. It is \(p^{\varepsilon}\)-inert in the other variables \((l,m,n)\). Also, \[G\bigg{(}\frac{t_{j}}{\sqrt{Q}},\cdot\bigg{)}\text{ is small unless }|t_{j}|\ll\sqrt{Q}p^{\varepsilon}\asymp(NV)^{\frac{1}{2}}p^{\varepsilon}.
\tag{5.45}\]

**Mellin Transform of \(G(t,\cdot)\):** Similar to (5.18), we take a Mellin transform of \(G(\cdot)\) with respect to the variables \(l,m,n\), and use Mellin inversion to rewrite (5.44) as \[\mathcal{L}^{+}\Phi(t_{j})=\frac{N\sqrt{p}}{\pi C_{0}(NV)^{2}}\int_{\begin{subarray}{c}\Re(u_{1})=\sigma_{1}\\ \Re(u_{2})=\sigma_{2}\\ \Re(u_{3})=\sigma_{3}\end{subarray}}l^{-u_{1}}m^{-u_{2}}n^{-u_{3}}\widetilde{G}(t_{j},u_{1},u_{2},u_{3})\ du_{1}\ du_{2}\ du_{3}, \tag{5.46}\] where \[\widetilde{G}(t,u_{1},u_{2},u_{3})=\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}G(t,l,m,n)l^{u_{1}-1}m^{u_{2}-1}n^{u_{3}-1}\ dl\ dm\ dn, \tag{5.47}\] and \(\sigma_{j}>0\), for \(j=1,2,3\). We note here that \(\widetilde{G}(t,\cdot)\) is small unless \(|t_{j}|\ll\sqrt{Q}p^{\varepsilon}\asymp(NV)^{\frac{1}{2}}p^{\varepsilon}\), because of a similar bound on \(G(t,\cdot)\). Also, similar to (5.17), we have that, for arbitrarily large \(A>0\), \[\widetilde{G}(t,u_{1},u_{2},u_{3})\ll p^{\varepsilon}L_{0}^{\sigma_{1}}M_{0}^{\sigma_{2}}N_{0}^{\sigma_{3}}\prod_{j=1}^{3}(1+|u_{j}|)^{-A}. \tag{5.48}\]

**Final Steps:** Using (5.46), we can rewrite (5.6) (up to a small error term) as \[\mathcal{M}_{\text{Maass}_{0}}=\frac{N\sqrt{p}}{\pi C_{0}(NV)^{2}}\sum_{\begin{subarray}{c}l,m,n\cdots\\ \text{gcd}(ln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\sum_{\chi(p)}{}^{*}\tau(\chi)^{3}\chi(-na_{\alpha}l)\] \[\cdot\sum_{t_{j}}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\overline{\lambda}_{\pi}(p^{2}m-n)\overline{\lambda}_{\pi}(pl)\int_{\Re(u_{j})=\sigma_{j}}\frac{\widetilde{G}(t_{j},u_{1},u_{2},u_{3})\ du_{1}\ du_{2}\ du_{3}}{l^{u_{1}}m^{u_{2}}n^{u_{3}}}. \tag{5.49}\] We bound (5.49) by taking the absolute value of the integrand. Using (5.48) gives us (for arbitrarily large \(A>0\)) \[\mathcal{M}_{\text{Maass}_{0}}\ll\frac{N\sqrt{p}}{\pi C_{0}(NV)^{2}}\sum_{\begin{subarray}{c}l,m,n\\ \text{gcd}(ln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\sum_{\chi(p)}{}^{*}\tau(\chi)^{3}\chi(-na_{\alpha}l)\] \[\cdot\sum_{t_{j}}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\frac{L_{0}^{\sigma_{1}}M_{0}^{\sigma_{2}}N_{0}^{\sigma_{3}}}{l^{\sigma_{1}}m^{\sigma_{2}}n^{\sigma_{3}}}\overline{\lambda}_{\pi}(p^{2}m-n)\overline{\lambda}_{\pi}(pl)\int_{\Re(u_{j})=\sigma_{j}}\frac{du_{1}\ du_{2}\ du_{3}}{\prod_{j=1}^{3}(1+|u_{j}|)^{A}}. \tag{5.50}\] Combining the \(l\) terms, once again we have (similar to (5.21)) \[\sum_{l}\frac{\chi(l)\overline{\lambda}_{\pi}(pl)}{l^{\sigma_{1}}}=\overline{\lambda}_{\pi}(p)\sum_{l}\frac{\chi(l)\overline{\lambda}_{\pi}(l)}{l^{\sigma_{1}}}=\overline{\lambda}_{\pi}(p)L(\overline{\pi}\otimes\chi,\sigma_{1}). \tag{5.51}\] We can use (5.45) to restrict (up to a small error) to when \(|t_{j}|\ll\sqrt{Q}p^{\varepsilon}\). Also, as \(\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})\), \(|\lambda_{\pi}(p)|\leq 1\) (\(p\) divides the level). Additionally, we have the bounds \(|\tau(\chi)|=\sqrt{p},\;|V(p)|\asymp p\). These, along with (5.51), and the fact that \(\chi(-n)=\chi(p^{2}m-n)\), give us \[\mathcal{M}_{\text{Maass}_{0}}\ll\mathcal{S}^{\prime}\cdot\frac{L_{0}^{\sigma_{1}}M_{0}^{\sigma_{2}}N_{0}^{\sigma_{3}}(N\sqrt{p})}{\pi C_{0}(NV)^{2}}\frac{p^{\frac{3}{2}}}{p^{3}}\frac{1}{p}\int_{\Re(u_{j})=\sigma_{j}}\frac{du_{1}\ du_{2}\ du_{3}}{\prod_{j=1}^{3}(1+|u_{j}|)^{A}}.
\tag{5.52}\] with \[\mathcal{S}^{\prime}\!=\!\sum_{t_{j}\ll\sqrt{Q}p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\chi(p)}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}L(\overline{\pi}\otimes\chi,\sigma_{1})\left|\sum_{m\asymp M_{0}}\frac{\lambda_{f}(m)}{m^{\sigma_{2}}}\sum_{n\asymp N_{0}}\frac{\overline{\lambda}_{f}(n)\overline{\lambda}_{\pi}(p^{2}m-n)\chi(p^{2}m-n)}{n^{\sigma_{3}}}\right|.\] Once again, we can use the Cauchy-Schwarz inequality on \(\mathcal{S}^{\prime}\) to get \[\mathcal{S}^{\prime}\leq(\mathcal{S^{\prime}}{}_{1})^{\frac{1}{2}}\cdot(\mathcal{S^{\prime}}{}_{2})^{\frac{1}{2}}, \tag{5.53}\] where \[\mathcal{S^{\prime}}{}_{1}=\sum_{t_{j}\ll\sqrt{Q}p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\chi(p)}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\left|L(\overline{\pi}\otimes\chi,\sigma_{1})\right|^{2}, \tag{5.54}\] and \[\mathcal{S^{\prime}}{}_{2}=\sum_{t_{j}\ll\sqrt{Q}p^{\varepsilon}}\sideset{}{^{*}}{\sum}_{\chi(p)}\sum_{\pi\in\mathcal{H}_{it_{j}}(p,\chi^{2})}\left|\sum_{m\asymp M_{0}}\frac{\lambda_{f}(m)}{m^{\sigma_{2}}}\sum_{n\asymp N_{0}}\frac{\overline{\lambda}_{f}(n)\overline{\lambda}_{\pi}(p^{2}m-n)\chi(p^{2}m-n)}{n^{\sigma_{3}}}\right|^{2}. \tag{5.55}\] We can now take \(\sigma_{1}=\sigma_{3}=\frac{1}{2}\), and \(\sigma_{2}=\varepsilon\), and, similar to (5.27) and (5.32), use the spectral large sieve inequality to bound \(\mathcal{S^{\prime}}{}_{1}\) and \(\mathcal{S^{\prime}}{}_{2}\). The only difference is that now \(t_{j}\ll\sqrt{Q}p^{\varepsilon}\). The analogous bounds then become \[\mathcal{S^{\prime}}{}_{1}\ll\left(Qp^{2+\varepsilon}+p^{1+\varepsilon}\right)^{1+\varepsilon}\cdot\sum_{n\leq p^{1+\varepsilon}}\frac{1}{n}\left|V\!\left(\frac{n}{p}\right)\right|^{2}\ll Qp^{2+\varepsilon}, \tag{5.56}\] and \[\mathcal{S^{\prime}}{}_{2}\ll\left(Qp^{2+\varepsilon}+p^{2+\varepsilon}\right)^{1+\varepsilon}\sum_{m\asymp M_{0}}\sum_{n\ll p^{2+\varepsilon}}\frac{|\lambda_{f}(n)|^{2}}{n}\ll Qp^{2+\varepsilon}. \tag{5.57}\] Using (5.53), (5.56), and (5.57) in (5.52), we get (with \(\sigma_{1}=\sigma_{3}=\frac{1}{2},\sigma_{2}=\varepsilon\)) \[\mathcal{M}_{\text{Maass}_{0}}\ll Qp^{2+\varepsilon}\frac{(L_{0}N_{0})^{\frac{1}{2}}N\sqrt{p}}{\pi C_{0}(NV)^{2}}\frac{p^{\frac{3}{2}}}{p^{3}}\frac{1}{p}\int_{\Re(u_{j})=\sigma_{j}}\frac{du_{1}\ du_{2}\ du_{3}}{\prod_{j=1}^{3}(1+|u_{j}|)^{A}}. \tag{5.58}\] Using the fact that \(Q\asymp\frac{\sqrt{NN_{0}}}{pC_{0}}\asymp NV\), \(L_{0}\leq\frac{N}{p^{2}}\), and \(N_{0}\ll p^{2+\varepsilon}\), estimating (5.58) trivially gives \[\mathcal{M}_{\text{Maass}_{0}}\ll Np^{\varepsilon}. \tag{5.59}\]

#### 5.1.3. \(\mathcal{M}_{\text{Maass}_{1}}\) and \(\mathcal{M}_{\text{Maass}_{2}}\)

We briefly go over the process for bounding \(\mathcal{M}_{\text{Maass}_{1}}\) and \(\mathcal{M}_{\text{Maass}_{2}}\). Notice that in this case \(\chi=\chi_{0}\) is fixed; it is the unique quadratic character modulo \(p\). So compared to (5.6), the corresponding expressions for \(\mathcal{M}_{\text{Maass}_{1}}\) and \(\mathcal{M}_{\text{Maass}_{2}}\) do not have a sum over the characters \(\chi\) modulo \(p\), and the Maass forms are level \(1\), instead of \(p\). Both of these facts lead to additional cancellations. However, we can no longer use the bound \(|\lambda_{\pi}(p)|\leq 1\). Nevertheless, the trivial bound \(|\lambda_{\pi}(p)|\leq\sqrt{p}\) is sufficient for our result.
For \(\mathcal{M}_{\text{Maass}_{1}}\), similar to (5.6), we have the expression \[\mathcal{M}_{\text{Maass}_{1}}=\sum_{\begin{subarray}{c}l,m,n\\ \gcd(ln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\tau(\chi_{0})^{3}\chi_{0}(-na_{\alpha}l)\\ \cdot\sum_{t_{j}}\mathcal{L}^{\pm}\Phi(t_{j})\sum_{\pi\in\mathcal{H}_{it_{j}}(1,1)}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\overline{\lambda}_{\pi}(p^{2}m-n)\overline{\lambda}_{\pi}(pl). \tag{5.60}\] We can now repeat the same steps as in the \(\mathcal{M}_{\text{Maass}_{0}}\) case and in fact get an even better bound of \(O(Np^{-\frac{1}{2}+\varepsilon})\). (We save a factor of \(p^{2}\) in the corresponding expressions for \(\mathcal{S}_{1}\) and \(\mathcal{S}^{\prime}_{1}\); after taking square roots this contributes \(p^{-1}\), which more than offsets the extra \(p^{\frac{1}{2}}\) coming from the trivial bound \(|\lambda_{\pi}(p)|\leq\sqrt{p}\).)

For \(\mathcal{M}_{\text{Maass}_{2}}\), we have the expression \[\mathcal{M}_{\text{Maass}_{2}}=\sum_{\begin{subarray}{c}l\asymp L_{0},m\asymp M_{0},\cdots\\ \gcd(cln,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{\phi(p)p^{2}}\tau(\chi_{0})^{3}\chi_{0}(-na_{\alpha}l)\\ \cdot\sum_{t_{j}}\mathcal{L}^{\pm}\Phi(t_{j})\sum_{\pi\in\mathcal{H}_{it_{j}}(1,1)}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\overline{\lambda}_{\pi}^{(p)}(p^{2}m-n)\overline{\lambda}_{\pi}^{(p)}(pl), \tag{5.61}\] where (recalling from (2.52)) \[\lambda_{\pi}^{(p)}(m)=x_{p}(1)\lambda_{\pi}(m)+\sqrt{p}\ x_{p}(p)\lambda_{\pi}\bigg{(}\frac{m}{p}\bigg{)}. \tag{5.62}\] Here, \(\lambda_{\pi}(\frac{m}{p})=0\) if \(p\nmid m\). Notice that, as \(\gcd(n,p)=1\), \(p\nmid(n-p^{2}m)\). Also, \(x_{p}(\cdot)\ll p^{\varepsilon}\). Comparing with (5.60), we see that there is an additional \(\sqrt{p}\) factor. However, the cancellation we get from not having the \(\chi\)-sum and from the level of the Maass forms dropping to \(1\) is more than enough to take care of this. We can once again proceed as before and show that \(\mathcal{M}_{\text{Maass}_{2}}\ll Np^{\varepsilon}\).

**Remark 5.3**.: _We could use non-trivial bounds for \(|\lambda_{\pi}(p)|\) while analysing \(\mathcal{M}_{\text{Maass}_{1}}\) and \(\mathcal{M}_{\text{Maass}_{2}}\). In that case we would get \(\mathcal{M}_{\text{Maass}_{1}}=O\big{(}Np^{-1+\theta+\varepsilon}\big{)}\), and \(\mathcal{M}_{\text{Maass}_{2}}=O\Big{(}Np^{-\frac{1}{2}+\theta+\varepsilon}\Big{)}\). Here, \(\theta\) is the current progress towards the Ramanujan-Petersson conjecture; for instance, \(\theta\leq\frac{7}{64}\)._

### Holomorphic Term Analysis

For the non-oscillatory case in \(\mathcal{M}_{\text{hol}}\), we can very easily prove a variant of Lemma 5.1 (with \(t\) replaced by \(k\)), by choosing \[h_{\text{hol}}(s,k)=\frac{2^{s-1}}{\pi}\frac{\Gamma\big{(}\frac{s+k-1}{2}\big{)}}{\Gamma\big{(}\frac{k-s+1}{2}\big{)}}, \tag{5.63}\] and \[H_{\rm hol}(s,k,l,m,n)=\frac{1}{2\pi i}h_{\rm hol}(s,k)\widetilde{\Phi}(s+1,\cdot)(4\pi)^{-s}. \tag{5.64}\] The rest of the steps are identical to the corresponding Maass form case. For the oscillatory case, instead of (5.35), we use \[J_{l-1}(x)=\sum_{\pm}\frac{e^{\mp i(l-1)\frac{\pi}{2}}}{\pi}\int_{0}^{\frac{\pi}{2}}\cos((l-1)\theta)e^{\pm ix\cos\theta}\,d\theta. \tag{5.65}\] We then proceed similarly; the only difference is that we use the power series expansion of \(\cos y\) instead of \(\cosh y\) in (5.41).

### Eisenstein Term Analysis

The kernel function for \(\mathcal{M}_{\rm Eis}\) is the same as for \(\mathcal{M}_{\rm Maass}\), and hence the analysis is very similar, except for replacing \(\sum_{t_{j}}(\cdot)\) with \(\int_{-\infty}^{\infty}(\cdot)dt\).
Similar to (5.2), the corresponding Eisenstein expression is \[\mathcal{K}_{\rm Eis}=\frac{1}{4\pi}\int_{-\infty}^{\infty}\mathcal{L}^{\pm}\Phi(t)\sum_{r_{1}r_{2}=p}\sum_{\pi\in\mathcal{H}_{it,Eis}(r_{2},\chi^{2})}\frac{4\pi\epsilon_{\pi}}{V(p)\mathscr{L}_{\pi}^{*}(1)}\sum_{\delta|r_{1}}\overline{\lambda}_{\pi}^{(\delta)}(|n-p^{2}m|)\overline{\lambda}_{\pi}^{(\delta)}(|-pl|)dt. \tag{5.66}\] Here, \[\mathcal{H}_{it,Eis}(r_{2},\chi^{2})=\{E_{\chi_{1},\chi_{2}}\bigg{(}z,\frac{1}{2}+it\bigg{)},\chi_{i}\,(\text{mod }q_{i}),\text{ for }i=1,2;\ q_{1}q_{2}=r_{2},\ \chi_{1}\overline{\chi_{2}}\simeq\chi^{2}\}, \tag{5.67}\] where we use the notation \(\psi\simeq\chi\) to mean that \(\psi\) and \(\chi\) share the same underlying primitive character. Also, for \(\pi=E_{\chi_{1},\chi_{2}}\big{(}z,\frac{1}{2}+it\big{)}\), we have \[\lambda_{\pi}(n)=\lambda_{\chi_{1},\chi_{2},t}(n)=\chi_{2}(\text{sgn}(n))\sum_{ab=|n|}\frac{\chi_{1}(a)\overline{\chi}_{2}(b)}{a^{it}b^{-it}}. \tag{5.68}\] As \(r_{1}r_{2}=p\) in (5.66), the only possibilities are \(r_{2}=1\), or \(r_{2}=p\). Again, \(r_{2}=1\) only if \(\chi\) is quadratic (\(\chi\) cannot be trivial). Hence, the only possibilities for \((\chi_{1},\chi_{2})\) are \((1,1)\), or \((1,\overline{\chi}^{2})\), or \((\chi^{2},1)\). Thus, \(\mathcal{H}_{it,Eis}(1,1)=\{E_{1,1}\big{(}z,\frac{1}{2}+it\big{)}\}\), and \(\mathcal{H}_{it,Eis}(p,\chi^{2})=\{E_{1,\overline{\chi^{2}}}\big{(}z,\frac{1}{2}+it\big{)},E_{\chi^{2},1}\big{(}z,\frac{1}{2}+it\big{)}\}\).

We can now proceed exactly as in the Maass form case. The only difference we encounter is in the corresponding expressions for (5.21) (and (5.51) for the oscillatory case). Here, we get (for the non-oscillatory case) \[\sum_{l}\frac{\chi(l)\overline{\lambda_{\pi}}(pl)}{l^{\frac{\sigma}{2}+\sigma_{1}}}=\overline{\lambda_{\pi}}(p)\sum_{l}\frac{\chi(l)\overline{\lambda_{\pi}}(l)}{l^{\frac{\sigma}{2}+\sigma_{1}}}=\overline{\lambda_{\pi}}(p)\sum_{a}\sum_{b}\frac{\overline{\chi_{1}}(a)\chi_{2}(b)\chi(ab)}{(ab)^{\frac{\sigma}{2}+\sigma_{1}}a^{-it}b^{it}}\\ =\overline{\lambda}_{\pi}(p)L(\tfrac{\sigma}{2}+\sigma_{1}-it,\chi\cdot\overline{\chi_{1}})L(\tfrac{\sigma}{2}+\sigma_{1}+it,\chi\cdot\chi_{2}). \tag{5.69}\] The last expression can be simplified further depending on whether \((\chi_{1},\chi_{2})=(1,1)\), or \((1,\overline{\chi}^{2})\), or \((\chi^{2},1)\). In each case, we get a product of Dirichlet \(L\)-functions for the characters \(\chi\) and \(\overline{\chi}\). The rest of the analysis is the same; we just replace a degree \(2\) \(L\)-function with a product of two degree \(1\) \(L\)-functions.

## 6. Remaining Terms

We complete the proof of Theorem 1.7 by proving Lemma 4.2 (bounding the terms introduced by the delta symbol with \(c\) divisible by \(p\)), and Lemma 4.4 (bounding the smaller terms introduced after Voronoi summation). We start with the latter.

### Proof of Lemma 4.4

Recall the term \(E_{1}\) was defined in (4.19).
This can be rewritten as \[E_{1}=\sum_{\begin{subarray}{c}l\leq\frac{N}{p^{2}}\\ \gcd(l,p)=1\end{subarray}}\sum_{m\ll p^{\varepsilon}}\sum_{\begin{subarray}{c}n\ll p^{2+\varepsilon}\\ \gcd(n,p^{2})=p\end{subarray}}\sum_{\begin{subarray}{c}c\\ \gcd(c,p)=1\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{p^{2}c^{2}}I_{N}(c,l,m,n)\\ \cdot\sum_{\begin{subarray}{c}a<c\\ \gcd(a,c)=1\end{subarray}}e\bigg{(}\frac{-ap^{2}l}{c}\bigg{)}e\bigg{(}\frac{-a\overline{m}}{c}\bigg{)}e\bigg{(}\frac{\overline{ap^{2}}n}{c}\bigg{)}, \tag{6.1}\] where \[I_{N}(c,l,m,n)=\int_{-\infty}^{\infty}g_{c}(v)e(-p^{2}lv)\widetilde{w}_{c,v,N}(m)\widetilde{w}_{pc,-v,N}(n)dv. \tag{6.2}\] Here, \(\widetilde{w}_{c,v,N}(\cdot)\) is given by (4.13). Similar to Lemma 6.1, we claim that \(I_{N}(c,l,m,n)\ll Np^{\varepsilon}\). The \(a\)-sum at the end is actually a Kloosterman sum, \[\sum_{\begin{subarray}{c}a<c\\ \gcd(a,c)=1\end{subarray}}e\bigg{(}\frac{-ap^{2}l}{c}\bigg{)}e\bigg{(}\frac{-a\overline{m}}{c}\bigg{)}e\bigg{(}\frac{\overline{ap^{2}}n}{c}\bigg{)}=S(n\overline{p}^{2}-m,-p^{2}l;c)=S(\overline{p}(n-p^{2}m),-pl;c). \tag{6.3}\] Using the Weil bound, it follows that \[E_{1}\ll\frac{N}{p^{2}}\cdot p\cdot\frac{1}{p^{2}}\cdot Np^{\varepsilon}\cdot\sum_{c\leq\sqrt{N}}c^{-\frac{3}{2}}\ll N^{\frac{3}{4}+\varepsilon}. \tag{6.4}\] The proofs for the corresponding bounds for \(E_{2}\) and \(E_{3}\) are very similar. Following the same steps, we can, in fact, get an even better bound of \(O(N^{\frac{3}{4}}p^{-1+\varepsilon})\).

### Proof of Lemma 4.2

We use Voronoi summation twice to prove Lemma 4.2. Using Proposition 2.16 on \(T_{1}(a,c,v)\), we have \[T_{1}(a,c,v)=\frac{1}{c}\sum_{m\geq 1}\lambda_{f}(m)e\bigg{(}-\frac{\overline{a}m}{c}\bigg{)}\widetilde{w}_{c,v,N}(m). \tag{6.5}\] Here, \[\widetilde{w}_{c,v,N}(m)=2\pi i^{k}\int_{0}^{\infty}J_{k-1}\bigg{(}\frac{4\pi\sqrt{mx}}{c}\bigg{)}e(xv)w_{N}(x)dx.\] Using an integration by parts argument similar to the one in Lemma 4.6, we can show that the sum on the right in (6.5) is effectively for \(m\ll\frac{c^{2}}{N}\). As \(c\leq C=\sqrt{N}\), this is non-trivial only when \(c^{2}\geq N^{1-\varepsilon}\), in which case \(1\leq m\ll p^{\varepsilon}\).

\(T_{2}(a,c,v)\) needs more attention. Since \(c\) is divisible by \(p\), \(e\Big{(}\frac{a_{\alpha}\overline{n}}{p}\Big{)}e\big{(}\frac{-an}{c}\big{)}\) is periodic modulo \(c\) (whenever \(n\) is coprime to \(p\)), and hence can be written as a finite sum of additive characters. Using this and taking \(r=a_{\alpha}l\), we get \[T_{2}(a,c,v)=\sum_{\begin{subarray}{c}N\leq n\leq 2N\\ \gcd(n,p)=1\end{subarray}}\overline{\lambda_{f}}(n)w_{N}(n)e(-nv)\frac{1}{c}\sum_{t\,(\text{mod }c)}e\bigg{(}\frac{nt}{c}\bigg{)}\sum_{\begin{subarray}{c}u\,(\text{mod }c)\\ \gcd(u,p)=1\end{subarray}}e\bigg{(}\frac{r\overline{u}}{p}\bigg{)}e\bigg{(}-\frac{au+tu}{c}\bigg{)}\\ =\frac{1}{c}\sum_{t\,(\text{mod }c)}\sum_{\begin{subarray}{c}u\,(\text{mod }c)\\ \gcd(u,p)=1\end{subarray}}e\bigg{(}\frac{r\overline{u}}{p}\bigg{)}e\bigg{(}-\frac{au+tu}{c}\bigg{)}\sum_{\begin{subarray}{c}N\leq n\leq 2N\\ \gcd(n,p)=1\end{subarray}}\overline{\lambda_{f}}(n)e\bigg{(}\frac{nt}{c}\bigg{)}w_{N}(n)e(-nv). \tag{6.6}\] We want to use Voronoi summation for the last sum. Once again, because of an integration by parts argument, the length of the summation, post Voronoi summation, will effectively be up to \(n\ll\frac{1}{N}\Big{(}\frac{c}{\gcd(c,t)}\Big{)}^{2}\).
As \(c\leq\sqrt{N}\), this will have non-trivial contribution only when \(\gcd(c,t)\ll p^{\varepsilon}\), and \(c^{2}\geq N^{1-\varepsilon}\). We can in fact show that we can restrict to just the \(\gcd(c,t)=1\) case. Suppose \(\gcd(c,t)=g>1\). As \(g\ll p^{\varepsilon}\), \(p\mid\frac{c}{g}\). Now, we can write the contribution of these terms in (6.6) as \[T_{2}^{\prime}(a,c,v)=\frac{1}{c}\sum_{\begin{subarray}{c}g|c\\ 1<g\ll p^{\varepsilon}\end{subarray}}S_{a,c,v}(g), \tag{6.7}\] where \[S_{a,c,v}(g)=\sum_{\begin{subarray}{c}t\,(\text{mod }c)\\ \gcd(t,c)=g\end{subarray}}\sum_{\begin{subarray}{c}N\leq n\leq 2N\\ \gcd(n,p)=1\end{subarray}}\overline{\lambda_{f}}(n)e\bigg{(}\frac{nt}{c}\bigg{)}w_{N}(n)e(-nv)\sum_{\begin{subarray}{c}u\,(\text{mod }c)\\ \gcd(u,p)=1\end{subarray}}e\bigg{(}\frac{r\overline{u}}{p}\bigg{)}e\bigg{(}-\frac{au+tu}{c}\bigg{)}. \tag{6.8}\] Consider the \(u\)-sum in (6.8). If we change variables with \(u\to u+\frac{c}{g}\) in the \(u\)-sum in (6.8) (this is well defined as \(p\mid\frac{c}{g}\)), we notice that \[\sum_{\begin{subarray}{c}u\,(\text{mod }c)\\ \gcd(u,p)=1\end{subarray}}e\bigg{(}\frac{r\overline{u}}{p}\bigg{)}e\bigg{(}-\frac{au+tu}{c}\bigg{)}=e\bigg{(}-\frac{a}{g}\bigg{)}\sum_{\begin{subarray}{c}u\,(\text{mod }c)\\ \gcd(u,p)=1\end{subarray}}e\bigg{(}\frac{r\overline{u}}{p}\bigg{)}e\bigg{(}-\frac{au+tu}{c}\bigg{)}. \tag{6.9}\] As \(\gcd(a,c)=1\), and \(g\mid c\), \(e\Big{(}-\frac{a}{g}\Big{)}\neq 1\). So, the \(u\)-sum in (6.8) must be zero. Thus, \(T_{2}^{\prime}(a,c,v)=0\), and it suffices to only consider the case when \(\gcd(c,t)=1\). Using Proposition 2.16 for that case, we have (up to a small error) \[T_{2}(a,c,v)=\frac{1}{c}\sum_{\begin{subarray}{c}t\,(\text{mod }c)\\ \gcd(t,c)=1\end{subarray}}\sum_{\begin{subarray}{c}u\,(\text{mod }c)\\ \gcd(u,p)=1\end{subarray}}e\bigg{(}\frac{r\overline{u}}{p}\bigg{)}e\bigg{(}-\frac{au+tu}{c}\bigg{)}\frac{1}{c}\sum_{n\ll p^{\varepsilon}}\overline{\lambda_{f}}(n)e\bigg{(}-\frac{n\overline{t}}{c}\bigg{)}\widetilde{w}_{c,-v,N}(n). \tag{6.10}\] So, using (6.5) and (6.10) in (4.9), we get that (up to a small error) \[S_{4}(N,\alpha)=\sum_{\begin{subarray}{c}l\leq N/p^{2}\\ \gcd(l,p)=1\end{subarray}}\sum_{m\ll p^{\varepsilon}}\sum_{n\ll p^{\varepsilon}}\sum_{\begin{subarray}{c}c\\ \gcd(c,p)=p\end{subarray}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{c^{3}}\tilde{I}_{N}(c,l,m,n)B(c,l,m,n), \tag{6.11}\] where \[\tilde{I}_{N}(c,l,m,n)=\int_{-\infty}^{\infty}g_{c}(v)e(-p^{2}lv)\widetilde{w}_{c,v,N}(m)\widetilde{w}_{c,-v,N}(n)dv, \tag{6.12}\] and \[B(c,l,m,n)=\sum_{\begin{subarray}{c}a\,(\mathrm{mod}\ c)\\ \gcd(a,c)=1\end{subarray}}e\bigg{(}\frac{-ap^{2}l}{c}\bigg{)}e\bigg{(}-\frac{\overline{a}m}{c}\bigg{)}\sum_{\begin{subarray}{c}t\,(\mathrm{mod}\ c)\\ \gcd(t,c)=1\end{subarray}}\sum_{\begin{subarray}{c}u\,(\mathrm{mod}\ c)\\ \gcd(u,p)=1\end{subarray}}e\bigg{(}\frac{r\overline{u}}{p}\bigg{)}e\bigg{(}-\frac{au+tu}{c}\bigg{)}e\bigg{(}-\frac{n\overline{t}}{c}\bigg{)}. \tag{6.13}\] Let \(c=k\cdot p\); as \(c\leq\sqrt{N}\ll p^{\frac{3}{2}+\varepsilon}\), we have \(k\ll p^{\frac{1}{2}+\varepsilon}\) and \(\gcd(k,p)=1\). We can then rewrite (6.11) (up to a small error) as \[S_{4}(N,\alpha)=\sum_{\begin{subarray}{c}l\leq N/p^{2}\\ \gcd(l,p)=1\end{subarray}}\sum_{\begin{subarray}{c}m,n\\ m,n\ll p^{\varepsilon}\end{subarray}}\sum_{k\ll p^{\frac{1}{2}+\varepsilon}}\frac{\lambda_{f}(m)\overline{\lambda_{f}}(n)}{k^{3}p^{3}}\tilde{I}_{N}(kp,l,m,n)B(kp,l,m,n).
\tag{6.14}\] We complete the proof by stating the following two lemmas.

**Lemma 6.1**.: _Let \(\varepsilon>0\) and let \(N\ll p^{3+\varepsilon}\). Let \(l\leq\frac{N}{p^{2}}\) with \((l,p)=1\), and \(m,n\ll p^{\varepsilon}\). We define \(\tilde{I}_{N}(c,l,m,n)\) as in (6.12). Now, if in addition \(c\) satisfies \(N^{1-\varepsilon}\ll c^{2}\leq N\), then_ \[\tilde{I}_{N}(c,l,m,n)\ll_{\varepsilon}Np^{\varepsilon}. \tag{6.15}\]

**Lemma 6.2**.: _Let \(\varepsilon,N,l,m,n\) be the same as in Lemma 6.1. We define \(B(c,l,m,n)\) as in (6.13). Now, if in addition, \(c=k\cdot p\), with \((k,p)=1\), then_ \[B(kp,l,m,n)\ll k\sqrt{k}\ p^{2}. \tag{6.16}\]

We quickly show how Lemmas 6.1 and 6.2 imply Lemma 4.2. Using (6.15) and (6.16) in (6.14), we get (using trivial bounds) \[S_{4}(N,\alpha)\ll\sum_{\begin{subarray}{c}l\leq N/p^{2}\\ \gcd(l,p)=1\end{subarray}}\sum_{\begin{subarray}{c}m,n\\ m,n\ll p^{\varepsilon}\end{subarray}}\sum_{k\ll p^{\frac{1}{2}+\varepsilon}}\frac{|\lambda_{f}(m)\overline{\lambda_{f}}(n)|}{k^{3}p^{3}}Nk^{\frac{3}{2}}p^{2+\varepsilon}\ll N^{2}p^{-3-\frac{1}{4}+\varepsilon}\ll Np^{-\frac{1}{4}+\varepsilon}. \tag{6.17}\] Clearly, (6.17) implies (4.11).

#### 6.2.1. Proof of Lemma 6.1

Using Proposition 2.6, we can restrict the integral in (6.12), up to a small error, to \(|v|\ll\frac{1}{cC}\). Here, \(C=\sqrt{N}\). Using the trivial bound \(\widetilde{w}_{c,v,N}(m)\ll N\), we get \[\tilde{I}_{N}(c,l,m,n)\ll N^{2+\varepsilon}\frac{1}{cC}. \tag{6.18}\] We have already noted that \(c^{2}\geq N^{1-\varepsilon}\). Using this, Lemma 6.1 follows.

#### 6.2.2. Proof of Lemma 6.2

Using the Chinese Remainder theorem, we can rewrite \(B(kp,l,m,n)\) as \[B(kp,l,m,n)=B_{k}\cdot B_{p}, \tag{6.19}\] where \[B_{k}=\sum_{\begin{subarray}{c}a_{1}\,(\bmod\,k)\\ \gcd(a_{1},k)=1\end{subarray}}e\biggl{(}\frac{-a_{1}pl}{k}\biggr{)}e\biggl{(}-\frac{\overline{a_{1}}m\overline{p}}{k}\biggr{)}\sum_{\begin{subarray}{c}t_{1}\,(\bmod\,k)\\ \gcd(t_{1},k)=1\end{subarray}}e\biggl{(}-\frac{n\overline{t_{1}}\overline{p}}{k}\biggr{)}\sum_{u_{1}\,(\bmod\,k)}e\biggl{(}-\frac{a_{1}u_{1}\overline{p}}{k}\biggr{)}e\biggl{(}-\frac{t_{1}u_{1}\overline{p}}{k}\biggr{)}, \tag{6.20}\] and \[B_{p}=\sum_{\begin{subarray}{c}u_{2}\,(\bmod\,p)\\ \gcd(u_{2},p)=1\end{subarray}}e\biggl{(}\frac{r\overline{u_{2}}}{p}\biggr{)}\sum_{\begin{subarray}{c}a_{2}\,(\bmod\,p)\\ \gcd(a_{2},p)=1\end{subarray}}e\biggl{(}-\frac{\overline{a_{2}}m\overline{k}}{p}\biggr{)}e\biggl{(}-\frac{a_{2}u_{2}\overline{k}}{p}\biggr{)}\sum_{\begin{subarray}{c}t_{2}\,(\bmod\,p)\\ \gcd(t_{2},p)=1\end{subarray}}e\biggl{(}-\frac{n\overline{t_{2}}\overline{k}}{p}\biggr{)}e\biggl{(}-\frac{t_{2}u_{2}\overline{k}}{p}\biggr{)}. \tag{6.21}\] Consider \(B_{k}\) first. Using orthogonality of characters modulo \(k\) for the \(u_{1}\) sum in (6.20), and rearranging terms, we get \[B_{k}=k\sum_{\begin{subarray}{c}a_{1}\,(\bmod\,k)\\ \gcd(a_{1},k)=1\end{subarray}}e\biggl{(}\frac{-a_{1}pl}{k}\biggr{)}e\biggl{(}\frac{\overline{a_{1}}n\overline{p}}{k}\biggr{)}e\biggl{(}-\frac{\overline{a_{1}}m\overline{p}}{k}\biggr{)}. \tag{6.22}\] We note that the \(a_{1}\)-sum is in fact a Kloosterman sum: \[B_{k}=k\cdot S(-pl,(n-m)\overline{p};k)=k\cdot S(-l,(n-m);k). \tag{6.23}\] The Weil bound immediately gives \[B_{k}\ll k\sqrt{k}. \tag{6.24}\] Consider \(B_{p}\) now. The \(a_{2}\) and the \(t_{2}\) sums in (6.21) both define Kloosterman sums.
This gives us \[B_{p}=\sum_{\begin{subarray}{c}u_{2}\,(\bmod\,p)\\ \gcd(u_{2},p)=1\end{subarray}}e\biggl{(}\frac{r\overline{u_{2}}}{p}\biggr{)}S(u_{2}\overline{k},m\overline{k};p)S(u_{2}\overline{k},n\overline{k};p). \tag{6.25}\] Trivially bounding the \(u_{2}\) sum, and using the Weil bound twice, gives us \[B_{p}\ll p^{2}. \tag{6.26}\] Now, (6.24) and (6.26) imply (6.16). This completes the proof of Lemma 6.2.

## 7. Application to Non-vanishing

We use the upper bound obtained in Theorem 1.4 to prove Theorem 1.5. We first state a lemma regarding the first moment for the family of \(L\)-functions we have been working with so far.

**Lemma 7.1**.: _Let \(p,q,f,\alpha\) be as before. Then for \(\varepsilon>0\),_ \[\sum_{\psi\,(\bmod\,p^{2})}L\bigl{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\bigr{)}=p^{2}+O(p^{\frac{7}{4}+\varepsilon}). \tag{7.1}\]

Assume Lemma 7.1 for now. Using the Cauchy-Schwarz inequality, we have \[\left|\sum_{\psi\,(\mathrm{mod}\ p^{2})}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\right|^{2}=\left|\sum_{\psi\,(\mathrm{mod}\ p^{2})}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\cdot\delta\big{(}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\neq 0\big{)}\right|^{2}\\ \leq\sum_{\psi\,(\mathrm{mod}\ p^{2})}\big{|}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\big{|}^{2}\cdot\sum_{\psi\,(\mathrm{mod}\ p^{2})}\big{|}\delta\big{(}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\neq 0\big{)}\big{|}^{2}\\ =\sum_{\psi\,(\mathrm{mod}\ p^{2})}\big{|}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\big{|}^{2}\cdot\sum_{\begin{subarray}{c}\psi\,(\mathrm{mod}\ p^{2})\\ L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\neq 0\end{subarray}}1.\] This implies \[\#\{\psi\,(\mathrm{mod}\ p^{2});\ L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\neq 0\}=\sum_{\begin{subarray}{c}\psi\,(\mathrm{mod}\ p^{2})\\ L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\neq 0\end{subarray}}1\geq\ \ \frac{\left|\sum_{\psi\,(\mathrm{mod}\ p^{2})}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\right|^{2}}{\sum_{\psi\,(\mathrm{mod}\ p^{2})}\big{|}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}\big{|}^{2}}. \tag{7.2}\] Theorem 1.5 now follows using the corresponding bounds from Lemma 7.1 and Theorem 1.4 in (7.2).

### Proof of Lemma 7.1

Using the approximate functional equation in Proposition 2.1, we have \[\sum_{\psi\,(\mathrm{mod}\ p^{2})}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}=A+B, \tag{7.3}\] with \[A=\sum_{\psi\,(\mathrm{mod}\ p^{2})}\sum_{n\ll(Xq)^{1+\varepsilon}}\frac{\lambda_{f}(n)\alpha(n)\psi(n)}{\sqrt{n}}V\left(\frac{n}{Xq}\right), \tag{7.4}\] and \[B=\sum_{\psi\,(\mathrm{mod}\ p^{2})}\varepsilon\left(f\otimes(\alpha\cdot\psi),\tfrac{1}{2}\right)\sum_{n\ll\frac{q}{X}\cdot q^{\varepsilon}}\frac{\overline{\lambda_{f}}(n)\overline{\alpha}(n)\overline{\psi}(n)}{\sqrt{n}}V\left(\frac{nX}{q}\right). \tag{7.5}\] Here, \(V(y)\) is a smooth function satisfying \(V(y)\ll_{f,A}(1+\frac{y}{\sqrt{4\alpha}})^{-A}\), for any \(A>0\).

#### 7.1.1. Bounds on A

Interchanging the order of summation in \(A\), we get, via the orthogonality of characters (and separating the diagonal term), \[A=p^{2}\cdot V\bigg{(}\frac{1}{Xq}\bigg{)}+p^{2}\cdot\sum_{1\leq l\ll\frac{Xq}{p^{2}}}\frac{\lambda_{f}(1+p^{2}l)\alpha(1+p^{2}l)\psi(1+p^{2}l)}{\sqrt{1+p^{2}l}}V\left(\frac{1+p^{2}l}{Xq}\right).
\tag{7.6}\] Using \[\sum_{1\leq l\ll\frac{Xq}{p^{2}}}\frac{\lambda_{f}(1+p^{2}l)\alpha(1+p^{2}l)\psi(1+p^{2}l)}{\sqrt{1+p^{2}l}}V\left(\frac{1+p^{2}l}{Xq}\right)\ll\frac{1}{p}\sum_{1\leq l\ll\frac{Xq}{p^{2}}}\frac{1}{\sqrt{l}}\ll\frac{(Xq)^{\frac{1}{2}}}{p^{2}},\] we get \[A=p^{2}+O(X^{\frac{1}{2}}q^{\frac{1}{2}})=p^{2}+O(X^{\frac{1}{2}}p^{\frac{3}{2}}). \tag{7.7}\]

#### 7.1.2. Bounds on B

Here, we prove that \(B=O(p^{2}X^{-\frac{1}{2}})\). Along with (7.7), this proves Lemma 7.1 once we set \(X=p^{\frac{1}{2}}\). The root number of \(L(f\otimes(\alpha\cdot\psi))\) is given by \[\varepsilon\left(f\otimes(\alpha\cdot\psi),\tfrac{1}{2}\right)=\frac{\tau(\psi\cdot\alpha)^{2}}{p^{3}}. \tag{7.8}\] Using this in (7.5), we get that \[B=\sum_{\psi\,(\mathrm{mod}\ p^{2})}\frac{\tau(\psi\cdot\alpha)^{2}}{p^{3}}{\sum_{n\ll\frac{q}{X}}}^{*}\frac{\overline{\lambda_{f}}(n)\overline{\alpha}(n)\overline{\psi}(n)}{\sqrt{n}}V\bigg{(}\frac{nX}{q}\bigg{)}. \tag{7.9}\] Expanding out the Gauss sums, and using orthogonality for the \(\psi\)-sum, we get \[B=\frac{1}{p^{3}}{\sum_{1\leq a,b\leq p^{3}}}{\sum_{n\ll\frac{q}{X}}}^{*}\frac{\overline{\lambda_{f}}(n)}{\sqrt{n}}V\bigg{(}\frac{nX}{q}\bigg{)}e\bigg{(}\frac{a+b}{q}\bigg{)}\alpha(ab\overline{n})p^{2}\cdot\delta(ab\equiv n\,(\mathrm{mod}\ p^{2})). \tag{7.10}\] Here, \(\overline{n}\cdot n\equiv 1(\mathrm{mod}\ p^{3})\). The \(\delta(ab\equiv n(\mathrm{mod}\ p^{2}))\) condition implies \(\exists\ l,\ 1\leq l\leq p\) with \(b=\overline{a}n(1+p^{2}l)\), where \(\overline{a}\cdot a\equiv 1(\mathrm{mod}\ p^{3})\). Using this, \[B=\frac{1}{p}{\sum_{n\ll\frac{q}{X}}}^{*}\frac{\overline{\lambda_{f}}(n)}{\sqrt{n}}V\bigg{(}\frac{nX}{q}\bigg{)}{\sum_{1\leq a\leq p^{3}}}^{*}\sum_{1\leq l\leq p}e\bigg{(}\frac{a+\overline{a}n(1+p^{2}l)}{p^{3}}\bigg{)}\alpha(1+p^{2}l). \tag{7.11}\] Notice that \(\alpha(1+p^{2}(\cdot))\) is an additive character mod \(p\). So, \(\exists\ c_{\alpha}\,(\mathrm{mod}\ p)\) such that \(\alpha(1+p^{2}l)=e\bigg{(}\frac{c_{\alpha}\cdot l}{p}\bigg{)}\). Using this, and orthogonality in the \(l\)-sum, \[B=\frac{1}{p}{\sum_{n\ll\frac{q}{X}}}^{*}\frac{\overline{\lambda_{f}}(n)}{\sqrt{n}}V\bigg{(}\frac{nX}{q}\bigg{)}{\sum_{1\leq a\leq p^{3}}}^{*}e\bigg{(}\frac{a+\overline{a}n}{p^{3}}\bigg{)}p\cdot\delta(c_{\alpha}\equiv-\overline{a}n\,(\mathrm{mod}\ p)). \tag{7.12}\] Again, the \(\delta(c_{\alpha}\equiv-\overline{a}n(\mathrm{mod}\ p))\) condition implies \(\exists\ y_{0},y_{1},\ 1\leq y_{0},y_{1}\leq p\) with \(a=-n\overline{c_{\alpha}}(1+py_{0}+p^{2}y_{1})\), where \(\overline{c_{\alpha}}\cdot c_{\alpha}\equiv 1(\mathrm{mod}\ p^{3})\). So, \[B={\sum_{n\ll\frac{q}{X}}}^{*}\frac{\overline{\lambda_{f}}(n)}{\sqrt{n}}V\bigg{(}\frac{nX}{q}\bigg{)}e\bigg{(}-\frac{c_{\alpha}+n\overline{c_{\alpha}}}{p^{3}}\bigg{)}\sum_{1\leq y_{0}\leq p}e\bigg{(}\frac{-c_{\alpha}py_{0}^{2}+(c_{\alpha}-n\overline{c_{\alpha}})y_{0}}{p^{2}}\bigg{)}\sum_{1\leq y_{1}\leq p}e\bigg{(}\frac{y_{1}(c_{\alpha}-n\overline{c_{\alpha}})}{p}\bigg{)}. \tag{7.13}\] Orthogonality in the \(y_{1}\) sum implies that we have \(n\equiv c_{\alpha}^{2}\,(\mathrm{mod}\ p)\).
So, \[B=p\cdot\sum_{\begin{subarray}{c}n\ll\frac{q}{X}\\ n\equiv c_{\alpha}^{2}\,(\bmod\,p)\end{subarray}}\frac{\overline{\lambda_{f}}(n)}{\sqrt{n}}V\bigg{(}\frac{nX}{q}\bigg{)}e\bigg{(}-\frac{c_{\alpha}+n\overline{c_{\alpha}}}{p^{3}}\bigg{)}\cdot R, \tag{7.14}\] with \[R=\sum_{1\leq y_{0}\leq p}e\bigg{(}\frac{-c_{\alpha}y_{0}^{2}+k_{\alpha}y_{0}}{p}\bigg{)}, \tag{7.15}\] where \(k_{\alpha}=\frac{c_{\alpha}-n\overline{c_{\alpha}}}{p}\). Completing the square in (7.15), we have \[R=e\bigg{(}\frac{k_{\alpha}^{2}}{4pc_{\alpha}}\bigg{)}\sum_{1\leq y_{0}\leq p}e\Bigg{(}\frac{-c_{\alpha}(y_{0}-\frac{k_{\alpha}}{2c_{\alpha}})^{2}}{p}\Bigg{)}=e\bigg{(}\frac{k_{\alpha}^{2}}{4pc_{\alpha}}\bigg{)}\varepsilon_{p}\sqrt{p}\cdot\bigg{(}\frac{-c_{\alpha}}{p}\bigg{)}\,. \tag{7.16}\] Here \(\varepsilon_{p}\) is a constant of absolute value \(1\). Using (7.16) in (7.14), we have \[B=p\cdot\sum_{\begin{subarray}{c}n\ll\frac{q}{X}\\ n\equiv c_{\alpha}^{2}\,(\bmod\,p)\end{subarray}}\frac{\overline{\lambda_{f}}(n)}{\sqrt{n}}V\bigg{(}\frac{nX}{q}\bigg{)}e\bigg{(}\frac{k_{\alpha}^{2}}{4pc_{\alpha}}-\frac{c_{\alpha}+n\overline{c_{\alpha}}}{p^{3}}\bigg{)}\cdot\varepsilon_{p}\sqrt{p}\cdot\bigg{(}\frac{-c_{\alpha}}{p}\bigg{)}\,. \tag{7.17}\] Using trivial bounds, we finally get \[B\ll p^{\frac{1}{2}+\varepsilon}\bigg{(}\frac{q}{X}\bigg{)}^{\frac{1}{2}}=O(p^{2}X^{-\frac{1}{2}}). \tag{7.18}\] Using (7.7) and (7.18) in (7.3), we get that \[\sum_{\psi\,(\bmod\,p^{2})}L\big{(}\tfrac{1}{2},f\otimes(\alpha\cdot\psi)\big{)}=p^{2}+O(X^{\frac{1}{2}}p^{\frac{3}{2}})+O(p^{2}X^{-\frac{1}{2}}). \tag{7.19}\] Choosing \(X=p^{\frac{1}{2}}\) now completes the proof of Lemma 7.1.
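As a quick consistency check on this choice: the two error terms in (7.19) are balanced precisely when \(X^{\frac{1}{2}}p^{\frac{3}{2}}=p^{2}X^{-\frac{1}{2}}\), that is, when \(X=p^{\frac{1}{2}}\), and then \[X^{\frac{1}{2}}p^{\frac{3}{2}}=p^{\frac{1}{4}}\cdot p^{\frac{3}{2}}=p^{\frac{7}{4}}=p^{2}\cdot p^{-\frac{1}{4}}=p^{2}X^{-\frac{1}{2}},\] which recovers the error term \(O(p^{\frac{7}{4}+\varepsilon})\) stated in Lemma 7.1.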
2309.04794
All-charm tetraquark mass and possible quantum numbers of X(6900)
In this work we propose possible quantum numbers of X(6900) and suggest a model for its internal structure that explains its unusually high mass. We solve the Schr\"odinger Equation with Mathematica 12, first for the charmonium spectrum, then for the all-charm tetraquark spectrum, which is understood as a pair of two-particle states, mesons or diquark-antidiquark states. The obtained candidates for the all-charm tetraquark will be separated into contributors to the various resonances and structures that are visible in the experiments, and then an explanation for the prominence of X(6900) will be proposed.
Morgan Kuchta
2023-09-09T13:39:15Z
http://arxiv.org/abs/2309.04794v1
# All-charm tetraquark mass and possible quantum numbers of \(X(6900)\)

###### Abstract

In this work we propose possible quantum numbers of \(X(6900)\) and suggest a model for its internal structure that explains its unusually high mass. We solve the Schrodinger Equation with Mathematica 12, first for the charmonium spectrum, then for the all-charm tetraquark spectrum, which is understood as a pair of two-particle states, mesons or diquark-antidiquark states. The obtained candidates for the all-charm tetraquark will be separated into contributors to the various resonances and structures that are visible in the experiments, and then an explanation for the prominence of \(X(6900)\) will be proposed.

## 1 Introduction

In June 2020 the LHCb experiment found in proton-proton collisions an exotic meson originally named \(X(6900)\); along with it, a broad structure between \(6200-6800\) MeV and a smaller peak around 7200 MeV were found [1]. Until now, the parity and the charge symmetry, as well as the total angular momentum, of any of the states mentioned here have not been identified. The mass of \(X(6900)\) was surprisingly high, as before its discovery the expected mass of the ground state was placed around 6200 MeV, which means that it should have been found within the region of the broad structure. In 2022, the ATLAS collaboration searched for potential all-charm tetraquark (\(cc\bar{c}\bar{c}\)) states in two different decay channels [2]. The results of the search also feature a peak around 6900 MeV and the broad structure visible in the results of the previous experiment. The experiment differs from the one conducted in 2020 by investigating the \(J/\psi+\Psi(2S)\) invariant mass spectrum, where two significant peaks were found: one supposedly corresponding to the peak around 6900 MeV found by LHCb, another around 7200 MeV; both of them have statistical significance of less than \(5\sigma\). Other significant results were presented by the CMS collaboration [3], which suggests three peaks in the \(di-J/\psi\) spectrum. A summary of the results can be found in Table 1. In this work we will suggest possible quantum numbers for the \(X(6900)\) together with a quark structure that explains the seemingly missing prominent ground state, and create tables of possible masses of other all-charm and all-bottom tetraquarks applying a compact tetraquark framework. The discussion will not involve in-medium effects. Numerical calculations have been performed using Wolfram Language and Mathematica 13 and the base code by F. Schoberl and W. Lucha [4], which has been modified and adjusted for its application in the present work. The code can be found in the supplemental file 1. In this work we will use the new notation for exotic hadrons suggested in reference [5].

## 2 Charmonia and bottomonia

The charmonium spectrum can be obtained through solving the time-independent Schrodinger equation with the Fermi-Breit Hamiltonian [6] with reduced mass \(\mu_{12}=\frac{m_{1}m_{2}}{m_{1}+m_{2}}\), \[\begin{split}&\big{[}m_{1}+m_{2}+\frac{1}{2\mu_{12}}\big{(}-\frac{d^{2}}{dr^{2}}+\frac{l(l+1)}{r^{2}}\big{)}+\\ &\quad+V_{12}^{S}+V_{12}^{SS}+V_{12}^{LS}+V_{12}^{T}\big{]}\Psi=E_{12}^{nJ}\Psi.\end{split} \tag{1a}\] The Hamiltonian includes the kinetic energy and the contribution of the strong interaction between the quark-antiquark pair. The pair interaction includes the one gluon exchange (OGE).
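The paper's own numerical work is done in Wolfram Language (supplemental file 1). Purely for orientation, a minimal Python sketch of how Eq. (1a) can be diagonalized is given below; the finite-difference discretization, the grid parameters, and the function names are our own assumptions, and we use only the central Cornell part of the potential, Eq. (1b) below, with the charmonium parameters (9a), so the printed levels are spin-averaged estimates, not the paper's fitted results.

```python
# Minimal finite-difference sketch of Eq. (1a), in natural units
# (hbar = c = 1, r in GeV^-1, energies in GeV).  Illustrative only:
# the scheme, grid choices and names are our assumptions.
import numpy as np

def solve_radial(mu, V, l=0, r_max=30.0, n=1500):
    """Eigenvalues of  -u''/(2 mu) + [l(l+1)/(2 mu r^2) + V(r)] u = E u
    with u(0) = u(r_max) = 0, via a central finite-difference scheme."""
    r = np.linspace(0.0, r_max, n + 2)[1:-1]       # interior grid points
    h = r[1] - r[0]
    main = 1.0 / (mu * h**2) + l * (l + 1) / (2.0 * mu * r**2) + V(r)
    off = -0.5 / (mu * h**2) * np.ones(n - 1)      # kinetic off-diagonal
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)                   # ascending eigenvalues

m_c = 1.263                                        # charm mass (GeV), set (9a)
kappa, alpha, sigma = -4.0 / 3.0, 0.198, 0.177     # Cornell parameters, (9a)
V = lambda r: kappa * alpha / r + sigma * r        # Eq. (1b), central part only

E = solve_radial(m_c / 2.0, V, l=0)                # reduced mass m_c / 2
print("spin-averaged 1S, 2S levels (GeV):", 2 * m_c + E[:2])
```

The spin-dependent terms (1c)-(1e) discussed next can then be added to the same matrix diagonal before diagonalizing, once the relevant expectation values are inserted.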
For the charmonium and the bottomonium spectrum the Cornell potential applies, \[V_{ij}^{G}(r_{ij})=\kappa_{s}\frac{\alpha_{s}}{r_{12}}+\sigma r_{12}, \tag{1b}\] along with three spin dependent terms: the spin-spin term (\(V_{ij}^{SS}\)), the spin-orbit term (\(V_{ij}^{SL}\)), and the tensor term (\(V_{ij}^{T}\)); \[V_{ij}^{SS}(r_{ij})=-\frac{8\kappa_{s}\alpha_{s}\pi}{3m^{2}}(\frac{\sigma_{ss}}{\sqrt{\pi}})^{3}e^{-\sigma_{ss}^{2}r_{ij}^{2}}S_{i}S_{j}, \tag{1c}\] \[V_{ij}^{LS}(r_{ij})=\big{[}-\frac{3\kappa_{s}\alpha_{s}}{2m^{2}}\frac{1}{r_{ij}^{3}}-\frac{b}{2m^{2}}\frac{1}{r_{ij}}\big{]}LS, \tag{1d}\] \[V_{ij}^{T}(r_{ij})=-\frac{12\kappa_{s}\alpha_{s}}{4m^{2}}\frac{1}{r_{ij}^{3}}\big{(}\frac{(S_{i}r_{ij})(S_{j}r_{ij})}{r_{ij}^{2}}-\frac{S_{i}S_{j}}{3}\big{)}. \tag{1e}\] In equation (1b) there are three parameters to be set. The first one is the Casimir coefficient \(\kappa_{s}\), which can be obtained by calculating the scattering amplitude for a quark-antiquark pair that forms a colour singlet: \[\begin{split}\kappa_{s}^{1}=-f_{1}&=-\frac{1}{4}\Sigma_{\alpha}(c_{3}^{\dagger}\lambda^{\alpha}c_{1})(c_{2}^{\dagger}\lambda^{\alpha}c_{4})=\\ &=-\frac{1}{4}\big{(}\frac{1}{\sqrt{3}}\big{)}^{2}\Sigma_{\alpha}Tr(\lambda^{\alpha}\lambda^{\alpha})=\frac{-4}{3}.\end{split} \tag{2}\] The other two parameters, the strong coupling constant \(\alpha_{s}\) and the string tension \(\sigma\), are obtained from a fit to the charmonium and bottomonium spectra and radiative decays [7]. The spin dependent terms include one additional parameter, \(\sigma_{ss}\), which is also impossible to determine empirically. Mesons are formed from two quarks with spin \(\frac{1}{2}\) that can either form spin singlets (\(S=0\)) or spin triplets (\(S=1\)): \[2\otimes 2=1\oplus 3. \tag{3}\] The contribution of the spin-spin interaction in \(V^{SS}\) is evaluated using: \[\langle S_{i}S_{j}\rangle=\frac{1}{2}\langle S^{2}-S_{1}^{2}-S_{2}^{2}\rangle=\begin{cases}\frac{-3}{4},&S=|0\rangle\\ \frac{1}{4},&S=|1\rangle,\end{cases} \tag{4}\] where \(S_{1},S_{2}\) are the spins of each quark and \(S\) is the total spin.

\begin{table} \begin{tabular}{c l l l l l} \hline & & \(X(6200)\) & \(X(6500)\) & \(X(6900)\) & \(X(7200)\) \\ \cline{3-6} & No interference & Observed & Observed & \(\begin{array}{l}m=6905\text{ MeV}\\ \Gamma=80\text{ MeV}\end{array}\) & Observed \\ \cline{2-6} LHCb [1] & & & & & \\ \cline{2-6} & NRSPS interference & Observed & Observed & \(\begin{array}{l}m=6886\text{ MeV}\\ \Gamma=168\text{ MeV}\end{array}\) & Observed \\ \hline \multirow{3}{*}{ATLAS [2]} & Model A & \(\begin{array}{l}m=6.22\text{ GeV}\\ \Gamma=0.31\text{ GeV}\end{array}\) & \(\begin{array}{l}m=6.62\text{ GeV}\\ \Gamma=0.31\text{ GeV}\end{array}\) & \(\begin{array}{l}m=6.87\text{ GeV}\\ \Gamma=0.12\text{ GeV}\end{array}\) & \(\begin{array}{l}m=7.22\text{ GeV}\\ \Gamma=0.10\text{ GeV}\end{array}\) \\ \cline{2-6} & Model B & NA & NA & NA & \(\begin{array}{l}m=6.78\text{ GeV}\\ \Gamma=0.39\text{ GeV}\end{array}\) \\ \hline \multirow{3}{*}{CMS [3]} & & NA & \(\begin{array}{l}m=6552\text{ MeV}\\ \Gamma=124\text{ MeV}\end{array}\) & \(\begin{array}{l}m=6927\text{ MeV}\\ \Gamma=122\text{ MeV}\end{array}\) & \(\begin{array}{l}m=7287\text{ MeV}\\ \Gamma=95\text{ MeV}\end{array}\) \\ \hline \end{tabular} \end{table} Table 1: Summary of estimated masses (in MeV) and widths of four resonances from three different sources. If the source includes several estimations they are included with brief descriptions. More detailed explanations can be found in the sources. The marker "NA" indicates that the resonance was either not identified or its mass was not estimated by the model.
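Equation (4) follows directly from \(S^{2}=(S_{1}+S_{2})^{2}\); the following two-line check (our own illustration, not part of the paper's code) reproduces the two listed values.

```python
# Check of Eq. (4): <S_i S_j> = [S(S+1) - S1(S1+1) - S2(S2+1)] / 2
# for two spin-1/2 quarks coupled to total spin S = 0 or S = 1.
s = 0.5
for S in (0, 1):                                    # singlet and triplet
    print(S, (S * (S + 1) - 2 * s * (s + 1)) / 2)   # -> -0.75 and 0.25
```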
In \(V^{LS}\) for \(L\neq 0\): \[\langle LS\rangle_{S=|1\rangle}=\begin{cases}-(l+1),\quad J=L-1\\ -1,\quad J=L\\ l,\quad J=L+1.\end{cases} \tag{5}\] The tensor-dependent part of the potential (1e) ought to be transformed with the use of the identity (6) [8] to remove the \(r_{ij}\) dependence and include \(J\) and \(L\) instead (7): \[(a\cdot b)r^{2}-3(a\cdot r)(b\cdot r)=\frac{r^{2}}{(2l+3)(2l-1)}\big{(}k^{2}(a\cdot b)-\frac{3}{2}(a\cdot k)(b\cdot k)-\frac{3}{2}(b\cdot k)(a\cdot k)\big{)}, \tag{6}\] \[\langle T_{ij}\rangle=12\langle\frac{(S_{i}r_{ij})(S_{j}r_{ij})}{r_{ij}^{2}}-\frac{S_{i}S_{j}}{3}\rangle=\frac{4}{(2l+3)(2l-1)}\big{\langle}S^{2}L^{2}-\frac{3}{2}(LS)-3(LS)^{2}\big{\rangle}. \tag{7}\] Therefore \(V^{T}\neq 0\) for \(S\neq 0\) and \(L\neq 0\): \[\langle T_{ij}\rangle=\begin{cases}2,\quad J=L\\ \frac{-2(l+1)}{2l-1},\quad J=L-1\\ \frac{-2l}{2l+3},\quad J=L+1.\end{cases} \tag{8}\] We made an appropriate choice of the parameters for charmonia (9a) and bottomonia (9b): \[\alpha_{0}=0.198,\quad\sigma=0.177\;\text{GeV}^{2},\quad\sigma_{ss}=1.08,\quad m_{c}=1.263\;\text{GeV}, \tag{9a}\] \[\alpha_{0}=0.164,\quad\sigma=0.177\;\text{GeV}^{2},\quad\sigma_{ss}=1.08,\quad m_{b}=4.581\;\text{GeV}. \tag{9b}\] Masses and widths of the resonances were sourced from the PDG [9]. ## 3 Tetraquarks The tools that were first used to properly describe non-exotic hadrons [10][11] may be applied to tetraquarks and other exotic hadrons [12][13]. The discussion of potential tetraquark states predates the discovery of the first exotic hadron candidate, \(\chi_{c1}(3872)\), previously known as \(X(3872)\), in 2003 [14]. In terms of the fundamental representation, a quark and an antiquark can form a colour-singlet meson, which can be observed in the resonances, or a colour octet (10a). Non-singlet colour states have not been observed [9] due to colour confinement; however, we can treat the newly built colour octets as our new "building blocks" that may be combined into a colour singlet and multiple non-singlets (10b). \[3\otimes\bar{3}=1\oplus 8 \tag{10a}\] \[8\otimes 8=1\oplus 8\oplus 8\oplus 10\oplus\bar{10}\oplus 27 \tag{10b}\] The concept of a diquark has been frequently used to describe exotic hadrons since the conception of the quark model [15]. Compact tetraquarks are usually described as bound states of a diquark and an antidiquark forming an exotic meson [16]. In this work we will call this configuration the "diquark-antidiquark configuration" or "DA". In addition to the diquark picture we will also consider a combination of two colour octets, referred to as the "meson-meson configuration" or "MM". To obtain the masses of the tetraquarks we use the parameters calculated in the previous section to calculate the masses of colour non-singlet states. \[3\otimes 3=\bar{3}\oplus 6,\quad\bar{3}\otimes\bar{3}=3\oplus\bar{6} \tag{10c}\] \[6\otimes\bar{6}=1\oplus 8\oplus 27 \tag{10d}\] We can see that there are several possible methods of "composing" tetraquarks. Using the same method as in Eq. (2) to obtain the Casimir coefficient for a colour singlet, we arrive at the results displayed in Table 4 and Table 5. The parameters used in the Cornell potential Eq. (1b) for interactions between the pairs are identical to those used to obtain the charmonium and bottomonium spectra, modified by the Casimir scaling (11) [17][18].
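These colour coefficients all follow from the quadratic Casimirs of the representations involved, via \(\kappa_{s}=\langle F_{i}\cdot F_{j}\rangle=\frac{1}{2}\big{(}C_{2}(R)-C_{2}(R_{1})-C_{2}(R_{2})\big{)}\). The short check below is ours, using the standard SU(3) Casimir values; it reproduces the coefficients quoted in Tables 4-5.

```python
# kappa_s for two clusters in representations R1, R2 coupled to R.
C2 = {"1": 0.0, "3": 4.0 / 3.0, "3bar": 4.0 / 3.0,
      "6": 10.0 / 3.0, "6bar": 10.0 / 3.0, "8": 3.0}

def kappa_s(R1, R2, R):
    return (C2[R] - C2[R1] - C2[R2]) / 2.0

print(kappa_s("3", "3bar", "1"))   # -4/3, singlet meson, Eq. (2)
print(kappa_s("3", "3", "3bar"))   # -2/3, antitriplet diquark (Table 7)
print(kappa_s("3", "3", "6"))      # +1/3, sextet diquark (Table 6)
print(kappa_s("3", "3bar", "8"))   # +1/6, octet quark-antiquark pair (Table 8)
print(kappa_s("6", "6bar", "1"))   # -10/3, DA_66 cluster interaction (Table 5)
print(kappa_s("8", "8", "1"))      # -3, MM_88 cluster interaction (Table 5)
```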
A transition from the DA to the MM configuration could be performed unless the distance between the diquark and the antidiquark is sufficiently larger than the distance between their constituents [19]; therefore the parameters \(\alpha_{s}\) and \(\sigma\) need to be scaled accordingly. \[\frac{V_{1}(r)}{V_{2}(r)}=\frac{\kappa_{s1}}{\kappa_{s2}} \tag{11}\] The compact tetraquark picture is a description in which the interaction between a diquark and an antidiquark is treated similarly to the interaction between a quark and an antiquark in a non-exotic meson [20]. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Name & \(N^{2S+1}l_{J}\) & \(J^{PC}\) & \(M_{exp}({\rm MeV})\) & \(\Gamma({\rm MeV})\) & \(M_{cal}({\rm MeV})\) & \(\frac{M_{cal}-M_{exp}}{M_{exp}}(\%)\) \\ \hline \(\eta_{c}(1S)\) & \(1^{1}S_{0}\) & \(0^{-+}\) & \(2983.9\pm 0.5\) & \(32.0\pm 0.7\) & \(2983.8\) & \(<0.1\) \\ \(J/\psi\) & \(1^{3}S_{1}\) & \(1^{--}\) & \(3096.900\pm 0.006\) & \(92.9\pm 2.8\) & \(3094.7\) & \(<0.1\) \\ \hline \(h_{c}(1P)\) & \(1^{1}P_{1}\) & \(1^{+-}\) & \(3525.38\pm 0.11\) & \(0.7\pm 0.4\) & \(3576.1\) & \(1.4\) \\ \(\chi_{c0}(1P)\) & \(1^{3}P_{0}\) & \(0^{++}\) & \(3414.71\pm 0.30\) & \(10.8\pm 0.6\) & \(3286.5\) & \(3.8\) \\ \(\chi_{c1}(1P)\) & \(1^{3}P_{1}\) & \(1^{++}\) & \(3510.67\pm 0.05\) & \(0.84\pm 0.04\) & \(3580.9\) & \(1.9\) \\ \(\chi_{c2}(1P)\) & \(1^{3}P_{2}\) & \(2^{++}\) & \(3556.17\pm 0.07\) & \(1.97\pm 0.09\) & \(3365.6\) & \(5.4\) \\ \hline \(\eta_{c}(2S)\) & \(2^{1}S_{0}\) & \(0^{-+}\) & \(3637.5\pm 1.1\) & \(11.3\pm 3.2\) & \(3576.4\) & \(1.6\) \\ \(\psi(2S)\) & \(2^{3}S_{1}\) & \(1^{--}\) & \(3686.10\pm 0.06\) & \(294\pm 8\) & \(3641.9\) & \(1.2\) \\ \(\psi(3770)\) & \(1^{3}D_{1}\) & \(1^{--}\) & \(3773.7\pm 0.4\) & \(27.2\pm 1.0\) & \(3758.5\) & \(0.4\) \\ \(\psi_{2}(3823)*\) & \(1^{3}D_{2}\) & \(2^{--}\) & \(3823.7\pm 0.5\) & \(<5.2\) & \(3816.4\) & \(0.4\) \\ \(\psi_{3}(3842)*\) & \(1^{3}D_{3}\) & \(3^{--}\) & \(3842.71\pm 0.20\) & \(2.8\pm 0.6\) & \(3602.8\) & \(6.2\) \\ \hline \(Z_{c}(3900)\) & \(2^{1}P_{1}\) & \(1^{+-}\) & \(3887.1\pm 2.6\) & \(28.4\pm 2.6\) & \(4108.3\) & \(5.7\) \\ \(\chi_{c0}(3915)\) & \(2^{3}P_{0}\) & \(0^{++}\) & \(3921.7\pm 1.9\) & \(18.8\pm 3.5\) & \(3561.7\) & \(9.2\) \\ \(\chi_{c1}(3872)\) & \(2^{3}P_{1}\) & \(1^{++}\) & \(3871.65\pm 0.06\) & \(1.19\pm 0.21\) & \(4120.9\) & \(6.4\) \\ \hline \hline \end{tabular} \end{table} Table 2: The \(c\bar{c}\) mesons and their respective masses (\(M_{exp}\)) compared to the masses calculated in this work (\(M_{cal}\)) using the parameters (9a). The last column includes the comparison of \(M_{exp}\) with \(M_{cal}\) in percentages. The method of using diquarks in the description of hadronic bound states is not unique to tetraquarks, as it can be used for baryons [21][22] and pentaquarks [23][24]. Using the numerical method described in the previous section we will calculate the masses of the diquarks and the non-singlet quark-antiquark states and use the obtained results to calculate the masses of the tetraquarks composed of those pairs; in other words, we treat the problem as a set of three different two-body problems. In this work we will use a Coulomb-like potential to describe boson exchange between the clusters. The Hamiltonian used to calculate the masses of the tetraquarks is presented in Eq. (12a), with \(H_{T}\) representing the contribution of the kinetic energy and \(V_{ij}\) representing all the potential contributions, including the spin contributions.
\[H=H_{T}+V_{(12)(34)}+ \tag{12a}\] \[+(m_{1}+m_{2}+V_{12})+(m_{3}+m_{4}+V_{34})\] \[H_{T}=\frac{\hat{p}_{12}^{2}}{2\mu_{12}}+\frac{\hat{p}_{34}^{2}}{2\mu_{34}}+\frac{\hat{p}_{1234}^{2}}{2\mu_{1234}}\] (12b) \[\mu_{12}=\frac{m_{1}m_{2}}{m_{1}+m_{2}}\,,\quad\mu_{34}=\frac{m_{3}m_{4}}{m_{3}+m_{4}}\] (12c) \[\mu_{1234}=\frac{\mu_{12}\mu_{34}}{\mu_{12}+\mu_{34}}\] \[\vec{r}_{12}=\vec{r}_{1}-\vec{r}_{2},\quad\vec{r}_{34}=\vec{r}_{3}-\vec{r}_{4}\] (12d) \[\vec{r}_{1234}=\vec{r}_{34}-\vec{r}_{12}\] \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Name & \(N^{2S+1}l_{J}\) & \(J^{PC}\) & \(M_{exp}(\mathrm{MeV})\) & \(\Gamma(\mathrm{MeV})\) & \(M_{cal}(\mathrm{MeV})\) & \(\frac{M_{cal}-M_{exp}}{M_{exp}}(\%)\) \\ \hline \(\eta_{b}(1S)\) & \(1^{1}S_{0}\) & \(0^{-+}\) & \(9398.7\pm 2.0\) & \(10\pm 5\) & \(9417.1\) & \(0.2\) \\ \(\Upsilon(1S)\) & \(1^{3}S_{1}\) & \(1^{--}\) & \(9460.30\pm 0.26\) & \(\approx 0.054\pm 0.001\) & \(9397.4\) & \(0.7\) \\ \hline \(\chi_{b0}(1P)\) & \(1^{3}P_{0}\) & \(0^{++}\) & \(9859.44\pm 0.26\) & NA & \(9417.1\) & \(4.4\) \\ \(\chi_{b1}(1P)\) & \(1^{3}P_{1}\) & \(1^{++}\) & \(9892.78\pm 0.05\) & NA & \(9708.2\) & \(1.9\) \\ \(\chi_{b2}(1P)\) & \(1^{3}P_{2}\) & \(2^{++}\) & \(9912.21\pm 0.26\) & NA & \(9674.6\) & \(2.3\) \\ \hline \(\Upsilon(2S)\) & \(2^{3}S_{1}\) & \(1^{--}\) & \(10023.26\pm 0.31\) & \(\approx 0.031\pm 0.003\) & \(9814.6\) & \(2.1\) \\ \(\Upsilon(3S)\) & \(3^{3}S_{1}\) & \(1^{--}\) & \(10355.2\pm 0.56\) & \(\approx 0.020\pm 0.002\) & \(10102.4\) & \(2.4\) \\ \(\Upsilon(4S)\) & \(4^{3}S_{1}\) & \(1^{--}\) & \(10579.4\pm 1.2\) & \(20.5\pm 2.5\) & \(10358.4\) & \(1.3\) \\ \hline \(\Upsilon_{2}(1D)\) & \(1^{3}D_{2}\) & \(2^{--}\) & \(10163.7\pm 1.4\) & NA & \(10062.4\) & \(1.0\) \\ \hline \(h_{b}(1P)\) & \(1^{1}P_{1}\) & \(1^{+-}\) & \(9899.3\pm 0.8\) & NA & \(9713.0\) & \(1.9\) \\ \hline \hline \end{tabular} \end{table} Table 3: The \(b\bar{b}\) mesons and their respective masses (\(M_{exp}\)) compared to the masses calculated in this paper (\(M_{cal}\)) using the parameters (9b). The last column includes the comparison of \(M_{exp}\) with \(M_{cal}\) in percentages. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \(MM_{11}\) & \(MM_{88}\) & \(DA_{33}\) & \(DA_{66}\) \\ \(\kappa_{s}\) & \(-4/3\) & \(-3\) & \(-4/3\) & \(-10/3\) \\ \hline \hline \end{tabular} \end{table} Table 5: Casimir coefficients for the two-cluster interaction. \[\vec{R}_{12}=\frac{m_{1}r_{1}}{m_{1}+m_{2}}+\frac{m_{2}r_{2}}{m_{1}+m_{2}}, \tag{12e}\] \[\vec{R}_{34}=\frac{m_{3}r_{3}}{m_{3}+m_{4}}+\frac{m_{4}r_{4}}{m_{3}+m_{4}}\] \[\vec{R}_{1234}=\frac{\mu_{12}r_{12}}{\mu_{12}+\mu_{34}}+\frac{\mu_{34}r_{34}}{\mu_{12}+\mu_{34}} \tag{12f}\] One of the significant differences between a classic meson and a compact tetraquark is the fact that the pairs are bosons; therefore we cannot use the identity found in Eq. (6).
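The two-cluster bookkeeping above is straightforward to mirror in code. The sketch below is our illustration: it evaluates the reduced masses of Eq. (12c) and rescales the colour-singlet Cornell parameters by the Casimir ratio of Eq. (11), using the \(\kappa_{s}\) values of Table 5.

```python
# Reduced masses, Eq. (12c), and Casimir rescaling of the pair potential,
# Eq. (11): V_1(r)/V_2(r) = kappa_s1/kappa_s2, so alpha_s and sigma scale by
# the same ratio relative to the colour-singlet (kappa_s = -4/3) fit (9a).
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

m_c = 1.263                                   # GeV
mu_12 = reduced_mass(m_c, m_c)                # diquark
mu_34 = reduced_mass(m_c, m_c)                # antidiquark
mu_1234 = reduced_mass(mu_12, mu_34)          # cluster-cluster

kappa_singlet = -4.0 / 3.0
kappa = {"MM_11": -4.0 / 3.0, "MM_88": -3.0,  # Table 5
         "DA_33": -4.0 / 3.0, "DA_66": -10.0 / 3.0}
alpha0, sigma0 = 0.198, 0.177                 # colour-singlet fit (9a)
for cfg, k in kappa.items():
    s = k / kappa_singlet                     # Eq. (11)
    print(f"{cfg}: alpha_s -> {s * alpha0:.3f}, sigma -> {s * sigma0:.3f} GeV^2")
print(f"mu_1234 = {mu_1234:.4f} GeV")
```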
The complete representation can be written as: \[(2\otimes 2)\otimes(2\otimes 2)=(1\oplus 3)\otimes(1\oplus 3)=1\oplus 3\oplus 3\oplus 1\oplus 3\oplus 5, \tag{13}\] therefore for the pair-pair interaction in (1c): \[\langle S_{12}S_{34}\rangle=\frac{1}{2}\langle S^{2}-S_{12}^{2}-S_{34}^{2}\rangle=\begin{cases}0,&S=|0\rangle,S_{12}=S_{34}=|0\rangle\\ 0,&S=|1\rangle,S_{12}\neq S_{34}\\ -2,&S=|0\rangle,S_{12}=S_{34}=|1\rangle\\ -1,&S=|1\rangle,S_{12}=S_{34}=|1\rangle\\ 1,&S=|2\rangle,S_{12}=S_{34}=|1\rangle\end{cases}. \tag{14}\] Because all of the resonances were identified in the \(di-J/\psi\) channel (except for \(X(7200)\), which was additionally identified in \(J/\psi+\Psi(2S)\) [2]), we will assume that the total orbital angular momentum is \(L=0\), so \(V^{SL}\) (1d) and \(V^{T}\) (1e) can be neglected. An additional adjustment should be made to \(\alpha_{s}\) to include its scale dependence, following relation (15a) [25][26] with the parameters (15b), where \(\mu\) is the reduced mass of the system. \[\alpha_{s}(\mu)=\frac{\alpha_{0}}{ln(\frac{\mu^{2}+\mu_{0}^{2}}{\Lambda_{0}^{2}})} \tag{15a}\] \[\mu_{0}\approx 0~{}\text{MeV},\quad\Lambda_{0}=0.112~{}\text{MeV},\quad\alpha_{0}=3.2524 \tag{15b}\] For the purposes of this work we will use the following naming scheme for potential tetraquark candidates, which includes a two-letter code, "\(MM\)" or "\(DA\)", with a subscript denoting the possible colour configuration, and three sets of quantum terms: one for the total system and two for the pairs. An example of the usage of this naming scheme can be found in Eq. (16). The naming scheme can potentially be used to label the wave functions of those states. \[DA_{33}(2^{3}S_{1}\gets 1^{1}L_{1}+1^{1}L_{0}). \tag{16}\] \begin{table} \begin{tabular}{l c c c c c c} \hline \multirow{2}{*}{\(\kappa_{s}\)=\(\frac{1}{3}\)} & \multicolumn{2}{c}{S=0} & \multicolumn{4}{c}{S=1} \\ \cline{2-7} & L=0 & L=1 & L=0 & & L=1 & \\ \cline{2-7} & J=0 & J=1 & J=1 & J=0 & J=1 & J=2 \\ \hline N=1 & 2819.4 & 2929.4 & 2815.1 & 2997.3 & 2929.2 & 2912.0 \\ N=2 & 3016.0 & 3100.7 & 3011.9 & 3188.6 & 3100.3 & 3084.1 \\ \hline \end{tabular} \end{table} Table 6: Masses of colour sextet diquarks for possible combinations of their quantum numbers. \begin{table} \begin{tabular}{l c c c c c c} \hline \multirow{2}{*}{\(\kappa_{s}\)=\(-\frac{2}{3}\)} & \multicolumn{2}{c}{S=0} & \multicolumn{4}{c}{S=1} \\ \cline{2-7} & L=0 & L=1 & L=0 & & L=1 & \\ \cline{2-7} & J=0 & J=1 & J=1 & J=0 & J=1 & J=2 \\ \hline N=1 & 2882.3 & 3172.6 & 2908.8 & 3072.3 & 3173.8 & 3083.6 \\ N=2 & 3226.2 & 3503.8 & 3245.6 & 3263.1 & 3503.2 & 3363.5 \\ \hline \end{tabular} \end{table} Table 7: Masses of colour triplet diquarks for possible combinations of their quantum numbers. The masses of the diquarks have been calculated with the use of the code and placed in Tables 6-8. The masses of the tetraquarks were calculated by choosing the quantum numbers of the components, the quantum numbers of their product, and a possible colour combination. The code allows one to consider other types of interaction than the Cornell potential, or to consider massive force carriers; however, we want to stay on safe ground concerning heavy quarkonia spectroscopy. The exact results for the ground state (\(N=1\)) and the first radial excitation (\(N=2\)) can be found in the tables, and energy level diagrams can be found in Figs. 1-3. ## 4 Discussion of the results We can clearly see that regardless of the composition of the tetraquark there are multiple candidates for \(X(6900)\). All of the possible candidates are \(N=2\) states.
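Two of the ingredients entering these assignments, the pair-pair spin factor of Eq. (14) and the running coupling of Eq. (15a), can be checked with a few lines. This is our sketch; in particular, we read \(\Lambda_{0}\) in (15b) as being in the same units as \(\mu\), which is our assumption, not a statement from the paper.

```python
import math

def alpha_s(mu, alpha0=3.2524, mu0=0.0, Lambda0=0.112):
    """Scale-dependent coupling, Eq. (15a) with parameters (15b)."""
    return alpha0 / math.log((mu ** 2 + mu0 ** 2) / Lambda0 ** 2)

def pair_spin(S, S12, S34):
    """<S_12 . S_34> of Eq. (14)."""
    return 0.5 * (S * (S + 1) - S12 * (S12 + 1) - S34 * (S34 + 1))

print(pair_spin(0, 1, 1), pair_spin(1, 1, 1), pair_spin(2, 1, 1))  # -2, -1, 1
print(f"alpha_s at mu = 0.63: {alpha_s(0.63):.3f}")
```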
Only sextet-antisextet states have suitable \(X(7200)\) candidates, and all the suitable \(X(6900)\) candidates in that configuration have components with \(L=0\). Due to the use of a simple model, one definite configuration cannot be suggested with certainty; however, the closest-lying candidate is \(DA_{66}(2^{3}S_{1}\gets 1^{3}S_{1}+1^{3}S_{1})\). In the previous section the subject of the wave function was omitted, as it held no significant impact on the calculation. However, we have to consider the influence of the Pauli exclusion principle on the possibility of the existence of several resonances. Using the example shown in Eq. (16) we will show the use of the notation we introduced to write down the wave function of the tetraquark: \[\begin{split}|DA_{33}(2^{3}S_{1}\gets 1^{1}L_{1}+1^{1}L_{0})\rangle&=|D_{3}(1^{1}L_{1})\rangle\langle A_{3}(1^{1}L_{0})|,\\ |X_{c}(N^{2S+1}l_{J})\rangle&=|Y^{m}_{l}\times X^{\sigma}\times\,X^{c}\times X^{f}\rangle.\end{split} \tag{17}\] The four parts of the wave function are the spherical harmonic \(Y^{m}_{l}\), the spin wave function \(X^{\sigma}\), the colour wave function \(X^{c}\) and the flavour wave function \(X^{f}\). The flavour wave function \(X^{f}\) of a diquark \(|D\rangle\) or an antidiquark \(|A\rangle\) is symmetric, unlike the wave function of a meson or a meson-like state \(|M\rangle\): \[|X^{f}_{D}\rangle=|cc\rangle,\quad|X^{f}_{A}\rangle=|\bar{c}\bar{c}\rangle,\quad|X^{f}_{M}\rangle=|c\bar{c}\rangle. \tag{18a}\] The spin wave function is identical for any pair, and the total spin wave function \(X^{\sigma}_{S,S_{z}}\) depends on the spin wave functions of the pairs: \[\begin{split}|X^{\sigma}_{0,0}\rangle&=\frac{1}{\sqrt{2}}(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle),\\ |X^{\sigma}_{1,0}\rangle&=\frac{1}{\sqrt{2}}(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle),\\ |X^{\sigma}_{1,1}\rangle&=|\uparrow\uparrow\rangle,\quad\quad|X^{\sigma}_{1,-1}\rangle=|\downarrow\downarrow\rangle.\end{split} \tag{18b}\] For the pairs, only \(|X_{0,0}\rangle\) is antisymmetric. The colour wave functions for the pairs will not be shown explicitly for the sake of brevity; however, it needs to be stated explicitly that \(|X^{c}_{3}\rangle\) is fully antisymmetric while \(|X^{c}_{6}\rangle\) is fully symmetric. The colour-antisextet \(1^{3}S_{1}\) diquark would have a wave function that violates the Pauli exclusion principle; therefore any state with that subsystem is marked as forbidden. \begin{table} \begin{tabular}{l c c c c c c} \hline \multirow{2}{*}{\(\kappa_{\text{s}}\)=\(\frac{1}{6}\)} & \multicolumn{2}{c}{S=0} & \multicolumn{4}{c}{S=1} \\ \cline{2-7} & L=0 & L=1 & L=0 & & L=1 & \\ \cline{2-7} & J=0 & J=1 & J=1 & J=0 & J=1 & J=2 \\ \hline N=1 & 2705.2 & 2777.3 & 2704.0 & 2808.8 & 2777.2 & 2768.6 \\ N=2 & 2830.4 & 2886.9 & 2829.3 & 2922.4 & 2886.8 & 2878.3 \\ \hline \end{tabular} \end{table} Table 8: Masses of colour octet quark-antiquark states for possible combinations of their quantum numbers.
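The Pauli bookkeeping above amounts to requiring that the product of the exchange symmetries of the colour, spin and spatial parts of a \(cc\) diquark be antisymmetric, the flavour part always being symmetric. A small enumeration (our sketch, not the published code) makes the forbidden combinations explicit:

```python
# Exchange symmetry under swapping the two identical charm quarks:
# colour 3bar antisymmetric (-1), colour 6 symmetric (+1); spin 0
# antisymmetric, spin 1 symmetric; the spatial part contributes (-1)^L.
from itertools import product

colour_sign = {"3bar": -1, "6": +1}
spin_sign = {0: -1, 1: +1}

print("colour  S  L   allowed")
for (c, cs), (S, ss), L in product(colour_sign.items(), spin_sign.items(), (0, 1)):
    allowed = cs * ss * (-1) ** L == -1   # flavour factor is +1
    print(f"{c:5s}  {S}  {L}   {allowed}")
```

In particular the colour-symmetric, spin-1, \(L=0\) combination comes out forbidden, which is exactly the \(1^{3}S_{1}\) case excluded above.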
We have only two possible flavour wave functions for the tetraquark: \[\begin{split}|X^{f}_{MM}\rangle=|X^{f}_{M}\rangle\langle X^{f}_{M}|=|c\bar{c}c\bar{c}\rangle,\\ |X^{f}_{DA}\rangle=|X^{f}_{D}\rangle\langle X^{f}_{A}|=|cc\bar{c}\bar{c}\rangle.\end{split} \tag{19a}\] Assuming that the total spin of the tetraquark is known, as well as the spins of both pairs, we can obtain a total of six possible total spin wave functions: \[\begin{split}|X^{\sigma_{1}}_{0,0}\rangle=|X^{\sigma}_{0,0}\rangle\langle X^{\sigma}_{0,0}|\\ |X^{\sigma_{2}}_{0,0}\rangle=\frac{1}{\sqrt{3}}(|X^{\sigma}_{1,1}\rangle\langle X^{\sigma}_{1,-1}|+|X^{\sigma}_{1,-1}\rangle\langle X^{\sigma}_{1,1}|\\ -|X^{\sigma}_{1,0}\rangle\langle X^{\sigma}_{1,0}|)\\ |X^{\sigma_{3}}_{1,1}\rangle=|X^{\sigma}_{1,1}\rangle\langle X^{\sigma}_{0,0}|\\ |X^{\sigma_{4}}_{1,1}\rangle=|X^{\sigma}_{0,0}\rangle\langle X^{\sigma}_{1,1}|\\ |X^{\sigma_{5}}_{1,1}\rangle=\frac{1}{\sqrt{2}}(|X^{\sigma}_{1,1}\rangle\langle X^{\sigma}_{1,0}|+|X^{\sigma}_{1,0}\rangle\langle X^{\sigma}_{1,1}|)\\ |X^{\sigma_{6}}_{2,2}\rangle=|X^{\sigma}_{1,1}\rangle\langle X^{\sigma}_{1,1}|.\end{split} \tag{19b}\] There are only three total colour wave functions that we need to consider: \[\begin{split}|X^{c}_{88}\rangle=|X^{c}_{8}\rangle\langle X^{c}_{8}|,\\ |X^{c}_{33}\rangle=|X^{c}_{3}\rangle\langle X^{c}_{3}|,\quad|X^{c}_{66}\rangle=|X^{c}_{6}\rangle\langle X^{c}_{6}|.\end{split} \tag{19c}\] For \(L=0\) the spherical harmonic \(Y^{0}_{0}\) is symmetric, therefore \(X^{\sigma}\times X^{c}\times X^{f}\) must be antisymmetric, and we can mark \(DA_{33}(1^{1}S_{0}\gets 1^{1}S_{0}+1^{1}S_{0})\) as forbidden. One of the problems that we are attempting to solve in this work is finding the reason for the high mass of \(X(6900)\) and for the lack of a more prominent, lighter ground state. If we assume that the resonance has the \(DA_{66}\) structure, the lack of prominent resonances closer to the \(di-J/\psi\) and \(di-\eta_{c}\) mass thresholds could be explained by their violating the Pauli exclusion principle, while states with \(N=2\) could possibly be observed. Further inspection of the data suggests that \(X(6900)\) most likely is either a \(0^{-+}\) or a \(1^{--}\) state with substructures with orbital angular momentum \(L_{12}=L_{34}=0\). No suggestion for \(X(7200)\) can be made; however, if we assume that it has the same colour composition as \(X(6900)\), we could assume that it is a state with \(N=2\) and \(L_{12}=L_{34}=1\). The possibility of \(X(7200)\) being an \(N=3\) state was not evaluated due to a lack of precision in the calculation. The other possible compositions of all-charm tetraquarks might be contributing to the broad structure. Since the background consists of multiple components and is possibly composed of several all-charm states with different quantum numbers, the confirmation of the quantum numbers of \(X(6900)\) and the other structures would require extraordinary precision and the ability to separate data from multiple hadrons with identical content. Assuming that \(bb\bar{b}\bar{b}\) would behave in the same manner as \(cc\bar{c}\bar{c}\), we can calculate the potential mass of the all-bottom tetraquark. Assuming the composition \(DA_{6\bar{6}}(2^{3}S_{1}\gets 1^{3}S_{1}+1^{3}S_{1})\), we arrive at: \[m(bb\bar{b}\bar{b})\approx 19.23\;\mathrm{GeV}. \tag{20}\] Due to the high mass, it might be unlikely that the existence of this resonance will be confirmed in the next few years.
Investigation of additional decay channels may be fruitful, in particular the investigation of channels involving another exotic hadron and/or D mesons. Despite the lack of significant data about some of the XYZ states, we have hypotheses regarding their internal dynamics; therefore, investigating intermediate decays may bring more information about the tetraquark structure. Possible candidates for decay channels are \(X(6900)\rightarrow\chi_{c1}(3872)+J/\psi\) or \(X(6900)\rightarrow\chi_{c1}(3872)+\eta_{c}\). The state \(\chi_{c1}(3872)\) is one of the lowest-lying exotic states, and its potential identification in the decays may help with obtaining the threshold from sharp all-charm resonances, since the mass of \(X(6900)\) is approximately equal to the sum of the masses of the ground-state charmonium and \(\chi_{c1}(3872)\).
Figure 1: Energy level diagram for \(DA_{33}\) all-charm tetraquark structure. Numerical data can be found in Table 9 and Table 10. Masses and widths of the resonances \(X(6900)\) and \(X(7200)\) were obtained by ATLAS with model A [2].
Figure 2: Energy level diagram for \(DA_{6\bar{6}}\) all-charm tetraquark structure. Numerical data can be found in Table 9 and Table 10. Masses and widths of the resonances \(X(6900)\) and \(X(7200)\) were obtained by ATLAS with model A [2].
Figure 3: Energy level diagram for \(MM_{88}\) all-charm tetraquark structure. Numerical data can be found in Table 9 and Table 10. Masses and widths of the resonances \(X(6900)\) and \(X(7200)\) were obtained by ATLAS with model A [2].
## 5 Conclusions Assuming \(L=0\), the \(X(6900)\) might have a sextet-antisextet colour structure with \(N=1\) components and possible terms \(0^{-+}\) or \(1^{--}\), possibly a mixed state of both. The lack of a more prominent resonance with lower energy could be explained by the Pauli exclusion principle. Some of the low-lying \(DA_{6\bar{6}}\) resonances do not appear as sharp peaks, due to the impact of the non-zero angular momenta and due to the large number of states with similar energies; instead they contribute to the broad structures around 6200 MeV and 6500 MeV. The octet-octet and triplet-antitriplet structures contribute to the background and to the low-lying broad structures. One could consider the possibility that the \(DA_{33}\) structure is less prominent than \(DA_{6\bar{6}}\) due to a higher likelihood of the formation of a doubly-charmed baryon and its antiparticle instead of a tightly bound tetraquark. More experimental data is needed to determine the quantum numbers of \(X(6900)\), and investigating the \(J/\psi+\Psi(2S)\) invariant mass spectrum would help to more precisely determine the mass of \(X(7200)\) and the quantum numbers of the resonances. Additional decay channels involving the XYZ states, in particular \(\chi_{c1}(3872)\), should be considered. ### Acknowledgement I would like to express my gratitude to my supervisor, prof. dr hab. David Blaschke, for the support during my thesis research and the writing of this paper.
2309.15841
Spectral properties of edge Laplacian matrix
Let $N(X)$ be the Laplacian matrix of a directed graph obtained from the edge adjacency matrix of a graph $X.$ In this work, we study the bipartiteness property of the graph with the help of $N(X).$ We computed the spectrum of the edge Laplacian matrix for the regular graphs, the complete bipartite graphs, and the trees. Further, it is proved that given a graph $X,$ the characteristic polynomial of $N(X)$ divides the characteristic polynomial of $N(X^{\prime\prime}),$ where $X^{\prime\prime}$ denote the Kronecker double cover of $X.$
Shivani Chauhan, A. Satyanarayana Reddy
2023-09-26T07:22:29Z
http://arxiv.org/abs/2309.15841v1
# Spectral properties of edge Laplacian matrix ###### Abstract. Let \(N(X)\) be the Laplacian matrix of a directed graph obtained from the edge adjacency matrix of a graph \(X.\) In this work, we study the bipartiteness property of the graph with the help of \(N(X).\) We computed the spectrum of the edge Laplacian matrix for the regular graphs, the complete bipartite graphs, and the trees. Further, it is proved that given a graph \(X,\) the characteristic polynomial of \(N(X)\) divides the characteristic polynomial of \(N(X^{\prime\prime}),\) where \(X^{\prime\prime}\) denotes the Kronecker double cover of \(X.\) 2010 Mathematics Subject Classification: 05C05, 05C50, 05C76. Keywords and phrases. Laplacian matrix, edge adjacency matrix. ## 1. Introduction The _Laplacian matrix_ \(\mathfrak{L}(X)\) or \(\mathfrak{L}\) of a directed graph \(X\) is defined as \(\mathfrak{L}(X)=\mathfrak{D}(X)-A(X),\) where \(A(X)\) is the adjacency matrix of \(X\) and \(\mathfrak{D}(X)\) is the diagonal matrix of the row sums of \(A(X).\) In [12, 13], Chai Wah Wu defined the algebraic connectivity of directed graphs. It is useful in deriving synchronization criteria for arrays of coupled dynamical systems for both constant and time-varying coupling. In this work, we study the Laplacian matrix of a special class of directed graphs; for that we first define the edge adjacency matrix of a graph. There are other definitions of Laplacian matrices of directed graphs (see [2, 4]). Let \(X=(V(X),E(X))\) be a graph with \(|V(X)|=n,\) \(|E(X)|=m.\) We orient the edges of \(X\) arbitrarily, label them as \(e_{1},e_{2},\ldots,e_{m},\) and set \(e_{m+i}=e_{i}^{-1},\ 1\leq i\leq m,\) where \(e_{k}^{-1}\) denotes the edge \(e_{k}\) with the direction reversed. Then the _edge adjacency matrix_ of \(X\), denoted by \(M(X)\) or \(M\) if \(X\) is clear from the context, is defined as \[M_{ij}=\begin{cases}1&\text{if }t(e_{i})=s(e_{j})\ \text{ and }\ s(e_{i})\neq t(e_{j}),\\ 0&\text{otherwise,}\end{cases}\] where \(s(e_{i})\) and \(t(e_{i})\) denote the starting and the terminal vertex of \(e_{i},\) respectively. For further information about the matrix \(M\), one can refer to [3, 9, 10]. The _edge Laplacian matrix_ \(N(X)\) or \(N\) of a graph \(X\) is defined as \(N=D-M,\) where \(D\) is the diagonal matrix of the row sums of \(M\), _i.e.,_ the \(i^{th}\) diagonal entry equals \(deg(t(e_{i}))-1,\) where \(deg(v)\) denotes the degree of the vertex \(v.\) Recall, a graph is _strongly connected_ if for every ordered pair of vertices \((v,w),\) there exists a directed path from \(v\) to \(w.\) In Theorem 11.10 of [10], it is proved that if \(X\) is a connected graph that is not a cycle graph and does not contain a pendant vertex, then the matrix \(M\) is irreducible. Therefore the directed graph obtained from the matrix \(M\) is strongly connected; hence 0 is a simple eigenvalue of \(N\). The process of computation of the matrix \(N(C_{3})\) is given in Figure 1. In the next section, we describe the structure of the matrix \(N\), which is useful to determine the bipartiteness property of the graph. Further, the spectrum of various families of graphs is provided, in particular for regular graphs, trees, and complete bipartite graphs. Lastly, it is proved that \(\phi_{N(X)}\) divides \(\phi_{N(X^{\prime\prime})},\) where \(\phi_{A}\) and \(X^{\prime\prime}\) denote the characteristic polynomial of a matrix \(A\) and the Kronecker product of a graph \(X\) with \(K_{2},\) respectively, and \(K_{n}\) denotes the complete graph on \(n\) vertices.
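Since Figure 1 only illustrates the construction, a compact computational sketch (ours, not part of the paper) may help; it builds \(M\) and \(N=D-M\) directly from the definition, prints \(N(C_{3})\), and checks that \(0\) is a simple eigenvalue for a graph that is neither a cycle nor has a pendant vertex.

```python
# Build the edge adjacency matrix M (rows/columns indexed by the 2m directed
# edges e_1..e_m, e_1^{-1}..e_m^{-1}) and the edge Laplacian N = D - M.
import numpy as np
import networkx as nx

def edge_laplacian(X):
    arcs = list(X.edges()) + [(v, u) for u, v in X.edges()]
    k = len(arcs)
    M = np.zeros((k, k), dtype=int)
    for i, (si, ti) in enumerate(arcs):
        for j, (sj, tj) in enumerate(arcs):
            if ti == sj and si != tj:     # t(e_i) = s(e_j), s(e_i) != t(e_j)
                M[i, j] = 1
    D = np.diag(M.sum(axis=1))            # D_ii = deg(t(e_i)) - 1
    return D - M

print(edge_laplacian(nx.cycle_graph(3)))  # N(C_3), cf. Figure 1
evals = np.linalg.eigvals(edge_laplacian(nx.complete_graph(4)).astype(float))
print(np.round(np.sort_complex(evals), 4))  # for K_4, 0 is a simple eigenvalue
```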
Recall, the _Kronecker product_ \(X_{1}\times X_{2}\) of graphs \(X_{1}\) and \(X_{2}\) is a graph such that the vertex set is \(V(X_{1})\times V(X_{2}),\) and vertices \((x_{1},x_{2})\) and \((x_{1}^{\prime},x_{2}^{\prime})\) are adjacent if \(x_{1}\) is adjacent to \(x_{1}^{\prime}\) and \(x_{2}\) is adjacent to \(x_{2}^{\prime}.\) In particular, given a graph \(X,\) \(X\times K_{2}\) is called the _Kronecker double cover_ of \(X.\) From now onwards, the edge adjacency matrix of a graph is denoted by \(M\) and the diagonal matrix of its row sums by \(D.\) ## 2. Spectrum of edge Laplacian matrix The collection of all the eigenvalues of a matrix together with its multiplicities is known as the _spectrum_ of that matrix. If \(\lambda_{1}>\lambda_{2}>\ldots>\lambda_{k}\) are the distinct eigenvalues of a matrix \(A,\) then the spectrum of \(A\) is defined by \[spec(A)=\begin{Bmatrix}\lambda_{1}&\lambda_{2}&\ldots&\lambda_{k}\\ m_{1}&m_{2}&\ldots&m_{k}\end{Bmatrix},\] where \(m_{i}\) is the algebraic multiplicity of the eigenvalue \(\lambda_{i}\) for \(1\leq i\leq k.\)
Figure 1.
The matrix \(N\) has the following structure \[N=D-M=\left[\begin{array}{c|c}P&Q\\ \hline R&S\end{array}\right], \tag{1}\] where \(P,Q,R,S\) are \(m\times m\) matrices. From the structure of the matrix \(M,\) we have that \(Q\) and \(R\) are symmetric matrices. If \(X\) is regular, then \(P=S^{T}\) and \(N^{T}=JNJ,\) where \(J=\begin{pmatrix}0&\mathbb{I}_{m}\\ \mathbb{I}_{m}&0\end{pmatrix}\) and \(\mathbb{I}_{m}\) denotes the identity matrix of order \(m\). The row and column sums of the matrix \(N\) are zero for regular graphs, which implies that the cofactors of any two elements of \(N\) are equal [1, Lemma 4.2]. It is interesting to note that the sum of the blocks of \(N,\) \(P+Q+R+S,\) is the Laplacian matrix of \(L(X),\) where \(L(X)\) denotes the line graph of \(X\). As \[N=D-M=\begin{bmatrix}D_{1}&0\\ 0&D_{2}\end{bmatrix}-\begin{bmatrix}A&B\\ C&D\end{bmatrix},\] the sum of the blocks of \(M\) is equal to \(A(L(X))\) [9, p.7] and \((D_{1}+D_{2})_{ii}=deg(t(e_{i}))+deg(t(e_{i}^{-1}))-2,\) where \(1\leq i\leq m.\) It is well known from [7] that a graph \(X\) is bipartite if and only if the Laplacian matrix and the signless Laplacian matrix of \(X\) are similar. Our next result is analogous to this. **Theorem 1**.: _A graph \(X\) is bipartite if and only if \(D+M\) and \(D-M\) are similar, where \(M\) is irreducible._ Proof.: Suppose that \(X\) is bipartite with \(m\) edges. Let \(\{v_{1},v_{2},\ldots,v_{p}\}\) and \(\{v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{q}\}\) be the vertex bipartition of \(X.\) Choose an orientation of \(X\) in such a way that the \(e_{i}\)'s are the directed edges from \(v_{i}\) to \(v^{\prime}_{j}\) for all \(1\leq i\leq p,\ 1\leq j\leq q;\) then \(M\) can be expressed in the following form \[M=\left[\begin{array}{c|c}0&B\\ \hline C&0\end{array}\right]. \tag{2}\] It is simple to check that \(P^{T}(D+M)P=D-M\), where \(P=\begin{bmatrix}\mathbb{I}_{m}&0\\ 0&-\mathbb{I}_{m}\end{bmatrix}\). Conversely, suppose that \(D+M\) and \(D-M\) are similar, where \(M\) is irreducible. Clearly, \(0\) is an eigenvalue of \(D+M\): the row sums of \(D-M\) are zero, so \(0\) is an eigenvalue of \(D-M\) and hence, by similarity, of \(D+M\). Let \(x=(x_{1},x_{2},\ldots,x_{2m})\) be an eigenvector corresponding to the eigenvalue \(0\) of \(D+M\). Choose \(x_{i}\) to be the maximum absolute entry of \(x.\) As \(((D+M)x)_{i}=0,\) we have \(\sum_{j=1,i\neq j}^{2m}m_{ij}x_{j}+d_{ii}x_{i}=0\). We obtain \(d_{ii}x_{i}=-\sum x_{j},\) where the summation is over the edges into which \(e_{i}\) feeds, and these edges are \(d_{ii}\) in number.
By the maximality of \(x_{i},\) we have \(x_{i}=-x_{j}\) for all edges into which \(e_{i}\) feeds. Now we explain the construction of the vertex partition. We put the vertex \(e_{i}\) in one partition, say \(V_{1},\) and the vertices \(e_{j}\) into which \(e_{i}\) feeds in the partition \(V_{2},\) with the edges directed from \(e_{i}\) to the \(e_{j}\)'s. Again we choose the maximum absolute entry of \(x\) and continue the same process. This shows that \(M\) has the structure given in (2); hence \(X\) is bipartite. We shall now present the spectrum for several graph families. The spectrum of a regular graph can be easily obtained from Theorem 2.2 in [6]. Let \(X\) be a \(k\)-regular graph on \(n\) vertices; then the eigenvalues of \(N\) are \[k-1\pm 1,\quad k-1-\left(\frac{\lambda_{i}\pm\sqrt{\lambda_{i}^{2}-4(k-1)}}{2}\right),\quad(i=1,\ldots,n)\] where \(\lambda_{i}\in spec(A(X))\) and \(k-1\pm 1\) each have multiplicity \(m-n.\) Later, we compute the eigenvalues of \(N\) for trees and complete bipartite graphs in Theorems 3 and 5, respectively. In Table 1, the spectrum of \(N(X)\) is given, where \(X\) is a tree on \(6\) vertices. We state Theorem 2, which is required in the proof of Theorem 3. **Theorem 2**.: _[_11_, Theorem 3.2]_ _Let \(M\) be the edge adjacency matrix of a graph \(X.\) Then \(M\) is a nilpotent matrix if and only if \(X\) is a forest._ **Theorem 3**.: _Let \(X\) be a connected graph. Then the eigenvalues of \(N\) are the same as the eigenvalues of \(D\) if and only if \(X\) is a tree._ Proof.: Suppose that \(X\) is a tree on \(m\) edges. Let \(\phi_{D}(\lambda)=\lambda^{2m}+d_{1}\lambda^{2m-1}+d_{2}\lambda^{2m-2}+\cdots+d_{2m-1}\lambda+d_{2m}\) and \(\phi_{N}(\lambda)=\lambda^{2m}+n_{1}\lambda^{2m-1}+n_{2}\lambda^{2m-2}+\cdots+n_{2m-1}\lambda+n_{2m}\) be the characteristic polynomials of the matrices \(D\) and \(N\), respectively. We want to show that \(d_{i}=n_{i}\) for \(1\leq i\leq 2m.\) In order to prove the claim, we use the fact that the sum of the products of the eigenvalues of a matrix \(A\) taken \(k\) at a time equals the sum of the \(k\times k\) principal minors of \(A\) [8, p.53]. It is simple to observe that \(d_{1}=n_{1}.\) The \(2\times 2\) principal submatrices of \(N\) have the following structure, \[B_{2}=\begin{pmatrix}d_{ii}&-m_{ij}\\ -m_{ji}&d_{jj}\end{pmatrix},\] where \(m_{ij}\) denotes the \(ij^{th}\) entry of \(M\). By the definition of \(M\), \(det(B_{2})=d_{ii}d_{jj},\) therefore \(d_{2}=n_{2}\). Let \(B_{k}\) denote the \(k\times k\) principal submatrix of \(N\) \begin{table} \begin{tabular}{|c|c|} \hline & Spectrum of \(N(X)\) \\ \hline & \(\begin{bmatrix}0&1\\ 2&8\end{bmatrix}\) \\ \hline & \(\begin{bmatrix}0&1&2\\ 3&4&3\end{bmatrix}\) \\ \hline & \(\begin{bmatrix}0&1&2\\ 3&4&3\end{bmatrix}\) \\ \hline & \(\begin{bmatrix}0&1&3\\ 4&2&4\end{bmatrix}\) \\ \hline & \(\begin{bmatrix}0&2\\ 4&6\end{bmatrix}\) \\ \hline & \(\begin{bmatrix}0&4\\ 5&5\end{bmatrix}\) \\ \hline \end{tabular} \end{table} Table 1.
Let \(\phi_{M}(\lambda)=\lambda^{2m}+m_{1}\lambda^{2m-1}+m_{2}\lambda^{2m-2}+\ldots+ m_{2m-1}\lambda+m_{2m}\). We want to show that \(m_{i}=0\)\(\forall\)\(1\leq i\leq 2m\). The result is proved using strong induction. Clearly, \(m_{1}=m_{2}=0\). Assume that \(m_{i}=0\forall 1\leq i\leq k\). As \(m_{k+1}=\) sum of \((k+1)\times(k+1)\) principal minors of \(M\), by (3) \[m_{k+1}=\sum_{i_{1}<i_{2}<\ldots<i_{k+1}}\sum_{\sigma\in S_{k+1}}sgn(\sigma) \prod_{\ell=1}^{k+1}m_{i_{\ell}\sigma(i_{\ell})}, \tag{4}\] where \(1\leq i_{1},i_{2},\ldots,i_{k+1}\leq 2m.\) In the inner summation of (4), all the entries corresponding to permutations which are not a cycle of length \(k+1\) are zero, by induction hypothesis. Hence, we are left with the following expression \[m_{k+1}=\sum_{i_{1}<i_{2}<\ldots<i_{k+1}}\sum_{\sigma\in C\cup\{e\}}(-1)^{k+1} m_{i_{1}\sigma(i_{1})}m_{i_{2}\sigma(i_{2})}\ldots m_{i_{k+1}\sigma(i_{k+1})},\] where \(C\) denotes the cycle of length \(k+1\). As sum of \(k\times k\) principal minors of \(N\) is equal to the sum of \(k\times k\) principal minors of \(D\), we deduce the following expression \[\sum_{i_{1}<i_{2}<\ldots<i_{k+1}}\left(\prod_{j=1}^{k+1}n_{i_{j}i_{j}}+\sum_{ \sigma\in S_{k+1},\sigma\neq\{e\}}sgn(\sigma)\prod_{\ell=1}^{k+1}n_{i_{\ell} \sigma(i_{\ell})}\right)=\sum_{i_{1}<i_{2}<\ldots<i_{k+1}}\prod_{j=1}^{k+1}n_{ i_{j}i_{j}}.\] By above Equation we obtain \[\sum_{i_{1}<i_{2}<\ldots<i_{k+1}}\sum_{\sigma\in S_{k+1},\sigma\neq\{e\}}sgn( \sigma)\prod_{\ell=1}^{k+1}n_{i_{\ell}\sigma(i_{\ell})}=0. \tag{5}\] As \(N=D-M\), \(n_{i_{\ell}\sigma(i_{\ell})}=\begin{cases}d_{i_{\ell}i_{\ell}}&\text{if } \sigma(i_{\ell})=i_{\ell}\\ -m_{i_{\ell}\sigma(i_{\ell})}&\text{if }\sigma(i_{\ell})\neq i_{\ell}\end{cases}\). In the inner summation of (5), all the terms in the permutations except those terms corresponding to full cycle permutation are zero, by induction hypothesis. Equation (5) reduces to \[\sum_{i_{1}<i_{2}<\cdots<i_{k+1}}\sum_{\sigma\in C,\sigma\neq\{e\}}(-1)^{k+1}( -1)^{k+1}m_{i_{1}\sigma(i_{1})}m_{i_{2}\sigma(i_{2})}\cdots m_{i_{k+1}\sigma(i _{k+1})}=0.\] Hence \(m_{k+1}=0\) and by Theorem 2, the proof is complete. **Corollary 4**.: _Let \(X\) be a tree on the \(n\) vertices and \((d_{1},d_{2},\ldots,d_{n})\) be the degree sequence of \(X\). Then \(d_{i}-1\) is an eigenvalue of \(N\) with multiplicity \(d_{i}\), where \(1\leq i\leq n\)._ We now define the Kronecker product of the matrices and the Schur complement as it is required to prove our next result. The Kronecker product of the matrices \(A=[a_{ij}]\) and \(B\) is defined as the partitioned matrix \([a_{ij}B]\) and is denoted by \(A\otimes B.\) Let \(H\) be an \(n\times n\) matrix partitioned as \(H=\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix}\), where \(A_{11}\) and \(A_{22}\) are square matrices. If \(A_{11}\) is non-singular, then the Schur complement of \(A_{11}\) in \(H\) is defined to be the matrix \(A_{22}-A_{21}A_{11}^{-1}A_{12}.\) For Schur complement, we have \(det(H)=det(A_{11})det(A_{22}-A_{21}A_{11}^{-1}A_{12})\) and if \(A_{11}A_{12}=A_{12}A_{11},\) then \(det(H)=det(A_{22}A_{11}-A_{21}A_{12})\). Similarly if \(A_{22}\) is non-singular, then the Schur complement of \(A_{22}\) in \(H\) is defined to be the matrix \(A_{11}-A_{12}A_{22}^{-1}A_{21}\) and we can obtain \(det(H)=det(A_{22})det(A_{11}-A_{12}A_{21}^{-1}A_{21})\). If \(A_{21}A_{22}=A_{22}A_{21},\) then \(det(H)=det(A_{11}A_{22}-A_{12}A_{21})\). 
**Theorem 5**.: _Let \(X=K_{p,q}\) be the complete bipartite graph on \(n=p+q\) vertices. Then the spectrum of \(N(K_{p,q})\) is equal to_ \[\begin{Bmatrix}0&u&\frac{u\pm\sqrt{v+4}}{2}&\frac{u\pm\sqrt{v+4(1-q)}}{2}&\frac{u\pm\sqrt{v+4(1-p)}}{2}\\ 1&1&(q-1)(p-1)&p-1&q-1\end{Bmatrix},\] _where \(u=p+q-2\) and \(v=(p-q)^{2}\)._ Proof.: Let \(V(K_{p,q})=\{v_{1},v_{2},\ldots,v_{p}\}\cup\{v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{q}^{\prime}\}\) be a bipartition of \(X\). Choosing the orientation defined in the proof of Theorem 1, \(N(K_{p,q})\) can be written in the following form \[N(K_{p,q})=\begin{bmatrix}(p-1)(U)&\mathbb{J}_{p}\otimes(-\mathbb{I}_{q})+U\\ V&(q-1)(U)\end{bmatrix},\] where \(U=\mathbb{I}_{p}\otimes\mathbb{I}_{q},\ V=\mathbb{I}_{p}\otimes(-A(K_{q}))\), and \(\mathbb{J}_{n}\) denotes the matrix of order \(n\) with all entries one. The characteristic polynomial of \(N(X)\) is \[\phi_{N}(x)=\begin{vmatrix}(p-1-x)(U)&\mathbb{J}_{p}\otimes(-\mathbb{I}_{q})+U\\ V&(q-1-x)(U)\end{vmatrix}.\] A simple check shows that \[V((q-1-x)(U))=((q-1-x)(U))V=\mathbb{I}_{p}\otimes((x-q+1)A(K_{q})).\] By the Schur complement formula, we obtain the following equation \[\phi_{N}(x) =det(((p-1-x)U)((q-1-x)U)-(\mathbb{J}_{p}\otimes(-\mathbb{I}_{q})+U)V)\] \[=det((p-1-x)(q-1-x)(U)-\mathbb{J}_{p}\otimes A(K_{q})-V).\] For the sake of convenience, assume that \[S^{\prime}=(p-1-x)(q-1-x)(U)-\mathbb{J}_{p}\otimes A(K_{q})-V,\] \[T^{\prime}=(1-p)A(K_{q})+(p-x-1)(q-x-1)\mathbb{I}_{q},\quad U^{\prime}=A(K_{q})+(p-x-1)(q-x-1)\mathbb{I}_{q}.\] Let \(\lambda\) be an eigenvalue of \(T^{\prime}\) with corresponding eigenvector \(v\); then \(S^{\prime}w=\lambda w,\) where \(w=(v,v,\ldots,v)^{T}\) consists of \(p\) copies of \(v\). Let \(\lambda^{\prime}\) be an eigenvalue of \(U^{\prime}\) with corresponding eigenvector \(v^{\prime}\). By a straightforward calculation, one can see that the following linearly independent vectors \[\{(v^{\prime},-v^{\prime},0,0,\ldots,0)^{T},(v^{\prime},0,-v^{\prime},0,\ldots,0)^{T},\ldots,(v^{\prime},0,0,0,\ldots,-v^{\prime})^{T}\}\] are eigenvectors of \(S^{\prime}\). The sum of the multiplicities of the obtained eigenvalues equals the number of vertices of the directed graph obtained from \(M\); therefore, \(\phi_{N}(x)=(det(U^{\prime}))^{p-1}det(T^{\prime}).\) As \(A(K_{q})=\mathbb{J}_{q}-\mathbb{I}_{q}\), and the eigenvalues of \(\mathbb{J}_{q}\) are \(0\) and \(q\) with multiplicities \(q-1\) and \(1\) respectively, the eigenvalues of \(N\) can be deduced. The following result about the eigenvalues of symmetric block matrices will be used to prove the next result. **Theorem 6**.: _[_5_]_ _Let \(H=\begin{bmatrix}A^{\prime}&B^{\prime}\\ B^{\prime}&A^{\prime}\end{bmatrix}\) be a symmetric \(2\times 2\) block matrix, where \(A^{\prime}\) and \(B^{\prime}\) are square matrices of the same order. Then the spectrum of \(H\) is the union of the spectra of \(A^{\prime}+B^{\prime}\) and \(A^{\prime}-B^{\prime}\)._ **Theorem 7**.: _Let \(X\) be a connected graph; then \(\phi_{N(X)}\) divides \(\phi_{N(X^{\prime\prime})}\)._ Proof.: We know that \(N(X)=D(X)-M(X)=\begin{pmatrix}D_{1}-A&-B\\ -C&D_{2}-D\end{pmatrix}\).
From the proof of Theorem 4.2 in Horton [9], \(M(X^{\prime\prime})=\begin{pmatrix}0&MJ\\ JM&0\end{pmatrix}\), therefore \[N(X^{\prime\prime})=D(X^{\prime\prime})-M(X^{\prime\prime})=\begin{pmatrix}D _{1}&0&-B&-A\\ 0&D_{2}&-D&-C\\ -C&-D&D_{2}&0\\ -A&-B&0&D_{1}\end{pmatrix}.\] Note that \(P^{T}N(X^{\prime\prime})P=\begin{pmatrix}D_{1}&0&-A&-B\\ 0&D_{2}&-C&-D\\ -A&-B&D_{1}&0\\ -C&-D&0&D_{2}\end{pmatrix}\), where \(P=\begin{pmatrix}I_{m}&0&0&0\\ 0&I_{m}&0&0\\ 0&0&0&I_{m}\\ 0&0&I_{m}&0\end{pmatrix}\). The result follows from Theorem 6. ## 3. Acknowledgement The authors would like to thank the handling editor and the anonymous reviewers for their careful reading of the manuscript.
2310.00491
StreetNav: Leveraging Street Cameras to Support Precise Outdoor Navigation for Blind Pedestrians
Blind and low-vision (BLV) people rely on GPS-based systems for outdoor navigation. GPS's inaccuracy, however, causes them to veer off track, run into obstacles, and struggle to reach precise destinations. While prior work has made precise navigation possible indoors via hardware installations, enabling this outdoors remains a challenge. Interestingly, many outdoor environments are already instrumented with hardware such as street cameras. In this work, we explore the idea of repurposing existing street cameras for outdoor navigation. Our community-driven approach considers both technical and sociotechnical concerns through engagements with various stakeholders: BLV users, residents, business owners, and Community Board leadership. The resulting system, StreetNav, processes a camera's video feed using computer vision and gives BLV pedestrians real-time navigation assistance. Our evaluations show that StreetNav guides users more precisely than GPS, but its technical performance is sensitive to environmental occlusions and distance from the camera. We discuss future implications for deploying such systems at scale.
Gaurav Jain, Basel Hindi, Zihao Zhang, Koushik Srinivasula, Mingyu Xie, Mahshid Ghasemi, Daniel Weiner, Sophie Ana Paris, Xin Yi Therese Xu, Michael Malcolm, Mehmet Turkcan, Javad Ghaderi, Zoran Kostic, Gil Zussman, Brian A. Smith
2023-09-30T21:16:05Z
http://arxiv.org/abs/2310.00491v2
# StreetNav: Leveraging Street Cameras to Support Precise Outdoor Navigation for Blind Pedestrians ###### Abstract Blind and low-vision (BLV) people rely on GPS-based systems for outdoor navigation. GPS's inaccuracy, however, causes them to veer off track, run into unexpected obstacles, and struggle to reach precise destinations. While prior work has made precise navigation possible indoors via additional hardware installations, enabling precise navigation outdoors remains a challenge. Ironically, many outdoor environments of interest such as downtown districts are already instrumented with hardware such as street cameras. In this work, we explore the idea of repurposing street cameras for outdoor navigation, and investigate the effectiveness of such an approach. Our resulting system, StreetNav, processes the cameras' video feeds using computer vision and gives BLV pedestrians real-time navigation assistance. Our user evaluations in the COSMOS testbed with eight BLV pedestrians show that StreetNav guides them more precisely than GPS, but its performance is sensitive to lighting conditions and environmental occlusions. We discuss future implications for deploying such systems at scale. ## CCS Concepts * **Human-centered computing \(\rightarrow\) Accessibility systems and tools.** ## Keywords Visual impairments, outdoor navigation, street camera, computer vision ## 1. Introduction Outdoor navigation in unfamiliar environments is a major challenge for blind and low-vision (BLV) people. Among the many navigation systems that have been developed to assist BLV people outdoors, GPS-based systems are the most popular [28, 31, 41, 58, 64]. These systems, such as BlindSquare [41] and Microsoft Soundscape [28], guide users to a destination and notify them of surrounding points of interest (POIs). Despite GPS's undeniable impact in making outdoor environments navigable, its imprecision is a major limitation [56]. GPS precision can range from 5 meters at best to over tens of meters in urban areas with buildings and trees [42, 23, 65]. This imprecision causes BLV people to veer off track [49], run into unexpected obstacles [50, 8, 52], and struggle to reach precise destinations [56] when navigating outdoors. Prior work on indoor navigation, on the contrary, has made precise navigation assistance possible for BLV people [2, 19, 34, 44, 57]. Most approaches do so by installing a dense network of additional hardware, such as Bluetooth [2] or WiFi [19] beacons, to precisely locate a user's position. Retrofitting outdoor environments with additional hardware, however, is not feasible due to the vast scale and complex nature of outdoor spaces. It would require extensive financial investments and coordination with city authorities to install and maintain such specialized hardware, which may not be possible. Ironically, many outdoor environments of interest, such as urban districts and downtown areas, are already instrumented with hardware that has the potential to help, including street cameras, traffic sensors, and other urban infrastructure components. Street cameras, in particular, are increasingly being installed in cities for public safety, surveillance, and traffic management-related applications [4, 11, 18, 36, 39]. Although these pre-existing street cameras have been deployed for purposes unrelated to accessibility, their potential for facilitating navigation assistance for BLV people remains largely untapped.
In this work, we explore the idea of leveraging existing street cameras to support outdoor navigation assistance, and we investigate the effectiveness of such an approach. We seek to answer the following research questions: RQ1. What challenges do BLV people face when navigating outdoors using GPS-based systems? RQ2. How should street camera-based systems be designed to address BLV people's challenges in outdoor navigation? RQ3. To what extent do street camera-based systems address BLV people's challenges in outdoor navigation? To answer RQ1, we conducted formative interviews with six BLV pedestrians and discovered the challenges BLV people face when navigating outdoors using GPS-based systems. Our participants reported challenges in following GPS's routing instructions through complex environment layouts, avoiding unexpected obstacles while using assistive technology, and crossing streets safely. To answer RQ2, we developed _StreetNav_, a system that leverages a street camera to support precise outdoor navigation for BLV pedestrians. As Figure 1 illustrates, StreetNav comprises two key components: (i) _a computer vision pipeline_ and (ii) _a companion smartphone app_. The computer vision pipeline processes the street camera's video feed and delivers real-time navigation assistance to BLV pedestrians via the smartphone app. StreetNav offers precise turn-by-turn directions to destinations while also providing real-time, scene-aware assistance to prevent users from veering off course, alert them of nearby obstacles, and facilitate safe street crossings. We developed StreetNav using the NSF PAWR COSMOS wireless edge-cloud testbed [55, 68]. StreetNav uses one of COSMOS testbed's street cameras mounted on the second floor of Columbia University's Mudd building in New York City (NYC), which faces a four-way street intersection. To answer RQ3, we conducted user evaluations involving eight BLV pedestrians who navigated routes with both StreetNav and BlindSquare [41], a popular GPS-based navigation app especially designed for BLV people. Our findings reveal that StreetNav offers significantly greater precision in guiding pedestrians compared to BlindSquare. Specifically, StreetNav guided participants, on average, 2.9 times closer to their destinations and reduced veering off course by over 53% compared to BlindSquare. This substantial improvement was reflected in the unanimous preference of all participants for StreetNav over BlindSquare in a forced ranking. Our evaluation, however, also revealed technical considerations related to StreetNav's performance, notably its sensitivity to lighting conditions and environmental occlusions. We discuss the future implications of our findings in the context of deploying street camera-based systems at scale for outdoor navigation assistance. In summary, we contribute (1) a formative study of BLV people's challenges in outdoor navigation using GPS-based systems, (2) the StreetNav system through which we explore the concept of repurposing street cameras for precise outdoor navigation assistance, and (3) a user evaluation of StreetNav. ## 2. Related Work Our work builds on the following three main research threads: (i) outdoor navigation approaches, (ii) overhead camera-based robot navigation, and (iii) indoor navigation approaches.
_Outdoor Navigation Approaches._ Existing approaches for outdoor navigation primarily rely on GPS-based navigation systems for guiding users to the destination and providing information about nearby POIs (Safar et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019). BlindSquare (Wang et al., 2017), for instance, utilizes the smartphone's GPS signal to determine the user's location and then provides the direction and distance to the destination, gathered from Foursquare and Open Street Map. The GPS signal, however, offers poor precision with localization errors as big as tens of meters (Safar et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019). The accuracy is lower in densely populated cities (Wang et al., 2018), which is even more concerning given that a disproportionately high percentage of BLV people live in cities (Wang et al., 2019). Despite GPS-based systems' undeniable impact on helping BLV people in outdoor navigation, their low precision and inability to provide real-time support for avoiding obstacles and veering off the path limit their usability as a standalone navigation solution. Our preliminary work on StreetNav (Wang et al., 2019) introduced the alternative of leveraging street cameras for outdoor navigation assistance. In this work, we investigate street cameras' potential for providing precise and real-time navigation assistance by performing a user experience evaluation of StreetNav. Another approach for outdoor navigation has explored developing personalized, purpose-built assistive devices that support BLV people with scene-aware aspects of outdoor navigation, such as crossing streets (Wang et al., 2017; Wang et al., 2018; Wang et al., 2019), recording routes (Wang et al., 2018), and avoiding obstacles (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). While these solutions address some of the precise and real-time aspects of BLV people's outdoor navigation, support for point-to-point navigation is missing. Consequently, they do not offer a comprehensive, all-in-one solution for outdoor navigation. Furthermore, these systems place the burden of purchasing costly devices onto the BLV users. Our work, by contrast, explores the possibility of using existing street cameras to provide a comprehensive solution for outdoor navigation. We investigate repurposing existing hardware in outdoor environments to support accessibility applications, thus imbuing accessibility within the city infrastructure directly, and adding no additional cost to the BLV user. _Overhead Camera-based Robot Navigation._ A parallel research space to street cameras for blind navigation is robot navigation using overhead cameras. One common subspace within this field is sensor fusion for improved mapping. Research in this space focuses on fusing information between sighted "guide" robots and overhead cameras (Wang et al., 2017), fusing multiple camera views for improved tracking (Wang et al., 2017; Wang et al., 2019; Wang et al., 2019), and improving homography for robust mapping, independent of camera viewing angle (Wang et al., 2019; Wang et al., 2019). Another challenge tackled within this space is robot path planning. Research in this space aims to improve path planning algorithms (Wang et al., 2017; Wang et al., 2019; Wang et al., 2019), assign navigational tasks to robot assistants (Wang et al., 2017; Wang et al., 2019), and address the balance between obstacle avoidance and path following (Wang et al., 2017; Wang et al., 2019).
While prior work on robot navigation using fixed cameras explores the research space of automating "blind" robot navigation, our work explores how fixed cameras, specifically street cameras, could be repurposed to support navigation for blind pedestrians. Our work considers BLV users' needs and preferences around outdoor navigation to design and develop a system that can offer precise navigation assistance. _Indoor Navigation Approaches._ Prior work in indoor navigation assistance has made significant progress through the utilization of various localization technologies, which usually relies on retrofitting the environment with additional hardware like WiFi or Bluetooth beacons (Safar et al., 2016; Wang et al., 2017; Wang et al., 2019; Wang et al., 2019). These solutions have proven highly effective within indoor environments. NavCog3 (Safar et al., 2016), for example, excels in indoor navigation by employing Bluetooth beacons for precise turn-by-turn guidance. Nakajima and Haruyama (Nakajima and Haruyama, 2019) exploit the use of visible light communication technology, utilizing LED lights and a geomagnetic correction method to localize BLV users. However, extending these approaches to support outdoor navigation is not practical. This is particularly evident when considering the substantial initial investment in hardware setup that these systems typically require, making them ill-suited for the larger, unstructured outdoor environment. Furthermore, most of these methods lack the capability to assist with obstacle avoidance and to prevent users from veering off course -- both of which are less severe issues indoors compared to outdoors (Wang et al., 2019). In contrast, our exploration of using existing street cameras is better suited to address the largely unaddressed challenge of outdoor pedestrian navigation. This approach offers precise localization without requiring supplementary hardware, harnessing street cameras for locating a pedestrian's position. Additionally, it holds the potential to effectively tackle the distinctive challenges posed by the unstructured nature of outdoor environments, including real-time obstacle detection and the interpretation of critical visual cues like street crossing signals. ## 3. Formative Interviews We conducted semi-structured interviews with six BLV participants to identify BLV pedestrians' challenges in outdoor navigation when using GPS-based systems (RQ1). ### Methods _Participants._ We recruited six BLV participants (three males and three females, aged 29-66) by posting on social media platforms and snowball sampling (Safar et al., 2016). Table 1 summarises the participants' information. All interviews were conducted over Zoom and lasted about 90 minutes. Participants were compensated $25 for this IRB-approved study. _Procedure._ To identify the specific challenges that BLV people face when navigating outdoors, we used the critical incident technique (CIT). We asked participants to name the AT they commonly use and then asked them to elaborate on their recent experience of using it: "_So, you mentioned using BlindSquare a lot. When was the last time you used it?_" Then, we initiated a discussion by establishing the scenario for them: "_Now, let's walk through your visit from the office to this restaurant. Suppose I spotted you at your office. What would I observe?
Let's start with you getting out of your office building._" We asked follow-up questions to gain insights into what made aspects of outdoor navigation challenging and what additional information could help address them. _Interview Analysis._ To analyze the interviews, we first transcribed the study sessions in full and then performed thematic analysis (Bartos et al., 2018) involving three members of our research team. Each researcher first independently went through the interview transcripts and used NVivo (Navarro et al., 2019) to create an initial set of codes. Then, all three iterated on the codes together to identify emerging themes. ### Findings: BLV Pedestrians' Challenges in Outdoor Navigation We found three major themes around challenges that BLV pedestrians face when navigating outdoors using GPS-based systems. _C1: Routing through complex environment layouts._ GPS-based systems, such as BlindSquare (Krishnan et al., 2019), offer navigation instructions that follow a direct path to the destination from the user's current position, often referred to as "as the crow flies," rather than providing detailed turn-by-turn instructions through a polyline path that guide BLV people through the environment layout. Since _"not everything is organized in the ideal grid-like way"_ (F1), participants reported difficulties following the "as the crow flies" instructions, failing to confidently act upon the instructions without any knowledge of the environment layout. This was particularly challenging in complex layouts, as F3 recalled: _"I didn't know if crosswalks were straight or curved or if they were angled. [It was hard] to figure out which way you needed to be to be in the crosswalk."_ Many participants cited problems such as making wrong turns into unexpected "alleyways" (F1, F2, F4) that landed them in dangerous situations with "cars coming through" (F2). Participants cited examples of how these instructions were often inaccurate, causing them to veer off course--a common issue for BLV people in open, outdoor space (Krishnan et al., 2019)--and end up in the middle of the streets. _C2: Avoiding unexpected obstacles while using GPS-based systems._ BLV people's challenges relating to obstacles during navigation are well researched (Krishnan et al., 2019; Krishnan et al., 2019). However, we found specific nuances in their difficulties, particularly when they rely on their conventional mobility aids in conjunction with GPS-based navigation systems. Participants commonly reported the use of mobility aids like white canes alongside GPS systems for guidance. During this combined navigation process, they encountered difficulties in maintaining their focus on obstacle detection, often resulting in collisions with objects that they would have otherwise detected using their white canes. For instance, F2 shared an incident where they remarked, "_there were traffic cones [and] I tripped over those_" while following directions. Notably, moving obstacles such as pedestrians and cars, as well as temporarily positioned stationary obstacles like triangle sandwich board signs, posed significant challenges for navigation. F4 expressed this sentiment, stating, _"You know how many times I've walked into the sides of cars even though I have the right of way. Drivers have gotten angry, accusing me of scratching their vehicles.
It can spoil your day [and make] you feel insecure."_ _C3: Crossing street intersections safely._ Consistent with prior research (Bartos et al., 2018; Krishnan et al., 2019; Krishnan et al., 2019), our study participants highlighted that crossing streets remained a significant challenge for them. Since GPS-based systems do not help with street-crossing, most participants relied on their auditory senses. They mentioned the practice of listening for vehicular sounds to gauge traffic flow on streets running parallel and perpendicular to their position. This auditory technique helped them assess when it was safe to cross streets. However, participants also reported instances where this method proved inadequate due to external factors: "_yeah, it can be tricky, because [there may be] really loud construction nearby that can definitely throw me off because I'm trying to listen to the traffic_" (F1). Furthermore, their confidence in street-crossing decisions was affected by their inability to ascertain the duration of pedestrian signals and the length of the crosswalk. This uncertainty led to apprehension, as they expressed a fear of becoming stranded mid-crossing, as exemplified by one participant's comment: "_I don't want to be caught in the middle [of the street]_" (F4).
\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
**PID** & **Age** & **Gender** & **Race** & **Occupation** & **Vision ability** & **Onset** & **Mobility aid** & **AT familiarity (1–5)** \\
\hline
F1 & 29 & Female & White & Claims expert & Totally blind & At birth & White cane & 3: Moderately familiar \\
F2 & 61 & Female & White & Retired & Light perception only & Age 6 & Guide dog & 1: Not at all familiar \\
F3 & 66 & Female & White & Retired & Totally blind & Age 58 & Guide dog & 2: Slightly familiar \\
F4 & 48 & Male & Black & Unemployed & Light perception only & Age 32 & White cane & 3: Moderately familiar \\
F5 & 27 & Male & Mixed & Unemployed & Totally blind & At birth & White cane & 3: Moderately familiar \\
F6 & 38 & Male & White & AT instructor & Totally blind & At birth & White cane & 5: Extremely familiar \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Self-reported demographics of our participants. Gender information was collected as a free response; our participants identified themselves as female (F) or male (M). Participants rated their assistive technology (AT) familiarity on a scale of 1–5.
## 4. The StreetNav System
StreetNav is a system that explores the concept of repurposing street cameras to support outdoor navigation for BLV pedestrians (RQ2, RQ3). It provides users with precise turn-by-turn navigation instructions to destinations (**C1**), helps them avoid veering off track (**C1**), gain awareness of nearby obstacles (**C2**), and cross streets safely (**C3**). StreetNav enables these navigation affordances through its two main components: (i) _computer vision pipeline_, and (ii) _companion smartphone app_. The computer vision pipeline processes the street camera's video feeds to give BLV pedestrians real-time navigation feedback via the app. Our design and development of StreetNav considers prior work on navigation assistance, functions of traditional mobility aids, and formative interviews with BLV people (Section 3) that identified challenges they face when navigating outdoors using existing GPS-based systems.
The following sections describe StreetNav's technical setup (Section 4.1), the computer vision pipeline (Section 4.2), and the smartphone app's user interface (Section 4.3).
### StreetNav: Technical Setup
Figure 2 shows the street camera we used for developing and evaluating StreetNav. We chose this camera because it faces a four-way street intersection--the most common type of intersection--and is mounted on a building's second floor, offering a typical street-level view of the intersection. The camera is part of the NSF PAWR COSMOS wireless edge-cloud testbed [55, 68]. Anonymized video samples from COSMOS cameras, including the one used in this work, can be found online [13]. StreetNav's computer vision pipeline takes the real-time video feed from the camera as input. For this purpose, we deployed the computer vision pipeline on one of COSMOS' computational servers, which captures the camera's video feed in real time [20, 21]. This server runs Ubuntu 20.04 with an Intel Xeon CPU and an Nvidia V100 GPU. StreetNav's two components--the computer vision pipeline and the app--interact with each other via a cloud server, sharing information using the MQTT messaging protocol [43]. Since MQTT is a lightweight messaging protocol, it runs efficiently even in low-bandwidth environments. The computer vision pipeline only sends processed navigation information (e.g., routing instructions, obstacle category and location) to the app, rather than sending video data. This alleviates privacy concerns around streaming the video feed to users and avoids computational bottlenecks that may arise from smartphones' limited processing capabilities. The StreetNav app's primary purpose is to act as an interface between the user and the computer vision pipeline. We developed StreetNav's iOS app using Swift [6], enabling us to leverage VoiceOver [7] and other built-in accessibility features.
### StreetNav: Computer Vision Pipeline
StreetNav's computer vision pipeline processes the street camera's video feed in real time to facilitate navigation assistance. It consists of four components: (i) _localizing and tracking the user_: locating the user's position on the environment's map; (ii) _planning routes_: generating turn-by-turn navigation instructions from the user's current position to destinations; (iii) _identifying obstacles_: predicting potential collisions with other pedestrians, vehicles, and objects (e.g., trash can, pole); and (iv) _recognizing pedestrian signals_: determining when it is safe for pedestrians to cross (walk vs. wait) and calculating the duration of each cycle. Next, we describe the computer vision pipeline's four components in detail. _Localizing and tracking the user_. To offer precise navigation assistance, the system must first determine the user's position from the camera view and then project it onto the environment's map. Figure 3d shows the map representation we used, which is a snapshot from Apple Maps' [5] satellite view of the intersection where the camera is deployed. StreetNav tracks pedestrians from the camera's video feed using Nvidia's DCF-based multi-object tracker [45] and the YOLOv8 object detector [63]. The computer vision pipeline is developed using Nvidia GStreamer plugins [62, 46], enabling hardware-accelerated video processing to achieve real-time tracking. We chose this tracker for its trade-off between real-time performance and robustness to occlusions.
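To make the detection-and-tracking step concrete, the following is a minimal Python sketch using the ultralytics YOLOv8 API. Note that this is an approximation: StreetNav itself runs Nvidia's GStreamer/DeepStream pipeline with a DCF tracker, and the model variant and camera URL below are hypothetical illustrations.

```python
# Minimal sketch of per-pedestrian detection and tracking with YOLOv8.
# StreetNav uses Nvidia's hardware-accelerated DeepStream pipeline; this
# ultralytics-based stand-in only illustrates the per-frame track IDs.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # hypothetical model choice; the paper does not name a variant

# stream=True yields results frame by frame; persist=True keeps track IDs
# stable across frames, mirroring the unique IDs StreetNav assigns.
for result in model.track(source="rtsp://camera.example/stream",  # hypothetical feed URL
                          classes=[0],          # COCO class 0 = person
                          persist=True, stream=True):
    for box in result.boxes:
        if box.id is None:      # tracker has not assigned an ID yet
            continue
        track_id = int(box.id)                   # unique pedestrian ID
        x1, y1, x2, y2 = box.xyxy[0].tolist()    # bounding box in camera-view pixels
        # Downstream steps (gesture check, map projection) consume these IDs and crops.
```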
Figure 3. Gesture-based localization for determining a user's position on the map. (a) A study participant (P1) is (c) prompted to wave one hand above their head, enabling the computer vision pipeline to distinguish them from other pedestrians in (b) the camera feed view and (d) the map.
Figure 2. Street camera used for StreetNav's development and evaluation. The COSMOS camera [13, 68] is (a) mounted on the second floor of Columbia University's Mudd building in NYC, and (b) captures the view of a four-way intersection.
The tracker detects all pedestrians and assigns them a unique ID. However, the system needs a way to differentiate between the BLV user and other pedestrians. Figure 3 shows the _gesture-based localization_ approach we introduced to address this issue. To connect with the system, BLV pedestrians must wave one hand above their head for 2-3 seconds (Figure 3a), enabling the system to determine the BLV pedestrian's unique tracker ID. We chose this gesture after discussions with several BLV individuals, including our BLV co-author, and most agreed that this single-handed action was both convenient and socially acceptable to them. Moreover, over-the-head gestures such as waving a hand can also be detected when users are not directly facing the street camera. StreetNav implements the gesture-based localization approach by first creating image crops of all detected pedestrians and then classifying them as 'waving' or 'walking' pedestrians using CLIP [53]. CLIP classifies each pedestrian by computing visual similarity between the pedestrian's image crop and two language prompts: 'person walking' and 'person waving hand.' We experimentally fine-tuned the confidence thresholds and these language prompts. We also tried other action recognition models, such as MMAction2 [12], but found that our CLIP-based approach was much faster and more robust to false positives. Finally, we transformed the user's position in the street camera view (Figure 3b) onto the map (Figure 3d) using a simple feed-forward neural network, trained on data that we manually annotated. The network takes as input the 2D pixel coordinate from the street camera view and outputs the corresponding 2D coordinate on the map. StreetNav continuously tracks the user from the camera feed and transforms their position onto the map. _Planning routes_. StreetNav represents routes as a sequence of straight lines on the map connected by waypoints. To plan routes, StreetNav requires that a map of the environment is annotated with waypoints and connections between them. This offline process is performed by manually annotating the environment's map, as shown in Figure 4. The administrator marks two types of points on the map: POIs and sidewalk corners. The POIs are potential destinations that users can choose from. The sidewalk corners act as intermediary waypoints en route to the destination. We chose sidewalk corners as waypoints because BLV pedestrians often look for the tactile engravings at sidewalk corners to help orient themselves and transition into crosswalks. Thus, these waypoints blend in well with BLV users' current navigation practices. Figure 4 shows the internal graph structure that StreetNav uses for planning routes. This graph-based representation of the environment has also been used in prior work on indoor navigation systems [2, 25, 57]. In the graph, nodes correspond to POIs and sidewalk corners, whereas edges correspond to walkable paths.
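To make this graph representation concrete, here is a minimal sketch using networkx; node names and map coordinates are hypothetical illustrations, and the shortest-path query shown at the end is described in the following paragraph.

```python
# Sketch of StreetNav's waypoint graph and route query, using networkx.
# Node names and 2D map coordinates are hypothetical illustrations.
import math
import networkx as nx

G = nx.Graph()
# POIs and sidewalk corners become nodes annotated with map coordinates.
nodes = {"corner_NW": (0, 0), "corner_NE": (40, 0), "cafe": (52, 3)}
for name, pos in nodes.items():
    G.add_node(name, pos=pos)

def dist(a, b):
    """Euclidean distance between two nodes on the map."""
    (x1, y1), (x2, y2) = G.nodes[a]["pos"], G.nodes[b]["pos"]
    return math.hypot(x2 - x1, y2 - y1)

# Edges are walkable paths (sidewalks, crosswalks), weighted by length.
for a, b in [("corner_NW", "corner_NE"), ("corner_NE", "cafe")]:
    G.add_edge(a, b, weight=dist(a, b))

# The user's tracked position is added dynamically as a start node,
# connected to the nearest waypoint, before querying the route.
G.add_node("user", pos=(5, 1))
G.add_edge("user", "corner_NW", weight=dist("user", "corner_NW"))

route = nx.astar_path(G, "user", "cafe", heuristic=dist, weight="weight")
# route -> ['user', 'corner_NW', 'corner_NE', 'cafe']
```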
Once the user chooses a destination from the POIs, StreetNav adds the user's current position as a start node to this graph representation and computes the shortest path to the chosen POI using the A* algorithm [14]. Figure 4 highlights the shortest path from the user's current position to the chosen destination (cafe). This route enables StreetNav to guide users to the destination via turn-by-turn instructions. _Identifying obstacles_. Prior work on obstacle avoidance developed systems that guide BLV people around obstacles [25, 33]. StreetNav, however, aims to augment BLV pedestrians' awareness of obstacles to help them confidently avoid obstacles using their traditional mobility aids (e.g., white cane) and mobility skills. From our formative interviews, we learned that obstacles that catch BLV users unexpectedly were especially hard to avoid in outdoor environments (**C2**). Thus, StreetNav provides users with information about an obstacle's category and relative location. This gives BLV users context on the size, shape, and location of an obstacle, enabling them to confidently use their mobility skills around unexpected obstacles. Figure 5 illustrates how the system identifies obstacles in the user's vicinity. StreetNav's multi-object tracker is used to track other objects and pedestrians. Examples of other objects include cars, bicycles, poles, and trash cans.
Figure 4. StreetNav's internal graph representation for route planning. The user's current position is added dynamically as a start node to the graph upon choosing a destination. The shortest path, highlighted in green, is then calculated as per this graph representation.
Figure 5. Identifying obstacles in the user's vicinity. (a) A vehicle turning left yields to the BLV pedestrian (detected in purple) crossing the street. (b) StreetNav identifies the obstacles' category and relative location on the map to provide real-time feedback via the app.
The computer vision pipeline then projects the detected objects' positions onto the map. To identify obstacles in the BLV user's vicinity, StreetNav computes the distance and angle between the user and other detected objects with respect to the map (Figure 5b). Any object (or pedestrian) within a fixed radial distance from the BLV user is flagged as an obstacle. Through a series of experiments with our BLV co-author, we found that a 4-foot radius works best for StreetNav to provide users with awareness of obstacles in a timely manner. _Recognizing pedestrian signals_. To determine the pedestrian signals' state (i.e., _walk_ vs. _wait_), we leverage the fact that walk signals are always white, whereas wait signals are always red. StreetNav requires the pixel locations of the pedestrian signals in the video feed in order to recognize the signal state. The administrator annotates the video feed image to draw a bounding box around the pedestrian signals' screen. Since the position of pedestrian signals is fixed with respect to the mounted street camera, this process needs to be done only once during setup, along with the map annotation process described earlier. Figure 6 shows the annotated pedestrian signals in the camera's video feed. StreetNav uses these annotations first to generate image crops of the two signals and then thresholds both image crops to filter all red and white pixels. It compares the number of white and red pixels in each crop to identify the signal's state: _walk_ (Figure 6a) vs. _wait_ (Figure 6b).
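The following is a minimal sketch of this pixel-color heuristic in OpenCV. The specific HSV threshold ranges are hypothetical placeholders; as noted next, the actual count thresholds were tuned experimentally.

```python
# Sketch of the pixel-color heuristic for reading a pedestrian-signal crop.
# HSV threshold ranges are hypothetical; the paper tunes thresholds experimentally.
import cv2
import numpy as np

def signal_state(crop_bgr: np.ndarray) -> str:
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    # "Walk" symbols are white: low saturation, high value.
    white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))
    # "Wait" hands are red: hue near 0 or 180 with strong saturation.
    red = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255])) | \
          cv2.inRange(hsv, np.array([170, 120, 80]), np.array([180, 255, 255]))
    # Whichever color dominates the crop determines the state.
    return "walk" if cv2.countNonZero(white) > cv2.countNonZero(red) else "wait"
```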
We experimentally fine-tuned the count thresholds to accurately identify the signal state. Although the two crops are low resolution, this approach still yields accurate results since it distinguishes the state using pixel colors. Our formative interviews found that BLV pedestrians faced difficulty pacing themselves while crossing streets (**C3**). To address this challenge, StreetNav provides users with information about how much time remains for them to cross. StreetNav's computer vision pipeline computes the time remaining to cross by keeping track of the signal cycles' duration. StreetNav maintains a timer that records the moments when each signal changes its state. After observing a full cycle, StreetNav is able to accurately keep track of both the state and timing of each signal. StreetNav periodically refreshes the timer to adapt to any changes in signal duration that may happen for traffic management reasons.
Figure 6. Recognizing pedestrian signal states from the camera's video feed. StreetNav compares the number of white and red pixels in the signal crops to determine its state: (a) _walk_ vs. (b) _wait_.
### StreetNav App: User Interface
The StreetNav iOS app interacts with the computer vision pipeline to allow BLV pedestrians to choose a destination and receive real-time navigation feedback that guides them to it. BLV users first initiate a connection request through the app, which activates the gesture-based localization (Section 4.2) in the computer vision pipeline. The app prompts the user to wave one hand over their head (Figure 3b), enabling the system to begin tracking their precise location on the map (Figure 3d). BLV users can then select a destination from nearby POIs and begin receiving navigation feedback through the app. Figure 7 shows the StreetNav app's user interface, which uses audiohaptic cues for (i) providing routing instructions, (ii) preventing veering off track, (iii) notifying about nearby obstacles, and (iv) assisting with crossing streets. Upon reaching the destination, the app confirms the user's arrival. The following sections describe the app's interface in detail. _Providing routing instructions_. The app conveys routing instructions to users by first giving an overview of the route and then announcing each instruction, in situ, based on their current location in the environment. Figure 7a shows the app screen with the path overview. Prior work on understanding BLV people's navigation behaviors (Bahdan et al., 2017; Wang et al., 2018; Wang et al., 2018) reveals that BLV people often prepare for their routes before actually walking through them. StreetNav assists them in this preparation by giving an overview of the path before beginning navigation. The path overview consists of several instructions, each helping them get from one waypoint to the next. BLV users read through the path overview using VoiceOver (Bahdan et al., 2017). Users then tap the 'Start Navigation' button, and the app announces each instruction when they reach a waypoint. Figure 7b-f shows how the app dynamically updates the next instruction based on the user's location in the environment. Throughout the journey, users can access the path overview and the current navigation instruction on demand via VoiceOver. _Preventing veering off track_. Figure 8 illustrates the app's feedback for preventing users from veering off track. Given the user's current position, heading, and destination route, StreetNav computes the _direction_ and _extent_ of veering. To convey the _direction_ of veering, we used 3D spatialized sound, which plays continuous beeping sounds from the right speaker when users veer to the left (Figure 8a) and from the left speaker when users veer to the right (Figure 8c).
Users can follow the direction of the beeping sound to correct for veering. To convey the _extent_ of veering, i.e., how severely the user is veering, we render the frequency of beeps proportional to the angle between the user's current heading and the route. As users veer away from the correct direction, the frequency of beeps increases; when they begin to turn towards the correct direction, the frequency of beeps decreases. Users can also leverage the frequency of beeps to determine how to correct for veering, by always moving in the direction in which the beeps' frequency decreases. This enables users to correct for veering even without the spatialized sound feedback we used for direction, eliminating the need to wear headphones to understand spatialized sound. We ran pilot experiments to test this feedback mechanism with our BLV co-author. We found that the continuous audio feedback was helpful but also became overwhelming, as it forced them to strictly follow StreetNav's route. To address this, we relaxed the veering requirements by introducing a tolerance angle (\(\theta\)). Figure 8 shows the tolerance angle in green, depicted as a cone centered at the user's current heading. We updated the veering feedback to only play beeping sounds when users veer off in either direction by at least \(\theta/2\) degrees. To maintain the continuity of feedback, we chose to render subtle haptic vibrations when users move in the correct direction within the tolerance angle. Within this tolerance angle, the intensity of the haptic vibration increases as users approach the exact correct heading and decreases when they start to veer off, similar to how the frequency of beeps increases when users veer away. In this way, the audio feedback acts as negative reinforcement, and the haptic feedback acts as positive reinforcement. Figure 8b illustrates the haptic feedback. We experimentally tuned the tolerance angle \(\theta\) and set its value for our system to \(50^{\circ}\). To generate the audiohaptic cues, the app receives the user's current position and destination route from the computer vision pipeline. For the user's current heading, we experimented with using the user's trajectory to predict their heading via a Kalman filter. This approach, however, yielded inaccurate headings due to the noisy tracking data. Thus, we leveraged the smartphone's compass to determine the user's current heading. We offset the compass readings by a fixed value to ensure that its zero coincides with the map's horizontal direction. This enabled us to perform all heading-related computations with respect to the map's frame of reference.
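A minimal sketch of this veering computation follows. The \(50^{\circ}\) tolerance matches the paper; the normalization of beep rate and haptic intensity is a hypothetical illustration of the proportional mapping described above.

```python
# Sketch of the veering feedback: direction and extent are derived from the
# angle between the user's compass heading and the bearing of the route segment.
# The 50-degree tolerance follows the paper; other constants are hypothetical.
import math

TOLERANCE_DEG = 50.0

def veering_feedback(heading_deg: float, route_bearing_deg: float):
    # Signed angular error in (-180, 180]; positive means veering right.
    error = (heading_deg - route_bearing_deg + 180.0) % 360.0 - 180.0
    if abs(error) <= TOLERANCE_DEG / 2:
        # Within the tolerance cone: haptic intensity grows as error shrinks
        # (positive reinforcement for holding the correct heading).
        intensity = 1.0 - abs(error) / (TOLERANCE_DEG / 2)
        return ("haptic", intensity, None)
    # Outside the cone: beep from the speaker opposite the veer direction,
    # with beep rate increasing as the user veers further off the route.
    speaker = "left" if error > 0 else "right"   # veer right -> left speaker
    rate = min(1.0, abs(error) / 180.0)          # normalized beep frequency
    return ("beep", rate, speaker)
```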
_Notifying about nearby obstacles_. Figure 7d shows how StreetNav alerts the user of nearby obstacles. The app announces the obstacle's category, distance, and relative location. For example, when a car approaches the user, the app announces: "_Caution! Car, 4 ft. to the left._" Similar to the veering feedback, the relative location is computed using both the computer vision pipeline's outputs and the smartphone's compass reading. We tried feedback formats with varying granularity to convey the obstacle's relative location. First, we experimented with clock-faced directions: "_Car, 4 ft. at 1 o'clock._" Clock-faced directions are commonly used in many GPS-based systems such as BlindSquare to convey directions. We learned from pilot evaluations with our BLV co-author that this feedback format was too fine-grained, as it took them a few seconds to decode the obstacle's location. This does not fare well with moving obstacles, such as pedestrians, that may have already passed the user before they are able to decode the location. Moreover, StreetNav's goal with obstacle awareness is to give users a quick idea that something is near them, which they can then use to circumnavigate via their mobility skills. To address this, we tried a coarser format with just four directions: left, right, front, and back. This was found to give users a quicker indication compared to the clock-faced directions.
Figure 7. The StreetNav app's user interface. It provides routing instructions to the destination via (a) a path overview and (c, e) real-time feedback that updates the current instruction based on the user's location. Upon reaching a sidewalk, (b) the app informs the user when it is safe to cross and (d) how much time remains for them to cross over. It also (d) notifies the user of a nearby obstacle's category and relative location to help them avoid it. The app (f) confirms the user's arrival at the destination. Throughout the journey, the app provides (g) continuous audiohaptic feedback to prevent users from veering off track.
_Assisting with crossing streets_. The StreetNav app helps users cross streets by informing them _when_ to cross and how much time remains before the signal changes. Figure 7b and Figure 7d illustrate this feedback. Upon reaching a sidewalk corner, the app checks the signal state recognized by the computer vision pipeline. If the signal is '_wait_' when the user arrives, the app informs the user to wait, along with the time remaining before the signal changes. If the signal is '_walk_' when the user arrives, the app informs the user to begin crossing only if the time remaining is sufficient for crossing; for the intersection used in our user studies, this was experimentally found to be 15 seconds. Otherwise, the user is advised to wait for the next cycle. Once the user begins crossing on the '_walk_' signal, the app announces the time remaining for them to cross over. This feedback is repeated at fixed intervals until the user reaches the other sidewalk corner. We experimentally fine-tuned this interval with feedback from our BLV co-author. We tried several intervals, such as 5, 10, and 15 seconds, and found that shorter intervals overwhelmed users, whereas longer intervals would not be repeated enough times to give them meaningful information. We settled on repeating the feedback every 10 seconds for our implementation.
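The crossing-advice logic reduces to a small decision rule, sketched below. The 15-second minimum and 10-second announcement interval follow the paper; the message strings and function names are hypothetical.

```python
# Sketch of the street-crossing advice logic. The 15-second minimum crossing
# time and the 10-second announcement interval follow the paper; the message
# wording and function name are hypothetical illustrations.
MIN_CROSSING_TIME_S = 15   # tuned for the study intersection
ANNOUNCE_INTERVAL_S = 10   # cadence of "time remaining" announcements

def crossing_advice(state: str, seconds_remaining: int) -> str:
    if state == "wait":
        return f"Wait. Signal changes in {seconds_remaining} seconds."
    if seconds_remaining < MIN_CROSSING_TIME_S:
        # Walk signal is on, but not enough time remains to cross safely.
        return "Not enough time remains. Wait for the next cycle."
    return f"Begin crossing. {seconds_remaining} seconds remaining."
```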
## 5. User Study
Our user study had three goals, related to RQ2 and RQ3. First, we wanted to evaluate the extent to which StreetNav addressed BLV pedestrians' challenges in navigating outdoor environments when using existing GPS-based systems. Through our formative interviews (Section 3), we discovered three main challenges: routing through complex environment layouts (**C1**), avoiding unexpected obstacles (**C2**), and crossing street intersections (**C3**). Second, we wanted to analyze BLV pedestrians' experience of navigating outdoors using StreetNav compared to existing GPS-based systems. Third, we wanted to see how participants rank the two navigation systems--StreetNav vs. a GPS-based system--in order of their preference for outdoor navigation assistance.
### Study Description
_Participants_. We recruited eight BLV participants (five males, three females; aged 24-52) by posting to social media platforms and by snowball sampling (Srivastava et al., 2016). Participants identified themselves with a range of racial identities (Asian, Black, White, Latino, and Mixed) and all of them lived in a major city in the US. Participants also had diverse visual abilities, onsets of vision impairment, and familiarity with assistive technology (AT) for navigation. Table 2 summarizes participants' information. All but three participants (P1, P7, and P8) reported themselves as being moderately to extremely experienced with AT for navigation (3+ scores on a 5-point rating scale). Only P3 reported minor hearing loss in both ears and wore hearing aids. All participants except two (P2, P8) used a white cane as their primary mobility aid; P2 did not use any mobility aid, while P8 primarily used a guide dog for navigation. The IRB-approved study lasted about 120 minutes, and participants were compensated $75 for their time. _Experimental Design_. In the study, participants completed three navigation tasks at a street intersection in two conditions: (i) StreetNav and (ii) BlindSquare (BlindSquare, 2017), a popular GPS-based navigation app especially designed for BLV people. We evaluated the two systems via their respective iOS apps on an iPhone 14 Pro. Both systems' apps seamlessly integrated with VoiceOver, and all eight participants had a high level of familiarity with using iPhones and VoiceOver, with ratings of 3 or higher on a 5-point scale. During the study, participants continued to use their primary mobility aids, such as white canes and guide dogs, in both conditions. This approach allowed us to make a meaningful comparison between StreetNav and BLV pedestrians' current methods of outdoor navigation, simulating their usual practice of incorporating GPS-based navigation systems alongside their mobility aids. Our study followed a within-subjects design, in which participants tested the two navigation systems in a counterbalanced order to minimize potential order bias and learning effects. In each condition, participants were tasked with completing three distinct navigation challenges, corresponding to three specific routes. Figure 9 illustrates these three navigation routes.
Figure 8. Audiohaptic cues for preventing users from veering off track. Sample user trajectories showing feedback when users (a) veer to the left, (b) do not veer, and (c) veer to the right. When the user's heading coincides with the route to the destination, within a tolerance angle \(\theta\) (highlighted in green), users receive (b) subtle haptic vibrations to reinforce them. When they veer off the route, outside the tolerance angle \(\theta\), they hear spatialized beeping sounds that are rendered from the (a) right speaker when veering left, and from the (c) left speaker when veering right.
Figure 9. The routes used in the navigation tasks. (A) 12 meters, stationary person to avoid on the sidewalk. (B) 30 meters, cross street, and moving person to avoid on the sidewalk. (C) 38 meters, a \(90^{\circ}\) turn, cross street, and moving person to avoid on the crosswalk. To mitigate learning effects, routes for the two conditions are symmetrically designed, situated on opposite sides of the street.
We deliberately chose the routes to lie within the street camera's field of view and to include a range of difficulty levels: (A) a short route, 12 meters, that involved avoiding a stationary person on the sidewalk; (B) a long route, 30 meters, that involved crossing a street and avoiding a moving person on the sidewalk; and (C) a complex route, 38 meters, that involved making a 90-degree turn, crossing a street, and avoiding a moving person on the crosswalk. For each of these tasks, one of our researchers assumed the role of the obstacle. Notably, none of the participants were familiar with the specific street intersection selected as the study's location. Given that participants navigated the same intersection in both conditions, the potential for learning effects as a confounding factor was carefully considered. To address this concern, we took deliberate measures by creating distinct routes for each condition. Specifically, we designed the routes in both conditions to be symmetric--rather than identical--with the starting and ending points of each route strategically positioned on opposite sides of the street intersection, as illustrated in Figure 9. The symmetry of routes ensured that participants encountered the same challenges in both conditions. To ensure participants' safety, the researchers accompanied them at all times during the study, prepared to intervene whenever necessary. _Procedure_. We began each study condition by giving a short tutorial of the respective smartphone app. During these tutorials, participants were taught how to use the app and how to interpret the various audiohaptic cues it offered. To accommodate potential challenges arising from ambient noise at the street intersection, participants were given the option to wear headphones during the study. Only two participants, P3 and P5, exercised that option; the rest relied on the smartphone's built-in speaker to hear the audiohaptic cues. After completing the three navigation tasks for each condition, we administered a questionnaire comprising four distinct parts. These parts were designed to assess participants' experiences around challenges faced by BLV pedestrians in outdoor navigation, specifically addressing the following aspects: routing to the destination (**C1**), veering off course (**C1**), avoiding obstacles (**C2**), and crossing streets (**C3**). It included questions about how well each system assisted with the challenges, if at all. Participants rated their experience on a 5-point rating scale, where a rating of "1" indicated "_not at all well_" and a rating of "5" indicated "_extremely well_." After each part of the questionnaire, we asked follow-up questions to gain deeper insights into the reasons behind their ratings and their overall experiences. Following their experience with both navigation systems, participants were asked to complete a post-study questionnaire. This questionnaire required them to rank the two navigation systems in terms of their preference for outdoor navigation. Subsequently, we directed our discussion toward StreetNav, engaging participants in a conversation about potential avenues for improvement. We also inquired about the specific scenarios in which they envision using this system in the future. In addition to the questionnaires aimed at capturing participants' subjective experiences, we also gathered system usage logs and video recordings of participants throughout the study.
These objective data sources, including usage logs and video recordings, allowed us to perform a comprehensive analysis of participants' actual performance in the navigation tasks. It is worth noting that willingness to be video-recorded was completely voluntary, i.e., it did not affect participants' eligibility or compensation. All eight participants agreed to be video-recorded, providing us with written consent to do so. _Analysis_. We report participants' spontaneous comments that best represent their overall opinions, providing further context for the quantitative data we collected during the study. We analyzed the transcripts for participants' quotes and grouped them according to (i) the questionnaire's four parts: routing to the destination, veering off course, avoiding obstacles, and crossing streets; (ii) overall satisfaction and ranking preferences; and (iii) how users' individual experiences influenced their preferences.
\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
**PID** & **Age** & **Gender** & **Occupation** & **Race** & **Vision ability** & **Onset** & **Mobility aid** & **AT familiarity (1–5)** \\
\hline
P1 & 24 & Male & App developer & Asian & Low vision & Age 19 & White cane & 2: Slightly familiar \\
P2 & 28 & Male & Data manager & White & Low vision & At birth & None & 3: Moderately familiar \\
P3 & 48 & Male & Not employed & Black & Totally blind & Age 32 & White cane & 3: Moderately familiar \\
P4 & 46 & Female & Social worker & Latino & Totally blind & Age 40 & White cane & 4: Very familiar \\
P5 & 43 & Female & Not employed & Asian & Totally blind & At birth & White cane & 4: Very familiar \\
P6 & 52 & Male & Mgmt. analyst & Mixed & Light perception only & Age 9 & White cane & 5: Extremely familiar \\
P7 & 26 & Female & Writer & Mixed & Low vision & At birth & White cane & 2: Slightly familiar \\
P8 & 51 & Male & Not employed & Black & Light perception only & Age 26 & Guide dog & 3: Moderately familiar \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Self-reported demographics of our study participants. Gender information was collected as a free response. Participants rated their familiarity with assistive technology (AT) on a scale of 1–5.
### Results
Our results reveal that StreetNav helped participants reach their destinations with more precision, gain awareness of obstacles, reduce veering off course, and confidently cross streets. For the statistical analysis of each measure, we first conducted a Kolmogorov-Smirnov test to determine whether the data was parametric or non-parametric. Then, when comparing the two conditions, we used a paired t-test when the data was parametric. In addition to quantitative measures, we conducted a detailed analysis of video recordings, manually annotating the routes participants took during the study. We provide these metrics to offer additional insights into participants' performance across both experimental conditions.
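For readers who want to reproduce this style of analysis, the following is a minimal sketch of the procedure with SciPy; the rating arrays are hypothetical placeholders, not the study's data.

```python
# Sketch of the statistical procedure described above: a normality check on
# the paired differences, followed by a paired t-test across the conditions.
# The rating arrays below are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

streetnav = np.array([4, 5, 4, 4, 5, 4, 3, 4], dtype=float)
blindsquare = np.array([2, 3, 2, 1, 3, 3, 2, 3], dtype=float)

diffs = streetnav - blindsquare
# Kolmogorov-Smirnov test of the differences against a fitted normal.
ks_stat, ks_p = stats.kstest(diffs, "norm", args=(diffs.mean(), diffs.std(ddof=1)))

if ks_p > 0.05:
    # Data treated as parametric, as in the paper: paired t-test.
    t_stat, p_value = stats.ttest_rel(streetnav, blindsquare)
```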
_Routing to Destination_. Figure 11 shows participants' average rating of their experience following routes to the destination in each condition. The mean (\(\pm\) std. dev.) rating for participants' perceived usefulness of the routing instructions in guiding them to the destination was 4.13 (\(\pm\)0.64) for StreetNav and 2.38 (\(\pm\)0.91) for BlindSquare. The condition had a significant main effect (\(p=0.014\)) on participants' experience reaching destinations with the routing instructions. The mean (\(\pm\) std. dev.) rating for participants' experience with the system's ability to track them was 4.50 (\(\pm\)0.76) for StreetNav and 2.88 (\(\pm\)1.13) for BlindSquare. The condition had a significant main effect (\(p=0.001\)) on participants' perception of how well the system tracked them en route to the destination. This indicates that participants found StreetNav more useful than BlindSquare for guiding them to the destination. Figure 10 illustrates our analysis of the video recordings, plotting the typical paths taken by participants on the third route across both conditions. We computed various metrics from their paths that provide insights into participants' self-reported ratings. We found that when using BlindSquare, participants covered greater distances to reach the same destinations compared to when using StreetNav. On average, participants traveled a distance approximately 2.1 times longer than the shortest route when relying on BlindSquare. In contrast, when using StreetNav, they covered a distance of only about 1.1 times the shortest route to their destination. This represents a 51% reduction in the unnecessary distance traveled with StreetNav in comparison to BlindSquare. Figure 10b shows how participants using BlindSquare often exhibited an oscillatory pattern near their destinations (P1, P8) before eventually reaching close to them. Additionally, StreetNav's routing instructions displayed a notably higher level of precision, guiding participants to their destinations with 2.9 times greater accuracy than BlindSquare. Figure 10 clearly shows this trend for the third route. On average, across the three study routes, participants using StreetNav concluded their journeys within a tighter radius of 12.53 feet from their intended destination. In contrast, participants relying on BlindSquare concluded their journeys within a radius of 35.94 feet from their intended destination.
Figure 11. Results for participants' experience with routing to the destination. Participants rated (1) the usefulness of routing instructions, and (2) the system's ability to track them en route to the destination. Participants found StreetNav's turn-by-turn instructions significantly more useful and precise than BlindSquare's "as the crow flies"-style routing instructions. Pairwise significance is depicted for \(p<0.01\) (\(\ast\)) and \(p<0.05\) (\(\ast\ast\)). The error bars indicate standard error.
Figure 10. Comparison of paths traveled by three participants (P1, P3, P8) for route 'C' using (a) StreetNav, and (b) BlindSquare. StreetNav's routing instructions consistently guided participants to the destination via the shortest path. BlindSquare, however, caused participants to take incorrect turns (P1, P3, P8), oscillate back and forth near destinations (P1, P8), and even go around the whole intersection before getting close to the destination (P8).
P3's comment encapsulates this sentiment: "_When it's time for me to turn right and walk a certain distance, [StreetNav] is very! very, very precise."_ -**P3** Although all participants preferred StreetNav's routing feedback over BlindSquare's, distinct patterns emerged in their preference and utilization of these cues. StreetNav delivers a combination of audiohaptic and speech feedback for routing, and participants adopted varying strategies for utilizing this feedback. Some individuals placed greater reliance on the veering haptic feedback as their primary directional guide, while reserving speech feedback as a fallback option. Conversely, some participants prioritized the speech feedback, assigning it a higher level of importance in their navigation process compared to audio-haptic cues. Vearing PreventionFigure 12 shows participants' average rating for their perceived ability to (1) maintain a straight walking path, i.e., prevent veering off course, and (2) intuitiveness of the feedback they received regarding direction to move in. The mean (\(\pm\) std. dev.) rating of participants' perceived ability to maintain a straight walking path with StreetNav was 4.63 (\(\pm\)0.52) and with BlindSquare was 2.75 (\(\pm\)1.17). The condition had a significant main effect (\(p=0.001\)) on participants' perceived ability to prevent veering off course. The mean (\(\pm\) std. dev.) rating for intuitiveness of the feedback that helped them know which direction to move in was 4.63 (\(\pm\)0.52) for StreetNav and 3.00 (\(\pm\)0.76) for BlindSquare. The condition had a significant main effect (\(p=0.006\)) on intuitiveness of feedback that helped participants prevent veering off path. Our examination of the video recordings aligns closely with participants' ratings. It reveals that StreetNav minimized participants' deviations from the shortest path to the destinations in comparison to BlindSquare. Over the course of the three routes, participants displayed an average deviation from shortest path, that was reduced by 53% when using StreetNav as opposed to BlindSquare. With BlindSquare, many participants reported difficulty maintaining awareness of their surroundings, including both obstacles and navigation direction, which frequently led to deviations from their intended paths. For instance, P2 reported challenges in maintaining their orientation with the need to avoid obstacles: "_[BlindSquare] basically demanded me to keep track of my orientation as I was moving, which is pretty difficult to do when you're also trying to keep other things in mind, like not bumping into things."_ -**P6** In contrast, StreetNav effectively addressed this challenge by providing continuous audiohaptic feedback for maintaining a straight walking path, instilling a sense of confidence in participants. P3, who tested StreetNav before BlindSquare, reflected on their desire for a similar continuous feedback mechanism within BlindSquare, akin to the experience they had with StreetNav: "_[with BlindSquare] even though I couldn't see the phone screen, my eyes actually went towards where I'm holding the screen. It is almost as if on a subconscious level, I was trying to get more feedback. With [StreetNav] I had enough feedback."_ -**P3** Many participants appreciated StreetNav's choice of haptic feedback for veering. Some participants envisioned the haptic feedback to be especially useful in environments with complex layouts: "_In the [areas] where the streets are very slanted and confusing. 
I think haptic feedback will be especially helpful._" -**P5** Other participants highlighted the advantage of haptic feedback in noisy environments where audio and speech feedback might be less effective. However, both P4 and P6 noted that StreetNav's haptic feedback would only work well when holding the phone in their hands. This meant that hands-free operation of the app may not be possible, which is important for BLV people since one of their hands is always occupied by the white cane. P4 proposed integrating the app with their smartwatch for rendering the haptic feedback to enable hands-free operation.
Figure 12. Results for participants' perceived ability to prevent veering off path. Participants rated (1) their ability to maintain a straight walking path, and (2) the intuitiveness of the feedback regarding the direction they should be moving in, on a scale of 1–5. StreetNav's audiohaptic feedback was significantly more intuitive than BlindSquare's in preventing participants from veering off path. Pairwise significance is depicted for \(p<0.01\) (\(\ast\)). The error bars indicate standard error.
_Obstacle Awareness_. Figure 13 shows participants' average rating of their perceived awareness of obstacles across the two conditions. Specifically, participants rated their ability to (1) avoid obstacles, (2) identify an obstacle's category (e.g., person, bicycle, trash can), and (3) determine its relative location. The mean (\(\pm\) std. dev.) rating for participants' perceived ability to avoid obstacles was 4.38 (\(\pm\)0.74) for StreetNav and 2.88 (\(\pm\)0.99) for BlindSquare; to identify an obstacle's category, 4.50 (\(\pm\)0.76) for StreetNav and 3.13 (\(\pm\)1.46) for BlindSquare; and to determine an obstacle's relative location, 4.13 (\(\pm\)0.64) for StreetNav and 2.88 (\(\pm\)1.25) for BlindSquare. A paired t-test revealed that the condition had a significant main effect on participants' perceived ability to avoid obstacles (\(p=0.030\)), identify their category (\(p=0.037\)), and determine their relative location (\(p=0.004\)). This suggests that StreetNav offered users a heightened awareness of nearby obstacles compared to the baseline condition of BlindSquare. With StreetNav, participants had the option to use obstacle-avoidance audio feedback in conjunction with their conventional mobility aids. In the case of BlindSquare, however, the system itself did not offer any obstacle-related information. Consequently, participants primarily relied on their traditional mobility aids in this condition, as is typical when using GPS-based systems. Our analysis of the video recordings found that in both experimental conditions, participants encountered no instances of being severely hindered by obstacles. Instead, they adeptly navigated around obstacles with the assistance of their white canes or guide dogs. Although participants generally had a positive perception of obstacle avoidance when using StreetNav, their opinions on the utility of obstacle awareness information varied. Some participants found this information beneficial, emphasizing its role in preventing "_awkward bumping into people_" (**P2**) and boosting their confidence, resulting in greater "_speed in terms of walking_" (**P3**). Conversely, participants who felt confident avoiding obstacles with their mobility aids regarded StreetNav's obstacle information as extraneous. P8 also expressed concerns about the potential information overload it could cause in dense urban areas: "_To know where people are, is a bit of overkill.
If you turn this thing on in Times Square, it would have your head go upside down._" -**P8** Many participants proposed an alternative use case for StreetNav's obstacle awareness information, highlighting its potential for providing insights into their surroundings. They suggested that this information could unlock environmental affordances, including the identification of accessible light signals and available benches for resting: "_knowing there was a bench was top-notch for me_" (**P8**). Therefore, StreetNav's obstacle awareness information served a dual purpose, aiding in both obstacle avoidance and environmental awareness, allowing users to "_know what's around_" (**P8**) them. _Crossing Streets_. Figure 14 shows participants' average rating of their perceived comfort in crossing streets. The mean (\(\pm\) std. dev.) rating of participants' perceived comfort in deciding when to begin crossing the street was 4.50 (\(\pm\)0.76) for StreetNav and 2.88 (\(\pm\)1.64) for BlindSquare. The mean (\(\pm\) std. dev.) rating of participants' perceived comfort in safely making it through the crosswalk and reaching the other end was 4.63 (\(\pm\)0.52) for StreetNav and 2.00 (\(\pm\)1.41) for BlindSquare. A paired t-test showed that the condition had a significant main effect on participants' comfort in beginning to cross streets (\(p=0.029\)) and in safely making it to the other side (\(p=0.001\)). As BlindSquare does not provide feedback specifically for crossing streets, participants reported relying on their auditory senses, listening for the surge of parallel traffic. However, during the semi-structured interviews, some participants highlighted challenging scenarios that can make this strategy less reliable. P4, for instance, pointed out that, ironically, less traffic can complicate street crossings: "_[I] don't always know when to cross because it's so quiet. And sometimes two, three light cycles go by, and I'm just standing there._" -**P4** This issue has been exacerbated by the presence of electric cars, which are difficult to hear due to their quiet motors. For P3, their hearing impairments made it challenging to listen for traffic. Thus, most participants appreciated StreetNav's ability to assist with crossing streets: "_When it's quiet, I would cross. But now with hybrid cars, it's not safe to do that. [StreetNav] app telling you which street light is coming on is really helpful._" -**P7** Participants made decisions to cross the streets by combining StreetNav's feedback with their auditory senses. Many participants emphasized that having information about the time remaining to cross significantly boosted their confidence, especially when this information aligned with the sounds of traffic: "_I thought it was great because I could tell that it matched up_" (**P8**). This alignment between the provided information and their sensory perception inspired confidence in participants: "_Relying on my senses alone feels like a gamble about 90 percent of the time, so a system like [StreetNav] that accurately displays the amount of time I have to cross the street is great._" -**P2**
Figure 14. Results for participants' perceived comfort in crossing streets. Participants rated their perceived comfort in (1) making the decision on when to begin crossing the street, and in (2) pacing themselves when crossing. Participants were significantly more comfortable crossing streets with StreetNav in comparison to BlindSquare. Pairwise significance is depicted for \(p<0.01\) (\(\ast\)) and \(p<0.05\) (\(\ast\ast\)). The error bars indicate standard error.
Figure 13. Results for participants' perceived obstacle awareness. Participants rated their ability to (1) avoid obstacles, (2) identify an obstacle's category (e.g., person, bicycle), and (3) determine its relative location, on a scale of 1–5. StreetNav significantly improved participants' awareness of nearby obstacles during navigation. Pairwise significance is depicted for \(p<0.01\) (\(\ast\)) and \(p<0.05\) (\(\ast\ast\)). The error bars indicate standard error.
### Forced Ranking Results
All eight participants unanimously chose StreetNav over BlindSquare as their preferred navigation assistance system. We asked participants to also rank their preferred type of routing instructions. All eight participants strongly preferred StreetNav's turn-by-turn routing instructions over BlindSquare's "as the crow flies," direction-and-distance-style routing instructions. In the semi-structured interview, participants were asked to elaborate on their rankings. Participants pointed out multiple navigation gaps in BlindSquare, with P2 summarizing participants' sentiment: "_If you're only getting somebody 90 percent of the way there, you're not really achieving what I would consider to be the prime functionality of the system._" -**P2** In contrast, participants praised StreetNav for its precision and real-time feedback, emphasizing the importance of granular and holistic information to support all facets of navigation. However, participants did acknowledge occasional "glitchiness" (**P7**) with StreetNav, which occurred when they moved out of the camera's field of view or were occluded by other pedestrians or vehicles, resulting in lost tracking. Nevertheless, participants still regarded StreetNav as a significant enhancement to their typical navigation experiences, expressing increased confidence in exploring unfamiliar outdoor environments in the future. "_It would encourage me to do things that I would not usually... It would make me more confident about going out by myself._" -**P4** Participants also appreciated StreetNav's ability to identify them in near real-time: "_What I found very interesting about the connection part is how quickly it identifies where I am; as soon as I waved my hand, it senses me._" -**P3** Participants also provided suggestions for improving StreetNav. Some participants wanted a hands-free version that would allow them to hold a white cane in one hand while keeping the other free. Additionally, while they found the gesture of waving hands for connecting with the system socially acceptable, they acknowledged that it might be perceived as somewhat awkward by others on the street. "_[Waving a hand] may seem kind of weird to people who don't understand what is going on. But for me personally, I have no issue._" -**P3** While the gesture-based localization was generally accurate, there were instances where other pedestrians were incorrectly detected as the study participant. On average, the gesture-based localization worked accurately over 90% of the time.
### How Individual Experiences Influenced Participants' Preferences
Throughout the study, participants offered feedback based on their unique backgrounds. We observed distinct patterns in their preferences, affected by their (i) onset of vision impairment, (ii) level of vision impairment, and (iii) familiarity with assistive technology.
_Onset of vision impairment._ Participants with early-onset blindness preferred nuanced, concise feedback with an emphasis on environmental awareness. They used the system as an additional data point without complete reliance. In contrast, participants with late-onset blindness trusted the system more and relied heavily on its feedback. _Level of vision impairment._ Totally blind participants appreciated the veering feedback, while low-vision users, who had more visual information, relied on their senses and did not need as much assistance with veering. Low-vision participants appreciated the street-crossing feedback rather than trying to glean information from pedestrian signals across the street. Totally blind participants relied more on listening for parallel traffic--their usual mode of operation--and used StreetNav's street-crossing feedback as confirmation. _Familiarity with assistive technology (AT)._ We noticed that participants who commonly use AT for navigation quickly adapted to StreetNav, while those with less experience hesitated to trust StreetNav's feedback and had a slightly steeper learning curve. Still, all participants mentioned feeling more comfortable with StreetNav as the study progressed. Both groups also expressed increased confidence in exploring new areas with StreetNav.
## 6. Discussion
Our goal with StreetNav was to explore the idea of repurposing street cameras to support precise outdoor navigation for BLV pedestrians. We reflect upon our findings to discuss how street camera-based systems might be deployed at scale, the implications of a street camera-based navigation approach for existing GPS-based navigation systems, and the affordances enabled by precise, real-time outdoor navigation assistance. _Deploying street camera-based navigation systems at scale._ StreetNav demonstrates that street cameras have the potential to be repurposed for supporting precise outdoor navigation for BLV pedestrians. Our study results show that street camera-based navigation systems can guide users to their destination more precisely and prevent them from veering off course (Figure 10). Our results also show that street camera-based systems can support real-time, scene-aware assistance by notifying users of nearby obstacles (Figure 13) and giving information about when to cross streets (Figure 14). These benefits of a street camera-based approach over existing GPS-based systems underscore the need for deploying such systems at scale. Although our system, StreetNav, was deployed at a single intersection, we learned insights on potential challenges and considerations that must be addressed to deploy street camera-based systems at scale. Several internal and external factors need to be considered before street cameras can be effectively leveraged to support blind navigation at scale. External factors, including lighting conditions and occlusions on the street, may affect system performance. For instance, we noticed that StreetNav's ability to track pedestrians was severely affected in low-light conditions (e.g., at night) and by occlusions due to the presence of large vehicles (e.g., trucks, buses) and the installation of scaffolding for construction. Such challenges affect the reliability of street camera-based systems and may limit their operational hours. Internal factors, including the positioning of cameras, their field of view, and variability in resolution, may affect the extent to which such systems can promise precise navigation assistance.
For instance, the visibility of the pedestrian signals from the camera feed could affect how much such systems can assist users with crossing streets. With StreetNav, we observed a drop in tracking accuracy as individuals and objects moved further away from the camera. Therefore, deploying street camera-based systems at scale would require future work to investigate the extent to which both external factors (e.g., lighting, occlusions) and internal factors (e.g., camera resolution) affect system performance and reliability. To address some of the technical limitations around tracking performance and field-of-view limitations, future research could explore integrating multiple cameras at various elevations and viewing angles. Prior work on robot navigation has explored the fusion of multiple cameras to improve tracking performance [(10; 48; 51)]. Future work could also explore an ecosystem of accessible street cameras that can share information to automatically manage hand-offs across street intersections, providing users with a seamless experience beyond a single street intersection. Such ecosystems, which span beyond one intersection to a whole district or city, could enable new affordances, such as automatically sensing pedestrian traffic to inform traffic signals and vice versa [(35)]. _Implications for GPS-based navigation systems._ When cameras are available and conditions align favorably, street camera-based systems offer BLV individuals a valuable source of fine-grained, high-precision information, significantly enhancing their navigational experience and environmental awareness. These capabilities are currently beyond the reach of conventional GPS-based systems. All eight study participants unanimously chose StreetNav over BlindSquare as their preferred navigation system due to its precise, scene-aware navigation assistance (Section 5.3). However, it is important to acknowledge that street camera-based systems have their own set of limitations. The widespread availability of street cameras is not yet a reality, and ideal conditions may not always be met for their effective use. In contrast, GPS-based systems, while lacking in precision and environmental awareness, are universally accessible and resilient in varying conditions, including low light. A harmonious integration of these two approaches is a promising solution: users can tap into street-camera information when conditions permit, seamlessly transitioning to GPS data when necessary. This can be facilitated through sensor fusion or information hand-offs, creating a synergy that ensures a smooth and reliable navigational experience. Future approaches could explore how these two systems can effectively complement each other, addressing their respective limitations and enhancing overall performance. _Affordances of precise outdoor navigation assistance for BLV people._ Previous research in indoor navigation has demonstrated the advantages of accurately pinpointing users' locations [(2; 34; 57)] and providing scene-aware navigational information [(33; 25)]. However, achieving such precision has remained a challenge in outdoor environments, primarily due to the limited accuracy of GPS technology [(23)]. StreetNav's approach of leveraging existing street cameras demonstrates that precise outdoor navigation support for BLV pedestrians is possible. Our study reveals the advantages of precise, fine-grained navigation for BLV individuals.
These benefits include a substantial reduction in instances of veering and routing errors, such as deviation from the shortest path or missing intended destinations, as well as augmented environmental awareness. StreetNav offered our participants a glimpse into the potential of precise outdoor navigation. Several participants desired even greater precision, including the ability to discern the exact number of steps remaining before reaching a crosswalk's curb. Future research could explore how best to deliver such granular feedback to BLV users, alongside the necessary technological advancements needed to achieve this level of precision. These advantages, as our findings suggest, extend beyond merely improving navigation performance. Participants shared insights into how precise navigation could enhance their independence when navigating outdoors. It could empower BLV people to venture outdoors more frequently, unlocking new travel opportunities, as exemplified by P3's newfound confidence in using public transportation with StreetNav-like systems: "_I don't really use the city buses, except if I'm with somebody, but [StreetNav] would make me want to get up, go outside, and walk to the bus stop._" -**P3** This newfound confidence is particularly noteworthy, considering the unpredictable nature of outdoor environments. Future research could explore new affordances that street camera-based systems can enable for people in general. ## 7. Limitations Our work revealed valuable insights into the benefits and effectiveness of a new approach that uses existing street cameras for outdoor navigation assistance. At the same time, we acknowledge that our work has several limitations. StreetNav was developed using a single street camera and tested at a single street intersection. This means that there might be other technical hurdles and design considerations we didn't encounter due to the constraints of this setup. Future research could expand upon our design and investigate how street camera-based systems can adapt to different environments and challenges. Furthermore, to ensure the safety of participants and to fit the user study within a 120-minute timeframe, we designed the study routes to be less complex and less dangerous. Real-world outdoor environments can vary significantly from one part of a city, state, or country to another. Our study location may not fully capture the diversity of scenarios BLV individuals encounter when navigating outdoors. Lastly, it's important to note that our study sample consisted of only eight BLV individuals. While their insights are valuable, their preferences for outdoor navigation may not represent the broader BLV community's perspectives. StreetNav was developed in response
2308.16907
Yukawa-Lorentz Symmetry in Non-Hermitian Dirac Materials
Lorentz spacetime symmetry represents a unifying feature of the fundamental forces, typically manifest at sufficiently high energies, while in quantum materials it emerges in the deep low-energy regime. However, its fate in quantum materials coupled to an environment has thus far remained unexplored. We here introduce a general framework of constructing symmetry-protected Lorentz invariant non-Hermitian (NH) Dirac semimetals (DSMs), realized by adding masslike anti-Hermitian Dirac operators to their Hermitian counterparts. Such NH DSMs feature purely real or imaginary isotropic linear band dispersion, yielding a vanishing density of states. Dynamic mass orderings in NH DSMs thus take place for strong Hubbardlike local interactions through a quantum phase transition, hosting a non-Fermi liquid, beyond which the system becomes an insulator. We show that depending on the internal Clifford algebra between the NH Dirac operator and candidate mass order-parameter, the resulting quantum-critical fluid either remains coupled with the environment or recovers full Hermiticity by decoupling from the bath, while always enjoying an emergent Yukawa-Lorentz symmetry in terms of a unique terminal velocity. We showcase the competition between such mass orderings, their hallmarks on quasiparticle spectra in the ordered phases, and the relevance of our findings for correlated designer NH Dirac materials.
Vladimir Juricic, Bitan Roy
2023-08-31T17:59:27Z
http://arxiv.org/abs/2308.16907v2
# Yukawa-Lorentz Symmetry in Non-Hermitian Dirac Materials ###### Abstract We propose a general construction of symmetry protected Lorentz invariant non-Hermitian (NH) Dirac semimetals (DSMs), realized by adding masslike anti-Hermitian Dirac operators to their Hermitian counterparts. They feature purely real or imaginary isotropic linear band dispersion, yielding a vanishing density of states. Dynamic mass orderings in NH DSMs thus take place for strong Hubbardlike local interactions through a quantum phase transition where nodal NH Dirac quasiparticles are strongly coupled with bosonic order-parameter fluctuations, hosting a non-Fermi liquid, beyond which the system becomes an insulator. Depending on the internal Clifford algebra between the NH Dirac operator and the candidate mass order-parameter, the resulting quantum critical fluid either remains coupled with the environment or recovers full Hermiticity by decoupling from the bath, while always enjoying an emergent Yukawa-Lorentz symmetry in terms of a unique velocity. We showcase the competition between such mass orderings, their hallmarks on quasiparticle spectra in the ordered phases, and the relevance of our findings in correlated designer NH Dirac materials. _Introduction._ From classical and quantum electrodynamics involving photons to quantum chromodynamics encompassing gluons, field theoretic formulations of fundamental forces rest on the unifying bedrock of the Lorentz symmetry [1; 2], typically realized at sufficiently high energies [3; 4; 5; 6; 7; 8]. On the other hand, in Dirac crystals, realized in a number of quantum materials [9; 10; 11], although such a symmetry may not necessarily be present at the microscopic level, it arises as an emergent phenomenon in the deep infrared regime through boson mediated inter-particle interactions [12; 13; 14; 15; 16; 17]. The bosonic degrees of freedom can be vectorlike particles such as helicity-1 photons which interact with Dirac fermions and charged Cooper pairs, or spinless scalar order-parameter fluctuations at the brink of dynamic mass generation for Dirac quasiparticles via spontaneous symmetry breaking. Irrespective of these microscopic scenarios, the emergent Lorentz symmetry always manifests a _unique_ velocity in the medium, generically tagged as the 'speed of light', not necessarily \(c\). The necessity and elegance of this commonly occurring space-time symmetry often allow us to take it for granted. However, its fate when quantum materials interact with the environment thus far remains an untouched territory, which we here set out to explore. In the Hamiltonian language, system-to-environment couplings can be modeled by non-Hermitian (NH) operators. But a one-to-one correspondence between them is still missing. Despite this limitation, we showcase a possible emergent Lorentz symmetry in nonspatial symmetry protected NH Dirac semimetals (DSMs), captured by Lorentz invariant NH Dirac operators possessing purely real or imaginary eigenvalue spectra. When such an NH DSM arrives at the shore of dynamic mass generation triggered by Hubbardlike finite-range Coulomb repulsion, mediating boson-fermion Yukawa interactions, the resulting strongly coupled incoherent non-Fermi liquid always features a _unique_ terminal velocity in the deep infrared regime. Depending on the pattern of the incipient Dirac insulation, the system achieves the Yukawa-Lorentz symmetry either by maintaining its coupling with the environment or by decoupling itself from the bath.
These outcomes possibly suggest a generic Lorentz symmetry in NH Dirac materials, despite the variety of their interactions with the environment. Finally, we unfold the quantum (multi-)critical phenomena in correlated NH DSMs. _Minimal model._ We set out to construct a minimal effective Hamiltonianlike NH Dirac operator (\(H_{\rm NH}\)), describing a collection of linearly dispersing gapless quasiparticles in \(d\) spatial dimensions coupled to the environment, such that \(H_{\rm NH}\) possesses either purely real or purely imaginary eigenvalues. We begin with the standard Dirac Hamiltonian of the form \(H=\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}H_{\rm D}\Psi_{\mathbf{k}}\), where \[H_{\rm D}=v_{\rm H}\,\sum_{j=1}^{d}\Gamma_{j}k_{j}\equiv v_{\rm H}h_{0}, \tag{1}\] \(v_{\rm H}\) bears the dimension of the Fermi velocity, the Hermitian \(\Gamma\) matrices satisfy the anticommuting Clifford algebra \(\{\Gamma_{j},\Gamma_{k}\}=2\delta_{jk}\) for \(j,k=1,\cdots,d\), and \(\mathbf{k}\) is momentum, yielding a linear energy-momentum relation \(E_{\rm H}(\mathbf{k})=\pm v_{\rm H}|\mathbf{k}|\). The internal structure of the Dirac spinor \(\Psi_{\mathbf{k}}\) depends on the microscopic details of the system. For a minimal four-component Dirac spinor, the maximal number of mutually anticommuting Hermitian matrices is five, out of which three (two) can be chosen to be purely real (imaginary). Although our construction is applicable in arbitrary dimensions, here we primarily concentrate on \(d=2\). Then, without any loss of generality, we choose \(\Gamma_{1}\) and \(\Gamma_{2}\) to be purely imaginary. Possible mass terms, \(\Psi_{\mathbf{k}}^{\dagger}M\Psi_{\mathbf{k}}\), producing isotropically gapped ordered ground states, are then represented by the Hermitian matrices (\(M\)) that anticommute with the Dirac Hamiltonian in Eq. (1), namely \(\{M,h_{0}\}=0\) with \(M^{2}=1\). In \(d=2\), there are four such mass matrices for a four-component Dirac system, \(M\in\{M_{1},M_{2},M_{3},\Gamma_{12}\}\), where \(\Gamma_{jk}=i\Gamma_{j}\Gamma_{k}\). While the \(M_{j}\)s are purely real for \(j=1,2,3\), each breaking the SU(2) chiral symmetry of \(H_{\rm D}\) generated by \(\{M_{12},M_{23},M_{31}\}\), with \(M_{jk}=iM_{j}M_{k}\), the purely imaginary mass matrix \(\Gamma_{12}\) transforms as a scalar under the chiral rotation. It breaks the time-reversal symmetry under \(\mathcal{K}\) (complex conjugation). The crucial observation is that the product of a mass matrix and the Hamiltonian \(h_{0}\) is _anti-Hermitian_, namely \((Mh_{0})^{\dagger}=-Mh_{0}\). Therefore, we define the NH Dirac operator as a minimal extension of \(H_{\rm D}\) [Eq. (1)] by including such a masslike anti-Hermitian term, leading to \[H_{\rm NH}=(v_{\rm H}+v_{\rm NH}M)h_{0}. \tag{2}\] The real parameter \(v_{\rm NH}\), also bearing the dimension of the Fermi velocity, quantifies the strength of the system-to-environment coupling. The spectrum of the NH Dirac operator, \(E_{\rm NH}(\mathbf{k})=\pm\sqrt{v_{\rm H}^{2}-v_{\rm NH}^{2}}\,|\mathbf{k}|\equiv\pm v_{\rm F}|\mathbf{k}|\), is purely real (imaginary) for \(v_{\rm H}>v_{\rm NH}\) (\(v_{\rm H}<v_{\rm NH}\)), where \(v_{\rm F}=\sqrt{v_{\rm H}^{2}-v_{\rm NH}^{2}}\) is the effective Fermi velocity of NH Dirac fermions. Hereafter, we consider the case with real energy eigenvalues, unless otherwise stated.
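The algebra above is easy to verify numerically. The minimal sketch below builds \(H_{\rm NH}\) in one possible \(4\times 4\) representation; the specific matrices chosen for \(\Gamma_{1}\), \(\Gamma_{2}\), and \(M\) are our illustrative assumptions (the text fixes only the Clifford algebra, not a representation), and the check confirms the purely real spectrum \(\pm v_{\rm F}|\mathbf{k}|\) for \(v_{\rm H}>v_{\rm NH}\).

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# One possible representation with Gamma_1, Gamma_2 purely imaginary
# (an assumption: only the anticommutation algebra is fixed by the text).
G1, G2 = np.kron(sy, s0), np.kron(sx, sy)
M = np.kron(sz, s0)  # a real Hermitian mass: {M, G1} = {M, G2} = 0, M^2 = 1

def H_NH(kx, ky, vH=1.0, vNH=0.6):
    """NH Dirac operator of Eq. (2): (vH + vNH*M) h0, with h0 = G1*kx + G2*ky."""
    return (vH * np.eye(4) + vNH * M) @ (G1 * kx + G2 * ky)

kx, ky = 0.3, -0.4
k = np.hypot(kx, ky)
ev = np.sort_complex(np.linalg.eigvals(H_NH(kx, ky)))
vF = np.sqrt(1.0**2 - 0.6**2)
print(ev)  # approximately [-vF*k, -vF*k, +vF*k, +vF*k], purely real
print(np.allclose(ev.imag, 0.0), np.allclose(np.abs(ev.real), vF * k))
```

The check works because \(H_{\rm NH}^{2}=(v_{\rm H}+v_{\rm NH}M)(v_{\rm H}-v_{\rm NH}M)h_{0}^{2}=(v_{\rm H}^{2}-v_{\rm NH}^{2})k^{2}\), so the eigenvalues square to a positive number whenever \(v_{\rm H}>v_{\rm NH}\).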
The form of the NH Dirac operator (\(H_{\rm NH}\)) is restricted by four nonspatial discrete unitary and antiunitary symmetries, among them time-reversal, particle-hole, and pseudo-Hermiticity [18], with the corresponding representations given in Table 1 for a specific choice of \(M=M_{1}\). As explicitly shown, once the form of \(H_{\rm NH}\) is fixed (for a given \(M\)), none of the four _constant_ mass terms is invariant under all of these symmetries, and therefore cannot be added to \(H_{\rm NH}\) without breaking any of them. Hence, the nonspatial symmetries protect NH gapless Dirac fermions, irrespective of the specific choice of \(M\) and the dimensionality of the system (\(d\)). See Sec. S1 of the Supplemental Material (SM) [19]. _Scaling._ The form of \(H_{\rm NH}\) is manifestly _Lorentz invariant_, with the dynamical exponent \(z=1\) measuring the relative scaling between the energy (\(E_{\rm NH}\)) and momentum (\(\mathbf{k}\)). This, in turn, implies that the density of states vanishes in a power-law fashion, \(\rho(E)\sim|E|^{d-1}/v_{\rm F}^{d}\). As a consequence, a weak local (Hubbardlike) short-range interaction is irrelevant for \(d>1\), and the concomitant quantum phase transition (QPT) into an ordered phase takes place through a strongly coupled quantum critical point (QCP) with its critical behavior described by a Gross-Neveu-Yukawa (GNY) theory, about which more in a moment. Nonetheless, a reduced effective Fermi velocity \(v_{\rm F}<v_{\rm H}\) is expected to enhance the mass ordering propensity in an NH DSM in comparison to its counterpart in Hermitian Dirac materials, which we show shortly. To gain further insight into the nature of such an NH DSM, next we analyze its response to an external electromagnetic field in terms of the linear longitudinal optical conductivity at zero temperature and finite frequency (\(\omega\)) within the Kubo formalism. The component of the current operator in the \(l^{\rm th}\) spatial direction is \(j_{l}=(v_{\rm H}+v_{\rm NH}M)\Gamma_{l}\). The requisite polarization bubble reads \(\delta\Pi_{lm}^{(d)}(i\Omega)=\Pi_{lm}^{(d)}(i\Omega)-\Pi_{lm}^{(d)}(0)\), where \[\Pi_{lm}^{(d)}(i\Omega)=-{\rm Tr}\int\frac{d\omega\,d^{d}\mathbf{k}}{(2\pi)^{d+1}}\left[j_{l}G_{\rm F}(i\Omega_{+},\mathbf{k})j_{m}G_{\rm F}(i\omega,\mathbf{k})\right], \tag{3}\] \(\Omega_{+}=\Omega+\omega\), and \(G_{\rm F}(i\omega,\mathbf{k})=(i\omega+H_{\rm NH})/(\omega^{2}+v_{\rm F}^{2}k^{2})\) is the fermionic Green's function. After the analytic continuation, \(i\Omega\rightarrow\omega+i\delta\) with \(\delta>0\), and using the Kubo formula, we obtain the optical conductivity \[\sigma_{lm}^{(2)}(\omega)=N_{f}\frac{e^{2}}{h}\frac{\pi}{4}\delta_{lm}\;\;{\rm and}\;\;\sigma_{lm}^{(3)}(\omega)=N_{f}\frac{e^{2}}{h}\frac{\omega}{6v_{\rm F}}\delta_{lm}, \tag{4}\] in \(d=2\) and \(d=3\), respectively. Here \(N_{f}\) is the number of four-component fermion flavors. See Sec. S2 of the SM [19]. Therefore, NH DSMs and their Hermitian counterparts share the same value of the optical conductivity [20; 21; 22], with the Fermi velocity in \(d=3\) replaced by the effective one for NH Dirac fermions (\(v_{\rm F}\)). This may imply that the associated conformal field theories feature the same central charge.
_Mass orders._ The order parameter (OP) of a uniform Dirac mass that isotropically gaps out Dirac fermions can be expressed as an \(n\)-component vector \(\Phi_{j}=\langle\Psi_{\mathbf{k}}^{\dagger}N_{j}\Psi_{\mathbf{k}}\rangle\), such that \(\{H_{\rm D},N_{j}\}=0\) and \(\{N_{j},N_{k}\}=2\delta_{jk}\) for \(j,k=1,\cdots,n\). But the appearance of a mass matrix (\(M\)) in \(H_{\rm NH}\) fragments the mass OPs, and they can be classified according to the canonical (anti)commutation relations between the \(N_{j}\)s and \(M\): (a) commuting class mass (CCM), for which \([N_{j},M]=0\) for all \(j\); (b) anticommuting class mass (ACM), with \(\{N_{j},M\}=0\) for all \(j\); and (c) mixed class mass (MCM), for which \([N_{j},M]=0\) for \(n_{1}\) of the components and \(\{N_{j},M\}=0\) for the remaining \(n_{2}\) components, with \(n_{1}+n_{2}=n\). Hereafter, we mainly focus on the former two classes. The ordering tendencies toward the nucleation of these two classes of mass ordering, revealing the impact of the NH parameter (\(v_{\rm NH}\)), can be estimated from their corresponding bare mean-field susceptibilities at zero external frequency and momentum, respectively given by \[\chi_{1}=N_{f}\,f(d)\,\frac{\Lambda^{d-1}}{v_{\rm F}}\;\;{\rm and}\;\;\chi_{2}=\chi_{1}\left(1+\frac{v_{\rm NH}^{2}}{v_{\rm F}^{2}}\right), \tag{5}\] where \(f(d)=2S_{d}/[(d-1)(2\pi)^{d}]\), \(S_{d}=2\pi^{d/2}/\Gamma(d/2)\), and \(\Lambda\) is the ultraviolet momentum cutoff up to which the energy-momentum relation remains linear. See Sec. S3 of the SM [19]. As the effective Fermi velocity in an NH DSM (\(v_{\rm F}\)) is smaller than its counterpart in Hermitian Dirac materials (\(v_{\rm H}\)), the bare mean-field susceptibilities are larger in the former system. Given that the critical coupling constant (\(u^{\star}\)) for any mass ordering is _inversely_ proportional to the bare mean-field susceptibility, NH DSMs are conducive to mass formation at weaker interactions, stemming from their enhanced density of states. However, the competition between CCMs and ACMs requires knowledge of their associated quantum critical behaviors, captured from the renormalization group (RG) flows of the velocity parameters \(v_{\rm H}\), \(v_{\rm NH}\), and \(v_{\rm F}\), which we discuss next. This analysis will also shed light on the emergent multi-critical behavior near the MCM condensation.
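The CCM/ACM bookkeeping can be made concrete in the same assumed representation as in the sketch above: the snippet verifies that each candidate mass anticommutes with \(h_{0}\) and then classifies it by its (anti)commutation with the chosen \(M\). The candidate matrices are again our illustrative assumptions.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

G1, G2 = kron(sy, s0), kron(sx, sy)        # same assumed representation as above
masses = {"M1": kron(sz, s0), "M2": kron(sx, sx),
          "M3": kron(sx, sz), "G12": kron(sz, sy)}   # G12 = i*G1*G2
M = masses["M1"]                           # the mass chosen to enter H_NH

for name, N in masses.items():
    # every candidate must anticommute with h0, i.e., with both G1 and G2
    assert np.allclose(N @ G1 + G1 @ N, 0) and np.allclose(N @ G2 + G2 @ N, 0)
    ccm = np.allclose(N @ M - M @ N, 0)
    acm = np.allclose(N @ M + M @ N, 0)
    print(f"{name}: CCM={ccm}, ACM={acm}")
# With M = M1: M1 and G12 fall in the commuting class (CCM),
# while M2 and M3 fall in the anticommuting class (ACM).
```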
_Quantum critical theory._ The dynamics of the bosonic OP fluctuations in a GNY quantum critical theory, describing a strong-coupling instability of NH DSMs, is captured by the \(\Phi^{4}\) theory, with the bosonic correlator given by \(G_{\rm B}(i\omega,\mathbf{k})=[\omega^{2}+v_{\rm B}^{2}k^{2}]^{-1}\), where \(v_{\rm B}\) is the velocity of the OP fluctuations. The dynamics of fermions is governed by the NH Dirac operator in Eq. (2). These two degrees of freedom are coupled through a Yukawa vertex, entering the imaginary time (\(\tau\)) action as \[S_{Y}=g\int d\tau\int d^{d}\mathbf{x}\sum_{j=1}^{n}\,\Phi_{j}(\tau,\mathbf{x})\,\Psi^{\dagger}(\tau,\mathbf{x})N_{j}\Psi(\tau,\mathbf{x}). \tag{6}\] At the upper critical dimension of three spatial dimensions, both the Yukawa (\(g\)) and the \(\Phi^{4}\) coupling (\(\lambda\)) are _marginal_ [23]. We, therefore, use the distance from the upper critical dimension, \(\varepsilon=3-d\), as the expansion parameter to capture the low-energy phenomena close to the GNY QCP. In particular, we are interested in the fate of an emergent Yukawa-Lorentz symmetry close to such an RG fixed point, since the fermionic and bosonic velocities are generically different at the lattice (ultraviolet) scale. _Yukawa-Lorentz symmetry._ To this end, we compute the fermionic [Fig. 1(a)] and bosonic [Fig. 1(b)] self-energy diagrams to the leading order in the \(\varepsilon\) expansion. Irrespective of the nature of the mass ordering (CCM or ACM), the RG flow of the Hermitian component of the Fermi velocity (\(v_{\rm H}\)) takes the form \[\beta_{v_{\rm H}}=-\frac{4g^{2}n}{3v_{\rm B}(v_{\rm F}+v_{\rm B})^{2}}\left(1-\frac{v_{\rm B}}{v_{\rm F}}\right)v_{\rm H}, \tag{7}\] after rescaling the Yukawa coupling according to \(g^{2}\to g^{2}/(8\pi^{2})\), where \(\beta_{Q}\equiv dQ/d\ln b\) and \(b\) is the RG time. The flow equations for the remaining two velocities, when the order-parameter matrix commutes with \(M\) (CCM), are \[\beta_{v_{\rm NH}} = -\frac{4g^{2}n}{3v_{\rm B}(v_{\rm F}+v_{\rm B})^{2}}\left(1-\frac{v_{\rm B}}{v_{\rm F}}\right)v_{\rm NH} \tag{8}\] \[{\rm and}\;\;\beta_{v_{\rm B}} = -N_{f}\,\frac{g^{2}n}{2v_{\rm F}^{3}}\,\left(1-\frac{v_{\rm F}^{2}}{v_{\rm B}^{2}}\right)v_{\rm B}. \tag{9}\] Figure 1: Self-energy diagrams for (a) the Dirac fermion (solid lines) and (b) the bosonic order parameter field (wavy lines). Their vertex corresponds to the Yukawa coupling in Eq. (6). These RG flow equations [Eqs. (7)-(9)] imply that at the GNY QCP with \(g^{2}\sim\varepsilon\), the terminal velocities are such that \(v_{\rm F}=\sqrt{v_{\rm H}^{2}-v_{\rm NH}^{2}}=v_{\rm B}\), with all three velocities being _nonzero_, independent of their initial values. See Sec. S4 of the SM [19]. We also confirm this outcome by numerically solving these flow equations. See Fig. 2(a). Therefore, a new fixed point with an enlarged symmetry emerges, at which the system remains coupled to the environment and the effective Fermi velocity of NH Dirac fermions (\(v_{\rm F}\)) is equal to the bosonic OP velocity (\(v_{\rm B}\)), with nonzero terminal values for \(v_{\rm H}\), \(v_{\rm NH}\), and \(v_{\rm B}\). We name it _non-Hermitian Yukawa-Lorentz symmetry_. On the other hand, when an NH DSM arrives at the brink of the ACM condensation, the flow equations for \(v_{\rm NH}\) and \(v_{\rm B}\) take the forms (see Sec. S4 of the SM [19]) \[\beta_{v_{\rm NH}} = -\frac{8g^{2}n}{3v_{\rm B}(v_{\rm F}+v_{\rm B})^{2}}\left(1+\frac{v_{\rm B}}{2v_{\rm F}}\right)v_{\rm NH}, \tag{10}\] \[{\rm and}\;\;\beta_{v_{\rm B}} = -N_{f}\,\frac{g^{2}n}{2v_{\rm F}^{3}}\left(\frac{v_{\rm H}^{2}}{v_{\rm F}^{2}}-\frac{v_{\rm H}^{2}}{v_{\rm B}^{2}}-\frac{2v_{\rm NH}^{2}}{3v_{\rm B}^{2}}\right)v_{\rm B}, \tag{11}\] respectively. These two flow equations, along with Eq. (7), imply that near the ACM class GNY QCP (\(g^{2}\sim\varepsilon\)), the system recovers Hermiticity by decoupling itself from the environment, and a conventional Yukawa-Lorentz symmetry emerges with \(v_{\rm NH}=0\) and \(v_{\rm F}=v_{\rm H}=v_{\rm B}\), irrespective of their bare values. We further confirm this outcome by numerically solving the flow equations. See Fig. 2(b).
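The approach to the terminal velocities can be checked by integrating the flow equations directly. The sketch below performs a simple Euler integration of Eqs. (7)-(11) at fixed \(g^{2}\) (holding the Yukawa coupling constant is an illustrative simplification of the flow near the QCP, where \(g^{2}\sim\varepsilon\)); the CCM case approaches \(v_{\rm F}=v_{\rm B}\) with nonzero \(v_{\rm NH}\), while the ACM case flows to \(v_{\rm NH}=0\) and \(v_{\rm F}=v_{\rm H}=v_{\rm B}\). All initial values are arbitrary illustrative choices.

```python
import numpy as np

def flow(ccm=True, vH=1.0, vNH=0.5, vB=0.4, g2=0.1, n=1, Nf=2,
         dl=1e-3, steps=200_000):
    # Euler integration of the velocity beta functions [Eqs. (7)-(11)] in ln b.
    for _ in range(steps):
        vF = np.sqrt(vH**2 - vNH**2)
        pre = 4 * g2 * n / (3 * vB * (vF + vB)**2)
        bH = -pre * (1 - vB / vF) * vH                       # Eq. (7)
        if ccm:
            bNH = -pre * (1 - vB / vF) * vNH                 # Eq. (8)
            bB = -Nf * g2 * n / (2 * vF**3) * (1 - vF**2 / vB**2) * vB  # Eq. (9)
        else:
            bNH = -2 * pre * (1 + vB / (2 * vF)) * vNH       # Eq. (10)
            bB = -Nf * g2 * n / (2 * vF**3) * (
                vH**2 / vF**2 - vH**2 / vB**2 - 2 * vNH**2 / (3 * vB**2)) * vB  # Eq. (11)
        vH, vNH, vB = vH + dl * bH, vNH + dl * bNH, vB + dl * bB
    return vH, vNH, vB, np.sqrt(vH**2 - vNH**2)

print("CCM (vH, vNH, vB, vF):", flow(ccm=True))   # vF -> vB, vNH stays nonzero
print("ACM (vH, vNH, vB, vF):", flow(ccm=False))  # vNH -> 0, vF -> vH -> vB
```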
_Mean-field theory._ Guided by the emergent quantum critical theory, we now compare the tendencies between CCM and ACM orderings. At the corresponding GNY QCPs, their renormalized susceptibilities are respectively \[\chi_{1}^{\star}=N_{f}\,f(d)\,\frac{\Lambda^{d-1}}{v_{\rm F}}\;\;\mbox{and}\;\;\chi_{2}^{\star}=N_{f}\,f(d)\,\frac{\Lambda^{d-1}}{v_{\rm H}}, \tag{12}\] obtained from Eq. (5) by replacing the various velocities with their terminal ones. As \(v_{\rm F}<v_{\rm H}\) in NH DSMs, \(\chi_{1}^{\star}>\chi_{2}^{\star}\), and thus \(u_{1}^{\star}<u_{2}^{\star}\), where \(u_{1}^{\star}\) (\(u_{2}^{\star}\)) is the critical strength of the coupling constant for the CCM (ACM) ordering. This outcome is particularly important when the mass matrix \(M\), appearing in \(H_{\rm NH}\) [Eq. (2)], is a component of a vector mass order of a Hermitian Dirac system. Any such choice of \(M\) immediately fragments the vector mass order into CCM and ACM components, and our susceptibility analysis suggests that the OP corresponding to the CCM enjoys a stronger propensity toward nucleation. This prediction can be tested via the quasiparticle spectra of NH Dirac fermions inside the mass ordered phases, described by an effective NH single-particle operator \[H_{\rm NH}^{\rm MF}=(v_{\rm H}+v_{\rm NH}M)h_{0}+\sum_{j}\Delta_{j}N_{j}, \tag{13}\] where the \(N_{j}\)s are the Hermitian mass matrices, satisfying \(\{N_{j},h_{0}\}=0\) for all \(j\), and \(\Delta_{j}\) is the OP amplitude. The excitation spectrum of \(H_{\rm NH}^{\rm MF}\) crucially depends on the nature of the mass ordering. Namely, (a) for \([N_{j},M]=0\) (CCM), the energy eigenvalues are either purely real (\(v_{\rm H}>v_{\rm NH}\)) or complex (\(v_{\rm H}<v_{\rm NH}\)), whereas (b) for \(\{N_{j},M\}=0\) (ACM) the excitation spectrum is _complex_ for any \(v_{\rm H}\) and \(v_{\rm NH}\). Next we exemplify these outcomes by focusing on a specific microscopic scenario. _NH Hubbard model._ Consider a collection of massless Dirac fermions on graphene's honeycomb lattice, for which the Néel antiferromagnetic (AFM) order corresponds to the three-component vector mass that breaks the O(3) spin rotational symmetry, favored by a strong on-site Hubbard repulsion at half-filling [24]. We construct an NH Dirac operator by choosing the easy \(z\)-axis component of the AFM order as \(M\), which then naturally becomes a CCM order. By contrast, the two planar or easy-plane components of the AFM order fall within the category of ACM. See Sec. S5 of the SM [19]. Our susceptibility calculation predicts that such an NH Hubbard model should prefer nucleation of an Ising symmetry breaking easy-axis AFM order over the XY symmetry breaking easy-plane one, and the quasiparticle spectra inside the ordered phase must be purely _real_ unless \(v_{\rm H}<v_{\rm NH}\) at the bare level. These predictions can serve as litmus tests for our quantum critical theory of correlated NH Dirac materials in quantum Monte Carlo simulations and exact numerical diagonalizations [25]. _NH criticality._ Finally, we compute the critical exponents near the GNY QCPs associated with the \(n_{1}\)-component CCM and \(n_{2}\)-component ACM orderings. The RG flow equations for the corresponding Yukawa couplings respectively read as (see Sec.
S6 of the SM [19]) \[\beta_{g_{1}^{2}} = \varepsilon g_{1}^{2}-(2N_{f}+4-n_{1})g_{1}^{4}+n_{2}g_{1}^{2}g_{2}^{2}\delta_{v_{\rm NH}^{0},0}, \tag{14}\] \[\beta_{g_{2}^{2}} = \varepsilon g_{2}^{2}-(2N_{f}+4-n_{2})g_{2}^{4}+n_{1}g_{1}^{2}g_{2}^{2}\delta_{v_{\rm NH}^{0},0}, \tag{15}\] after rescaling the coupling constants according to \(g_{1}^{2}/(8\pi^{2}v_{\rm F}^{2})\to g_{1}^{2}\) and \(g_{2}^{2}/(8\pi^{2}v_{\rm H}^{2})\to g_{2}^{2}\). The Yukawa fixed points at \(g_{1,*}^{2}=\varepsilon/[2N_{f}+4-n_{1}]\) [red dot in Fig. 3(a)] and \(g_{2,*}^{2}=\varepsilon/[2N_{f}+4-n_{2}]\) [blue dot in Fig. 3(a)] respectively control the continuous QPTs into the CCM and ACM orders. Figure 3: Schematic RG flow in the Yukawa plane of (a) non-Hermitian and (b) Hermitian Dirac systems [Eqs. (14) and (15)]. Both \(g_{1}^{2}\) and \(g_{2}^{2}\) are measured in units of \(\varepsilon\). Colored dots correspond to RG fixed points. The green one in (b), with \(g_{1}^{2}=g_{2}^{2}\) and an enlarged \(O(n)\) symmetry, drives a continuous DSM-MCM QPT, while that in (a), with \(g_{1}^{2}\neq g_{2}^{2}\), controls a first order transition between the CCM and ACM. At the Yukawa fixed points the fermionic and bosonic anomalous dimensions are respectively \[\eta_{\Psi,j}=\frac{n_{j}}{2}g_{j,*}^{2}\;\;\text{and}\;\;\eta_{\Phi,j}=2N_{f}g_{j,*}^{2} \tag{16}\] for \(j=1,2\), and the fermionic Green's functions scale as \(G_{\rm F}^{-1}\sim(\omega^{2}+v_{\rm F}^{2}|\mathbf{k}|^{2})^{1-\eta_{\Psi,1}}\) and \((\omega^{2}+v_{\rm H}^{2}|\mathbf{k}|^{2})^{1-\eta_{\Psi,2}}\), indicating the onset of non-Fermi liquids therein. The last terms in Eqs. (14) and (15) are pertinent in the proximity to an \(n\)-component MCM ordering, where \(n=n_{1}+n_{2}\), and they are nontrivial only when the bare value of the NH Fermi velocity (\(v_{\rm NH}^{0}\)) is zero, corresponding to a Hermitian Dirac system. Then a fully stable quantum multi-critical point with enlarged \(O(n)\) symmetry emerges at \(g_{1,*}^{2}=g_{2,*}^{2}=\varepsilon/[2N_{f}+4-n]\) [26; 27]. These terms, however, do not exist in an NH Dirac system, as the CCM and ACM components of the MCM order possess distinct Yukawa-Lorentz symmetries (hence cannot be coupled). The fixed point located at \((g_{1,*}^{2},g_{2,*}^{2})=(\varepsilon/[2N_{f}+4-n_{1}],\varepsilon/[2N_{f}+4-n_{2}])\) thus controls a direct first-order phase transition between them, tunable by the bosonic masses (\(m_{1}^{2}\) or \(m_{2}^{2}\)), which we do not discuss in any further detail. These two scenarios are shown in Fig. 3. The RG flow equations for the \(\Phi^{4}\) couplings associated with the CCM and ACM orderings assume the forms \[\beta_{\lambda_{j}}=\varepsilon\lambda_{j}-4N_{f}g_{j}^{2}\left[\lambda_{j}-6g_{j}^{2}\right]-\frac{8+n_{j}}{6}\lambda_{j}^{2} \tag{17}\] for \(j=1\) and \(2\), respectively, in terms of the rescaled coupling constants \(\lambda_{1}/(8\pi^{2}v_{\rm F})\to\lambda_{1}\) and \(\lambda_{2}/(8\pi^{2}v_{\rm H})\to\lambda_{2}\). The fixed point values \(\lambda_{j,*}\) (somewhat lengthy expressions) are obtained from the solution of \(\beta_{\lambda_{j}}=0\).
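The fixed-point data reduce to a few closed-form numbers. The sketch below evaluates \(g_{j,*}^{2}\), the anomalous dimensions of Eq. (16), the positive root \(\lambda_{j,*}\) of \(\beta_{\lambda_{j}}=0\), and the correlation length exponent \(\nu_{j}\) quoted in Eq. (19) below; the choices of \(N_{f}\), \(n_{j}\), and \(\varepsilon=1\) are illustrative assumptions.

```python
import numpy as np

def gny_fixed_point(Nf=2, n=1, eps=1.0):
    # Yukawa fixed point of Eqs. (14)/(15) without the mixing term
    g2 = eps / (2 * Nf + 4 - n)
    eta_psi = 0.5 * n * g2            # fermionic anomalous dimension [Eq. (16)]
    eta_phi = 2.0 * Nf * g2           # bosonic anomalous dimension [Eq. (16)]
    # beta_lambda = eps*lam - 4*Nf*g2*(lam - 6*g2) - (8+n)/6 * lam^2 = 0 [Eq. (17)]
    a, b, c = -(8 + n) / 6.0, eps - 4 * Nf * g2, 24.0 * Nf * g2**2
    lam = (-b - np.sqrt(b * b - 4 * a * c)) / (2 * a)   # the positive root
    nu = 0.5 + 0.5 * Nf * g2 + (2 + n) / 24.0 * lam     # Eq. (19) below
    return g2, eta_psi, eta_phi, lam, nu

for n in (1, 2, 3):                   # e.g., Ising-, XY-, Heisenberg-type masses
    g2, ep, eb, lam, nu = gny_fixed_point(n=n)
    print(f"n={n}: g*^2={g2:.3f}, eta_psi={ep:.3f}, eta_phi={eb:.3f}, "
          f"lambda*={lam:.3f}, nu={nu:.3f}")
```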
The RG flow equations for the bosonic mass parameters, which serve as the tuning parameters for the NH DSM to CCM and ACM QPTs, respectively take the forms \[\beta_{m_{j}^{2}}=m_{j}^{2}\left[2-2N_{f}g_{j}^{2}-\frac{2+n_{j}}{6}\lambda_{j}\right] \tag{18}\] for \(j=1\) and \(2\), yielding the correlation length exponents \[\nu_{j}=\frac{1}{2}+\frac{N_{f}}{2}g_{j,*}^{2}+\frac{2+n_{j}}{24}\lambda_{j,*}. \tag{19}\] _Discussion & Outlook._ In this work, we develop a general formalism of constructing symmetry protected Lorentz invariant NH DSMs in any dimension by introducing a masslike anti-Hermitian Dirac operator to its Hermitian counterpart, featuring purely real or imaginary eigenvalue spectra. Their thermodynamic and transport responses closely mimic the ones in conventional Dirac materials, however in terms of an effective Fermi velocity \(v_{\rm F}=\sqrt{v_{\rm H}^{2}-v_{\rm NH}^{2}}\) of NH Dirac fermions, where \(v_{\rm H}\) (\(v_{\rm NH}\)) is the Fermi velocity associated with the Hermitian (anti-Hermitian) component of the NH Dirac operator. We show that when NH DSMs arrive in close proximity to any mass ordering via spontaneous symmetry breaking, the emergent non-Fermi liquid features an NH (for CCM) or Hermitian (for ACM) Yukawa-Lorentz symmetry in terms of a unique terminal velocity for all the participating degrees of freedom. We also determine the non-trivial critical exponents associated with the NH DSM to Dirac insulator QPTs, revealing their non-Gaussian nature in \(d=2\), favored by strong Hubbardlike local interactions. Combining mean-field theory and RG analysis, we address the competition among different classes of mass orderings and argue that the nature of the Dirac insulators can be identified from the quasiparticle spectra inside the ordered phases, which can serve as a direct test of our proposed effective description of correlated NH DSMs in numerical simulations [25] and experiments. In three spatial dimensions, the NH DSM to insulator QPTs are Gaussian in nature, as they take place at \(g_{j}^{2}=\lambda_{j}=0\) since \(\varepsilon=0\) therein, yielding mean-field critical exponents \(\nu=1/2\) and \(\eta_{\Psi}=\eta_{\Phi}=0\). Still, the fermionic and bosonic quasiparticle poles suffer logarithmic corrections, producing a _marginal_ Fermi liquid, which also manifests the emergent Yukawa-Lorentz symmetry. For experimental realizations of our proposal, consider a paradigmatic Dirac system in \(d=2\), graphene [9], where the conventional Dirac Hamiltonian (\(h_{0}\)) results from electronic hopping between the nearest-neighbor sites of the \(A\) and \(B\) sublattices. As such, graphene can accommodate a plethora of Dirac masses [28; 29], and any one of them can be chosen as \(M\) in \(H_{\rm NH}\) [Eq. (2)]. Consider the simplest one, a charge-density wave with a staggered pattern of electronic density between the two sublattices [30]. Then the anti-Hermitian operator \(Mh_{0}\) also corresponds to nearest-neighbor hopping on the honeycomb lattice. But the hopping strength from \(A\) to \(B\) sites is stronger or weaker than that along the opposite direction, yielding non-Hermiticity. Such a simple NH Dirac operator can be engineered in designer electronic and optical honeycomb lattices, on which Hermitian graphene has already been realized [31; 32]. On optical lattices, the hopping imbalance can be generated by tuning the laser strengths in opposite directions, as has been recently demonstrated on a one-dimensional chain [33].
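Before turning to lattice realizations, note that the spectral diagnostic proposed above [Eq. (13)] is simple to check numerically: in the same assumed \(4\times 4\) representation as in the earlier sketches, a CCM gap leaves the spectrum purely real for \(v_{\rm H}>v_{\rm NH}\), whereas an ACM gap renders it complex. The parameter values are illustrative assumptions.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
G1, G2 = np.kron(sy, s0), np.kron(sx, sy)   # same assumed representation as before
M1, M2 = np.kron(sz, s0), np.kron(sx, sx)   # [M1, M1] = 0 (CCM); {M2, M1} = 0 (ACM)

def mf_spectrum(N, Delta=0.3, kx=0.25, ky=0.1, vH=1.0, vNH=0.6):
    # Eq. (13): H = (vH + vNH*M) h0 + Delta*N, with M = M1 and h0 = G1*kx + G2*ky
    h0 = G1 * kx + G2 * ky
    return np.linalg.eigvals((vH * np.eye(4) + vNH * M1) @ h0 + Delta * N)

ev_ccm = mf_spectrum(M1)   # commuting class gap
ev_acm = mf_spectrum(M2)   # anticommuting class gap
print("CCM spectrum purely real:", np.allclose(ev_ccm.imag, 0.0))        # True
print("ACM spectrum complex:    ", bool(np.any(np.abs(ev_acm.imag) > 1e-8)))  # True
```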
On designer electronic lattices, hopping imbalance can be accomplished by placing electronic valves along the nearest-neighbor paths, allowing unidirectional hopping of electrons, however at slightly different operating voltages along the \(AB\) and \(BA\) paths. Given that Hubbard repulsion driven AFM ordering has been observed on optical graphene [32], and that the tunable NH coefficient \(v_{\rm NH}\) reduces the critical interaction for this ordering (since the AFM is a CCM when \(M\) is the charge-density-wave order), our predicted mass nucleation at weak coupling and the associated quantum critical phenomena should be observable in these two NH designer Dirac materials. The simplicity of this construction should also make the proposed phenomena observable at least on three-dimensional NH optical Dirac lattices. Altogether, our Lorentz invariant construction of the NH Dirac operator constitutes an ideal theoretical framework to extend the realm of various exotic many-body phenomena to the presence of system-to-environment interactions. Among them, quantum electrodynamics, topological defects in the ordered phases, magnetic catalysis, superconductivity, and the chiral anomaly in NH Dirac materials are prominent and fascinating examples. The present work lays the foundation for systematic future investigations of these avenues. _Acknowledgments._ V.J. acknowledges support of the Swedish Research Council (VR 2019-04735) and Fondecyt (Chile) Grant No. 1230933. Nordita is partially supported by Nordforsk. B.R. was supported by NSF CAREER Grant No. DMR-2238679.
2309.12690
Strongly Coupled Spin Waves and Surface Acoustic Waves at Room Temperature
Here, we report the observation of strong coupling between magnons and surface acoustic wave (SAW) phonons in a thin CoFeB film constructed in an on-chip SAW resonator by analyzing SAW phonon dispersion anticrossings. Our device design provides the tunability of the film thickness with a fixed phonon wavelength, which is a departure from the conventional approach in strong magnon--phonon coupling research. We detect a monotonic increase in the coupling strength by expanding the film thickness, which agrees with our theoretical model. Our work offers a significant way to advance fundamental research and the development of devices based on magnon--phonon hybrid quasiparticles.
Yunyoung Hwang, Jorge Puebla, Kouta Kondou, Carlos Gonzalez-Ballestero, Hironari Isshiki, Carlos Sánchez Muñoz, Liyang Liao, Fa Chen, Wei Luo, Sadamichi Maekawa, Yoshichika Otani
2023-09-22T08:02:52Z
http://arxiv.org/abs/2309.12690v1
# Strongly Coupled Spin Waves and Surface Acoustic Waves at Room Temperature ###### Abstract Here, we report the observation of strong coupling between magnons and surface acoustic wave (SAW) phonons in a thin CoFeB film constructed in an on-chip SAW resonator by analyzing SAW phonon dispersion anticrossings. Our device design provides the tunability of the film thickness with a fixed phonon wavelength, which is a departure from the conventional approach in strong magnon-phonon coupling research. We detect a monotonic increase in the coupling strength by expanding the film thickness, which agrees with our theoretical model. Our work offers a significant way to advance fundamental research and the development of devices based on magnon-phonon hybrid quasiparticles. Hybridization between two systems can be characterized by comparing the coupling strength \(g\) and the relaxation rates of each system, \(\kappa_{1}\) and \(\kappa_{2}\). When \(g/\max(\kappa_{1},\kappa_{2})>1\), the hybridized state is in the strong coupling regime [1]. In the case of magnon-phonon coupling, reducing the magnon relaxation rate is experimentally challenging, thus increasing \(g\) is the most efficient route to realize strong coupling. The most straightforward approach to achieve higher coupling strength is increasing the number of spins coupled in phase to the desired mode [1]. This is typically done for magnons coupling to photons in cavity magnonics experiments by simply increasing the volume of the magnet [2]. The rationale behind this approach lies in the relatively spatially homogeneous microwave cavity modes throughout the magnet, especially for magnets significantly smaller than the microwave wavelength (\(\sim\) mm). However, in the context of magnon-phonon coupling research, this approach is not always straightforward, as the magnon and phonon are not necessarily in phase across the sample, and thus the magnon-phonon coupling depends non-trivially on the sample geometry. For instance, in a spherical magnet, the magnon-phonon coupling decreases with the volume [3; 4; 5]. Nonetheless, one can circumvent this apparent limitation by choosing an appropriate geometry for the magnetic medium: a thin film with a much smaller thickness than the involved acoustic wavelength. In this system and regime, the coupling strength recovers its characteristic monotonic increase with increasing film thickness, and hence the thin film limit allows us to explore and systematically characterize the dependence of magnon-phonon coupling on the number of spins. To make this possible, it is crucial to attain independent control over the phonon wavelengths and the geometry of the magnetic material, thereby enabling the realization of the thin film limit and promoting high magnon-phonon coupling. One viable approach to accomplish this limit is injecting surface acoustic waves (SAWs) with variable wavelengths into the magnetic material. However, while there has been significant research on the coupling between magnons and SAW phonons over the past two decades [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], no study has reported the observation of strong coupling between magnons and SAW phonons.
In this Letter, we demonstrate for the first time the coupling between magnons and SAW phonons caused by SAW-driven spin wave resonance (SWR) in a Co\({}_{20}\)Fe\({}_{60}\)B\({}_{20}\) (CFB) thin film on a LiNbO\({}_{3}\) substrate at room temperature, by measuring the anticrossing of the SAW phonon dispersion, an indication of strong interaction [18]. We generate SAWs from outside the CFB film, which possesses a high magnetoelastic coupling coefficient, using a high-frequency two-port SAW resonator that consists of SAW generation and detection devices enclosed by distributed Bragg reflector-like stripes, forming an acoustic cavity [12; 15; 17; 19; 20; 21], similar to cavity quantum electrodynamics experiments. This device design allows driving acoustic modes with any wavelength \(\lambda_{p}\) (frequency \(\omega_{p}\)), enabling coupling to magnons at the wavelength \(\lambda_{m}=\lambda_{p}\) by SWR, as depicted in Fig. 1(a), circumventing the reliance of \(\lambda_{p}\) on the material geometry. By harnessing this breakthrough, we successfully estimated the magnon-phonon coupling strength (\(g\) values) of samples with the same device structure but varying CFB thicknesses (\(t_{\text{CFB}}\)) by fitting the observed anticrossing with our theoretical model. Our systematic study reveals a monotonic increase in \(g\) with increasing \(t_{\text{CFB}}\), which agrees with our expectation and our magnon-phonon coupling model. This increase of \(g\) allowed us to achieve strong coupling, \(g/\max(\kappa_{m},\kappa_{p})>1\), where \(\kappa_{m(p)}\) is the magnon (phonon) relaxation rate, for devices with \(t_{\text{CFB}}\geq 20\) nm. To generate SAWs, we utilized interdigital transducers (IDTs), which can generate and detect SAWs on a piezoelectric substrate [19]. We fabricated acoustic cavity devices including Ti (8 nm) / CFB (\(t_{\text{CFB}}\)) / Ti (5 nm) layers on a 128\({}^{\circ}\) Y-cut LiNbO\({}_{3}\) substrate, as shown in Figs. 1(b) and 1(c), where the parameters inside the parentheses denote the thickness of each layer. To pattern the IDTs and the acoustic reflectors, we used electron beam lithography (Elionix ELS-7700H) and deposited 35 nm of Al by electron beam evaporation for the stripes of the IDTs and acoustic reflectors. Each Al stripe of an IDT has a length of 120 \(\mu\)m and a width of \(w=150\) nm. Each Al stripe of the acoustic reflectors has a length of 100 \(\mu\)m and the same width as the IDT stripes, \(w\). All metallic stripes of the IDTs and the acoustic reflectors are separated by a distance of \(d=150\) nm. IDT1 and IDT2 have the same structure, and each of them has 20 pairs of stripes. Each set of acoustic reflectors has 200 stripes. After the fabrication of the acoustic cavity, we patterned a \(190\ \mu\)m \(\times\) \(110\ \mu\)m rectangle between the two IDTs by photolithography (PMT D-light DL1000SG/RWC). The Ti / CFB / Ti layers were deposited onto the rectangular pattern by dc magnetron sputtering. We fabricated samples that have the same structure but different \(t_{\text{CFB}}=10\), 20, 25, 30, and 35 nm. The SAW transmission (\(|S_{21}|^{2}\)) is measured using a vector network analyzer (VNA) while applying an external in-plane magnetic field \(H\) at an in-plane angle \(\phi_{H}\) to the SAW wavevector \(\mathbf{k}\). The measurement configuration is shown in Fig. 1(b). When \(H\) is far enough from the resonant field of the SWR driven by our SAW frequency, the SAW spectrum shows the phonon signal with no contributions from magnons, which remain unexcited.
Such a situation is shown in Fig. 2(a), which depicts \(|S_{21}|^{2}\) of the sample with \(t_{\text{CFB}}=20\) nm out of magnetic resonance when \(\mu_{0}H=100\) mT and \(\phi_{H}=0\). The resonant frequency of the strongest SAW peak is \(f_{r}=6.58\) GHz. The designed wavelength of the SAW (\(\lambda_{r}\)) is determined by the designed structure of our acoustic device; \(\lambda_{r}=2(w+d)=600\) nm. Using these parameters, one can calculate the SAW velocity as \(v=f_{r}\lambda_{r}=3{,}950\) m/s. This value aligns with the typical SAW velocity propagating on a 128\({}^{\circ}\) Y-cut LiNbO\({}_{3}\) [22], further supporting our assertion that this frequency peak is the resonance of the acoustic cavity. Figure 1: (Color online) (a) Schematic illustration of strong magnon–phonon coupling in an acoustic cavity that confines phonons with a wavelength \(\lambda_{p}\) and a frequency \(\omega_{p}\). The yellow curves denote acoustic waves (phonons) created within the acoustic cavity and the red arrows represent magnetization dynamics. The phonons propagating through a CFB thin film excite a spin wave (magnon), which is represented by the purple curve, with a matched wavelength \(\lambda_{m}=\lambda_{p}\). (b) Schematic top view of the device structure used in this research. IDT1 and IDT2 include 20 pairs of Al stripes and each set of reflectors includes 200 Al stripes. The gap (\(d\)) and the width (\(w\)) of one Al stripe of IDTs and reflector gratings are both 150 nm. IDT1 and IDT2 are respectively connected to port1 and port2 of a vector network analyzer (VNA). (c) Schematic side view of the Ti/CFB/Ti stack. (d) Optical microscope image of one of the devices used in this study. Figure 2: (Color online) SAW measurement results of the sample \(t_{\text{CFB}}=20\) nm. (a) Measured SAW transmission signal (\(|S_{21}|^{2}\)) by VNA in the frequency domain. An in-plane magnetic field of 100 mT is externally applied in a direction parallel to the SAW propagation, which is far from the resonant field of SWR driven by our SAW frequency as shown in (b); thus the transmission signal displays only the phonon response. The black curve exhibits the measured spectrum, and the red curve shows the multiple Lorentzian fitting. The blue dashed curves show the single Lorentzian peaks of the fitting. (b) SAW transmission spectra at the frequency of 6.58 GHz as a function of the amplitude of the externally applied in-plane magnetic field (\(\mu_{0}H\)). The in-plane angles (\(\phi_{H}\)) of each curve are shown in the legend. (c) SAW transmission spectrum at the frequency of 6.58 GHz as a function of \(\mu_{0}H\) and \(\phi_{H}\). (d) Calculated SAW transmission at the frequency of 6.58 GHz as a function of \(\mu_{0}H\) and \(\phi_{H}\). In addition to the main resonance, we found one more peak on the lower frequency side of the main resonance [see the fitting to two Lorentzian peaks, the blue dashed curves in Fig. 2(a)]. The presence of these two peaks originates from the existence of two modes allowed within our cavity device. A detailed explanation of the origin of the two SAW peaks can be found in Sec. 1 of Ref. [23]. This Lorentzian fitting is used to extract the phonon linewidth \(\delta_{p}\), which is used to estimate the phonon relaxation rate below.
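For concreteness, the linewidth extraction can be sketched as a standard two-Lorentzian fit. The snippet below fits a synthetic \(|S_{21}|^{2}\) spectrum; the peak positions, amplitudes, and noise level are illustrative assumptions, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(f, a1, f1, w1, a2, f2, w2, c):
    # Two Lorentzian peaks plus a flat background c; w1, w2 are FWHM in GHz.
    L = lambda a, f0, w: a * (w / 2)**2 / ((f - f0)**2 + (w / 2)**2)
    return L(a1, f1, w1) + L(a2, f2, w2) + c

# Synthetic spectrum (illustrative numbers only)
f = np.linspace(6.40, 6.75, 400)
true = (1.0, 6.58, 0.010, 0.45, 6.545, 0.012, 0.02)
rng = np.random.default_rng(0)
s21 = two_lorentzians(f, *true) + 0.01 * rng.standard_normal(f.size)

p0 = (1.0, 6.58, 0.01, 0.5, 6.55, 0.01, 0.0)
popt, _ = curve_fit(two_lorentzians, f, s21, p0=p0)
delta_p = popt[2]                    # linewidth of the main cavity mode (GHz)
print(f"delta_p = {delta_p * 1e3:.1f} MHz, kappa_p = 2*pi*delta_p "
      f"= {2 * np.pi * delta_p:.3f} Grad/s")
```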
The SAW transmission out of magnetic resonance decreases when \(H\) approaches the resonance field condition, since the phonons are used to excite magnons [6; 7; 11; 12; 24]. Figure 2(b) shows the absorption of \(|S_{21}|^{2}\) of the main SAW peak in Fig. 2(a) when \(\phi_{H}=0\), \(25^{\circ}\), and \(50^{\circ}\). Figure 2(c) shows \(|S_{21}|^{2}\) as a function of \(\mu_{0}H\) and \(\phi_{H}\). In the case of a typical magnetoelastic coupling excited by a Rayleigh-type SAW, the absorption amplitude shows a maximum when \(\phi_{H}=45^{\circ}\) [6; 7; 11; 12; 13; 17; 24; 25; 26]. However, our results indicate that the maximum absorption over the full \(\phi_{H}\) range occurs at \(\phi_{H}\sim 25^{\circ}\), and no absorption was detected when \(\phi_{H}>30^{\circ}\). This is because the magnon dispersion is raised to a higher frequency by the dipolar field and does not meet the phonon dispersion at \(\phi_{H}>30^{\circ}\) (Fig. 7 of Ref. [23]). Note that the in-plane uniaxial magnetic anisotropy of our CFB film [27], aligned in-plane perpendicular to \(\mathbf{k}\), causes the dips of SAW transmission around \(\mu_{0}H=0\). Assuming the uniaxial magnetic anisotropy and the magnon-phonon coupling strength obtained from experiments (see below) allows us to calculate the SAW transmission as a function of an external in-plane magnetic field. As shown in Fig. 2(d), the calculation agrees well with the measured result. A detailed description of the calculation can be found in Sec. 5A of Ref. [23]. The distinct anticrossing, represented by split features in SAW absorption, becomes most pronounced when \(\phi_{H}\sim 0\). This splitting does not originate from the longitudinal strain, commonly considered the dominant one in SAWs generated on a \(128^{\circ}\) Y-cut LiNbO\({}_{3}\) substrate. The strain tensor is expressed as \(\varepsilon_{ij}=(\partial_{j}u_{i}+\partial_{i}u_{j})/2\), where \(u_{i}\) and \(u_{j}\) (\(i,j=x,y,z\)) are components of the elastic deformation vector field. When the longitudinal component \(\varepsilon_{xx}\) couples to magnons, the maximum magnetoelastic coupling occurs when the angle between the SAW propagation and the in-plane magnetization, \(\phi\), is \(45^{\circ}\), whereas the shear component \(\varepsilon_{xy}\) of the SAW shows the maximum magnetoelastic coupling when \(\phi=0\) or \(90^{\circ}\) [7]. While in our device \(\varepsilon_{xx}\) is larger than \(\varepsilon_{xy}\) (Fig. 2(c) of Ref. [23]), \(\varepsilon_{xx}\) decreases abruptly away from the surface due to a change of sign of \(u_{x}\) [14]. Therefore, \(\varepsilon_{xx}\) faces a limitation in terms of penetration depth, which results in an insufficient coupling strength to observe magnon-phonon anticrossing. On the contrary, \(\varepsilon_{xy}\) has a larger penetration depth, making the associated coupling strength dominant. Furthermore, the observation of significant split features is strongly supported by the fact that when the phonon (magnon) wavevector aligns parallel to the external magnetic field, the transverse strain can exhibit substantial magnetoelastic coupling, whereas the longitudinal strain cannot [28; 29]. Having clarified the origin of the coupling, we now focus on the magnon-phonon coupling at \(\phi_{H}=0\), where the magnetoelastic coupling is dominated by the shear strain \(\varepsilon_{xy}\).
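The contrast between the two strain channels can be illustrated with angular weights consistent with the maxima quoted above, \(\varepsilon_{xx}\propto\sin\phi\cos\phi\) (maximum at \(\phi=45^{\circ}\)) and \(\varepsilon_{xy}\propto\cos 2\phi\) (maximum at \(\phi=0,90^{\circ}\)); these functional forms and the unit normalization are our illustrative assumptions, not quantities computed in the paper.

```python
import numpy as np

# Tabulate the assumed angular weights of the two magnetoelastic channels.
phi_deg = np.linspace(0, 90, 7)
phi = np.deg2rad(phi_deg)
w_xx = (np.sin(phi) * np.cos(phi))**2   # eps_xx channel: peaks at 45 deg
w_xy = (np.cos(2 * phi))**2             # eps_xy channel: peaks at 0 and 90 deg
for p, lxx, lxy in zip(phi_deg, w_xx, w_xy):
    print(f"phi = {p:5.1f} deg: eps_xx weight = {lxx:.2f}, eps_xy weight = {lxy:.2f}")
```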
Figure 3(a) shows the SAW transmission signal \(|S_{21}|^{2}\) of the sample with \(t_{\text{CFB}}=20\) nm as a function of frequency and \(\mu_{0}H\), and Fig. 3(b) the SAW transmission spectrum when \(\mu_{0}H=100\) mT. As mentioned above, a SAW peak exists on the lower frequency side of the main peak, giving rise to multiple anticrossing features. However, as each SAW mode couples only to the magnon mode with the same wavenumber, each SAW branch shows a single anticrossing. Therefore, we focus on the anticrossing of one mode per device to estimate its coupling. We first fit the phonon branches in the \(\omega\)-\(H\) dispersion, taken from the local maxima of the SAW spectrum [see the marker in Fig. 3(b)] at each field, with our magnon-phonon coupling model: \[\omega^{2}=\frac{\omega_{m}^{2}+\omega_{p}^{2}}{2}\pm\frac{1}{2}\sqrt{\left(\omega_{m}^{2}-\omega_{p}^{2}\right)^{2}+\left(2\delta\omega_{\rm bare}^{2}\right)^{2}}, \tag{1}\] where \(\omega_{m}\) and \(\omega_{p}\) are the magnon and phonon resonant frequencies. The derivation of Eq. (1) and the definition of \(\delta\omega_{\rm bare}\) can be found in Sec. 4 of Ref. [23]. Figure 3: (Color online) SAW transmissions when the external magnetic field is applied in the direction of SAW propagation; \(\phi_{H}=0\). (a),(b) SAW transmission signal (\(|S_{21}|^{2}\)) of the sample with \(t_{\text{CFB}}=20\) nm under (a) various amplitudes of the magnetic field \(\mu_{0}H\) and (b) \(\mu_{0}H=100\) mT. The green marker in (b) represents the local maximum used for the anticrossing fitting, shown as the green curves in (a). (c) Calculated SAW transmission of the sample with \(t_{\text{CFB}}=20\) nm as a function of the frequency and \(\mu_{0}H\). (d),(e) \(|S_{21}|^{2}\) of the sample with \(t_{\text{CFB}}=30\) nm under (d) various \(\mu_{0}H\) and (e) \(\mu_{0}H=100\) mT. The green marker in (e) represents the local maximum used for the anticrossing fitting, shown as the green curves in (d). (f) Calculated SAW transmission of the sample with \(t_{\text{CFB}}=30\) nm as a function of the frequency and \(\mu_{0}H\). (g),(h) \(|S_{21}|^{2}\) of the sample with \(t_{\rm CFB}=30\) nm, but in the absence of acoustic reflectors, (g) under various \(\mu_{0}H\) and (h) \(\mu_{0}H=100\) mT. The anticrossing fitting is shown in Fig. 3(a) as green curves. From the parameters obtained by the fitting, we reproduce the anticrossing detected by the SAW transmission spectra using our SAW transmission model. We modeled two phonon modes coupling to each magnon mode corresponding to its wavenumber. For the details of the model, see Sec. 5 of Ref. [23]. As a result, the SAW transmission is well reproduced, as shown in Fig. 3(c). Furthermore, in Figs. 3(d)-3(f), we present the same analysis as shown in Figs. 3(a)-3(c) but for the sample with \(t_{\rm CFB}=30\) nm. It is notable that the upper phonon branch of the main SAW peak in Fig. 3(d) is vaguely visible at \(\mu_{0}H\sim 0\); however, it is no longer detectable at \(0<\mu_{0}H<40\) mT, as this peak shifts out of the frequency range of our cavity due to a redshift given by the strong mode interaction. Therefore, we fitted the anticrossing of the peak at 6.54 GHz in Fig. 3(e) to obtain the parameters shown in Fig. 3(d). The results and calculations for samples with other \(t_{\rm CFB}\) are shown in Fig. 8 of Ref. [23]. Additionally, it should be noted that a device with the same structure as used in this experiment, but without the acoustic reflectors, does not show any anticrossing, as shown in Figs. 3(g) and 3(h). This is due to the considerably higher phonon relaxation and smaller coupling strength originating from smaller \(\varepsilon_{xy}\) when there is an absence of reflectors that form an acoustic cavity (Secs. 2 and 3B of Ref. [23]).
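A quick way to visualize Eq. (1) is to evaluate the two hybridized branches against the field. In the sketch below, the magnon dispersion \(\omega_{m}(H)\) is a Kittel-like stand-in and all numerical values are illustrative assumptions (the actual spin-wave dispersion and fitted parameters are given in Ref. [23]).

```python
import numpy as np

def branches(omega_m, omega_p, dw2):
    # Eq. (1): hybridized branches; dw2 = (delta omega_bare)^2 sets the coupling.
    s = 0.5 * (omega_m**2 + omega_p**2)
    r = 0.5 * np.sqrt((omega_m**2 - omega_p**2)**2 + (2.0 * dw2)**2)
    return np.sqrt(s + r), np.sqrt(s - r)

# Angular frequencies in Grad/s; all numbers are illustrative assumptions.
omega_p = 2 * np.pi * 6.58                 # cavity phonon mode
gamma, mu0Ms = 2 * np.pi * 28.0, 1.2       # gyromagnetic ratio (GHz/T), mu0*Ms (T)
mu0H = np.linspace(0.005, 0.12, 500)       # field sweep (T)
omega_m = gamma * np.sqrt(mu0H * (mu0H + mu0Ms))   # Kittel-like stand-in
up, lo = branches(omega_m, omega_p, dw2=2.0)

i = np.argmin(up - lo)                     # closest approach ~ on resonance
print(f"minimal splitting 2g = {(up - lo)[i]:.3f} Grad/s "
      f"at mu0H = {mu0H[i] * 1e3:.1f} mT")
```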
Lastly, we present the magnon-phonon coupling estimation in the devices with varying CFB thicknesses. Figure 4(a) shows the coupling strength \(g\) as a function of \(t_{\rm CFB}\), obtained from the anticrossing fittings in Fig. 3 and Fig. 8 of Ref. [23] with Eq. (1) and Eqs. (9), (12) of Ref. [23]. While our magnon-phonon coupling model (Sec. 4 of Ref. [23]) predicts \(g\sim\sqrt{t_{\rm CFB}}\), this prediction does not hold exactly, as shown in Fig. 4(a), due to variations in the effective magnetoelastic coupling coefficient \(b\) with changes in the thickness of the ferromagnetic layer \(t_{\rm FM}\). It is known that \(b\) is determined by contributions of the bulk (\(b_{v}\)) and surface (\(b_{s}\)) magnetoelastic couplings [30; 31; 32] as \(b=b_{v}+b_{s}/t_{\rm CFB}\). Figure 4(b) shows \(b\) as a function of the inverse of \(t_{\rm CFB}\), determined from the values of \(g\) and Eq. (12) of Ref. [23]. The fitting of \(b\) with the surface magnetoelastic coupling, shown as a solid line, yields \(b_{v}=-18.7\) MJ/m\({}^{3}\) and \(b_{s}=104\) mJ/m\({}^{2}\), in agreement with known values for CFB [32].
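The decomposition \(b=b_{v}+b_{s}/t_{\rm CFB}\) is a straight line in \(1/t_{\rm CFB}\), so \(b_{v}\) and \(b_{s}\) follow from a linear fit. A minimal sketch with synthetic data generated around the quoted values (the scatter is an assumption, not the measured points):

```python
import numpy as np

# b = b_v + b_s / t: intercept b_v, slope b_s in a fit against 1/t.
t = np.array([10, 20, 25, 30, 35]) * 1e-9            # thicknesses (m)
b_v_true, b_s_true = -18.7e6, 104e-3                 # J/m^3, J/m^2 (quoted values)
rng = np.random.default_rng(1)
b = b_v_true + b_s_true / t + 0.3e6 * rng.standard_normal(t.size)  # synthetic

b_s_fit, b_v_fit = np.polyfit(1.0 / t, b, 1)         # slope = b_s, intercept = b_v
print(f"b_v = {b_v_fit / 1e6:.1f} MJ/m^3, b_s = {b_s_fit * 1e3:.0f} mJ/m^2")
```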
Next, to evaluate the attainment of strong coupling, we estimate the relaxation rates of magnons (\(\kappa_{m}\)) and phonons (\(\kappa_{p}\)). To obtain \(\kappa_{m}\), we used the Gilbert damping \(\alpha\) of our CFB films as \(\kappa_{m}=\omega_{m}\alpha\). The Gilbert damping is measured by ferromagnetic resonance (FMR) using coplanar waveguides (see Sec. 6 and Fig. 9 of Ref. [23]). For \(\kappa_{p}\), we obtain the linewidth of the \(|S_{21}|^{2}\) peaks (\(\delta_{p}\)) by Lorentzian fittings, shown as the dashed curves in Fig. 2(a), and determine \(\kappa_{p}=2\pi\delta_{p}\). The estimated relaxation rates of devices with varying \(t_{\rm CFB}\) are depicted alongside the coupling strength in Fig. 4(a). As one can see from Fig. 4(a), \(\kappa_{m}\) is always larger than \(\kappa_{p}\) in our devices; thus, the condition for strong coupling is \(\Gamma=g/\kappa_{m}>1\). For the devices with \(t_{\rm CFB}\geq 20\) nm, \(\Gamma>1\), i.e., strong coupling is achieved. However, for the devices with \(t_{\rm CFB}=10\) nm, \(\Gamma<1\); thus they are not in the strong coupling regime. We would also like to note that the FMR experiments with the same Ti/CFB/Ti layer using photon excitations by coplanar waveguides do not show evident anticrossing features (see Fig. 9(a) of Ref. [23]). Therefore, the anticrossing behavior presented in this work appears to be exclusive to strong magnon-phonon coupling, suggesting the potential existence of a magnon-phonon quasiparticle called the magnon-polaron. To summarize, we achieved strong coupling between magnons and SAW phonons in Co\({}_{20}\)Fe\({}_{60}\)B\({}_{20}\) thin films using an acoustic cavity constructed from a two-port SAW resonator. SAW phonon anticrossings are observed when an external magnetic field is applied parallel to the SAW propagation direction. By fitting the anticrossings with our theoretical model, we estimated the coupling strength of our devices. For the devices with \(t_{\rm CFB}\geq 20\) nm, we confirmed the achievement of strong coupling. The variety in selecting magnetic materials and the usage of well-established SAW devices, facilitated by the independent behavior of magnons and phonons in our system, will pave the way to explore magnon-phonon strong coupling physics with on-chip devices at room temperature. Moreover, it holds the potential to offer insights into studying coherently coupled magnon-phonon hybridized quasiparticles, enabling the development of magnetic field-controlled acoustic devices and less lossy magnon-based information processing devices. Besides, the ascending coupling strength achieved by increasing the thickness of the magnetic material within our model, combined with the flexibility in selecting magnetic materials, makes provision for achieving magnon-phonon ultrastrong coupling, where the ratio between the coupling strength and the resonant frequency is comparable: \(g/\omega_{\text{res}}\gtrsim 0.1\) [1]. According to our magnon-phonon coupling model, employing a material with a 4 times higher magnetoelastic coupling coefficient than CFB and a saturation magnetization of 1 T, such as Tb\({}_{0.3}\)Dy\({}_{0.7}\)Fe\({}_{2}\) [33], with the same device design we used and a material thickness of 20 nm, one could potentially reach \(g/\omega_{\text{res}}\sim 0.1\). Figure 4: (Color online) (a) The coupling strength (\(g\); the black curve) and the magnon (\(\kappa_{m}\); the purple squares) and phonon (\(\kappa_{p}\); the blue triangles) relaxation rates as a function of the thickness of the CFB layer. (b) The effective magnetoelastic constant as a function of the inverse of the thickness of the CFB layer. The solid line exhibits the surface magnetoelastic coupling fitting. The data points and error bars of (a) and (b) are the mean and s.d. of measurements of three or more different devices, respectively. This work was supported by Grants-in-Aid for Scientific Research (S) (No. 19H05629) and the Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research (No. 20H01865). Y.H. thanks the RIKEN Junior Research Associate Program for supporting this work. L.L. would like to thank JSPS for support through the Research Program for Young Scientists (No. 23KJ0778). F.C. and W.L. acknowledge the China National Key Research and Development Plan (2022YFE0103300), and Y.O. acknowledges the RIKEN-China cooperation project. The authors thank Kei Yamamoto for fruitful discussions.
2309.12848
Chimeras in the two-community Kuramoto model with an external drive
We study the bifurcations of a special case of the Kuramoto model with two communities of oscillators and an external drive. We use Ott-Antonsen's ansatz to derive the low-dimensional system of differential equations that governs the macroscopic dynamics of the high-dimensional problem. The choice of parameters of the system is motivated by the search for so-called Chimera states: stable phase configurations with partial synchronization. Our main result is the derivation of the low-dimensional system following Ott-Antonsen's ansatz and the finding of periodic and chaotic Chimeras.
Jens Grønborg
2023-09-22T13:20:03Z
http://arxiv.org/abs/2309.12848v1
# Chimeras in the two-community Kuramoto model with an external drive ###### Abstract We study the bifurcations of a special case of the Kuramoto model with two communities of oscillators and an external drive. We use Ott-Antonsen's ansatz to derive the low-dimensional system of differential equations that governs the macroscopic dynamics of the high-dimensional problem. The choice of parameters of the system is motivated by the search for so-called Chimera states: stable phase configurations with partial synchronization [10, 1]. Our main result is the derivation of the low-dimensional system following Ott-Antonsen's ansatz and the finding of periodic and chaotic Chimeras. ## Introduction The Kuramoto model has, since its introduction in 1984 [9], found application in scientific fields involving the dynamics of interacting oscillators. Oscillators appear in a wide range of areas, from modeling Josephson Junctions [6, 16] used in constructing superconducting integrated circuit chips for quantum computing, to describing modern power grid configurations [5]. In ecology the Kuramoto model is used to model the interaction between pests found on coffee plants [15], and in neuroscience neuron dynamics are characterized by chemical oscillators [4, 12, 2]. Ott and Antonsen's 2008 paper [13] marked a breakthrough in the study of Kuramoto-like models with a large number of oscillators. The paper discusses the original model as well as variations on it. They all have global dynamics governed by a low-dimensional system under a series of constraints on the initial configuration of the system. This paper is motivated by combining two of these variations [13, p.9-11]: the community generalization with an external drive \[d\theta_{\sigma k}/dt=\omega_{\sigma k}+\sum_{\sigma^{\prime}=1}^{2}\frac{K_{\sigma\sigma^{\prime}}}{N_{\sigma^{\prime}}}\sum_{j=1}^{N_{\sigma^{\prime}}}\sin(\theta_{\sigma^{\prime}j}-\theta_{\sigma k}-\beta)+\Lambda\sin(\Omega t-\theta_{\sigma k}). \tag{1}\] \(\theta_{\sigma k}\) is the phase of an oscillator \(k\) belonging to the community \(\sigma\). \(\omega_{\sigma k}\) is its initial frequency, \(K_{\sigma\sigma^{\prime}}\) the coupling constant between communities \(\sigma\) and \(\sigma^{\prime}\), and \(\beta\) the phase distortion factor. The external drive is characterized by a weight \(\Lambda\) and a frequency \(\Omega\). The phase distortion parameter \(\beta\) acts as a destabilizer by moving the fixed points of the coupling function away from the synchronized state, enabling more complex dynamics [14]. ## Ott-Antonsen Ansatz The continuum case of Kuramoto-like models where \(N\to\infty\) has received much attention, in part due to its complexity and in part due to its applicability. Ott and Antonsen's paper [13] sets off by considering the oscillators from the point of view of statistical mechanics. The starting point is the Vlasov equation, a conservation model from plasma physics. It models the conservation of oscillators and discounts collision effects, the same conditions that underpin the Kuramoto model \[\partial f_{\sigma}/\partial t+\partial/\partial\theta(f_{\sigma}\cdot v_{\sigma})=0. \tag{2}\] The phase and frequency of each oscillator are modeled as continuous random variables with density function \(f_{\sigma}(\omega_{\sigma},\theta_{\sigma},t)\). The velocity term \(v_{\sigma}\) is given by the Kuramoto model.
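Before passing to the continuum limit, Eq. (1) can be explored directly. The following minimal sketch (not from the paper; all parameter values are illustrative assumptions) integrates the finite-\(N\) model with a forward-Euler step and reports the magnitude of each community's order parameter:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500                                   # oscillators per community (assumed)
K = np.array([[0.6, 0.4], [0.4, 0.6]])    # coupling constants K_{sigma sigma'} (assumed)
beta, Lam, Omega = 0.1, 0.5, 0.9          # phase lag and drive weight/frequency (assumed)
omega = 0.2 + 0.01*rng.standard_cauchy((2, N))   # Lorentzian initial frequencies
theta = rng.uniform(0, 2*np.pi, (2, N))

dt, steps = 0.01, 50_000
for step in range(steps):
    t = step*dt
    r = np.exp(1j*theta).mean(axis=1)     # complex order parameter of each community
    # (1/N) sum_j sin(theta_j' - theta - beta) = Im[r' e^{-i(theta + beta)}]
    coupling = sum(K[:, sp][:, None]*np.imag(r[sp]*np.exp(-1j*(theta + beta)))
                   for sp in range(2))
    theta += dt*(omega + coupling + Lam*np.sin(Omega*t - theta))

print("|r_1|, |r_2| =", np.abs(np.exp(1j*theta).mean(axis=1)))
```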
Ott and Antonsen proceed by introducing a so-called order parameter \(r=1/N\sum e^{i\theta}\), which is used to rewrite (1) as a single equation \[d\theta_{\sigma}/dt=\omega_{\sigma}-\Omega+\frac{1}{2i}\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)e^{-i\theta_{\sigma}}-c.c., \tag{3}\] where c.c. is the complex conjugate of the prior term. \(r\) encodes the synchronicity of a community of oscillators in its absolute value and is in the continuum case defined in terms of the density function \(f\) \[r_{\sigma}=\int_{-\infty}^{\infty}\int_{0}^{2\pi}f_{\sigma}e^{i\theta_{\sigma}}\mathrm{d}\theta_{\sigma}\mathrm{d}\omega_{\sigma}. \tag{4}\] Following the steps from Ott and Antonsen [13], we expand the density \(f_{\sigma}\) as a Fourier series in the phase variable \[f_{\sigma}=\frac{g_{\sigma}(\omega)}{2\pi}(1+\sum_{n=1}^{\infty}f_{\sigma n}(\omega_{\sigma},t)e^{in\theta_{\sigma}}+c.c.), \tag{5}\] where Ott and Antonsen chose the Fourier coefficients as powers of a function \(\alpha_{\sigma}\) \[f_{\sigma n}(\omega_{\sigma},t)=(\alpha_{\sigma}(\omega_{\sigma},t))^{n}. \tag{6}\] With this special choice of coefficients, we insert (3) and (5) in (2) \[\partial f_{\sigma}/\partial t+\partial/\partial\theta(f_{\sigma}\cdot v_{\sigma})=\partial\alpha_{\sigma}/\partial t\sum_{n=1}^{\infty}n\alpha_{\sigma}^{n-1}e^{in\theta_{\sigma}}+c.c.\\ +(\omega_{\sigma}-\Omega)\sum_{n=1}^{\infty}in\alpha_{\sigma}^{n}e^{in\theta_{\sigma}}+c.c.\\ +\left(\frac{1}{2i}\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)e^{-i\theta_{\sigma}}-c.c.\right)\sum_{n=1}^{\infty}in\alpha_{\sigma}^{n}e^{in\theta_{\sigma}}+c.c.\\ -\left(\frac{1}{2}\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)e^{-i\theta_{\sigma}}+c.c.
\right)\left(1+\sum_{n=1}^{\infty}\alpha_{\sigma}^{n}e^{in\theta_{\sigma}}+c.c.\right)\\ =\partial\alpha_{\sigma}/\partial t\sum_{n=1}^{\infty}n\alpha_{\sigma}^{n-1}e^{in\theta_{\sigma}}+c.c.\\ +(\omega_{\sigma}-\Omega)\sum_{n=1}^{\infty}in\alpha_{\sigma}^{n}e^{in\theta_{\sigma}}+c.c.\\ +\frac{1}{2}\sum_{n=1}^{\infty}\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)(n-1)\alpha_{\sigma}^{n}e^{i(n-1)\theta_{\sigma}}-\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)(n+1)\alpha_{\sigma}^{n\ast}e^{-i(n+1)\theta_{\sigma}}\] \[-\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)^{*}(n+1)\alpha_{\sigma}^{n}e^{i(n+1)\theta_{\sigma}}+\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)^{*}(n-1)\alpha_{\sigma}^{n*}e^{-i(n-1)\theta_{\sigma}}\] \[-\left(\frac{1}{2}\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)e^{-i\theta_{\sigma}}+c.c.\right)\] \[=\partial\alpha_{\sigma}/\partial t\sum_{n=1}^{\infty}n\alpha_{\sigma}^{n-1}e^{in\theta_{\sigma}}+c.c.+(\omega_{\sigma}-\Omega)i\alpha_{\sigma}\sum_{n=1}^{\infty}n\alpha_{\sigma}^{n-1}e^{in\theta_{\sigma}}+c.c.\] \[+\left(\frac{1}{2}\sum_{\sigma^{\prime}=1}^{2}\alpha_{\sigma}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)-(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)^{*}\right)\sum_{n=1}^{\infty}n\alpha_{\sigma}^{n-1}e^{in\theta_{\sigma}}+c.c.=0 \tag{7}\] We notice that, surprisingly, the \(\theta\)-dependency can be factored out, leaving \[\partial\alpha_{\sigma}/\partial t+i(\omega_{\sigma}-\Omega)\alpha_{\sigma}+\frac{1}{2}\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)\alpha_{\sigma}^{2}-(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}e^{-i\beta}+\Lambda)^{*}=0. \tag{8}\] Despite the simplification, eqs. (4) and (8), which determine the dynamics of the global order parameter \(r\), still constitute an infinite-dimensional problem because \(\alpha\) is a function. To address this, we choose the frequency density \(g\) to be a Lorentzian, \(g_{\sigma}(\omega)=\frac{\Delta_{\sigma}}{\pi}\left((\omega_{\sigma}-\omega_{\sigma 0})^{2}+\Delta_{\sigma}^{2}\right)^{-1}\), so that the integrals in (4) have a closed analytic form \[r_{\sigma}=\int_{-\infty}^{\infty}\int_{0}^{2\pi}f_{\sigma}e^{i\theta}\mathrm{d}\theta_{\sigma}\mathrm{d}\omega_{\sigma}=\int_{-\infty}^{\infty}\alpha(\omega,t)^{*}g(\omega)\mathrm{d}\omega_{\sigma}=\alpha_{\sigma}(\omega_{\sigma 0}-i\Delta_{\sigma},t)^{*}. \tag{9}\] With this result, (8) simplifies to a set of complex ODEs \[\partial r_{\sigma}/\partial t=\Delta_{\sigma}+i(\omega_{\sigma 0}-\Omega)r_{\sigma}^{*}+\frac{1}{2}\sum_{\sigma^{\prime}=1}^{2}(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}^{*}e^{-i\beta}+\Lambda)r_{\sigma}^{2}-(K_{\sigma\sigma^{\prime}}r_{\sigma^{\prime}}^{*}e^{-i\beta}+\Lambda)^{*}. \tag{10}\] ## Numerical analysis Any numerical analysis of (10) will be local in nature and non-exhaustive. Our focus is to show that (1) has stable Chimera configurations in the continuum limit. We make parameter reductions in line with what can be found in related works [1, 3]. We simplify the search by choosing an initial value that is a Chimera. This is achieved by letting the first community of oscillators be synchronized, \(|r_{1}|=1\), and letting the oscillators be identical by pinching the scale parameter \(\Delta_{\sigma}\to 0\).
We only consider the case when the internal and external coupling constants are symmetric, \(K_{12}=K_{21},K_{11}=K_{22}\). We normalize the two and define a coupling constant discrepancy parameter \(A\), \(K_{11}+K_{12}=1,A\equiv K_{11}-K_{12}\). It is convenient to shift the Sakaguchi parameter \(\beta\) such that \(\beta\rightarrow\pi/2-\beta\) and define a second discrepancy parameter between the frequency of the external drive and the oscillators' initial frequencies, \(\omega_{\sigma}\equiv\Omega-\omega_{0\sigma}\). These constraints reduce (10) to \[\begin{split} dr/dt&=\frac{1-r^{2}}{4}\left((1-A)\sin(\phi_{1}-\phi_{2}+\beta)+(1+A)r\sin(\beta)+2\Lambda\cos(\phi_{2})\right)\\ d\phi_{1}/dt&=\omega_{1}-\frac{1-A}{2}r\cos(\phi_{2}-\phi_{1}+\beta)-\frac{1+A}{2}\cos(\beta)-\Lambda\sin(\phi_{1})\\ d\phi_{2}/dt&=\omega_{2}-\frac{1+r^{2}}{4r}\left((1-A)\sin(\phi_{1}-\phi_{2}+\beta)+(1+A)r\cos(\beta)+2\Lambda\sin(\phi_{2})\right).\end{split} \tag{11}\] where \(r=|r_{2}|\) in order to simplify the notation. Solutions to this system exist in a torus-shaped subset of \(\mathbb{R}^{3}\). Chimera states are found in the interior of the torus, as solutions on the surface represent a completely synchronized system with \(r=1\). Fixed points of (11) can be found in the interior and are the basis for bifurcation diagrams as seen in Figure 1. We use the software packages auto-07p [7] and xppaut [8] to continue bifurcations and investigate state space solutions. Figure 1 shows a complicated cut in the \((A,\beta)\) parameter space which we will discuss.
Figure 1: Bifurcations in the \((A,\beta)\) space for fixed parameters \(\omega_{1}=0.203,\omega_{2}=0.9,\Lambda=0.5\); dotted lines mark unstable bifurcations. The subdiagram shows how the original Hopf cycle loses stability in subsequent period doublings in the \((A,L^{2}\text{-norm})\) space.
Reading the diagram in a clockwise fashion, a subcritical Hopf bifurcation originates from the Bogdanov-Takens (BT) point. When \(\beta\) is decreased, the Hopf line eventually intersects the saddle-node bifurcation in a Generalized-Hopf (GH) point and becomes supercritical. The now supercritical Hopf line intersects the saddle-node a second time, this time tangentially in a Zero-Hopf (ZH) point. We also observe that this marks the beginning of a stable saddle-node bifurcation. A fixed point attractor can be found in the parameter space between the stable saddle-node line and the supercritical Hopf line. This solution is a Chimera state, as the second community for this set of parameter values is in a stable desynchronized state. The degree of desynchronization, as measured by \(r\), is however constant. When the discrepancy \(A\) between the interior and exterior coupling constants is increased, the fixed point attractor undergoes a Hopf bifurcation resulting in an oscillating Chimera state. A further increase leads to a loss of stability in a period-doubling (PD) bifurcation. For a region of \(\beta\in[0.1,0.14]\) we observe a cascade of period-doublings resulting in a stable chaotic attractor. The magnified diagram in Figure 1 shows this stability transition for increasing \(A\) and fixed \(\beta=0.13\). A solution on the chaotic attractor, which appears after the second period-doubling, is shown in Figure 2(b). We continue the original cycle that lost stability in the first period-doubling by increasing \(A\) further, and eventually observe a rapidly increasing period indicating the presence of a homoclinic orbit.
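System (11) is straightforward to integrate numerically. The sketch below uses the fixed parameters of Figure 1 (\(\omega_{1}=0.203\), \(\omega_{2}=0.9\), \(\Lambda=0.5\)); the values of \(A\) and \(\beta\) are illustrative guesses inside the region discussed above, and a bounded long-time orbit with \(r<1\) signals a Chimera:

```python
import numpy as np
from scipy.integrate import solve_ivp

w1, w2, Lam = 0.203, 0.9, 0.5   # fixed parameters of Figure 1
A, beta = 0.1, 0.13             # illustrative point in the (A, beta) plane

def rhs(t, y):
    """Right-hand side of the reduced system (11)."""
    r, p1, p2 = y
    s = (1 - A)*np.sin(p1 - p2 + beta)
    dr  = (1 - r**2)/4*(s + (1 + A)*r*np.sin(beta) + 2*Lam*np.cos(p2))
    dp1 = w1 - (1 - A)/2*r*np.cos(p2 - p1 + beta) - (1 + A)/2*np.cos(beta) - Lam*np.sin(p1)
    dp2 = w2 - (1 + r**2)/(4*r)*(s + (1 + A)*r*np.cos(beta) + 2*Lam*np.sin(p2))
    return [dr, dp1, dp2]

sol = solve_ivp(rhs, (0, 2000), [0.5, 0.0, 0.0], rtol=1e-9, atol=1e-9)
r_tail = sol.y[0, sol.t > 1500]
print(f"r stays in [{r_tail.min():.3f}, {r_tail.max():.3f}]")   # r < 1 => Chimera
```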
We use auto-07p's Homcont module to continue a high-period cycle towards the Bogdanov-Takens point. Unfortunately, Homcont was not able to continue all the way, ending in a convergence failure (MX) when the homoclinic closes in on the subcritical Hopf line. Homcont can be configured to detect test functions equivalent to definitions found in [11, p. 528]. The saddle point of the green homoclinic line is of saddle-focus type, and for \(\beta\gtrapprox 0.15\) the eigenvalues \(\lambda_{1},\mu_{1},\mu_{2}\) have values such that \(\lambda_{1}+\text{Re}(\mu_{1})+\text{Re}(\mu_{2})=0\). To the left of this neutrally-divergent saddle-focus point, the homoclinic is a single orbit, and to the right, it is a chaotic repeller of the Shilnikov type. See Figure 2(a).
Figure 2: Chaotic solutions.
This concludes our numerical findings. We have shown how the Kuramoto model with communities and an external drive defined in (1) can be analyzed using Ott and Antonsen's ansatz. Using their method we derived the low-dimensional system, and analyzed it hoping to find Chimera states. We found that the model supports all the different types of Chimeras, including chaotic ones which came about from a period-doubling cascade.
2308.00195
Compact All-Fiber Quantum-Inspired LiDAR with > 100dB Noise Rejection and Single Photon Sensitivity
Entanglement and correlation of quantum light can enhance LiDAR sensitivity in the presence of strong background noise. However, the power of such quantum sources is fundamentally limited to a stream of single photons and cannot compete with the detection range of high-power classical LiDAR transmitters. To circumvent this, we develop and demonstrate a quantum-inspired LiDAR prototype based on coherent measurement of classical time-frequency correlations. This system uses a high-power classical source and maintains the high noise rejection advantage of quantum LiDARs. In particular, we show that it can achieve over 100dB rejection (with 100ms integration time) of indistinguishable (with statistically identical properties in every degree of freedom) in-band noise while still being sensitive to single photon signals. In addition to the LiDAR demonstration, we also discuss the potential of the proposed LiDAR receiver for quantum information applications. In particular, we propose the chaotic quantum frequency conversion technique for coherent manipulation of high dimensional quantum states of light. It is shown that this technique can provide improved performance in terms of selectivity and efficiency as compared to pulse-based quantum frequency conversion.
Han Liu, Changhao Qin, Georgios Papangelakis, Meng Lon Iu, Amr S Helmy
2023-07-31T23:23:47Z
http://arxiv.org/abs/2308.00195v2
# Compact All-Fiber Quantum-Inspired LiDAR with \(>\) 100dB Noise Rejection and Single Photon Sensitivity ###### Abstract Entanglement and correlation of quantum light can enhance LiDAR sensitivity in the presence of strong background noise. However, the power of such quantum sources is fundamentally limited to a stream of single photons and cannot compete with the detection range of high-power classical LiDAR transmitters. To circumvent this, we develop and demonstrate a quantum-inspired LiDAR prototype based on coherent measurement of classical time-frequency correlations. This system uses a high-power classical source and maintains the high noise rejection advantage of quantum LiDARs. In particular, we show that it can achieve over 100dB rejection (with 100ms integration time) of indistinguishable (with statistically identical properties in every degree of freedom) in-band noise while still being sensitive to single photon signals. In addition to the LiDAR demonstration, we also discuss the potential of the proposed LiDAR receiver for quantum information applications. In particular, we propose the chaotic quantum frequency conversion technique for coherent manipulation of high dimensional quantum states of light. It is shown that this technique can provide improved performance in terms of selectivity and efficiency as compared to pulse-based quantum frequency conversion. ## I Introduction In any optical sensing instrumentation, the light source and detection system used play a pivotal role in dictating the performance. In recent years, a radical approach to enhancing optical sensing system sensitivity has been to use quantum light sources and measure their non-classical properties. This serves to surpass the performance limit imposed by the classical laws of physics. A manifestation of this idea in the target detection domain is quantum illumination (QI), where quantum entanglement is utilized to reject the background noise of the target detection channel [1, 2, 3, 4]. In a QI setup, probe light that is entangled with locally stored reference light interrogates the target. Back-reflected probe light (if the target is present), mixed with strong background noise light, is collected and undergoes a joint detection measurement along with the reference light to determine the target's presence or absence. In contrast to the common perception of quantum light being fragile, the performance advantage of QI over classical detection is most pronounced in the high loss and high noise regime. Similar enhancement of LiDAR sensitivity has also been demonstrated through phase-insensitive measurement of photon-photon correlations [5, 6, 7]. Despite its unrivaled performance over classical LiDAR with equal probe power, practical applications of QI are severely curtailed owing, in part, to its fundamental power limit: not only is the flux of an entangled light source difficult to increase, but the performance enhancement it offers also diminishes as the power increases [8, 9, 10]. As such, it is difficult for QI to meet the demands of real-world sensing applications where high probe power is needed to extend the detection range beyond that of a laboratory setup. Therefore, a natural line of inquiry could pose the question of whether it is possible to borrow methodology from QI to enhance classical LiDAR protocols while retaining the essential performance enhancement.
Similar approaches, other than QI, have already proven successful for sensing protocols that were initially believed to rely on quantum effects; these include ghost imaging [11] and quantum optical coherence tomography [12, 13]. In this work, we demonstrate a LiDAR design and the associated prototype with a similar setup and operation principle as QI, except that it uses a classical time-frequency correlation source with high power, thereby greatly enhancing the potential for achieving a significant operating range. It is shown that by using classical time-frequency correlation, probe light that has random and chaotic time-frequency characteristics can be selectively converted to a single frequency with near-unity quantum efficiency, while in-band noise with identical time-frequency characteristics can be reduced to a negligible level. In addition, it is interesting to note that this noise rejection technique is conceptually related to quantum frequency conversion (QFC) [14, 15, 16, 17, 18], a quantum information processing technique in which a particular time-frequency mode (probe) is selectively separated from the rest of the band (indistinguishable noise) with preserved quantum properties. For this reason, we term the LiDAR protocol chaotic-QFC LiDAR. Nevertheless, the proposed receiver is different from existing QFC protocols in its use of chaotic time-frequency modes in which the probe light resides. It can be shown that the high-dimensional and chaotic nature of chaotic modes can provide substantially improved performance in terms of efficiency and selectivity as compared to conventional Hermite-Gaussian-mode-based QFC. The rest of this manuscript is organized as follows. In the first section, we formulate the theory of chaotic-QFC and confirm its validity with numerical Monte Carlo simulation. In the result section, we design and build a LiDAR prototype that resembles our theoretical formulation. We then experimentally benchmark this setup in terms of different LiDAR performance metrics including quantum efficiency, noise resilience, and ranging accuracy, which are in good agreement with our theoretical predictions. In the discussion section, we draw connections and differences between the classical time-frequency correlation used in this work and the non-classical time-frequency entanglement that can be used in QI [4], as well as other correlation-based LiDAR protocols. Finally, we extend the theoretical framework of chaotic-QFC to showcase its application in quantum information processing. ## II Theory In this section, we theoretically model the chaotic-QFC LiDAR that is based on classical time-frequency correlation. In particular, we analyze the generation and detection process of such correlation and show how it enables substantial noise suppression while still maintaining single photon sensitivity for LiDAR operation. The probe and reference light of chaotic-QFC LiDAR are generated through difference frequency generation (DFG) between a single frequency pump (\(2\omega_{0}\)) and broadband amplified spontaneous emission (ASE, frequency higher than \(\omega_{0}\)). In the limit of broadband DFG phase matching, the probe and reference light temporal amplitudes \(A_{p},A_{r}\) (with carrier frequency \(\omega_{0}\) subtracted) are chaotic and complex conjugated to each other.
Under the quasi-cw assumption, the probe and reference light can be modeled as stationary Gaussian random processes that are characterized by their correlation function \(f(t-t^{\prime})\): \[\langle A_{p}(t)A_{p}^{*}(t^{\prime})\rangle=P_{p}f(t-t^{\prime}) \tag{1}\] \[A_{r}(t)=\sqrt{\frac{P_{r}}{P_{p}}}A_{p}^{*}(t) \tag{2}\] where \(P_{p},P_{r}\) are the photon fluxes of the probe and reference light, respectively, and angled brackets stand for statistical averaging. The chaotic and conjugated phases of the probe and reference light can be understood as classical time-frequency correlation (see the Discussion section for details). Background noise light with flux \(P_{n}\) will naturally be uncorrelated with the reference light and is assumed to have the same temporal-spectral characteristics (correlation function) as the probe light: \[\langle A_{n}(t)A_{n}^{*}(t^{\prime})\rangle=P_{n}f(t-t^{\prime}) \tag{3}\] For simplicity, the correlation function \(f\) is assumed to be Gaussian: \[f(t-t^{\prime})=\exp\left(-\frac{(t-t^{\prime})^{2}\sigma^{2}}{2}\right) \tag{4}\] where \(\sigma\) is the common bandwidth of the probe, reference, and noise light. This is a worst-case scenario, in which the noise is assumed to be fully in-band and thus cannot be removed through simple filtering. In the LiDAR transceiver, probe light is mixed with strong background noise and collected by the telescope (Fig. 2). At the receiver section, the locally stored reference light can selectively convert probe light to another frequency via SFG, with little crosstalk from the noise light that has identical time-frequency properties. To analyze this, consider first the small signal SFG regime, in which the SFG output amplitude is given by [19]: \[A_{SFG}(t)=\frac{\gamma}{\Delta\beta}\left(A_{p}(t)A_{r}(t)+A_{n}(t)A_{r}(t)\right)*\Pi\left(\frac{t}{\Delta\beta L}\right) \tag{5}\] where \(\gamma,L\) specify the normalized nonlinearity and waveguide length and \(*\) stands for convolution. The rectangle function \(\Pi(t)\) equals unity for \(-1/2<t\leq 1/2\) and zero otherwise. The inverse group velocity difference \(\Delta\beta\) is defined as the difference between the inverse group velocities of the SFG and probe light. The group velocity of the noise and reference light is assumed to be the same as that of the probe light because of negligible dispersion around \(\omega_{0}\) [15]. As a consequence, all frequency components of the probe, reference, and noise light are assumed to take part in the SFG process with the same efficiency. Because the probe, reference, and noise light are all stationary random processes, so is the SFG output. Therefore the SFG power spectral density \(S(\omega)\) is given by the Fourier transform of the correlation function: \[S(\omega)=\frac{1}{2\pi}\mathcal{F}\langle A_{SFG}(t)A_{SFG}^{*}(t+\tau)\rangle_{t} \tag{6}\] \[=\gamma^{2}L^{2}\mathrm{sinc}^{2}\left(\frac{\Delta\beta L\omega}{2}\right)\{P_{p}P_{r}\delta(\omega)\] \[\quad+\frac{(P_{n}+P_{p}/2)P_{r}}{2\sqrt{\pi}\sigma}\exp\left(-\frac{\omega^{2}}{4\sigma^{2}}\right)\} \tag{7}\] where \(\delta(\omega)\) is the delta function. The two terms above can be understood as follows: the random, but correlated, phases of the reference and probe light cancel each other and result in a single-frequency coherent SFG (c-SFG) peak. In contrast, the noise and reference light will only contribute to broadband incoherent SFG (i-SFG). It is worth noting that the probe and reference light will also generate a small amount of i-SFG (as compared to c-SFG) in the absence of noise light. This is because the inherent intensity fluctuations of the chaotic probe and reference light create a chaotic component (i-SFG) in the SFG output.
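The two terms of Eq. (7) can be illustrated with a minimal numerical sketch (arbitrary units, assumed parameters): a Gaussian-correlated chaotic field is synthesized according to Eqs. (1), (2), and (4), and the spectrum of the conjugate product \(A_{p}A_{r}\) is compared with that of the uncorrelated product \(A_{n}A_{r}\). The former piles its power into a single dc bin (the c-SFG analogue), while the latter stays broadband (i-SFG):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, sigma = 2**16, 1e-3, 50.0      # samples, time step, bandwidth (arbitrary units)
w = 2*np.pi*np.fft.fftfreq(n, dt)

def chaotic_field():
    """Stationary complex Gaussian field with <A(t)A*(t')> = exp(-(t-t')^2 sigma^2/2)."""
    white = (rng.normal(size=n) + 1j*rng.normal(size=n))/np.sqrt(2)
    a = np.fft.ifft(np.fft.fft(white)*np.exp(-w**2/(4*sigma**2)))
    return a/np.sqrt(np.mean(np.abs(a)**2))   # normalize to unit flux

A_p = chaotic_field()
A_r = np.conj(A_p)        # Eq. (2): the reference is the phase conjugate of the probe
A_n = chaotic_field()     # independent noise with an identical spectrum, Eq. (3)

S_sig   = np.abs(np.fft.fft(A_p*A_r))**2/n    # correlated pair: c-SFG peak + i-SFG
S_noise = np.abs(np.fft.fft(A_n*A_r))**2/n    # uncorrelated pair: i-SFG only
print("dc bin, correlated pair  :", S_sig[0])     # ~n, a single coherent peak
print("dc bin, uncorrelated pair:", S_noise[0])   # orders of magnitude smaller
```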
It is worth noting that the probe and reference light will also generate a small amount of i-SFG (as compared to c-SFG) in the absence of noise light. This is because the inherent intensity fluctuation of the chaotic probe and reference light will create a chaotic component (i-SFG) of the SFG output. Noise reduction of chaotic-QFC LiDAR is achieved by applying narrowband optical filtering to separate c-SFG from i-SFG. The signal-to-noise ratio (SNR) of chaotic-QFC LiDAR is then given by the ratio between c-SFG and i-SFG power: \[\text{SNR}_{QFC}=\frac{2\sqrt{\pi}\sigma}{BW}\frac{2P_{p}}{2P_{n}+P_{p}} \tag{8}\] where \(BW\) is the filter (optical and electrical) bandwidth. If the receiver does not have access to the correlation information (reference light), background noise will appear to be completely indistinguishable from the probe light. Then the only useful information that can be extracted from collected light is the change of optical power due to target reflection. The SNR of such direct detection is given by \(\text{SNR}_{DD}=P_{p}/(P_{n}+P_{p})\), compared to which the SNR enhancement of chaotic-QFC LiDAR is given by: \[\frac{SNR_{QFC}}{SNR_{DD}}=\frac{2\sqrt{\pi}\sigma}{BW} \tag{9}\] For example, if the probe spectrum is 7.5nm wide around 1560nm (in FWHM, obtainable with commercial filters for fiber optical communication) and the filter bandwidth is 10Hz, the SNR enhancement is around 111dB. The chaotic-QFC LiDAR sensitivity is dictated by the c-SFG efficiency beyond the small signal SFG regime, which is defined as the probability of converting an incoming probe photon to a c-SFG photon. To maximize the c-SFG efficiency \(\eta_{c}\), the i-SFG power needs to be minimized due to the energy conservation constraint. This can be achieved with a narrow SFG phase matching bandwidth (long waveguide) or cavity resonance at the c-SFG frequency[20]. In the limit of negligible i-SFG power, the c-SFG efficiency can be analytically solved beyond the small signal SFG regime[19]: \[\eta_{c}=\sin^{2}(\gamma\sqrt{P_{r}}L)\exp\left(-\frac{\beta^{2}\Delta L^{2} \sigma^{2}}{2}\right) \tag{10}\] where \(\Delta L\) is the relative distance between the probe and reference light. In particular, for zero relative delay \(\Delta L=0\) and some finite reference power \(P_{r}\), the coherent conversion efficiency can reach 100%. Such conversion will also preserve the quantum properties of the input probe light because c-SFG is an intrinsically noiseless process that acts like a frequency domain beam-splitter[21]. The oscillatory behavior of the SFG efficiency is because of the transition of SFG to DFG after the probe light is completely depleted. For the general case of non-negligible i-SFG power, the c-SFG efficiency can be calculated through Monte Carlo simulation by drawing different samples of the chaotic probe and reference amplitudes and numerically solving the coupled mode equations. Numerical results show that for a 5cm long waveguide with 0.09 group index difference (extracted from the second harmonic generation phase-matching bandwidth, for around 1560nm pump light) between SFG and probe/reference light, the maximal c-SFG efficiency can reach 92%. Unlike QI, the target distance for chaotic-QFC LiDAR does not need to be stabilized down to the sub-wavelength level even though a similar coherent receiver is used. This is because the c-SFG power is not fast varying as a function of the probe light phase. 
The distance of the target, however, can be determined by scanning the reference light delay and monitoring the c-SFG power. The distance resolution is inversely proportional to the probe light bandwidth (\(\simeq\)100um for 7.5nm probe bandwidth). The schematic of the different stages of the LiDAR system and corresponding spectra of the interacting light waves are summarized in Fig. 1(a-g).
Figure 1: (a) The schematic diagram of the chaotic-QFC LiDAR and its various stages. (b) the spectrum of the DFG pump (blue) light and ASE (green) before the DFG process, (c) the spectrum of the probe (yellow), reference (green), and DFG pump light after the DFG process, (d) the spectrum of the reference and collected probe light back-scattering (mixed with noise), (e) the spectrum of the reference light, depleted probe light, c-SFG peak (red) and unfiltered i-SFG (red) background, (f) the narrowband optical filtering of i-SFG, (g) the homodyne beating frequency \(f_{HDD}\) between the DFG pump light and frequency shifted c-SFG.
The probe light is created as a spectral mirror image (Fig. 1(b)) of the reference light (ASE) with conjugated phase, as per the energy conservation requirement of DFG. Before SFG, the group velocity dispersion and relative delay of the probe and reference light need to be compensated, such that different frequency components of the probe and reference light contribute to c-SFG constructively. The filtering of i-SFG consists of three stages: (1) the phase matching bandwidth (Fig. 1(e)), which limits the i-SFG generation spectrum, (2) an optical bandpass filter (Fig. 1(f)), which optically rejects most of the i-SFG power to prevent detector saturation, and (3) a very narrowband electrical filter (integration time) of the phase-sensitive (balanced homodyne) detection (Fig. 1(g)), which maximizes the noise rejection. To avoid low-frequency technical noise, the frequency of the homodyne detection signal can be shifted to a non-dc frequency \(f_{HDD}\) by frequency shifting the probe light (Fig. 1(g)). ## III Result The LiDAR experimental setup is shown in Fig. 2. In the source section, the correlated probe and reference light (Fig. 3(a)) are generated through DFG between the 781.7nm pump light and ASE light. The spectrum of the ASE is limited to around 7.5nm in full-width half max (FWHM) by a tunable band pass filter. A WDM filter and erbium-doped fiber amplifiers in the C and L bands are then used to separate and boost the probe and reference light power. To achieve the maximal c-SFG efficiency, the total group velocity dispersion (1.6ps/nm) and path length difference of the probe and reference light are compensated by a programmable amplitude-phase filter (Finisar waveshaper 1000A) and an optical delay line. The transceiver section consists of a probe light collimator, a homemade telescope, and a target object placed around 4 meters away. Two different target objects are tested: a piece of ground glass stabilized on the optical table and a quadcopter drone (with rough plastic surfaces) placed on top of a cart off the optical table. An adjustable level of noise power is simulated by mixing the telescope output with noise light that is obtained by tapping the probe amplifier output. Since the tapped probe amplifier output is not correlated with the reference light due to unbalanced delay, it can be regarded as uncorrelated background noise that has the same time-frequency properties as the probe light.
In the receiver section, the collected probe light and reference light undergo SFG in a PPLN waveguide (HCP-RPE-5cm; details on the coating, loss, and phase matching can be found in [19]), whose output first goes through a narrowband optical filter (diffraction grating) and then is phase-sensitively detected through balanced homodyne detection (with the local oscillator light tapped from the DFG pump light). To measure the homodyne signal away from dc, the probe light is frequency-shifted by 2.1MHz through saw-tooth wave phase modulation (serrodyne). The remaining non-converted probe and reference light are spectrally separated and the power is monitored.
Figure 2: The experimental setup of the LiDAR system.
The LiDAR sensitivity is benchmarked by the efficiencies of total SFG and c-SFG, which are defined as the probabilities of converting an incoming probe photon to an SFG (including both i-SFG and c-SFG) photon or a c-SFG photon. The total SFG efficiency is determined from the depletion of the probe power throughput when the reference light is turned on, as shown by the blue dots in Fig. 3(b). The proportion of c-SFG within the total SFG is measured by comparing the phase-sensitive (balanced homodyne) detection and the intensity detection result of the total SFG. Then the c-SFG efficiency can be calculated as the product of the total SFG efficiency and the c-SFG proportion, as shown by the red error bars in Fig. 3(b). To calibrate the measured c-SFG proportion to be independent of optical and detection efficiencies, a baseline measurement (Fig. 3(b), green dots) is done with single-frequency probe and reference light, which only produces c-SFG. The experimentally measured efficiencies of total SFG, c-SFG, and single frequency SFG are in agreement with the Monte Carlo simulation (solid lines in Fig. 3(b)) [19]. The c-SFG efficiency with around 260mW input reference power reaches up to 90%. The measurement uncertainty is mainly due to the instability of the DFG pump laser's coherence.
Figure 3: (a) the spectrum of reference light (dashed red), probe light before SFG (solid red), probe light after SFG (orange), and SFG light (blue). The SFG spectrum (0.01nm resolution) consists of a c-SFG peak and broadband i-SFG background. (b) the theoretically predicted (solid line) and experimentally measured (errorbar or dot) c-SFG efficiency (red) and total SFG efficiency (blue). The SFG efficiency of the single-frequency probe and reference light is also plotted (green dot and curve) as a baseline.
Figure 4: (a) the homodyne signal level (relative to shot noise level, 100ms integration time) for different probe and noise power. Red: probe light back-reflected from a rough glass surface mounted on the optical table, magenta: probe light back-reflected from the quadcopter with phase noise induced by mechanical vibration, black: noise light. The reference power is kept around 260mW to maximize the conversion efficiency. (b) the ESA spectrum of the homodyne signal, when the stabilized ground glass (red) or quadcopter (blue) is used as the target. Both spectra are normalized in the maximal spectral power level. The difference in the bandwidths is due to mechanical vibration-induced broadening.
The LiDAR noise resilience is benchmarked by comparing the homodyne signal level for different probe and noise powers (as measured at the output of the SFG waveguide). It is worth emphasizing that the noise light has identical time-frequency properties as the probe light and
therefore cannot be separated from the probe light via conventional time-frequency filtering. Instead, the probe light is converted to c-SFG and separated from the noise-light-induced i-SFG via narrowband optical and electrical filtering. In the current experimental setup, the waveguide phase matching and diffraction grating contribute around 97% reduction of the i-SFG (a 0.12nm filter bandwidth being applied to a 3.7nm non-phase-matched i-SFG bandwidth). Additional optical filtering can be applied by adding a high-finesse Fabry-Perot filter. The narrowband electrical filtering is simulated by using the 10Hz resolution bandwidth (100ms integration time) of an electrical spectrum analyzer (ESA). As can be seen in Fig. 4(a), to produce the same homodyne signal level, the noise power needs to be 107dB higher than the probe light back-reflected from a stabilized target. The reason for the measured 107dB noise rejection being lower than the theoretical prediction (111dB from Eq. (9)) can be attributed to many factors, including a non-perfect frequency shift, non-Gaussian-shaped spectra of the probe and reference light, etc. When the unstabilized quadcopter is used as the target object, the back-reflected probe light (with the same flux as in the stabilized target case) generates a weaker homodyne signal and provides less noise rejection (95dB). This is because mechanical vibrations induce \(\simeq\)1kHz phase noise on the back-reflected probe light. Such phase noise is transferred to the c-SFG spectrum (Fig. 4(b)). As a result, the 10Hz filter also reduces the level of the measured homodyne signal due to over-filtering (10Hz\(<\)1000Hz). It is worth noting that in the current setup, the nonzero DFG pump bandwidth (75kHz) does not spectrally broaden the homodyne signal. This is because the phase fluctuations of the c-SFG and the local oscillator are identical and cancel each other in the homodyne detection. To benchmark the ranging performance, the delay of the reference light is scanned with a homemade mechanical delay line to determine the target object (quadcopter) distance (Fig. 5(a)). This is possible since the c-SFG efficiency is non-negligible only if the probe and reference light have a balanced delay, as can be seen from Eq. (10). When scanning, a Doppler frequency shift of the homodyne signal that is proportional to the scanning speed is observed. The ranging result shows multiple side peaks, which are possibly caused by diffusive reflections of probe photons into different directions, resulting in different probe path lengths inside the telescope. It is confirmed that when an object with specular reflection (a mirror) is used as the target object, the ranging signal is a single peak localized in distance. It is also worth mentioning that the speed and dynamic range of delay scanning in our current setup can potentially be improved with commercial solid-state optical switches and fixed fiber optical delay lines. ## IV Discussion The idea of utilizing sum frequency generation to analyze classical time-frequency correlation has been demonstrated in chirped pulse interferometry (CPI) [12; 13]. However, unlike in CPI where the correlation is manually created by preparing laser pulses with opposite chirps, the correlation used in chaotic-QFC LiDAR originates from the random and chaotic nature of broadband ASE light and the DFG energy conservation constraint, in a way that is similar to the generation of time-frequency entangled photon pairs. This can be seen from Eq.
(7) and (10) as follows: the probe and reference light generate SFG mostly at a single frequency (c-SFG) despite both being broadband in spectrum. Also, c-SFG is efficient only if the probe and reference light have zero relative delay, despite both being stationary in time. This is similar to the SFG process of time-frequency entangled photon pairs, where two entangled photons must have close to zero relative delay and can only generate SFG photons at the sum frequency [4]. Also, time-frequency entanglement and classical correlation are both generated via parametric down-conversion processes but with different input stimulation (vacuum versus ASE).
Figure 5: (a) The normalized homodyne signal as a function of the reference light (relative) delay. The resolution bandwidth is increased to 1kHz to reduce the impact of the Doppler effect on the homodyne signal level. (b) the relationship between the probe mode heralding probability and the temporal duration \(\sigma_{+}\) and coherence time \(\sigma_{-}\) of the probe/reference CMs.
The essential difference between the classical correlation and entanglement is that while entanglement can be simultaneously measured in the time and frequency domains through joint measurement [4], classical time and frequency correlations are actually the same correlation being represented in the time or frequency basis. As a result, classical time-frequency correlations are subject to the time-frequency uncertainty constraint. For LiDAR, this implies that the noise rejection provided by the correlation is exactly half (in log scale) of that provided by entanglement. Nevertheless, the classical nature of the correlation allows for probe and reference power beyond the single photon level and the option of classical amplification. This is of paramount utility for practical applications where high probing power is required to extend the detection range beyond that of a proof-of-principle laboratory setup with single photon sources. The noise light for chaotic-QFC LiDAR is assumed to be statistically identical to the probe light, since out-of-band noise (in either the time or frequency domain) can be filtered without impacting the LiDAR performance. Also, from a practical point of view, indistinguishable noise light simulates active jamming attacks or unintentional interference (e.g. identical adjacent LiDARs operating at the same time), which are important safety concerns that degrade the sensitivity and increase the error rates of LiDARs [22]. The conventional approach to this problem is to generate spread spectrum probe light and perform correlation analysis in the radio frequency domain, i.e. chaotic LiDARs based on chaotic lasers [23] and random modulation [24; 22]. In comparison, the time-frequency correlation used in chaotic-QFC LiDAR can be considered a spread spectrum technique in the optical domain that has two vital differences. First, the optical time-frequency correlation is much broader in bandwidth than the RF domain spread spectrum. This translates to orders of magnitude superior noise suppression and ranging resolution. Secondly, the time-frequency correlation is optically analyzed through a \(\chi^{(2)}\) nonlinear process, and doing so also avoids fundamental limitations of electrical domain measurement and signal processing such as detector saturation, electrical interference, finite sampling rate, digitization noise, and the imperfect common mode rejection ratio of balanced photodetection, etc.
A brief review and comparison of different types of LiDARs based on classical correlation are given in [23; 24; 25; 26; 19]. In chaotic-QFC LiDAR, probe light with a specific chaotic amplitude is separated from uncorrelated noise light via c-SFG. It is interesting to note that a one-to-one correspondence can be made between this and the process of QFC: in QFC, a quantum state of light (corresponding to the LiDAR probe light) in a particular time-frequency mode (probe mode) is coherently converted to another mode (the c-SFG mode) and separated from other quantum states (noise light), through nonlinear interaction with a strong pump light (reference light). Also, the high-efficiency nature of chaotic-QFC preserves the quantum properties [14] of the input probe light like other QFC protocols. For this reason, chaotic-QFC can be considered a variant of QFC that is equipped with chaotic modes (CMs) defined by chaotic temporal amplitudes. Nevertheless, there are important differences between chaotic-QFC and conventional QFC based on Hermite-Gaussian modes, in terms of orthogonality and statistical equivalence [19]. These two differences have important implications for QFC performance metrics, which are summarised as follows. First, different CMs are approximately orthogonal due to their independent chaotic nature. Nevertheless, if each CM has a sufficiently large time-bandwidth product, such approximate orthogonality is close to exact and can simultaneously provide high conversion efficiency and selectivity (the conversion efficiency ratio of the target mode versus other crosstalk modes [27; 16]). To give a concrete example, consider a target CM with 100ns duration and 1THz bandwidth. Since the 100ns pulse width is much longer than the nonlinear interaction time, the previous quasi-cw modeling of chaotic-QFC still applies. Therefore the c-SFG efficiency will still reach over 90% as has been experimentally observed (and can be further improved with narrower SFG phase matching). The generated c-SFG is a 100ns pulse with a single carrier frequency, whereas the i-SFG from other crosstalk modes is evenly distributed over the 1THz bandwidth. As a result, by using a \(\simeq 10\)MHz optical filter, the i-SFG power can be rejected by around \(1-10^{-5}\), i.e., only a \(10^{-5}\) fraction leaks through. In comparison to this, state-of-the-art QFC based on cascaded nonlinear conversion has been shown to provide \(\simeq 90\%\) crosstalk mode rejection [27]. Another important difference of CMs is that they have similar _noise-like_ time-frequency characteristics, whereas Hermite-Gaussian modes of different orders have substantially different time-frequency amplitudes. Therefore it is much easier for chaotic-QFC to multiplex many CMs in parallel to span a large Hilbert space for high-dimensional quantum information processing. This can be carried out, for example, by taking different temporal segments of an ASE source, which can be done with integrated optical delay lines. An additional advantage of chaotic-QFC as compared to Hermite-Gaussian-mode-based QFC is that the generated SFG is in the same time-frequency mode (c-SFG) regardless of the input CM. This makes the subsequent quantum processing of the SFG light easier to standardize. Moreover, this also allows for the interference of c-SFG from different probe modes. When combined with the recently developed multiple output QFC technique [28; 29], the chaotic-QFC technique can be useful for frequency domain Boson sampling [30].
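The approximate-orthogonality argument is easy to check numerically. Modeling two independent CMs as random complex vectors whose dimension equals the time-bandwidth product \(d\) (a simplifying assumption, not the paper's Monte Carlo model), the expected power overlap scales as \(1/d\), so \(d=10^{5}\) (100ns \(\times\) 1THz) leaves crosstalk at the \(10^{-5}\) level:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_cm(d):
    """Unit-norm chaotic mode: d = time-bandwidth product of independent samples."""
    v = rng.normal(size=d) + 1j*rng.normal(size=d)
    return v/np.linalg.norm(v)

for d in (10**3, 10**4, 10**5):
    overlaps = [abs(np.vdot(random_cm(d), random_cm(d)))**2 for _ in range(200)]
    print(f"d = {d:>6}: mean |<a|b>|^2 = {np.mean(overlaps):.1e}  (1/d = {1/d:.0e})")
```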
The superior performance of chaotic-QFC does, however, come at the price of reduced channel capacity: to ensure approximate orthogonality, the number of CMs must be much lower than the total time-bandwidth product of the channel so that the mutual overlap is sufficiently low. It is worth emphasizing that chaotic-QFC, like other QFC protocols, does not generate quantum states of light. Instead, it serves as a means to manipulate quantum light that already exists in CMs through SFG. Therefore, the question remaining is whether quantum states of light can be generated in CMs by some other nonlinear processes in the first place. After all, a quantum light-generating process such as spontaneous parametric down-conversion (SPDC) does favor an intrinsic set of modes (e.g. Schmidt modes [31] and Bloch-Messiah modes [32]) depending on the nonlinear medium and pumping. What we shall show next is that phase conjugated pairs of CMs are _approximately_ intrinsic (Schmidt) mode pairs of SPDC processes that have a narrowband pump and broadband phase matching. For such SPDC processes, the joint temporal amplitude (JTA) can be approximated by a delta function \(\delta(t_{p}-t_{r})\), whose Schmidt decomposition is _not_ unique: \[|\text{pair}\rangle=\sum_{k}\int dt_{p}f_{k}(t_{p})|t_{p}\rangle\int dt_{r}f_{k}^{*}(t_{r})|t_{r}\rangle \tag{11}\] where \(\{f_{k}(t)\}\) is an arbitrary basis of square normalized functions and single photon probe and reference states at times \(t_{p},t_{r}\) are denoted as \(|t_{p}\rangle,|t_{r}\rangle\), respectively. The arbitrariness of \(f_{k}(t)\) implies that if a photon is created in a CM with amplitude \(A_{p}(t)\), its twin photon will also be created in the conjugated CM with amplitude \(A_{r}(t)=A_{p}^{*}(t)\). To verify the validity of the above argument beyond the limiting case discussed above, consider the following photon pair state with a Gaussian JTA: \[|\text{pair}\rangle=\iint dt_{p}dt_{r}\sqrt{\frac{1}{2\pi\sigma_{p}\sigma_{m}}\exp\left(-\frac{(t_{p}-t_{r})^{2}}{4\sigma_{m}^{2}}-\frac{(t_{p}+t_{r})^{2}}{4\sigma_{p}^{2}}\right)}\,|t_{p}\rangle|t_{r}\rangle \tag{12}\] where \(\sigma_{m},\sigma_{p}\) are the temporal correlation and duration of the SPDC photon pairs. Then consider a pair of CMs whose temporal amplitudes \(A_{p}(t),A_{r}(t)\) are specified by the correlation function: \[\langle A_{p}(t)A_{p}^{*}(t^{\prime})\rangle=\frac{1}{\sqrt{2\pi}\sigma_{-}}\exp\left(-\frac{(t-t^{\prime})^{2}}{8\sigma_{-}^{2}}-\frac{(t+t^{\prime})^{2}}{8\sigma_{+}^{2}}\right) \tag{14}\] \[A_{r}(t)=A_{p}^{*}(t) \tag{15}\] where \(\sigma_{-},\sigma_{+}\) are the temporal correlation and duration of the two CMs. Then the fidelity of these two CMs as a pair of Schmidt modes can be benchmarked by the heralding efficiency in mode \(A_{r}(t)\) when a photon in \(A_{p}(t)\) is detected. As can be seen in Fig. 5, near unity heralding efficiency can be achieved when the coherence time \(\sigma_{-}\) of the CMs is longer than the SPDC temporal correlation time, while the temporal duration \(\sigma_{+}\) of the CMs needs to be shorter than the SPDC photon temporal duration \(\sigma_{p}\). ## V Conclusion In summary, we proposed a quantum-inspired LiDAR prototype based on coherent measurement of classical time-frequency correlation. It retains the high noise resilience advantage (\(>\)100dB rejection of indistinguishable noise) of quantum LiDARs while still allowing for high-power classical sources and single photon sensitivity to extend the detection range.
Its principle also resembles chaotic LiDAR, but the implementation with coherent optical processing avoids fundamental limitations of electrical domain detection and signal processing. As such, the LiDAR prototype we demonstrate here combines both practical implementation and substantial performance enhancement. We expect it to become a useful tool in the near future for real-world LiDAR applications that require high rejection of crosstalk and noise jamming. The chaotic mode conversion technique derived from the LiDAR receiver can also be applied in quantum information applications and provide performance enhancement as compared to pulse-based quantum frequency conversion. Its advantages of high efficiency and selectivity can be useful for high dimensional quantum information processing applications such as Boson sampling. ## Data Availability The authors declare that all necessary data to interpret, verify and extend the reported results have been provided in the main text and the supplementary information. ## Code Availability The algorithm for the Monte-Carlo SFG simulation is described in the supplementary information and the code is available upon request.
2309.12860
A Graph-Based Technique for the Automated Control-Oriented Modeling of District Heating Networks
Advanced control strategies for delivering heat to users in a district heating network have the potential to improve performance and reduce wasted energy. To enable the design of such controllers, this paper proposes an automated plant modeling framework that captures the relevant system dynamics, while being adaptable to any network configuration. Starting from the network topology and system parameters, the developed algorithm generates a state-space model of the system, relying on a graph-based technique to facilitate the combination of component models into a full network model. The accuracy of the approach is validated against experimental data collected from a laboratory-scale district heating network. The verification shows an average normalized root mean square error of 0.39 in the mass flow rates delivered to the buildings, and 0.15 in the network return temperature. Furthermore, the ability of the proposed modeling technique to rapidly generate models characterizing different network configurations is demonstrated through its application to topology optimization. The optimal design, obtained via a branch and bound algorithm, reduces network heat losses by 15% as compared to the conventional length-minimized topology.
Audrey Blizard, Stephanie Stockar
2023-09-22T13:32:14Z
http://arxiv.org/abs/2309.12860v1
# A Graph-Based Technique for the Automated Control-Oriented Modeling of District Heating Networks ###### Abstract District heating networks (DHNs) offer numerous advantages over traditional heating methods, including reduced carbon emissions, flexible integration of a variety of intermittent renewable energy sources, and economies of scale, reducing the cost of heating buildings in urban environments. They are being increasingly adopted in cities of all sizes to increase energy efficiency and transition away from less sustainable heat sources. However, the performance of DHNs can be greatly improved by the implementation of advanced control strategies which consider individual users' demands to deliver heat more effectively throughout the network [1]. The first step in designing such control strategies is the development of a control-oriented model that considers demand at the individual building level while being implementable on large-scale networks with the limited network data available. One existing approach is to use data-driven techniques to simulate the behavior of the system. For example, machine learning has been used to forecast the heat demands placed on a district heating network [2]. While shown to provide an accurate 48 hour prediction window, this model does not provide data on individual buildings' demands and requires 3 years of data to train the model. Additionally, black box techniques have been used to learn acceptable fluid flow behaviors to enable the diagnosis of leakage faults that occur in the network [3]. In one model, the steady-state heat losses in the piping network are estimated using a genetic algorithm trained using data from smart metering devices located throughout the network [4]. However, in most DHNs, a very limited amount of granular data is available on the dynamics within the network. Therefore, a black box modeling approach will not be effective for designing real-time control for the individual users throughout the network, and a physics-based approach is preferable. Alternatively, a physics-based approach can be used where the behavior of all relevant components in the network is characterized through first-principles equations. For example, computational fluid dynamics tools have been used to model the heat delays experienced by users different distances from the heat source [5]. A modified finite volume technique was also developed, discretizing each pipe into segments [6]. A detailed pipe-oriented model has been developed in Modelica, where the time delay, spatial temperature distribution, advection, and thermal inertia are all considered in the pipe model [7]. While these models are comprehensive and can be used to accurately capture the DHN's behavior, calibrating them and performing simulations is time-consuming and would need to be repeated if the network configuration is modified. Moreover, high fidelity models are not very practical for control design, where the model's ability to scale to networks with hundreds of users is crucial. Furthermore, some physics-based models have been derived targeting specific aspects of the network control. For example, one model aggregates regions of the network into substations where the behavior of multiple users is considered as a group, allowing for the fast simulation of large networks [8]. However, the granularity provided by this model is not sufficient for the control of individual buildings.
A substation-based model was also used to determine the level of thermal flexibility available in the network, allowing for the shifting of loads based on available heat sources [9]. Modeling for demand-side control has also been considered, where the heating demand of the entire network has been gathered into a single lumped parameter based on meteorological observations [10]. A model of the flexible heat capacity of a DHN has been developed in order to better utilize renewable, intermittent heat sources [11]. Another well-explored modeling approach for DHNs is a graph-based approach. Due to the inherent structure of a DHN, with long stretches of pipe connecting individual user nodes, many sources have chosen to represent the topology of a DHN as a graph. For example, a graph-based approach has been used for model order reduction, where spectral clustering of the network graph was used to group users into well connected subgroups [12]. A graph-based model has also been used to reduce the complexity of the fluid-flow calculation, mixing a black-box and physics-based approach [13]. Graph-based approaches have also been used in the design of DHN configurations to minimize installation costs [14] and pressure drops [15]. The flexibility and scalability of graph-based modeling techniques have also allowed them to be successfully employed in a variety of other control applications. For example, a graph-based approach was effectively used to develop a control-oriented model of the various components in an aircraft [16]. Graph-based models have also been used to partition large-scale systems into subsystems for use in distributed model predictive control [17]. In summary, while a variety of modeling approaches have been developed, none are designed to address building-level demand while remaining feasible for implementation on large-scale networks with the limited data available on granular network operation. The model equations presented in Saletti et al. [18] provide an appropriate level of detail for control development. However, without a framework for interconnecting the model equations, developing the model of a single DHN is time-consuming, and thereby considering a variety of configurations is challenging. Additionally, no validation is offered on the model's ability to accurately capture the temperature dynamics in the network. The main contribution of this paper is a technique for generating a state-space model of the network from a graph-based topology model, with an algebraic calculation of the flow rates throughout the network. Temperature dynamics will be considered through a similar first-principles approach, with each pipe represented by a single energy conservation equation. The ability of the proposed methodology to capture the system dynamics is validated on data collected from a two-user lab-scale DHN. The automated nature of the proposed control-oriented modeling technique is demonstrated by considering a static design optimization problem, where a layout that minimizes the total length of the network is compared to one that considers the enthalpy losses during peak heat demands. The optimization problem will be solved by combining a graph theory problem formulation with the developed modeling technique, representing the problem as an optimum branch problem with prize collection constraints.
It is solved using a branch and bound approach, where the proposed modeling technique serves to calculate the enthalpy losses in each potential network configuration.

The remainder of the paper is organized as follows. Section 2 presents an overview of the system and introduces the fundamental equations for the component models. In Section 3, the graph-based technique is presented. The validation of the methodology and the experimental campaign are described in Section 4. The formulation of the energy minimization design problem is presented in Section 5, while the results for a selected case study based on a real-world DHN are presented in Section 5.3. Finally, conclusions are presented in Section 6.

## 2 Modeling of Network Components

DHNs consist of three main elements: the heating plant, the distribution network, and the users [19]. An example of a six-user network is shown in Fig. 1.

Fig. 1: Sample six-user DHN.

The water for the network is heated at the centralized heating plant, which includes a pump to supply water through the distribution network to the users. The distribution network consists of two sets of pipes buried underground, the feeding and return networks. The feeding network delivers the heated water to the users and the return network sends the cooled water back to the plant to be reheated. The feeding network is composed of pipe segments that are connected at split nodes, where the water supply is divided between multiple branches of the network. The return network has mixing nodes to rejoin the flow from these branches to be returned to the plant.

Once the water has reached the user, a subnetwork of pipes together with a control valve, shown in Fig. 2, delivers water to the building. The control valve modulates the flow from the feeding segment \(F\) into segment \(S1\), which transports water to the heat exchanger, \(S2\). Here, heat is removed from the network to be used by the building. The cooled water is returned through \(S3\) to a mixing node, connected to return segment \(R\). For users at the end of network branches, the feeding line is connected directly to the return line through a bypass segment, \(B\), allowing any unused flow to be recirculated.

Fig. 2: Components of a single user loop with and without bypass segment.

### Component Models.

The pipe segments making up the feeding and return network, the split and mixing nodes, and the segments in a user block are modeled via a first-principles based approach. First-principles models of DHN components consider two domains, the thermal domain and the fluids domain [18]. The thermal domain, which contains the relevant dynamics of the system, is modeled using time-varying conservation of energy. The fluids domain is characterized by static pressure losses and continuity equations; this approach is chosen due to the relatively fast pressure dynamics when compared to changes in the network temperature [20].

The temperature dynamics in a network segment are described by

\[\frac{d}{dt}T=\frac{\dot{m}}{\rho V}\left(T_{in}-T\right)-\frac{1}{\rho c_{p}V}\dot{Q} \tag{1}\]

where \(T\) is the bulk temperature throughout the pipe, \(T_{in}\) is the inlet water temperature, \(V\) is the volume of the pipe, \(\dot{m}\) is the mass flow rate, \(\rho\) is the density, \(c_{p}\) is the heat capacity of the working fluid, and \(\dot{Q}\) is the heat loss to either the environment or the building. For pipes in the feeding and return network, heat is lost to the environment and \(\dot{Q}\) is given by

\[\dot{Q}=hA_{s}(T-T_{amb}) \tag{2}\]
where \(T_{amb}\) is the ambient temperature, and \(hA_{s}\) is the total conductive heat transfer coefficient between pipe and ground. The energy equation is then written in the compact form

\[\frac{d}{dt}T=c_{1}T_{in}+c_{2}T_{amb}+c_{3}T \tag{3}\]

where

\[c_{1}=\frac{\dot{m}}{\rho V} \tag{4a}\]
\[c_{2}=\frac{hA_{s}}{\rho c_{p}V} \tag{4b}\]
\[c_{3}=-(c_{1}+c_{2}) \tag{4c}\]

For segment \(S2\), the heat demand from the building is an input of the model, hence \(\dot{Q}=\dot{Q}_{b}\), and the compact energy equation becomes

\[\frac{d}{dt}T_{S2}=c_{1}T_{in}-c_{1}T_{S2}+c_{4}\dot{Q}_{b} \tag{5}\]

where

\[c_{4}=-\frac{1}{\rho c_{p}V} \tag{6}\]

The pressure loss in any pipe segment is calculated by

\[\Delta P=\frac{1}{2\rho A_{c}^{2}}\left(k+\lambda\frac{L}{D}\right)\dot{m}^{2}=\zeta\dot{m}^{2} \tag{7}\]

where \(k\) is the concentrated pressure loss coefficient, which accounts for changes in geometry in the pipe, and \(\lambda\) is the distributed pressure loss coefficient, which accounts for friction with the pipe wall and is dependent on pipe material. These two coefficients can either be calibrated separately using material properties and pipe geometry, or combined into one equivalent parameter \(\zeta\), which is calibrated using experimental pressure loss data. Finally, \(L\), \(D\), and \(A_{c}\) represent the pipe's length, diameter, and cross-sectional area, respectively.

The mass flow is split between the branches in the network such that the pressure losses in each branch connected to the same split node are equal. This split is calculated using a pressure balance equation:

\[\Delta P_{Br}^{\{i\}}\left(\dot{m}_{in}^{\{i\}}\right)=\Delta P_{Br}^{\{j\}}\left(\dot{m}_{in}^{\{j\}}\right)\quad\forall i,j=1\ldots n \tag{8}\]

where \(\dot{m}_{in}\) is the mass flow into the branch and \(n\) is the total number of branches leaving the split node. Additionally, the flow at each split node obeys the conservation of mass:

\[\dot{m}_{in}=\sum_{i=1}^{n}\dot{m}^{\{i\}} \tag{9}\]

where \(\dot{m}_{in}\) is the mass flow rate into the split node. Finally, the heating plant is modeled as a heat supply with mass flow rate \(\dot{m}_{0}\) and a controllable temperature \(T_{0}\).
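To make the segment-level equations concrete, the following minimal Python sketch evaluates the coefficients of Eq. (4), the compact energy balance of Eq. (3), and the pressure loss of Eq. (7) for a single pipe segment. The class name `PipeSegment` and all numerical values are illustrative assumptions, not quantities from the paper.

```python
class PipeSegment:
    """One pipe segment: energy balance of Eqs. (1)-(4) and the
    pressure loss of Eq. (7)."""

    def __init__(self, volume, hA_s, zeta, rho=977.0, c_p=4190.0):
        # rho, c_p: water near operating temperature (assumed values)
        self.V, self.hA_s, self.zeta = volume, hA_s, zeta
        self.rho, self.c_p = rho, c_p

    def coefficients(self, m_dot):
        c1 = m_dot / (self.rho * self.V)                 # Eq. (4a)
        c2 = self.hA_s / (self.rho * self.c_p * self.V)  # Eq. (4b)
        c3 = -(c1 + c2)                                  # Eq. (4c)
        return c1, c2, c3

    def dT_dt(self, T, T_in, T_amb, m_dot):
        c1, c2, c3 = self.coefficients(m_dot)
        return c1 * T_in + c2 * T_amb + c3 * T           # Eq. (3)

    def pressure_loss(self, m_dot):
        return self.zeta * m_dot ** 2                    # Eq. (7)


# Forward-Euler integration of a single segment with illustrative values.
seg = PipeSegment(volume=0.05, hA_s=0.01, zeta=1.0e6)
T, dt = 70.0, 1.0
for _ in range(3600):
    T += dt * seg.dT_dt(T, T_in=80.0, T_amb=10.0, m_dot=0.2)
```

The forward-Euler step is used here only for illustration; the paper discretizes the full state-space model with the bilinear transform (see Section 4).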
#### User Block.

The component models are combined to generate a model of a single building connected to the network. The single user model serves as a building block when constructing the entire system. Here, a single user has five pipe segments: the three segments within the user, and the feeding and return lines connecting the user to the network, as shown in Fig. 2(a). Following this, the state equation for a single user \(\{i\}\) is given by

\[\frac{d}{dt}\begin{bmatrix}T_{F}^{\{i\}}\\ T_{S1}^{\{i\}}\\ T_{S2}^{\{i\}}\\ T_{S3}^{\{i\}}\\ T_{R}^{\{i\}}\end{bmatrix}=A_{U}^{\{i\}}\begin{bmatrix}T_{F}^{\{i\}}\\ T_{S1}^{\{i\}}\\ T_{S2}^{\{i\}}\\ T_{S3}^{\{i\}}\\ T_{R}^{\{i\}}\end{bmatrix}+E_{U}^{\{i\}}\begin{bmatrix}T_{amb}\\ \dot{Q}_{b}^{\{i\}}\end{bmatrix} \tag{10}\]

where \(T_{Sj}^{\{i\}}\), \(j=1\ldots 3\) are the temperatures of each pipe segment in the user, \(T_{F}^{\{i\}}\) is the feeding line temperature, \(T_{R}^{\{i\}}\) is the return line temperature, \(A_{U}^{\{i\}}\in\mathbb{R}^{5\times 5}\) is the user's state transition matrix, and \(E_{U}^{\{i\}}\) is the user's external disturbance matrix. The matrix \(A_{U}^{\{i\}}\) is constructed based on the interconnection between the pipe segments

\[A_{U}^{\{i\}}=\begin{bmatrix}c_{3}^{\{i\}}&0&0&0&0\\ c_{1}^{\{i\}}&c_{3}^{\{i\}}&0&0&0\\ 0&c_{1}^{\{i\}}&-c_{1}^{\{i\}}&0&0\\ 0&0&c_{1}^{\{i\}}&c_{3}^{\{i\}}&0\\ 0&0&0&\frac{\dot{m}_{U}^{\{i\}}}{\dot{m}_{R}^{\{i\}}}c_{1}^{\{i\}}&c_{3}^{\{i\}}\end{bmatrix} \tag{11}\]

where the flow mixing at the inlet of the return node is accounted for by the mass flow rate ratio seen in the first nonzero term of the final row. The matrix \(E_{U}^{\{i\}}\) is constructed from two column vectors,

\[E_{U}^{\{i\}}\in\mathbb{R}^{5\times 2}=\begin{bmatrix}e_{1}^{\{i\}}&e_{2}^{\{i\}}\end{bmatrix} \tag{12}\]

where the first column describes the heat transfer with the environment:

\[e_{1}^{\{i\}}=\begin{bmatrix}c_{2}^{\{i\}}&c_{2}^{\{i\}}&0&c_{2}^{\{i\}}&c_{2}^{\{i\}}\end{bmatrix}^{T} \tag{13}\]

and the second accounts for the heat flux into the building:

\[e_{2}^{\{i\}}=\begin{bmatrix}0&0&c_{4}^{\{i\}}&0&0\end{bmatrix}^{T} \tag{14}\]

For users connected to a bypass segment, as shown in Fig. 2(b), the model is augmented to include the additional temperature

\[\frac{d}{dt}\begin{bmatrix}\overline{T}_{U}^{\{i\}}\\ T_{B}^{\{i\}}\end{bmatrix}=A_{B}^{\{i\}}\begin{bmatrix}\overline{T}_{U}^{\{i\}}\\ T_{B}^{\{i\}}\end{bmatrix}+E_{B}^{\{i\}}\begin{bmatrix}T_{amb}\\ \dot{Q}_{b}^{\{i\}}\end{bmatrix} \tag{15}\]

\[A_{B}^{\{i\}}\in\mathbb{R}^{6\times 6}=\begin{bmatrix}A_{U}^{\{i\}}&\begin{matrix}0^{4\times 1}\\ \frac{\dot{m}_{U}^{\{i\}}}{\dot{m}_{R}^{\{i\}}}c_{1}^{\{i\}}\end{matrix}\\ \begin{matrix}c_{1}^{\{i\}}&0^{1\times 4}\end{matrix}&c_{3}^{\{i\}}\end{bmatrix} \tag{16}\]

\[E_{B}^{\{i\}}=\begin{bmatrix}e_{3}^{\{i\}}&e_{4}^{\{i\}}\end{bmatrix}=\begin{bmatrix}e_{1}^{\{i\}}&e_{2}^{\{i\}}\\ c_{2}^{\{i\}}&0\end{bmatrix} \tag{17}\]

where \(T_{B}^{\{i\}}\) is the temperature in the bypass, and \(\overline{T}_{U}^{\{i\}}\) are the states given in Eq. (10).
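The user-block matrices can be assembled mechanically from the segment coefficients. The sketch below builds \(A_{U}\) and \(E_{U}\) for a user without bypass, following Eqs. (11)-(14); the helper name `user_block`, the per-segment coefficient lists (we let the coefficients differ per segment), and the example values are our assumptions.

```python
import numpy as np

def user_block(c1, c3, c4, mU_over_mR):
    """Assemble A_U of Eq. (11) and E_U of Eqs. (12)-(14) for a user
    without bypass. c1[k], c3[k] are the Eq. (4) coefficients of the
    segments ordered [F, S1, S2, S3, R]."""
    A = np.zeros((5, 5))
    A[0, 0] = c3[0]                               # feeding line F
    A[1, 0], A[1, 1] = c1[1], c3[1]               # S1 fed by F
    A[2, 1], A[2, 2] = c1[2], -c1[2]              # S2 fed by S1, Eq. (5)
    A[3, 2], A[3, 3] = c1[3], c3[3]               # S3 fed by S2
    A[4, 3], A[4, 4] = mU_over_mR * c1[4], c3[4]  # R mixes the S3 outflow
    c2 = [-(c1[k] + c3[k]) for k in range(5)]     # from Eq. (4c)
    e1 = np.array([c2[0], c2[1], 0.0, c2[3], c2[4]])  # Eq. (13)
    e2 = np.array([0.0, 0.0, c4, 0.0, 0.0])           # Eq. (14)
    return A, np.column_stack([e1, e2])               # A_U, E_U of Eq. (12)

A_U, E_U = user_block(c1=[0.02] * 5, c3=[-0.021] * 5,
                      c4=-4.8e-6, mU_over_mR=0.6)
```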
## 3 Modeling of Network Configurations

In this section, the method for combining the component models and user blocks to generate the state-space representation of a complete network is presented.

### Two-User Model.

An illustrative example is given where the two-user network shown in Fig. 3 is considered. This network has two parallel branches, meaning each user has its own bypass segment, and there is one split node.

Fig. 3: Layout of a two-user DHN with two parallel branches.

The temperature dynamics of this two-user network are given by

\[\frac{d}{dt}\begin{bmatrix}T_{F}^{\{3\}}\\ \overline{T}_{B}^{\{1\}}\\ \overline{T}_{B}^{\{2\}}\\ T_{R}^{\{3\}}\end{bmatrix}=A_{aug}\begin{bmatrix}T_{F}^{\{3\}}\\ \overline{T}_{B}^{\{1\}}\\ \overline{T}_{B}^{\{2\}}\\ T_{R}^{\{3\}}\end{bmatrix}+B_{aug}\left[T_{0}\right]+E_{aug}\begin{bmatrix}T_{amb}\\ \dot{Q}_{b}^{\{1\}}\\ \dot{Q}_{b}^{\{2\}}\end{bmatrix} \tag{18}\]

where \(\overline{T}_{B}^{\{i\}}\), \(i=1,2\) is the state vector for each user with bypass as described by Eq. (15), and \(T_{F}^{\{3\}},T_{R}^{\{3\}}\) are the temperatures of the feeding and return lines connected to the split node. The corresponding state transition matrix is

\[A_{aug}=\begin{bmatrix}c_{3}^{\{3\}}&0^{1\times 6}&0^{1\times 6}&0\\ b_{21}^{\{1\}}&A_{B}^{\{1\}}&0^{6\times 6}&0^{6\times 1}\\ b_{21}^{\{2\}}&0^{6\times 6}&A_{B}^{\{2\}}&0^{6\times 1}\\ 0&b_{32}^{\{3,1\}}&b_{32}^{\{3,2\}}&c_{3}^{\{3\}}\end{bmatrix} \tag{19}\]

As user blocks are connected to the overall network, additional vectors must be included in \(A_{aug}\) to indicate these connections. The inlet temperature for a user's feeding line from the split node is given by \(a_{21}^{\{i\}}\) for users without a bypass, and \(b_{21}^{\{i\}}\) for users with a bypass segment, defined as

\[a_{21}^{\{i\}}=\begin{bmatrix}c_{1}^{\{i\}}\\ 0^{4\times 1}\end{bmatrix},\ b_{21}^{\{i\}}=\begin{bmatrix}a_{21}^{\{i\}}\\ 0\end{bmatrix} \tag{20}\]

The connection of the user's return segment to the return network is given by \(a_{32}^{\{i,j\}}\) for users without bypass segments and \(b_{32}^{\{i,j\}}\) for users with a bypass segment, defined as

\[a_{32}^{\{i,j\}}=\begin{bmatrix}0^{1\times 4}&\frac{\dot{m}_{R}^{\{j\}}}{\dot{m}_{R}^{\{i\}}}c_{1}^{\{i\}}\end{bmatrix},\ b_{32}^{\{i,j\}}=\begin{bmatrix}a_{32}^{\{i,j\}}&0\end{bmatrix} \tag{21}\]

where \(j\) is the user number and \(i\) is the return line to which the user is connected. The full network also includes the central heating plant, which supplies water with temperature \(T_{0}\), corresponding to the inlet temperature of the first feeding line in the network, and is assumed to be controllable. The matrix \(B_{aug}\) is given by

\[B_{aug}=\begin{bmatrix}c_{1}^{\{3\}}\\ 0^{6\times 1}\\ 0^{6\times 1}\\ 0\end{bmatrix} \tag{22}\]

The uncontrollable disturbances for the entire network are included in the \(E_{aug}\) matrix as

\[E_{aug}=\begin{bmatrix}c_{2}^{\{3\}}&0&0\\ e_{3}^{\{1\}}&e_{4}^{\{1\}}&0^{6\times 1}\\ e_{3}^{\{2\}}&0^{6\times 1}&e_{4}^{\{2\}}\\ c_{2}^{\{3\}}&0&0\end{bmatrix} \tag{23}\]

### Multi-User Model.

Larger network configurations are modeled following a similar procedure as demonstrated on the two-user example. In these cases, the challenge lies in efficiently converting connections between network elements into the state-space form. To overcome this, the topology of the DHN configuration is represented by an unweighted rooted directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). The dimensions of the network model are characterized by the number of users \(n_{u}\) and split nodes \(n_{s}\). The set of nodes \(\mathcal{V}=\{v_{0},\ldots,v_{n_{u}+n_{s}}\}\) is decomposed into three sets \(\mathcal{V}=\{v_{0},\mathcal{V}_{u},\mathcal{V}_{s}\}\), where \(v_{0}\) is the node for the heating plant, \(\mathcal{V}_{u}=\{v_{1}\ldots,v_{n_{u}}\}\) contains the \(n_{u}\) user nodes, and \(\mathcal{V}_{s}=\{v_{n_{u}+1}\ldots,v_{n_{u}+n_{s}}\}\) contains the \(n_{s}\) split nodes. Additionally, let \(\mathcal{V}_{leaf}\subset\mathcal{V}_{u}\) be the set of leaf nodes in the tree. All leaf users must have bypass segments; hence, accounting for the feeding and return lines of each split node, the total number of states in the network is

\[n_{p}=5n_{u}+2n_{s}+\text{card}\{\mathcal{V}_{leaf}\} \tag{24}\]

where \(\text{card}\{\cdot\}\) is the cardinality of a set. The set of directed edges \(\mathcal{E}=\{\varepsilon_{1}\ldots\varepsilon_{n_{u}+n_{s}}\}\) represents the connections between the components in the network, based on the network's topology.

The first step in calculating the network temperature dynamics is determining the mass flow rate in every pipe segment. Without loss of generality, this paper considers the case where the mass flow split at the split nodes has no actuation and the network has no booster pumps. The presence of booster pumps or additional actuators can be easily incorporated through additional conditions on the pressure balance equations. Additionally, it is assumed that the mass flow rate drawn by each user is known. This is a reasonable assumption, since, if this mass flow rate is unknown, it can be determined from the control valve position or heat demand.
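A minimal sketch of the topology bookkeeping just described: the graph is stored as directed parent-child pairs, leaf users are detected from the absence of successors, and the state count of Eq. (24) follows. The class name `DHNTopology` and the node-numbering convention (plant = 0) are our assumptions.

```python
from dataclasses import dataclass

@dataclass
class DHNTopology:
    """Rooted directed graph G = (V, E): node 0 is the plant,
    1..n_u are users, n_u+1..n_u+n_s are split nodes."""
    n_u: int
    n_s: int
    edges: list  # directed (parent, child) pairs

    def successors(self, v):
        return [j for i, j in self.edges if i == v]

    def leaf_users(self):
        return [v for v in range(1, self.n_u + 1) if not self.successors(v)]

    def n_states(self):
        # Eq. (24): five states per user, a feeding and a return line
        # per split node, plus one bypass state per leaf user
        return 5 * self.n_u + 2 * self.n_s + len(self.leaf_users())

# Two-user network of Fig. 3: plant -> split node 3 -> users 1, 2
g = DHNTopology(n_u=2, n_s=1, edges=[(0, 3), (3, 1), (3, 2)])
assert g.n_states() == 14  # matches the 14 states of Eq. (18)
```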
From the pressure losses in the pipe segments, an implicit system of equations is generated to find the mass flow rates in the network, while ensuring the pressure balance of Eq. (8) and conservation of mass of Eq. (9) at each split node are maintained. To facilitate this, groups of users connected directly to each other with no split nodes are identified as branches. Parallel branches (Fig. 4(a)) are terminated by a user and are identified by the leaf nodes in the tree, while series branches (Fig. 4(b)) end in split nodes and are characterized by the in-edge of said node.

Fig. 4: Configurations of the two types of branches.

The pressure loss in any network branch of \(n\) users is given by

\[\Delta P_{Br(n)}=\sum_{i=0}^{n-1}2^{i}\zeta_{F}^{\{n-i\}}\left(\dot{m}_{in}-\sum_{j=1}^{i}\dot{m}_{U}^{\{n+1-j\}}\right)^{2}+\phi \tag{25}\]

where \(\dot{m}_{in}\) is the flow rate into the branch and \(\dot{m}_{U}^{\{i\}}\) is the flow drawn by user \(i\). For a parallel branch, \(\phi\) is given by

\[\phi=2^{n}\zeta_{B}\left(\dot{m}_{in}-\sum_{i=1}^{n}\dot{m}_{U}^{\{i\}}\right)^{2} \tag{26}\]

while for a series branch, \(\phi\) is given by

\[\phi=2^{n}\zeta_{v_{s}}^{F}(\dot{m}_{out})^{2}+2^{n}\sum_{i=1}^{n_{ds}}\Delta P_{Br}^{\{i\}} \tag{27}\]

where \(\dot{m}_{out}\) is the outlet flow, given by

\[\dot{m}_{out}=\dot{m}_{in}-\sum_{i=1}^{n}\dot{m}_{U}^{\{i\}} \tag{28}\]

and \(n_{ds}=\text{card}\{\mathcal{V}_{v_{s}}^{+}\}\) is the number of direct successors of the terminating split node \(v_{s}\).

This equation is constructed via induction. The pressure loss in any branch with \(n\) users is given by

\[\Delta P_{Br(n)}=\Delta P_{F}^{\{n\}}+\Delta P_{U}^{\{n\}}+\Delta P_{Br(n-1)} \tag{29}\]
\[=\Delta P_{F}^{\{n\}}+2\Delta P_{Br(n-1)} \tag{30}\]

where \(\Delta P_{U}^{\{n\}}=\Delta P_{Br(n-1)}\) due to the pressure balance at the control valve of user \(n\). The base case differs depending on branch type. As parallel branches include a bypass segment, the pressure loss for a single user in a parallel branch is given by

\[\Delta P_{Br(1)}=\Delta P_{F}^{\{1\}}+\Delta P_{U}^{\{1\}}+\Delta P_{B} \tag{31}\]
\[=\Delta P_{F}^{\{1\}}+2\Delta P_{B} \tag{32}\]
\[=\zeta_{F}^{\{1\}}(\dot{m}_{in}^{\{1\}})^{2}+2\zeta_{B}\left(\dot{m}_{in}^{\{1\}}-\dot{m}_{U}^{\{1\}}\right)^{2} \tag{33}\]

where the substitution \(\Delta P_{U}^{\{1\}}=\Delta P_{B}\) is due to the pressure balance at the control valve. However, because series branches have additional downstream branches, the pressure loss for a single-user series branch is given by

\[\Delta P_{Br(1)}=\Delta P_{F}^{\{1\}}+\Delta P_{U}^{\{1\}}+\Delta P_{F}^{\{v_{s}\}}+\sum_{i=1}^{n_{ds}}\Delta P_{Br}^{\{i\}} \tag{34}\]
\[=\Delta P_{F}^{\{1\}}+2\Delta P_{F}^{\{v_{s}\}}+2\sum_{i=1}^{n_{ds}}\Delta P_{Br}^{\{i\}} \tag{35}\]
\[=\zeta_{F}^{\{1\}}(\dot{m}_{in}^{\{1\}})^{2}+2\zeta_{F}^{\{v_{s}\}}(\dot{m}_{out})^{2}+2\sum_{i=1}^{n_{ds}}\Delta P_{Br}^{\{i\}} \tag{36}\]

where the substitution \(\Delta P_{U}^{\{1\}}=\Delta P_{F}^{\{v_{s}\}}+\sum_{i=1}^{n_{ds}}\Delta P_{Br}^{\{i\}}\) is also due to the pressure balance at the control valve. Combining Eqs. (33) and (36) with the induction term gives the final pressure equation as a function of branch type and number of users. Using Eq. (25) to describe the pressure loss in each network branch, the full pressure balance is resolved.
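The closed-form branch pressure loss can be evaluated directly. The sketch below implements Eqs. (25) and (26) for a parallel branch and checks the single-user base case of Eq. (33); the function and argument names are ours.

```python
def branch_pressure_loss(m_in, m_users, zeta_F, zeta_B):
    """Pressure loss of a parallel branch, Eqs. (25)-(26).
    m_users[k] is the draw of user k+1; zeta_F[k] belongs to the
    feeding segment of user k+1."""
    n = len(m_users)
    dP = 0.0
    for i in range(n):
        # flow term of Eq. (25): m_in minus the draws m_U^{n+1-j}
        flow = m_in - sum(m_users[n - j] for j in range(1, i + 1))
        dP += 2**i * zeta_F[n - 1 - i] * flow**2  # zeta_F^{n-i}
    residual = m_in - sum(m_users)
    return dP + 2**n * zeta_B * residual**2       # phi of Eq. (26)

# Single-user check against Eq. (33):
assert abs(branch_pressure_loss(0.5, [0.2], [1e6], 2e6)
           - (1e6 * 0.5**2 + 2 * 2e6 * 0.3**2)) < 1e-9
```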
The first step is to establish variables for the percent mass flow in each branch leaving the split nodes, \(\alpha_{j}^{\{i\}}\), \(\forall i\) s.t. \(v_{i}\in\mathcal{V}_{s}\), \(j=1\ldots\text{card}\{\mathcal{V}_{v_{i}}^{+}\}\). Subsequently, each split node adds one conservation of mass equation given by

\[\sum_{j=1}^{\text{card}\{\mathcal{V}_{v_{i}}^{+}\}}\alpha_{j}^{\{i\}}=1 \tag{37}\]

Additionally, each split node adds \(\left(\text{card}\{\mathcal{V}_{v_{i}}^{+}\}-1\right)\) pressure balance equations:

\[\Delta P_{Br}^{\{1\}}=\Delta P_{Br}^{\{j\}}\quad j=2\ldots\text{card}\{\mathcal{V}_{v_{i}}^{+}\} \tag{38}\]

where the pressure losses in each branch are found using Eq. (25), and \(\dot{m}_{in}\) is the product of the upstream \(\alpha\) values and the supply mass flow rate, \(\dot{m}_{0}\). Hence, each split node adds \(\text{card}\{\mathcal{V}_{v_{s}}^{+}\}\) unknowns and \(\text{card}\{\mathcal{V}_{v_{s}}^{+}\}\) constraints, meaning there will be exactly one solution, which is calculated using a numeric solver due to the implicit form of the final equations.

#### Temperature Modeling.

Once the mass flow rate in each pipe segment is obtained as a function of the users' flow demands \(\dot{m}_{U}^{\{i\}}\), the connections between the nodes in the graph, along with the types of nodes, are used to identify the relationships between the states in the network. For each node, the graph provides the set of direct predecessors, \(\mathcal{V}_{v_{i}}^{-}\), and direct successors, \(\mathcal{V}_{v_{i}}^{+}\), which are used to generate the state-space model of the network

\[\frac{d}{dt}T_{aug}=A_{aug}T_{aug}+B_{aug}\left[T_{0}\right]+E_{aug}\begin{bmatrix}T_{amb}\\ \dot{Q}_{b}^{\{1\}}\\ \vdots\\ \dot{Q}_{b}^{\{n_{u}\}}\end{bmatrix} \tag{39}\]

\[T_{aug}=\begin{bmatrix}T_{F}^{\{n_{u}+1\}}&\ldots&T_{F}^{\{n_{u}+n_{s}\}}&\overline{T}_{U,B}^{\{1\}}&\ldots&\overline{T}_{U,B}^{\{n_{u}\}}&T_{R}^{\{n_{u}+1\}}&\ldots&T_{R}^{\{n_{u}+n_{s}\}}\end{bmatrix}^{T} \tag{40}\]

The augmented state matrix \(A_{aug}\), originally defined in Eq. (19), is generalized for a network of any size by

\[A_{aug}=\begin{bmatrix}A_{11}&A_{12}&0^{n_{s}\times n_{s}}\\ A_{21}&A_{22}&A_{23}\\ 0^{n_{s}\times n_{s}}&A_{32}&A_{33}\end{bmatrix} \tag{41}\]

and it has seven nonzero submatrices, each of which characterizes a different set of interconnections in the generalized network. With \(n_{b}=5n_{u}+\text{card}\{\mathcal{V}_{leaf}\}\) denoting the total number of user-block states:

1. \(A_{11}\in\mathbb{R}^{n_{s}\times n_{s}}\): split nodes to feeding network
2. \(A_{12}\in\mathbb{R}^{n_{s}\times n_{b}}\): users' feeding lines to split nodes' feeding lines
3. \(A_{21}\in\mathbb{R}^{n_{b}\times n_{s}}\): split nodes' feeding lines to users
4. \(A_{22}\in\mathbb{R}^{n_{b}\times n_{b}}\): user blocks and users to users
5. \(A_{23}\in\mathbb{R}^{n_{b}\times n_{s}}\): split nodes' return lines to users
6. \(A_{32}\in\mathbb{R}^{n_{s}\times n_{b}}\): users' return lines to split nodes
7. \(A_{33}\in\mathbb{R}^{n_{s}\times n_{s}}\): split nodes to the return network
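As a sketch of how the seven submatrices above are placed in \(A_{aug}\) per Eq. (41), with the state ordering of Eq. (40); the block-count argument \(n_b\) and the function name are our notation, and zero blocks are used only to check the shapes.

```python
import numpy as np

def assemble_A_aug(n_s, n_b, blocks):
    """Place the seven nonzero submatrices of Eq. (41) into A_aug,
    with states ordered as in Eq. (40): split feeding lines (n_s),
    user-block states (n_b), split return lines (n_s)."""
    n = 2 * n_s + n_b
    A = np.zeros((n, n))
    s, u = slice(0, n_s), slice(n_s, n_s + n_b)
    r = slice(n_s + n_b, n)
    A[s, s], A[s, u] = blocks["A11"], blocks["A12"]
    A[u, s], A[u, u], A[u, r] = blocks["A21"], blocks["A22"], blocks["A23"]
    A[r, u], A[r, r] = blocks["A32"], blocks["A33"]
    return A

# Shape check for the two-user network (n_s = 1, n_b = 12):
z = np.zeros
A = assemble_A_aug(1, 12, {"A11": z((1, 1)), "A12": z((1, 12)),
                           "A21": z((12, 1)), "A22": z((12, 12)),
                           "A23": z((12, 1)), "A32": z((1, 12)),
                           "A33": z((1, 1))})
assert A.shape == (14, 14)
```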
Complex network configurations have additional connection types beyond those seen in the two-user case, and populating the system matrices requires defining additional vectors to describe these interconnections. Specifically, \(A_{12}\) contains terms \(a_{12}^{\{i\}}\) defined by

\[a_{12}^{\{i\}}=\begin{bmatrix}c_{1}^{\{i\}}&0^{1\times 4}\end{bmatrix} \tag{42}\]

\(A_{22}\) contains terms \(a_{R22}^{\{i,j\}},\ b_{R22}^{\{i,j\}}\) defined by

\[a_{R22}^{\{i,j\}}=\begin{bmatrix}0^{4\times 4}&0^{4\times 1}\\ 0^{1\times 4}&\frac{\dot{m}_{R}^{\{j\}}}{\dot{m}_{R}^{\{i\}}}c_{1}^{\{i\}}\end{bmatrix},\ b_{R22}^{\{i,j\}}=\begin{bmatrix}a_{R22}^{\{i,j\}}&0^{5\times 1}\end{bmatrix} \tag{43}\]

and terms \(a_{F22}^{\{i\}},\ b_{F22}^{\{i\}}\) defined by

\[a_{F22}^{\{i\}}=\begin{bmatrix}c_{1}^{\{i\}}&0^{1\times 4}\\ 0^{4\times 1}&0^{4\times 4}\end{bmatrix},\ b_{F22}^{\{i\}}=\begin{bmatrix}a_{F22}^{\{i\}}\\ 0^{1\times 5}\end{bmatrix} \tag{44}\]

\(A_{23}\) contains terms \(a_{23}^{\{i,j\}}\), defined by

\[a_{23}^{\{i,j\}}=\begin{bmatrix}0^{4\times 1}\\ \frac{\dot{m}_{R}^{\{j\}}}{\dot{m}_{R}^{\{i\}}}c_{1}^{\{i\}}\end{bmatrix} \tag{45}\]

The detailed algorithm for generating the components of the \(A_{aug}\) matrix is given in Algorithm 1.

**Algorithm 1** Generation of \(A_{aug}\) submatrices
```
1:  for all i s.t. v_i ∈ V_u \ V_leaf do
2:    A_22(i,i) ← A_U^{i}
3:    A_22(i,j) ← a_R22^{i,j}      ∀ j s.t. v_j ∈ V⁺_{v_i} ∩ (V_u \ V_leaf)
4:    A_22(i,j) ← b_R22^{i,j}      ∀ j s.t. v_j ∈ V⁺_{v_i} ∩ V_leaf
5:    A_23(i, j−n_u) ← a_23^{i,j}  ∀ j s.t. v_j ∈ V⁺_{v_i} ∩ V_s
6:    A_21(i, j−n_u) ← a_21^{i}    ∀ j s.t. v_j ∈ V⁻_{v_i} ∩ V_s
7:    A_22(i,j) ← a_F22^{i}        ∀ j s.t. v_j ∈ V⁻_{v_i} ∩ V_u
8:  end for
9:  for all i s.t. v_i ∈ V_leaf do
10:   A_22(i,i) ← A_B^{i}
11:   A_21(i, j−n_u) ← b_21^{i}    ∀ j s.t. v_j ∈ V⁻_{v_i} ∩ V_s
12:   A_22(i,j) ← b_F22^{i}        ∀ j s.t. v_j ∈ V⁻_{v_i} ∩ V_u
13: end for
14: for all i s.t. v_i ∈ V_s do
15:   A_11(i−n_u, i−n_u) ← c_3^{i}
16:   A_33(i−n_u, i−n_u) ← c_3^{i}
17:   A_11(i−n_u, j−n_u) ← c_1^{i}              ∀ j s.t. v_j ∈ V⁻_{v_i} ∩ V_s
18:   A_12(i−n_u, j) ← a_12^{i}                 ∀ j s.t. v_j ∈ V⁻_{v_i} ∩ V_u
19:   A_32(i−n_u, j) ← a_32^{i,j}               ∀ j s.t. v_j ∈ V⁺_{v_i} ∩ (V_u \ V_leaf)
20:   A_32(i−n_u, j) ← b_32^{i,j}               ∀ j s.t. v_j ∈ V⁺_{v_i} ∩ V_leaf
21:   A_33(i−n_u, j−n_u) ← (ṁ_R^{j}/ṁ_R^{i})c_1^{i}   ∀ j s.t. v_j ∈ V⁺_{v_i} ∩ V_s
22: end for
```

The \(B_{aug}\) and \(E_{aug}\) matrices presented in Eq. (22) and Eq. (23), respectively, are similarly generalized for a network of any size. The \(B_{aug}\) matrix is supplemented to contain terms for \(v_{i}\in\mathcal{V}_{v_{0}}^{+}\) and is found according to Algorithm 2. The \(E_{aug}\) matrix includes terms for the heat losses of every pipe in the network and includes the heat transferred into every building served by the network. The methodology for generating \(E_{aug}\) is presented in Algorithm 3.

## 4 Validation of the Dynamic Model

Due to the lack of available high spatial and temporal resolution data about the dynamic operation of full-scale DHNs, a lab-scale experimental DHN was developed by the authors and presented in [21]. The lab-scale DHN was designed via the Buckingham \(\pi\) theorem to match the dynamic response of a full-scale DHN while performing accelerated tests. The laboratory setup allows for the collection of the detailed temperature and flow data used in this paper to validate the ability of this modeling approach to capture the temperature response of a real-world DHN.

The developed test bench has the same configuration as the two-user example provided in Section 3.1. Data from this network is collected using the 17 thermistors, 9 pressure transducers, and 4 mass flow rate sensors located at key points in the network, as shown in Fig. 5.

Fig. 5: Diagram of the two-user lab-scale DHN with sensor locations labeled.

The data was collected using two synchronized data acquisition systems, with sampling rates of 1 Hz. Furthermore, the lab-scale DHN is equipped with two characterized control valves controlled through PID controllers, which regulate the flow to the thermal masses to achieve the desired thermal mass temperature setpoints. A similar PID controller is implemented in the simulation, where the difference between the simulated building temperature and the experimental building temperature is used to set the simulated mass flow rate delivered to the users. The simulation is performed in discrete time, using the bilinear transform to discretize the state-space model, with a time step of one second.

To simplify the integration of the heat losses in the simulation, the temperatures of the buildings are appended to the developed state-space model. The temperature of a single building is given by

\[\frac{d}{dt}T_{b}=\frac{(hA_{s})_{S2}}{\left(\rho c_{p}V\right)_{b}}\left(T_{S2}-T_{b}\right)-\frac{(hA_{s})_{b}}{\left(\rho c_{p}V\right)_{b}}\left(T_{b}-T_{amb}\right) \tag{46}\]

From this equation, the heat demand of the building can be calculated according to

\[\dot{Q}_{b}=(hA_{s})_{S2}\left(T_{S2}-T_{b}\right) \tag{47}\]

where \((hA_{s})_{S2}\) is the convective heat transfer coefficient of the heat exchanger and \(T_{b}\) is the building temperature.
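A minimal sketch of the simulation loop described above: the continuous model is discretized with the bilinear transform (via `scipy.signal.cont2discrete`), and the building temperature of Eq. (46) is propagated alongside it. All dimensions, state indices, and parameter values here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import cont2discrete

# Illustrative system: n network states, inputs [T_0, T_amb]
# with B_aug and E_aug stacked into one input matrix (our convention).
n = 14
A = -0.01 * np.eye(n)            # placeholder continuous-time dynamics
BE = 0.001 * np.ones((n, 2))     # [B_aug, E_aug] stacked column-wise
Ad, BEd, *_ = cont2discrete((A, BE, np.eye(n), np.zeros((n, 2))),
                            dt=1.0, method='bilinear')

T = 60.0 * np.ones(n)            # network segment temperatures
T_b, hA_s2, rhoCpV_b, hA_b = 18.0, 1500.0, 4.0e6, 50.0
for _ in range(3600):
    T = Ad @ T + BEd @ np.array([80.0, 10.0])  # supply 80 C, ambient 10 C
    # building temperature: forward Euler on Eq. (46)
    T_s2 = T[3]                                # assumed index of segment S2
    T_b += 1.0 * (hA_s2 * (T_s2 - T_b) - hA_b * (T_b - 10.0)) / rhoCpV_b
```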
The data used to calibrate and validate the model was collected from a 17.5 hour experiment, based on the one used in [18], which is equivalent to two days in a full-scale system. Realistic occupancy cycles were used to provide the temperature set points for the buildings, where the buildings are allowed to cool overnight when unoccupied. This profile allows a variety of conditions to be explored, including heating, cooling, and temperature sustaining operation of a DHN. The simulation was performed using a realistic ambient temperature profile, representative of the average winter temperatures in northern Italy. The temperature set-points, ambient temperature profile, and supply temperature are presented in Fig. 6. Additionally, the mass flow rate supplied to the network, \(\dot{m}_{0}\), is also presented.

Fig. 6: Profile of conducted experiment.

The collected data was split into two parts, indicated by the dashed line. The first data set was used to calibrate the model parameters while the second set was used to validate the model's performance. The calibration data was used to select the heat transfer coefficients \(h\) for each pipe segment that minimize the sum of the root mean square errors between the actual temperatures collected by the 16 thermistors and the simulated temperatures. In the calibration, the mass flow rate split between the network branches was taken from the data collected by M2. The resulting \(h\) values for each pipe segment are provided in Table 1, and the normalized root mean square errors (nRMSE), normalized with respect to each value's range, for a selected subset of the sensors are provided in Table 2.

\begin{table}
\begin{tabular}{c c c} \hline \hline Node & Segment & \(h\) [\(W/K\)] \\ \hline \multirow{6}{*}{\{1\}} & F & 0.0102 \\ & S1 & 301.1 \\ & S2 & \(1.627\times 10^{3}\) \\ & S3 & 0.0163 \\ & R & 0.0134 \\ & B & 0.0036 \\ \hline \multirow{6}{*}{\{2\}} & F & 0.0040 \\ & S1 & 244.0 \\ & S2 & \(1.408\times 10^{3}\) \\ & S3 & 0.0126 \\ & R & 0.0121 \\ & B & 0.0036 \\ \hline \multirow{2}{*}{\{3\}} & F & 3.759 \\ & R & 0.0016 \\ \hline \hline \end{tabular}
\end{table} Table 1: Calibrated \(h\) values ([\(W/K\)]).

The calibrated \(h\) values are then used in a closed-loop validation, as shown in Figs. 7 to 9. In this simulation, the mass flow split between the branches is calculated from the pressure losses in the network. The mass flow rate supplied to the users is compared in Fig. 7. The temperature response of the two buildings is compared in Fig. 8, while the temperatures at the outlet of the return network and at the inlet of the users' return segments are compared in Fig. 9. The nRMSE values are summarized in Table 2. Additionally, the results from the other network pipe segments were compared and have similar dynamics and errors to those shown.

\begin{table}
\begin{tabular}{c c c} \hline \hline Value & Calibration nRMSE & Validation nRMSE \\ \hline \(T_{b}^{\{1\}}\) & 0.0085 & 0.0085 \\ \(T_{b}^{\{2\}}\) & 0.0112 & 0.0121 \\ \(T_{R}^{\{3\}}\) & 0.1432 & 0.1498 \\ \(T_{R,in}^{\{1\}}\) & 0.0979 & 0.0769 \\ \(T_{R,in}^{\{2\}}\) & 0.1126 & 0.1110 \\ \(\dot{m}_{U}^{\{1\}}\) & 0.2144 & 0.4732 \\ \(\dot{m}_{U}^{\{2\}}\) & 0.1940 & 0.3144 \\ \hline \hline \end{tabular}
\end{table} Table 2: nRMSE of network parameters.

As shown in Table 1, the heat transfer coefficients of the various segments are different, even though the pipe segments have the same insulation and material properties. This indicates that the model's accuracy is dependent on proper calibration of these parameters, and if the network configuration changes, or additional users are added, the calibration procedure will have to be repeated. Furthermore, this calibration only considers a winter profile, as this is where the most potential for energy savings exists. Additional calibrations can be performed for different ambient conditions.

## 5 Solving the Energy-Reduced Design Problem

The model's performance in optimization problems is critical for its application to novel control design. Additionally, the flexibility of the modeling technique when considering a variety of different layouts will allow it to easily adapt to changes in network topology, such as during faults, or as the system is partitioned for distributed control. This capability is demonstrated by rapidly generating the models of the various network configurations needed to resolve an optimal design problem.

### Problem Formulation.

The goal of the optimal design problem is minimizing the enthalpy losses of the pipes in the supply network by changing the network layout, \(\mathcal{G}\), where the change in enthalpy gives a measure of thermal power lost to the environment.
The problem is formulated as

\[C^{*}=\min_{\mathcal{G}}\sum_{\forall i\ \text{s.t.}\ v_{i}\in\mathcal{V}}\left(\dot{m}c_{p}\left(T_{in}-T\right)\right)^{\{i\}} \tag{48}\]

subject to the layout \(\mathcal{G}\) forming a tree, rooted at the heating plant \(v_{0}\), in which every user in \(\mathcal{V}_{u}\) is served exactly once. To pose the problem over a finite set of layouts, a set of candidate split node locations, or midpoints, is first generated.
Candidate split nodes are generated by combining users and previously created split nodes: simple split nodes connect two users, complex split nodes connect two other split nodes, and mixed split nodes are similar to complex split nodes, but contain direct connections to users. Figure 11 shows an example of each type of split node, and the algorithm used to generate the candidate midpoints is presented in Algorithm 4.

Fig. 11: Sample of split node types for a 4 user DHN.

In this problem, all split nodes are assumed to be located equidistant from the users or split nodes to which they connect. However, possible geographic limitations seen in the design of real-world heating networks can be considered by modifying the rules used to locate the candidate split nodes.

**Algorithm 4** Generation of Candidate Split Nodes
```
1:  global V_s, Pr_s
2:  for i = 1…n_u do
3:    Append v_i to V_s
4:    Append {v_i} to Pr_s
5:  end for
6:  i ← 2
7:  while i < card{V_s} do
8:    Add(v_i, Pr_s(i), i)
9:    i++
10: end while
11: procedure Add(v, Pr, ii)
12:   if card(Pr) < n_u then
13:     k ← 1
14:     for all j < ii s.t. Pr_s(j) ∩ Pr = ∅ do
15:       Create split node v_new from v and V_s(j)
16:       Append v_new to V_s
17:       Append Pr_s(j) ∪ Pr to Pr_s
18:       Add(v_new, Pr_s(j) ∪ Pr, ii + k)
19:       k++
20:     end for
21:   end if
22: end procedure
```
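The core of Algorithm 4 is the recursive pairing of candidates whose prize (user) sets are disjoint. The sketch below reproduces only that prize-set logic; unlike the paper's algorithm, it does not track which geometric pairing created each node, so it enumerates unique prize sets rather than the (much larger) full candidate set. The function name is ours.

```python
from itertools import combinations

def candidate_prize_sets(n_users):
    """Prize-set core of Algorithm 4: repeatedly merge candidates
    with disjoint user sets until no new set appears."""
    candidates = [frozenset([u]) for u in range(1, n_users + 1)]
    added = True
    while added:
        added = False
        for a, b in combinations(list(candidates), 2):
            merged = a | b
            if a.isdisjoint(b) and merged not in candidates:
                candidates.append(merged)
                added = True
    return candidates

# For 4 users this yields every nonempty subset of users:
assert len(candidate_prize_sets(4)) == 2 ** 4 - 1  # 15
```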
### Solution Method.

The problem of connecting the users and candidate midpoints is considered as a dynamic optimum branching problem with a set of prizes and collection costs, and is solved via a branch and bound algorithm. The goal of an optimum branching problem is to connect all nodes of a directed graph, the problem graph, in a minimum cost tree, and is a well-known problem in graph theory [27]. In this formulation, the problem conditions are modified so that not all nodes must be added to the final tree. Instead, the complete set of prizes, the unique set of \(n_{u}\) users, must be collected [28]. Each node has a prize set associated with it, which denotes the subset of users connected to the node. The set of prize sets for all candidate split nodes is denoted as \(Pr_{s}\). Additionally, some nodes also have a cost associated with them, so that when added to the tree, there is an additional cost beyond the edge cost to connect them. Finally, the problem is dynamic, as the edge and node costs are variable based on how the tree is connected.

The problem graph is formulated as follows. The nodes are the centralized plant, which will function as the root, the users, and the candidate split nodes. The edges of the problem graph represent the directed connections between all nodes and the root, and between all nodes that do not contain overlapping prize sets.

Due to the dynamic nature of the problem, a best-case scenario must be assumed in order to solve this problem using a branch and bound algorithm. This best-case scenario allows for a calculation of the lower bound cost (LBC) for the split nodes and the edges connecting the plant, split nodes, and users. Two factors which are only available after the generation of the complete graph influence the cost of a connection: the number of downstream users, and the inlet temperature supplied by the previous pipe. The number of downstream users affects the pipe diameter and the mass flow rate through the pipe. In both cases, a higher number of downstream users increases the enthalpy loss by increasing the necessary pipe diameter and mass flow rate. Therefore, the LBC assumes that no additional users are connected downstream. The other assumption is a bound on the maximum temperature drop in the network. Here, an arbitrary maximum drop \(\Delta T_{i}\) is assumed. With this, the LBC is calculated using \(T_{0}-\Delta T_{i}\) as the inlet temperature. After the final minimum cost spanning tree is found, the maximum temperature drop \(\Delta T_{max}\) is calculated, and the entire solution process is iterated if \(\Delta T_{i}<\Delta T_{max}\). Using these bounding assumptions, two sets of costs are calculated: the LBC of connecting the users to the split nodes (\(LBC_{s}\)), which serves as the price for adding a split node to the tree, and the edge LBC (\(LBC_{\mathcal{E}}\)), which provides a cost for adding a node to the tree. After a complete minimum LBC tree is found, the true cost (TC) is calculated using the modeling technique described in Section 2.

For this problem, because the goal is to minimize the heat lost to the environment during heat delivery, the return network is assumed to have the same characteristics as the feeding network, and the mass flow rate demanded by the users is assumed to be known; therefore, only the feeding lines must be considered. The states for \(S1,S2,S3,R\) are thus neglected, leaving

\[\frac{d}{dt}T_{red}=A_{red}T_{red}+B_{red}\left[T_{0}\right]+E_{red}\left[T_{amb}\right] \tag{49}\]

\[T_{red}=\begin{bmatrix}T_{F}^{\{n_{u}+1\}}&\ldots&T_{F}^{\{n_{u}+n_{s}\}}&T_{F}^{\{1\}}&\ldots&T_{F}^{\{n_{u}\}}\end{bmatrix}^{T} \tag{50}\]

where \(A_{red}\), \(B_{red}\), and \(E_{red}\) are the augmented matrices in which the rows and columns associated with the removed states are also removed. \(A_{red}\) only contains \(A_{11}\) and parts of the \(A_{12},A_{21},A_{22}\) matrices. After a candidate tree is generated, the parameters \(c_{1},c_{2},c_{3}\) are calculated using Eq. (4). Then, because the problem considers steady-state operation of the network, the dynamic term is set to zero, and the temperature equation is rearranged to give the steady-state inlet and outlet temperatures using

\[T_{red}=-A_{red}^{-1}\left(B_{red}\left[T_{0}\right]+E_{red}\left[T_{amb}\right]\right) \tag{51}\]

where \(A_{red}\) is invertible, as it can always be written as a lower-triangular matrix with all nonzero diagonal elements \(c_{3}^{\{i\}},\ \forall i=1\ldots n_{s}+n_{u}\) by reordering the nodes. These temperatures are used to solve for the true cost of a particular network configuration.
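A minimal sketch of the true-cost evaluation: Eq. (51) gives the steady-state temperatures, from which the objective of Eq. (48) follows. The predecessor encoding `pred` and all numerical values are our assumptions.

```python
import numpy as np

def true_cost(A_red, B_red, E_red, T0, T_amb, m_dot, pred, cp=4186.0):
    """Eq. (51) plus the objective of Eq. (48) for one candidate layout.
    pred[i] is the index of pipe i's predecessor, or -1 for the pipe
    fed directly by the plant (our encoding)."""
    T = -np.linalg.solve(A_red, B_red * T0 + E_red * T_amb)  # Eq. (51)
    T_in = np.where(np.asarray(pred) < 0, T0, T[pred])       # inlet temps
    return float(np.sum(m_dot * cp * (T_in - T)))            # Eq. (48)

# Toy layout: plant -> pipe 0 -> pipe 1 (illustrative coefficients)
A_red = np.array([[-0.03, 0.0], [0.02, -0.021]])
B_red = np.array([0.02, 0.0])
E_red = np.array([0.001, 0.001])
print(true_cost(A_red, B_red, E_red, T0=80.0, T_amb=10.0,
                m_dot=np.array([0.5, 0.5]), pred=[-1, 0]))
```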
The branch and bound algorithm used to find the optimal branching goes as follows. First, an initial bound is established by connecting the minimum LBC split node containing all prizes to the root. Then, the true cost is calculated as described in the problem definition, using Eq. (51). After initializing the cost, potential trees are generated, starting from the root, adding valid edges to the tree as long as the total LBC does not exceed the current minimum TC.

Three factors define valid edges that can be added to the tree. The first ensures that each prize is only collected once, using the running list of collected prizes to determine edges that connect nodes with no overlapping prizes. The second condition is that only edges whose parent nodes are currently in the tree may be added, ensuring the branching nature of the final result. The third condition is that, based on an arbitrary ordering of the edges, only higher numbered edges can be added. This condition avoids the repetition in exploring trees that would be caused by treating different orders in which edges are added to the same parent node as unique trees.

Once all prizes have been collected, the tree is considered complete, and the TC of the configuration is calculated using the proposed modeling technique. This TC is compared to the current minimum TC, and if lower, the minimum TC and optimal branching are updated. A detailed algorithm for the branch and bound technique is presented in Algorithm 5.

**Algorithm 5** Branch and bound for optimum branching
```
1:  inputs G_full = (V, E), Pr_s, LBC_E, LBC_s
2:  global TC_best, G_best
3:  ε_init ← ε_i s.t. ε_i⁻ = v_0 and card{Pr_s(ε_i⁺)} = n_u
4:  C_init ← LBC_E(ε_init) + LBC_s(ε_init⁺)
5:  G_init ← ({v_0, ε_i⁺}, {ε_i}) s.t. i = argmin C_init
6:  Find TC for tree G_init
7:  TC_best ← TC
8:  G_best ← G_init
9:  for all i s.t. ε_i⁻ = v_0 do
10:   G_i ← ({v_0, ε_i⁺}, {ε_i})
11:   Pr_i ← Pr_s(ε_i⁺)
12:   LBC_run ← LBC_E(ε_i) + LBC_s(ε_i⁺)
13:   GrowTree(G_i, Pr_i, LBC_run, i)
14: end for
15: procedure GrowTree(G, Pr, LBC, ii)
16:   if card(Pr) = n_u and LBC < TC_best then
17:     Find TC for tree G
18:     if TC < TC_best then
19:       TC_best ← TC
20:       G_best ← G
21:     end if
22:   else if LBC < TC_best then
23:     for all j > ii s.t. ε_j⁻ = ε_ii⁻ and Pr_s(ε_j⁺) ∩ Pr = ∅ do
24:       G_new ← G ∪ ε_j
25:       Pr_new ← Pr ∪ Pr_s(ε_j⁺)
26:       LBC_new ← LBC + LBC_E(ε_j) + LBC_s(ε_j⁺)
27:       GrowTree(G_new, Pr_new, LBC_new, j)
28:     end for
29:     for all j s.t. ε_j⁻ = ε_ii⁺ and Pr_s(ε_j⁺) ∩ Pr = ∅ do
30:       G_new ← G ∪ ε_j
31:       Pr_new ← Pr ∪ Pr_s(ε_j⁺)
32:       LBC_new ← LBC + LBC_E(ε_j) + LBC_s(ε_j⁺)
33:       GrowTree(G_new, Pr_new, LBC_new, j)
34:     end for
35:   end if
36: end procedure
```

### 5.3 Results.

The layout resulting from minimizing the length of the pipes is compared to the layout resulting from minimizing the enthalpy losses in Fig. 12. Additionally, the total length and total enthalpy loss of each configuration are provided in Table 4.
Overall, considering heat transfer during the design of the DHN layout resulted in a 15% reduction in enthalpy losses as compared to the length-minimized layout. The length-minimized layout favors connecting multiple users to the same branch, while the enthalpy-minimized layout results in more branches being created for individual users. This is due to the reduced flow rates and smaller-diameter pipes in branches serving single users, which reduce the heat losses. Additionally, the length-minimized layout uses a midpoint to keep the flow together, only splitting close to the users, while the enthalpy-minimized layout favors splitting earlier, allowing for more smaller-diameter pipes to minimize the heat losses as the water is being delivered to the users. The candidate midpoint set contained over 1 million potential midpoints and the solution graph contained over 2.5 million edges. Using the developed modeling technique, all viable potential configurations were evaluated in around 44 hours.

Figure 12: Resulting length optimized layout (left) and energy loss optimized layout (right).

\begin{table}
\begin{tabular}{l c c} \hline \hline Layout & Length [m] & Enthalpy [W] \\ \hline Length-Minimized & \(1.26\times 10^{3}\) & \(1.63\times 10^{5}\) \\ Loss-Minimized & \(1.60\times 10^{3}\) & \(1.39\times 10^{5}\) \\ \hline \hline \end{tabular}
\end{table} Table 4: Results of the energy loss minimization problem.

## 6 Conclusion

This paper presents a novel set of algorithms for generating the state-space model of a branching multi-user DHN. The proposed model equations are validated using data collected from a lab-scale DHN considering a two-user layout. The calibrated model accurately captures both the fast and slow temperature dynamics observed in the network, with an nRMSE of 0.15 in the return temperature. Additionally, the ability to automatically generate models of new network configurations is demonstrated through its application to an optimal design problem considering energy losses during steady-state peak network operation. The optimal design problem is formulated as a dynamic optimum branching problem and solved via a branch and bound algorithm. Over 2.5 million configurations are considered, and the resulting design reduces heat losses by 15% as compared to the length-minimized layout.

Future work will use this modeling framework to develop advanced model predictive controllers for large-scale DHNs. The flexibility of the modeling technique with respect to new topology configurations will allow it to be used to rapidly generate models of a partitioned large-scale DHN for distributed control, and will allow the plant model to easily adapt to any topology changes due to added users or network faults. Furthermore, the dynamic optimum branching problem formulation and solution technique developed here can be adapted to consider the optimal placement of distributed renewable energy sources, a key decision when integrating low temperature heat sources in a DHN.

## Funding Data

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1343012.
## Nomenclature

### Variables

* \(T\) = Temperature
* \(t\) = Time
* \(\dot{m}\) = Mass flow rate
* \(\rho\) = Density
* \(V\) = Volume
* \(c_{p}\) = Constant pressure specific heat capacity
* \(\dot{Q}\) = Rate of heat transfer
* \(h\) = Conductive heat transfer coefficient
* \(A_{s}\) = Surface area
* \(c\) = Temperature equation coefficient
* \(\Delta P\) = Pressure drop
* \(A_{c}\) = Cross-sectional area
* \(k\) = Concentrated pressure loss coefficient
* \(\lambda\) = Distributed pressure loss coefficient
* \(\zeta\) = Total pressure loss coefficient
* \(L\) = Length
* \(D\) = Diameter
* \(a\), \(b\) = Vector components of the \(A\) matrix
* \(e\) = Vector component of the \(E\) matrix
* \(n\) = Number of elements
* \(\mathcal{G}\) = Graph
* \(\mathcal{V}\) = Set of nodes
* \(\mathcal{E}\) = Set of edges
* \(v\) = Single node of graph
* \(\varepsilon\) = Single edge of graph
* \(\alpha\) = Percent mass flow rate split
* \(C\) = Cost
* \(Pr\) = Prize set
* \(LBC\) = Lower bound cost
* \(TC\) = True cost

### Subscripts

* \(in\) = Inlet
* \(amb\) = Ambient
* \(b\) = Building
* \(0\) = Supply
* \(S1,S2,S3\) = User supply segments
* \(F\) = Feeding segment
* \(R\) = Return segment
* \(U\) = User without bypass
* \(B\) = User with bypass
* \(u\) = Users
* \(s\) = Split nodes
* \(p\) = Pipes
* \(leaf\) = Leaf nodes

### Superscripts

* \(-\) = Predecessor
* \(+\) = Successor
* \(\{i\}\) = Element number
2309.15869
Unsupervised Pre-Training for Vietnamese Automatic Speech Recognition in the HYKIST Project
In today's interconnected globe, moving abroad is more and more prevalent, whether it's for employment, refugee resettlement, or other causes. Language difficulties between natives and immigrants present a common issue on a daily basis, especially in the medical domain. This can make it difficult for patients and doctors to communicate during anamnesis or in the emergency room, which compromises patient care. The goal of the HYKIST Project is to develop a speech translation system to support patient-doctor communication with ASR and MT. ASR systems have recently displayed astounding performance on particular tasks for which sufficient quantities of training data are available, such as LibriSpeech. Building a good model is still difficult due to a variety of speaking styles, acoustic and recording settings, and a lack of in-domain training data. In this thesis, we describe our efforts to construct ASR systems for a conversational telephone speech recognition task in the medical domain for the Vietnamese language to assist emergency room contact between doctors and patients across linguistic barriers. In order to enhance the system's performance, we investigate various training schedules and data combining strategies. We also examine how best to make use of the little data that is available. The use of publicly accessible models like XLSR-53 is compared to the use of customized pre-trained models, and both supervised and unsupervised approaches are utilized using wav2vec 2.0 as the architecture.
Khai Le-Duc
2023-09-26T21:12:09Z
http://arxiv.org/abs/2309.15869v1
# FH Aachen ###### Abstract We present a novel method for computing the Fachbereich Medizintechnik and Technomathematik. We show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. 
We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizintechnik is a good approximation of the Fachbereich Medizintechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnechnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnechnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbereich Medizinnik is a good approximation of the Fachbereich Medizinnik. We also show that the Fachbere Medizinnik is a good approximation of the Fachbereich Medizinnik. 
**Declaration of Authorship** I hereby declare that I have written this Bachelor's thesis independently, without inadmissible help from third parties and without the use of aids other than those stated. In particular, data and concepts taken directly or indirectly from other sources are marked with a reference to the source. I am aware that my work may be checked for unmarked use of third-party intellectual property by means of plagiarism detection software. Jülich, Le Duc Khai

###### Contents

* 1 Abstract
* 2 Introduction
  * 2.1 HYKIST Project
  * 2.2 Motivation
  * 2.3 Related work
* 3 Theory
  * 3.1 Hybrid ASR framework
    * 3.1.1 Bayes theorem
    * 3.1.2 Audio features
    * 3.1.3 Acoustic modeling
    * 3.1.4 Language modeling
    * 3.1.5 Decoding
    * 3.1.6 Recognition Performance
  * 3.2 Neural network
    * 3.2.1 Multilayer perceptron
    * 3.2.2 Training a neural network
    * 3.2.3 Parameter tuning
    * 3.2.4 Convolutional Neural Network
    * 3.2.5 Recurrent Neural Network
    * 3.2.6 Bidirectional Long Short-Term Memory
    * 3.2.7 Transformer
  * 3.3 Semi-supervised learning
    * 3.3.1 Wav2vec 2.0
    * 3.3.2 Cross-lingual speech representation
    * 3.3.3 In-domain Match Level and Diversity Level
* 4 Experiments
  * 4.1 Data
    * 4.1.1 HYKIST data
    * 4.1.2 In-house data
    * 4.1.3 YouTube
    * 4.1.4 CommonVoice Vietnamese
    * 4.1.5 VIVOS
    * 4.1.6 Monolingual text data
    * 4.1.7 Domain
  * 4.2 Lexicon and language model
    * 4.2.1 Lexicon
    * 4.2.2 Language model
  * 4.3 Acoustic model
    * 4.3.1 Supervised-only models
    * 4.3.2 Models using unsupervised pre-training
    * 4.3.3 Data augmentation
    * 4.3.4 Intermediate loss
    * 4.3.5 L2 regularization
    * 4.3.6 On-off Regularization
* 5 Experimental results
  * 5.1 Supervised baselines
  * 5.2 Unsupervised Pre-training
    * 5.2.1 Monolingual pre-training
    * 5.2.2 Multilingual pre-training
    * 5.2.3 _XLSR-53_ as pre-training initialization
    * 5.2.4 Comparison to supervised baselines
  * 5.3 Encoder and initialization comparison
    * 5.3.1 Encoder comparison
    * 5.3.2 Initialization comparison
  * 5.4 Effectiveness of intermediate loss
    * 5.4.1 Effectiveness of Intermediate Cross-Entropy Loss
    * 5.4.2 Effectiveness of Intermediate Focal Loss
  * 5.5 Intermediate loss analysis
    * 5.5.1 Studies on Intermediate Focal Loss design
    * 5.5.2 On-off Regularization technique
    * 5.5.3 Combination of L2 regularization and Intermediate Focal Loss
* 6 Conclusion
  * 6.1 Overall results
  * 6.2 Future work
* A Bibliography
* B List of Abbreviations and Glossaries
* List of Figures
* List of Tables

## 1 Abstract

In today's interconnected world, moving abroad is more and more prevalent, whether it is for
employment, refugee resettlement, or other causes. Language difficulties between natives and immigrants present a common issue on a daily basis, especially in the medical domain. This can make it difficult for patients and doctors to communicate during anamnesis or in the emergency room, which compromises patient care. The goal of the HYKIST Project is to develop a speech translation system to support patient-doctor communication with _ASR_ and _MT_. _ASR_ systems have recently displayed astounding performance on particular tasks for which sufficient quantities of training data are available, such as LibriSpeech [53]. Building a good model is still difficult due to the variety of speaking styles, acoustic and recording settings, and a lack of in-domain training data. In this thesis, we describe our efforts to construct _ASR_ systems for a conversational telephone speech recognition task in the medical domain for the Vietnamese language, to assist emergency room contact between doctors and patients across linguistic barriers. In order to enhance the system's performance, we investigate various training schedules and data combining strategies. We also examine how best to make use of the little data that is available. The use of publicly accessible models like _XLSR-53_ [14] is compared to the use of customized pre-trained models, and both supervised and unsupervised approaches are utilized, with _wav2vec 2.0_ [6] as the architecture.

## Chapter 2 Introduction

### 2.1 HYKIST Project

Migration to foreign countries is becoming more common in our globally connected world, whether for work, refugee movements, or other reasons. As a result, language barriers between locals and foreigners are a common daily issue. It is commonly known that speaking with patients when they arrive at the hospital is crucial to their care. In medical care, a lack of or incorrect communication leads to underuse and misuse of medical services, lower quality of care, an increased rate of treatment errors, ineffective preventive measures for patients, and medical staff dissatisfaction. During anamnesis, the doctors inquire about the patient's problems as well as his or her medical history. However, there are currently 20.8 million immigrants in Germany, with up to 30% having only basic German language skills1. If doctors and patients do not speak the same language, the communication of information is severely constrained, which has a negative impact on the patients' care. In the event that no common language is available, doctors can contact Triaphon, which provides bilingual interpreters to aid communication between the patient and the doctor.

Footnote 1: [https://www.apptek.com/news/germanys-federal-ministy-of-health-awards-hykist-project-to-apptek-to-equip-critical-care-with-artificial-intelligence-driven-automatic-speech-translation-technology](https://www.apptek.com/news/germanys-federal-ministy-of-health-awards-hykist-project-to-apptek-to-equip-critical-care-with-artificial-intelligence-driven-automatic-speech-translation-technology)

In the HYKIST scenario, the doctor speaks German to the patient, who speaks only Arabic or Vietnamese. Meanwhile, German and Arabic, or German and Vietnamese, are the languages spoken by the interpreters. The interpreters are not professional translators; instead, they are volunteers who contribute their time.
This is problematic because the interpreters may require time to look up unfamiliar words, such as medical termini, or they may make a mistake. The ultimate goal of the HYKIST project is to facilitate doctor-patient communication in a growing number of languages with the help of _ASR_ and _MT_, in order to meet the robust requirements of the medical domain, via the following steps: The interpreter is summoned via the hospital phone, which has an audio sampling rate of 8 kHz. We then create manual annotations with the help of our native-speaker volunteers. We investigate the use of additional out-of-domain data for training as well as unsupervised methods, because gathering project-specific data is an expensive and time-consuming operation. _ASR_ and _MT_ technologies are linked with a dialogue system for the initial anamnesis and integrated into an existing telecommunications platform for this purpose. First and foremost, the project collects dialogues in Arabic, Vietnamese, and German, which serve as the foundation for the development of algorithms and applications. During the project, the first technical tests for the accuracy and quality of the automated translations are already being performed. Following that, the overall system must be tested in a pilot test with clinical application partners for the area of emergency admissions and initial anamnesis in acute situations, as well as evaluated in a final clinical study for user acceptance. The partners in the HYKIST Project are Triaphon2, Fraunhofer FOKUS3 and AppTek GmbH4.

Footnote 2: [https://triaphon.org/](https://triaphon.org/)

Footnote 3: [https://www.fokus.fraunhofer.de/en](https://www.fokus.fraunhofer.de/en)

Footnote 4: [https://www.apptek.com](https://www.apptek.com)

### 2.2 Motivation

Large amounts of labeled training data benefit neural networks. However, in many settings labeled data is much more difficult to obtain than unlabeled data: current speech recognition systems require thousands of hours of transcribed speech to achieve acceptable performance, which is not available for the vast majority of the nearly 7,000 languages spoken globally [42]. Learning solely from labeled examples is not comparable to human language acquisition: infants learn language by listening to the adults around them, a process that necessitates the acquisition of good representations of speech. Therefore, semi-supervised learning aims to work like the natural language acquisition of humans. Unsupervised and semi-supervised methods have been shown to be successful in _ASR_ in recent years. _wav2vec 2.0_ [6], in particular, has demonstrated excellent performance. _wav2vec 2.0_ is pre-trained using an unsupervised loss before being fine-tuned on labeled data. The goal of the paper is to offer a framework for self-supervised learning of representations from raw audio data. This framework opens the door for speech recognition models to be used for a low-resource language like Vietnamese in the medical domain, where previously much more transcribed audio data was required to reach acceptable accuracy. After pre-training on unlabeled speech, the model is fine-tuned on labeled data in a hybrid framework [46]. In the HYKIST Project, we want to utilize the _wav2vec 2.0_ model. One interesting aspect of _wav2vec 2.0_ is that the unsupervised pre-training is well suited for exploiting unlabeled multilingual data, so that supervised training on a target language benefits from multilingual speech representations.
In [14], the authors focused on learning representations from unlabeled data that generalize across languages in a multilingual scenario. They built on the _wav2vec 2.0_ pretraining technique, in which a discrete vocabulary of _Latent Speech Representations_ is learned alongside contextualized speech representations. We can utilize their public model _XLSR-53_ because it was pretrained without supervision on 8 languages from Multilingual LibriSpeech [57], 17 languages from the BABEL benchmark [18], which is conversational telephone data and includes Vietnamese, as well as 36 languages from CommonVoice [3], a corpus of read speech. With the exception of resource-rich languages, multilingual pretraining surpassed monolingual pretraining in most circumstances.

### 2.3 Related work

Having been an established and effective method for _ASR_, hybrid modeling has made steady progress in recent years and outperformed the _End-to-End (E2E)_ approach in most _ASR_ situations [46]. Besides, the recent introduction of novel neural encoders has been reported to significantly improve performance [76, 23, 73]. Other methods can be used to achieve even greater improvements, like feature combination [71] or additional losses in the intermediate layers [68]. Furthermore, unsupervised approaches have grown in popularity due to their potential for high performance with little annotated data [48]. Semi-supervised learning was applied to _ASR_ tasks in [34, 62, 6] by running unsupervised pre-training on a large unlabeled dataset, followed by fine-tuning on a small annotated dataset. This technique can significantly reduce the amount of labeled data required to build _ASR_ systems. These successes sparked additional research into improving the modeling approach [29, 64] and analyzing which individual components contribute most to the performance [55]. The data used for pre-training and fine-tuning has been deeply investigated as well, for example in a domain-shift scenario [30] for the English language, or using multilingual data for the sake of improvements on monolingual benchmarks [14]. Because the contrastive loss is computed solely on the input speech audio and does not require labels, it is especially simple to use for monolingual or multilingual data; therefore, a number of papers have begun to apply this loss in _ASR_ research [14, 72, 77, 7]. Previously, supervised training with multilingual data could already improve low-resource languages by using a separate output layer for each language [69]. There has also been research specifically addressing medical domain tasks. However, a common problem for medical _ASR_ faced by researchers is difficult acoustic conditions and a lack of transcribed medical audio data [17, 13, 33]; another likely difficulty is the medical terminology. In [60], a multilingual system for the medical domain is presented. Another method for dealing with the medical domain is to correct _ASR_ errors at the output level [47]. To the best of our knowledge, unsupervised pretraining methods have mostly been investigated on well-known academic datasets, with no work on applying them to difficult low-resource medical tasks. Furthermore, no previous work has been published that investigates the use of unsupervised pretraining methods for telephone speech directly on the 8kHz signal without resampling. Besides, an analysis of different pretraining data combinations and regularization for a medical _ASR_ system has never been presented.
## Chapter 3 Theory

### 3.1 Hybrid ASR framework

#### Bayes theorem

Given a sequence of acoustic observations \(x_{1}^{T}\) of length \(T\), the most likely word sequence to be recognized is \(w_{1}^{N}\). A variety of subword units, such as phonemes, and the acoustic representation of the audio signal are connected through acoustic models. In terms of probabilities, the most likely word sequence \(w^{*}\) given the acoustic sequence is described as:

\[w^{*}=\arg\max_{w_{1}^{N}}\,p(w_{1}^{N}|x_{1}^{T}) \tag{3.1}\]

As stated in the introduction, conventional _ASR_ systems typically consist of a number of modules, including dictionaries, language models, and acoustic models. By decomposing the posterior probability with Bayes' theorem, it is possible to show the connections between them. For the maximization, the probability \(p(x_{1}^{T})\) can be ignored because it acts only as a normalization and has no bearing on the outcome.

\[p(w_{1}^{N}|x_{1}^{T})=\frac{p(x_{1}^{T}|w_{1}^{N})p(w_{1}^{N})}{p(x_{1}^{T})}\propto p(x_{1}^{T}|w_{1}^{N})p(w_{1}^{N}) \tag{3.2}\]

\[w^{*}=\arg\max_{w_{1}^{N}}\underbrace{p(x_{1}^{T}|w_{1}^{N})}_{\text{acoustic model}}\cdot\underbrace{p(w_{1}^{N})}_{\text{language model}} \tag{3.3}\]

#### Audio features

The classification model uses features, which are representations extracted from the audio samples, as input. There are many kinds of features, and they all capture the spoken audio's frequency information. Due to the high resolution in the time domain, statistical models would have to learn rather long-term dependencies within the raw input data, which is often quite challenging and computationally expensive. As a result, we leverage acoustic features to simplify the signal while preserving the most crucial statistics.

**Mel-frequency cepstral coefficient (MFCC)**: The windowing of the signal, application of the _Discrete Fourier Transform (DFT)_, taking the log of the magnitude, warping of the frequencies on a Mel scale, and application of the _Discrete Cosine Transform (DCT)_ are the main steps in the _MFCC_ feature extraction technique. Below is a short explanation [58] of each stage of the _MFCC_ feature extraction process:

1. Pre-emphasis: Filtering that highlights the higher frequencies. Its function is to balance the spectrum of spoken sounds, which roll off sharply at high frequencies.
2. Frame blocking and windowing: Stable acoustic features require speech analysis over a sufficiently short time span. The analysis must therefore always be performed on short segments within which the speech signal is assumed to be stationary.
3. _DFT_ spectrum: Each windowed frame is converted into a magnitude spectrum by applying the _DFT_.
4. Mel spectrum: The Fourier-transformed signal is passed through the Mel filterbank, a collection of band-pass filters, to compute the Mel spectrum. A Mel is a unit of measurement based on the frequency perceived by human ears.
5. _Discrete Cosine Transform (DCT)_: Because the vocal tract is smooth, the energy levels of adjacent bands tend to correlate. Applying the _DCT_ to the Mel frequency coefficients produces a set of cepstral coefficients.
6. Dynamic MFCC features: Since the cepstral coefficients only include data from a single frame, they are frequently referred to as static features. Additional information on the temporal dynamics of the signal is gained by computing the first and second derivatives of the cepstral coefficients.
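To make the pipeline above concrete, below is a minimal sketch of MFCC extraction with dynamic features. The use of librosa and the file name are our own illustrative choices (the thesis does not prescribe a feature-extraction toolkit); the 25 ms window and 10 ms shift follow the common convention, here at the 8 kHz telephone sampling rate.

```python
import librosa
import numpy as np

# Load and resample a hypothetical telephone recording to 8 kHz.
y, sr = librosa.load("utterance.wav", sr=8000)

# Step 1: pre-emphasis to boost the high frequencies.
y = np.append(y[0], y[1:] - 0.97 * y[:-1])

# Steps 2-5: framing/windowing, DFT, Mel filterbank and DCT are handled
# internally; 200 samples = 25 ms window, 80 samples = 10 ms shift at 8 kHz.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=16, n_fft=200, hop_length=80)

# Step 6: dynamic features as first and second derivatives.
delta = librosa.feature.delta(mfcc)
delta2 = librosa.feature.delta(mfcc, order=2)
features = np.concatenate([mfcc, delta, delta2], axis=0)  # (48, num_frames)
```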
**Gammatone features**: The Gammatone filter [1], which is intended to mimic the human auditory filter, is the foundation for Gammatone features. They were initially presented for large-vocabulary _ASR_ in [61]. A filterbank of Gammatone filters with center frequencies sampled from the Greenwood function [22] is applied after pre-emphasizing the speech signal. Below is a summary of each stage of the Gammatone feature extraction process:

1. Typically, a Hanning window of 25 ms width with 10 ms shift is used to perform the temporal integration of the absolute values of the filter outputs.
2. A spectral integration with a 9-channel window and a 4-channel shift follows.
3. A (10th root or log) compression is performed, followed by cepstral decorrelation, resulting in 16 cepstral coefficients.
4. Following the 10th root compression, a discrete cosine transform (DCT)-based cepstral decorrelation and normalization methods are applied.

**Extracted features from raw waveform**: Features can also be extracted from the raw waveform by a _CNN_ feature encoder. First, the feature encoder's raw waveform input is normalized to zero mean and unit variance. The feature encoder contains seven blocks; the temporal convolutions in each block have 512 channels with strides (5,2,2,2,2,2,2) and kernel widths (10,3,3,3,3,2,2). Besides, layer normalization [5] and the _GELU_ activation function [26] are also applied. This results in an encoder output frequency of 49 Hz with a stride of about 20 ms between samples, and a receptive field of 400 input samples, or 25 ms of audio. The convolutional layer modeling relative positional embeddings has kernel size 128 and 16 groups.

#### Acoustic modeling

When modeling the probability \(p(x_{1}^{T}|w_{1}^{N})\), the lengths of the time sequence \(T\) and the word sequence \(N\) are often not the same, as \(N\) is usually much smaller than \(T\). The alignment between the acoustic observations \(x_{1}^{T}\) and the labels \(w_{1}^{N}\) is unknown and commonly even unclear. The _Hidden Markov Model (HMM)_ is a statistical model that introduces a latent alignment through states \(s_{1}^{T}\) and subsequently models the probability of \(x_{1}^{T}\) for a given alignment to \(w_{1}^{N}\) [8]. The probability \(p(x_{1}^{T}|w_{1}^{N})\) is then calculated by summing over all possible alignments between the acoustic observations and the labels. Assuming conditional independence of the observations given the states, and that states depend only on their predecessor, this sum results in the equation below:

\[p(x_{1}^{T}|w_{1}^{N})=\sum_{[s_{1}^{T}]}\prod_{t=1}^{T}p(x_{t},s_{t}|s_{t-1},w_{1}^{N})=\sum_{[s_{1}^{T}]}\prod_{t=1}^{T}\underbrace{p(s_{t}|s_{t-1},w_{1}^{N})}_{\text{transition prob.}}\cdot\underbrace{p(x_{t}|s_{t},s_{t-1},w_{1}^{N})}_{\text{emission prob.}} \tag{3.4}\]

A widely accepted simplification is to assume that the emission probability is independent of the previous state, such that:

\[p(x_{t}|s_{t},s_{t-1},w_{1}^{N})=p(x_{t}|s_{t},w_{1}^{N}) \tag{3.5}\]

The transition model calculates the probabilities of moving from one state to the next. The emission probability models the probability of an acoustic observation based on the current and previous states; with the simplification above, it depends only on the current state. The transition model can have several topologies, but the _0-1-2_ topology is the most commonly used.
The _0-1-2_ topology is state-independent and has three transition probabilities: staying in the current state, jumping to the next state, or jumping to the second-next state. By jumping faster or slower in time, this allows the alignment between labels and acoustic observations to adapt. 

**Context-Dependent Phones**: Because a language's vocabulary is typically very large, modeling words directly in the classification is impractical. Phonemes, on the other hand, are frequently used for subword modeling. The acoustic articulation of a phoneme is determined by its surroundings, for example at the beginning, middle and ending parts; for better learning, multiple phonemes are therefore combined to create triphone or allophone labels. 

_Classification and Regression Tree (CART)_ [10]: However, because the number of possible triphones is cubic in the number of phonemes, this yields a very large set of labels, and the number of possible triphones is much greater than the number of observed ones. Therefore, some triphones share the same _GMM_ model. A _CART_ is a decision tree used to cluster triphones that can share the same _GMM_ model: to reduce the number of labels, allophones are clustered using a _CART_, and the resulting clusters are used as labels. 

**Baum-Welch algorithm**: In practical training of an _HMM_, inferring the parameters of the _HMM_ is not simple and cannot be done manually. Instead, an automated data-driven approach based on the _Expectation-Maximization (EM)_ algorithm is used, with a dataset of acoustic observations and their transcriptions. Because the best alignment between acoustic observations and transcriptions is not available initially, the _EM_ algorithm starts from a sub-optimal linear alignment. The observation model and the alignment are then iteratively optimized using the steps below:

1. Maximization: Estimate the model parameters using the previously obtained alignment by maximizing the log-likelihood function.
2. Expectation: Using the parameters from step 1, estimate a new alignment.
3. Return to step 1 until the model converges.

_GMM/HMM_: The _HMM_ can be used to model the transitions between phones and the corresponding observables. A widely used approach is to model the emission probability for each label with a parametrized _GMM_, resulting in the _GMM/HMM_ method. The _GMM_ is a weighted sum over \(K\) normal distributions,

\[p(x_{t}|s_{t},s_{t-1},w_{1}^{N})=\sum_{i=1}^{K}c_{i}\cdot\mathcal{N}(x_{t}|\mu_{i},\sigma_{i}^{2}), \tag{3.6}\]

resulting in a multimodal emission probability with parameters \(\mu_{i},\sigma_{i}\) and mixture weights \(c_{i}\) for \(i\in\llbracket 1,K\rrbracket\). The mixture weights are non-negative and sum to unity. Using the simplification in Equation 3.5, the state \(s_{t-1}\) can additionally be dropped. 

_DNN/HMM_: Another popular approach is to model the posterior probability \(p(a_{s_{t}}|x_{1}^{T})\) discriminatively. Usually a _Deep Neural Network (DNN)_ is leveraged for this purpose, resulting in the _DNN/HMM_ approach. The purpose of the _GMM/HMM_ system is then to generate alignments for the training of the _DNN/HMM_ system [46]. The emission probability in the _HMM_ can afterwards be calculated by applying Bayes' rule:

\[p(x_{1}^{T}|a_{s_{t}})=\frac{p(a_{s_{t}}|x_{1}^{T})p(x_{1}^{T})}{p(a_{s_{t}})}. \tag{3.7}\]

The prior probability \(p(a_{s_{t}})\) can be estimated as the relative frequency of \(a_{s_{t}}\) in the training alignment, and the probability \(p(x_{1}^{T})\) is constant with respect to the maximization in the Bayes decision rule and can therefore be dropped.
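As an illustration of Eq. (3.7), the sketch below turns frame-wise DNN posteriors into the scaled log-likelihoods used as _HMM_ emission scores, with the prior estimated as the relative frequency of each _CART_ label in the training alignment. The prior-scale exponent and the dummy arrays are our own illustrative additions, not values from this work.

```python
import numpy as np

def scaled_log_likelihood(log_posteriors, state_counts, prior_scale=0.7):
    """log_posteriors: (T, S) frame-wise log p(a_s | x_t) from the network.
    state_counts: (S,) label counts from the alignment, giving p(a_s)."""
    log_prior = np.log(state_counts) - np.log(state_counts.sum())
    # log p(x_t | a_s) = log p(a_s | x_t) - prior_scale * log p(a_s) + const.
    return log_posteriors - prior_scale * log_prior

rng = np.random.default_rng(0)
T, S = 100, 4501                                       # frames, CART labels
log_post = np.log(rng.dirichlet(np.ones(S), size=T))   # dummy posteriors
counts = rng.integers(1, 1000, size=S)
emissions = scaled_log_likelihood(log_post, counts)    # shape (100, 4501)
```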
#### Language modeling

In the hybrid system, we use a 4-gram count-based _Language model (LM)_ with the Kneser-Ney smoothing algorithm [37]. The _LM_s employed all use full words in the first-pass decoding [9]; in other words, no lattice rescoring is performed in a second pass. To deal with multiple monolingual text corpora, the first step is to create an _LM_ for each monolingual text corpus. Following that, we use a weighting process to combine the _LM_s into a single _LM_, yielding one _LM_ for the Vietnamese language.

#### Decoding

In order to recognize speech given the acoustic observations, the _AM_ and _LM_ need to be combined following the Bayes decision rule, resulting in:

\[w_{1}^{N}=\arg\max_{N,w_{1}^{N}}\Big(\prod_{n=1}^{N}p(w_{n}|w_{n-m}^{n-1})\cdot\sum_{[s_{1}^{T}]}\prod_{t=1}^{T}p(x_{t},s_{t}|s_{t-1},w_{1}^{N})\Big) \tag{3.8}\]

With dynamic programming, this maximization can be solved by the Viterbi algorithm, which recursively computes the maximum-probability path in \(O(k^{2}T)\), where \(k\) and \(T\) are the vocabulary size and the sequence length, respectively. The Viterbi approximation can be applied as

\[w_{1}^{N}=\arg\max_{N,w_{1}^{N}}\Big(\prod_{n=1}^{N}p(w_{n}|w_{n-m}^{n-1})\cdot\max_{[s_{1}^{T}]}\prod_{t=1}^{T}p(x_{t},s_{t}|s_{t-1},w_{1}^{N})\Big), \tag{3.9}\]

so that the optimization reduces to a best-path problem in the alignment graph of all possible predicted words against the acoustic observations. Besides, beam search (_AM_ and _LM_ pruning) is used during the search, which focuses only on the most promising word hypotheses at each time step [51].

#### Recognition Performance

The _Word-error-rate (WER)_ is a widely used indicator of how well an _ASR_ system is performing. It measures the percentage of words that were incorrectly predicted; the lower the value, the better the _ASR_ system performs, and a _WER_ of 0 is a perfect result. The _WER_ can be calculated as:

\[\text{WER}=\frac{\text{Substitutions}+\text{Insertions}+\text{Deletions}}{\text{Reference words}} \tag{3.10}\]
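A minimal sketch of the WER of Eq. (3.10): the numerator is the Levenshtein (edit) distance between the reference and hypothesis word sequences, whose optimal alignment determines the substitution, insertion and deletion counts.

```python
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: minimal number of edits turning ref[:i] into hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in five reference words: WER = 0.2.
print(word_error_rate("the patient has a fever", "the patient has high fever"))
```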
### 3.2 Neural network

A neural network is a set of algorithms that attempts to recognize underlying relationships in a set of data using a process that mimics how the human brain works. A neural network contains layers of interconnected nodes; each node is known as a perceptron.

#### Multilayer perceptron

By adding one or more hidden layers, we can get around the drawbacks of linear models. The simplest way to accomplish this is to stack many fully connected layers on top of one another, each layer feeding into the layer above it, until we produce outputs. The first layers serve as our representation, and the top layer serves as our linear predictor. This design is frequently referred to as a _Multilayer Perceptron (MLP)_.

Figure 3.1: An MLP with a hidden layer of 5 hidden units [75]

The _MLP_ in Figure 3.1 has 4 inputs, 3 outputs, and 5 hidden units in its hidden layer. Because the input layer does not require any computations, producing outputs with this network necessitates computations for both the hidden and output layers; thus, the number of layers in this _MLP_ is 2. Both layers are fully connected: every input influences every neuron in the hidden layer, and every neuron in the hidden layer influences every neuron in the output layer. We denote by the matrix \(X\in R^{n\times d}\) a minibatch of \(n\) examples where each example has \(d\) inputs (features). For a one-hidden-layer _MLP_ whose hidden layer has \(h\) hidden units, we denote by \(H\in R^{n\times h}\) the outputs of the hidden layer, which are the hidden representations. Since the hidden and output layers are both fully connected, we have hidden-layer weights \(W^{(1)}\in R^{d\times h}\) and biases \(b^{(1)}\in R^{1\times h}\), and output-layer weights \(W^{(2)}\in R^{h\times q}\) and biases \(b^{(2)}\in R^{1\times q}\). This allows us to calculate the outputs of the one-hidden-layer MLP as follows:

\[\begin{split} H&=XW^{(1)}+b^{(1)}\\ O&=HW^{(2)}+b^{(2)}\end{split} \tag{3.11}\]

To fully realize the potential of multilayer architectures, one more key component is required: a nonlinear activation function applied to each hidden unit after the affine transformation. For instance, a popular choice is the ReLU (Rectified Linear Unit) activation function [49], \(\sigma(x)=\max(0,x)\), operating on its arguments element-wise. The outputs of activation functions are called activations. In general, with activation functions in place, our _MLP_ can no longer be collapsed into a linear model:

\[\begin{split} H&=\sigma(XW^{(1)}+b^{(1)})\\ O&=HW^{(2)}+b^{(2)}\end{split} \tag{3.12}\]
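Eqs. (3.11) and (3.12) translate directly into a few lines of numpy; the input, hidden and output sizes below are those of the network in Figure 3.1, plus an illustrative batch size.

```python
import numpy as np

n, d, h, q = 32, 4, 5, 3              # batch size, inputs, hidden units, outputs
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
W1, b1 = rng.normal(size=(d, h)), np.zeros((1, h))
W2, b2 = rng.normal(size=(h, q)), np.zeros((1, q))

H = np.maximum(0.0, X @ W1 + b1)      # hidden layer with ReLU, Eq. (3.12)
O = H @ W2 + b2                       # linear output layer
print(O.shape)                        # (32, 3)
```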
#### Training a neural network

**Epoch**: one iteration in which the model sees the whole training set to update its weights.

**Mini-batch gradient descent**: during training, weight updates are usually not based on the whole training set at once, due to computational complexity, nor on a single data point, due to noise issues. Instead, the update step is done on mini-batches, where the number of data points in a batch is a hyperparameter (the batch size) that we can tune.

**Loss function**: to quantify how a given model performs, the loss function \(L\) is used to evaluate to what extent the actual outputs \(y\) are correctly predicted by the model outputs \(z\).

**Cross-entropy loss**: in the context of binary classification in neural networks, the cross-entropy loss \(L(z,y)\) is commonly used and is defined as follows:

\[L(z,y)=-[y\log(z)+(1-y)\log(1-z)] \tag{3.13}\]

**Forward propagation**: the calculation and storage of intermediate variables (including outputs) for a neural network, from the input layer to the output layer (also called the forward pass).

**Backpropagation**: the method of calculating the gradient of the neural network parameters. In short, the method traverses the network in reverse order, from the output to the input layer, using the chain rule of calculus. While calculating the gradient with respect to some parameters, the algorithm stores any intermediate variables (partial derivatives).

**Updating weights**: in a neural network, weights are updated as follows (Figure 3.2):

1. Take a batch of training data and perform forward propagation to compute the loss.
2. Backpropagate the loss to get the gradient of the loss with respect to each weight.
3. Use the gradients to update the weights of the network.

Figure 3.2: Updating weights in a neural network [2]

#### Parameter tuning

**Weights initialization**: Xavier initialization [21]: rather than simply randomizing the weights, Xavier initialization allows for initial weights that take into account characteristics that are unique to the architecture. Weights and inputs are centered at zero, while biases are initialized as zeros.

**Transfer learning**: it is frequently useful to leverage pre-trained weights from massive datasets that took days or weeks to train and apply them to our use case. Figure 3.3 shows some options for leveraging data, depending on how much we have.

Figure 3.3: Transfer learning strategy [2]

**Optimizing convergence**:

Learning rate: indicates how quickly the weights are updated. It can be fixed or changed adaptively. The most popular method at the moment is Adam [36], which adapts the learning rate.

Adaptive learning rates: allowing the learning rate to vary when training a model can reduce training time while also improving the numerical solution. While the Adam optimizer is the most commonly used technique, the methods in Figure 3.4 are also useful.

Figure 3.4: Adaptive learning rates methods [2]

**Regularization**:

Dropout [65]: avoids overfitting the training data by removing neurons with probability \(p>0\). It forces the model to avoid relying too heavily on specific sets of features.

Weight regularization: regularization techniques are typically applied to the model weights to ensure that the weights do not grow too large and that the model does not overfit the training set.

Early stopping: halts training as soon as the validation loss reaches a plateau or begins to rise.

SpecAugment [54]: rather than augmenting the input audio waveform, SpecAugment applies an augmentation policy directly to the audio spectrogram (i.e., an image representation of the waveform). The spectrogram is altered by warping it in time, masking blocks of consecutive frequency channels, and masking blocks of utterances in time. These augmentations help the network to be robust against deformations in the time direction, partial loss of frequency information, and partial loss of small segments of speech in the input.

#### Convolutional Neural Network

The architecture of a traditional _Convolutional Neural Network (CNN)_ is generally composed of the following layers:

Convolution layer (CONV): employs filters that perform convolution operations while scanning the input \(I\) along its dimensions. The filter size \(F\) and stride \(S\) are two of its hyperparameters. The resulting output \(O\) is referred to as a feature map or activation map.

Pooling layer (POOL): a downsampling operation applied after a convolution layer to achieve spatial invariance. Max and average pooling, in particular, are types of pooling that take the maximum and average value, respectively.

Fully connected layer (FC): works on a flattened input, with each input connected to all neurons. FC layers, when present, are typically found near the end of _CNN_ architectures and can be used to optimize objectives such as class scores.

#### Recurrent Neural Network

A _Recurrent Neural Network (RNN)_ is a deep learning model that captures the dynamics of sequences through recurrent connections, which can be viewed as cycles in the network of nodes. _RNN_s are unrolled across time steps (or sequence steps), with the same underlying parameters applied at each step. While standard connections are applied synchronously to propagate each layer's activations to the next layer at the same time step, recurrent connections are dynamic, passing information across adjacent time steps.
As illustrated in Figure 3.5, _RNN_s can be seen as feedforward neural networks in which each layer's parameters (both conventional and recurrent) are shared across time steps.

Figure 3.5: Recurrent connections are depicted on the left as cyclic edges. The RNN is unfolded over time steps on the right. Recurrent edges are computed synchronously, while conventional connections span adjacent time steps. [75]

#### Bidirectional Long Short-Term Memory

The most popular designs include mechanisms to mitigate _RNN_s' infamous numerical instability, as exemplified by vanishing and exploding gradients. We present the key concepts underlying the most successful _RNN_ architectures for sequences, which are based on two papers published in 1997. _Long Short-Term Memory (LSTM)_ [27] is the first paper to introduce the memory cell, a unit of computation that replaces traditional nodes in a network's hidden layer. With these memory cells, networks can overcome training difficulties encountered by previous recurrent networks. To avoid the vanishing gradient problem, the memory cell keeps values in each cell's internal state cascading along a recurrent edge with weight 1 across many successive time steps. A set of multiplicative gates helps the network determine which inputs to allow into the memory state and when the memory state's content should influence the model's output. Given the memory cell \(c_{t}\), input gate \(i_{t}\), forget gate \(f_{t}\) and output gate \(o_{t}\), with associated weight matrices \(W_{j}\), \(U_{j}\) and bias vectors \(b_{j}\) for \(j\in\{i,f,o,c\}\), the _LSTM_ is described as:

\[\begin{split} i_{t}&=\sigma(W_{i}x_{t}+U_{i}h_{t-1}+b_{i})\\ f_{t}&=\sigma(W_{f}x_{t}+U_{f}h_{t-1}+b_{f})\\ o_{t}&=\sigma(W_{o}x_{t}+U_{o}h_{t-1}+b_{o})\\ c_{t}&=f_{t}\odot c_{t-1}+i_{t}\odot\tanh(W_{c}x_{t}+U_{c}h_{t-1}+b_{c})\\ h_{t}&=o_{t}\odot\tanh(c_{t})\end{split} \tag{3.14}\]

where \(\sigma\) denotes the element-wise sigmoid function.

The second paper, on the Bidirectional _Recurrent Neural Network (RNN)_ [63], describes an architecture that uses information from both the future (subsequent time steps) and the past (preceding time steps) to determine the output at any point in the sequence. This is in contrast to previous networks, in which only past input could influence the output. Bidirectional _RNN_s have become a mainstay in audio sequence labeling tasks, among many others. Fortunately, the two innovations are not mutually exclusive and have been successfully combined for phoneme classification and handwriting recognition.
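Below, Eq. (3.14) written as a single numpy LSTM step; the dictionary layout of the parameters is our own convention for readability. A bidirectional layer would run one such recurrence forward and one backward over the sequence and concatenate the two hidden states per time step.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """W[j], U[j], b[j] for j in {'i', 'f', 'o', 'c'} are the weight
    matrices and bias vectors of Eq. (3.14)."""
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])   # input gate
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])   # forget gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])   # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```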
#### Transformer

The Transformer employs the encoder-decoder architecture, as shown in the left and right halves of Figure 3.6, with stacked self-attention and point-wise, fully connected layers for both the encoder and decoder.

Figure 3.6: The Transformer model architecture [70]

The encoder is built up from N identical layers, each divided into two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. A residual connection [24] is used around each of the two sub-layers, followed by layer normalization [5].

**Attention**: an attention function maps a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, with the weight assigned to each value determined by a compatibility function of the query with the corresponding key.

**Scaled Dot-Product Attention**: the input consists of queries and keys of dimension \(d_{k}\), and values of dimension \(d_{v}\). The dot products of the query with all keys are computed, divided by \(\sqrt{d_{k}}\), and a softmax function is applied to obtain the weights on the values. In practice, we compute the attention function on a set of queries simultaneously, packed into a matrix \(Q\); the keys and values are likewise packed into matrices \(K\) and \(V\). We compute the output matrix as follows:

\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\Big(\frac{QK^{T}}{\sqrt{d_{k}}}\Big)V \tag{3.15}\]

**Multi-Head Attention**: instead of performing a single attention function with \(d_{model}\)-dimensional keys, values and queries, we perform the attention function in parallel on projected versions of the queries, keys, and values, yielding \(d_{v}\)-dimensional output values for each head. These are concatenated and projected again, yielding the final values:

\[\text{MultiHead}(Q,K,V)=\text{Concat}(\text{head}_{1},...,\text{head}_{h})W^{O} \tag{3.16}\]

where \(\text{head}_{i}=\text{Attention}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V})\), the projections are parameter matrices \(W_{i}^{Q}\in R^{d_{\text{model}}\times d_{k}}\), \(W_{i}^{K}\in R^{d_{\text{model}}\times d_{k}}\), \(W_{i}^{V}\in R^{d_{\text{model}}\times d_{v}}\) and \(W^{O}\in R^{hd_{v}\times d_{\text{model}}}\), and \(h\) is the number of attention heads.
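Eqs. (3.15) and (3.16) in a few lines of numpy; the per-head weight lists and all dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, Eq. (3.15)."""
    d_k = Q.shape[-1]
    return softmax(Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)) @ V

def multi_head(X, Wq, Wk, Wv, Wo):
    """Multi-head self-attention, Eq. (3.16); Wq/Wk/Wv are lists of
    per-head projection matrices, Wo is the output projection."""
    heads = [attention(X @ q, X @ k, X @ v) for q, k, v in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo
```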
### 3.3 Semi-supervised learning

Semi-supervised learning is a method of machine learning in which a small amount of labeled data is combined with a large amount of unlabeled data during training. It sits between unsupervised learning (no labeled training data) and supervised learning (only labeled training data) and is an example of weak supervision. When combined with a small amount of labeled data, unlabeled data can significantly improve learning accuracy. Acquiring labeled data for a learning problem frequently necessitates a skilled human agent (e.g., to transcribe an audio segment in _ASR_ tasks). The cost of labeling may thus make large, fully labeled training sets unfeasible, whereas acquiring unlabeled data is relatively inexpensive. Semi-supervised learning can be extremely useful in such situations.

#### Wav2vec 2.0

Due to self-supervised training, _wav2vec 2.0_ is one of the current _SOTA_ models for _ASR_; this is a relatively novel concept in this field. With this method of training, we can pre-train a model on unlabeled data, which is always more accessible. The model can then be fine-tuned on a specific dataset for a specific purpose. The model consists of a multi-layer convolutional feature encoder \(f:X\to Z\) that receives raw audio \(X\) as input and produces _Latent Speech Representations_ \(z_{1},...,z_{T}\) for \(T\) time steps. These are then supplied to a _Transformer_ \(g:Z\to C\), which generates representations \(c_{1},...,c_{T}\) that capture information from the full sequence. For the self-supervised objective, the output of the feature encoder is discretized to \(q_{t}\) using a quantization module \(Z\to Q\) to represent the targets (Figure 3.7). The approach builds context representations over continuous speech representations, and self-attention captures dependencies over the whole sequence of latent representations.

Figure 3.7: Illustration of the framework, which jointly learns contextualized speech representations and an inventory of discretized speech units.

**Feature encoder**: the encoder is made up of several blocks that include temporal convolution, layer normalization [5], and the _GELU_ activation function [26]. The encoder's raw waveform input is normalized to zero mean and unit variance. The number of time steps \(T\) that are input to the _Transformer_ is determined by the encoder's total stride.

**Contextualized representations with Transformers**: the feature encoder's output is fed into a context network that uses the _Transformer_ architecture [70]. Instead of fixed positional embeddings that encode absolute positional information, we utilize a convolutional layer that acts as a relative positional embedding. The convolution output is added to the inputs, followed by a _GELU_ and layer normalization.

**Contrastive learning**: contrastive learning is a notion that involves the input being altered in two ways; the model is then trained to recognize whether two transformations of the input are still the same item. The _Transformer_ layers are the first method of transformation in _wav2vec 2.0_; the second is quantization. In more technical terms, for a masked latent representation \(z_{t}\) we would like a context representation \(c_{t}\) that identifies the correct quantized representation \(q_{t}\) among alternative quantized representations.

**Quantization module**: quantization is the process of converting values from a continuous space into a finite set of values in a discrete space [67]. A language's number of phonemes is limited, and so is the number of possible phoneme pairs, which means that the same _Latent Speech Representations_ can correctly represent them. Furthermore, because the quantity is limited, we can design a codebook that contains all potential phoneme combinations; the quantization process then involves selecting the appropriate code word from the codebook. However, the total number of conceivable sounds is enormous. To make learning and usage easier, we use product quantization [32] to discretize the output of the feature encoder \(z\) to a finite set of speech representations for self-supervised training. This choice yielded positive results in prior work that acquired discrete units first and then contextualized representations. Product quantization amounts to concatenating quantized representations from several codebooks: given \(G\) codebooks or groups with \(V\) entries \(e\in R^{V\times d/G}\), we take one item from each codebook, concatenate the resulting vectors \(e_{1},...,e_{G}\) (Figure 3.8), and apply a linear transformation \(R^{d}\to R^{f}\) to obtain \(q\in R^{f}\).

Figure 3.8: Quantization process: for each codebook, the best entry is extracted and the entries are concatenated with each other (from the orange to the purple entry)
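A minimal sketch of this product-quantization step, with the \(G=2\) codebooks and \(V=320\) entries used in _wav2vec 2.0_; for clarity it selects entries with a hard argmax, whereas the actual training uses a Gumbel softmax so that the selection remains differentiable. All arrays are random stand-ins.

```python
import numpy as np

G, V, d, f = 2, 320, 512, 256                  # codebooks, entries, dims
rng = np.random.default_rng(0)
codebooks = rng.normal(size=(G, V, d // G))    # entries e in R^{V x d/G}
proj = rng.normal(size=(d, f))                 # linear map R^d -> R^f

logits = rng.normal(size=(G, V))               # per-codebook selection scores
entries = [codebooks[g, np.argmax(logits[g])] for g in range(G)]
q = np.concatenate(entries) @ proj             # quantized target q in R^f
```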
#### Cross-lingual speech representation

Cross-lingual learning seeks to create models that use data from other languages to improve performance. Unsupervised cross-lingual representation learning has shown great success by pretraining _Transformer_ blocks with multilingual masked language models [40, 35]. The authors in [14] studied cross-lingual speech representations by extending _wav2vec 2.0_ [6] to the cross-lingual setting. Their method learns a single set of quantized latent speech representations that is shared across all languages. They pre-trained _XLSR-53_ on 56k hours of speech data from 53 languages (including Vietnamese), then evaluated it on 5 languages from the BABEL benchmark (conversational telephone data) [18] and 10 languages from CommonVoice [3], a corpus of read speech.

#### In-domain Match Level and Diversity Level

In this part, to make it easier to analyze the effect of pre-training data on the performance of cross-lingual and domain-shift experiments, we introduce two new concepts, namely the **"In-domain Match Level"** and the **"Diversity Level"**.

**In-domain Match Level**: given three datasets A, B and C, where A is the target telephone dataset used for recognition, B is also recorded by telephone but its conversations differ from A's, and C consists of audio book recordings, dataset B overlaps with A more than C does, because both A and B are telephone recordings; hence the **In-domain Match Level** of B is higher than that of C. In general, the **In-domain Match Level** is determined by the similarity of recording conditions, naturalness and conversational topics.

**Diversity Level**: given another dataset D, which is recorded by more speakers with more diverse accents than B and C, the **Diversity Level** of D is the highest of the three. To some extent, the **Diversity Level** of a multilingual dataset is higher than that of a monolingual one, because the former can represent more learnable phonemes, which are likely to be helpful for the target language in semi-supervised learning.

## Chapter 4 Experiments

### 4.1 Data

The first difficulty faced during the research in the HYKIST project is the lack of a medical telephone speech dataset. Having only the small medical dataset HYKIST, we therefore use HYKIST only for recognition and use in-house non-medical telephone speech data for training. This poses a challenge for reaching high-performance ASR because of the mismatch between training and recognition datasets. In addition, a real-life dataset like HYKIST is difficult for ASR models to transcribe accurately because of background noises, variation in speaking speed, unfamiliar pronunciation of medical terms, and so on.

#### HYKIST data

Our HYKIST project partner Triaphon recorded conversations between three people: a patient, a doctor, and an interpreter. The patient communicates in the non-German language, Arabic or Vietnamese, while the doctor communicates in German. The interpreter is fluent in both languages and assists the patient and doctor in communicating. In HYKIST, we have unique, foreign-born accents from both the interpreter and patient sides. This directly makes HYKIST more difficult for machines and humans to transcribe, leading to understandably poor recognition performance. We received the audio recordings and had our transcribers perform speech transcription on the recordings. We divide the audio data into two sets, dev and test, with no speaker overlap between the two. The data statistics for the dev and test sets for each individual language can be seen in Table 4.1. We only have a limited amount of data because we create it ourselves. Furthermore, the number of speakers is limited, resulting in a low level of diversity in the testing data. This may result in over-optimization towards the evaluation data.
To address the impact of these data issues, we obtained additional training data from our industry partner AppTek and other sources.

#### In-house data

AppTek, an industry partner, supplied us with annotated 8kHz conversational telephone speech data. The audio data was collected during telephone conversations between customers and various call centers. Table 4.1 displays the data statistics for the training sets for each of the three languages. We can see that the amount of available training data varies between languages. We also have speakers with accents and/or dialects in the Arabic and Vietnamese data. For the Arabic data, we have four different datasets with distinct dialects: Syrian, Lebanese, Gulf, and Egyptian. Our Vietnamese dataset has predominantly two accents, Northern and Central Vietnamese, and a very small fraction of the Southern Vietnamese accent. The speakers with accents in the Vietnamese data are combined into a single dataset.

| Language | Dataset | Usage | # Spks | Hours | Domain | In-domain match | Diversity level |
|---|---|---|---|---|---|---|---|
| Arabic | In-house | pretr. | 3379 | 786 | Tel., Conv. | Medium | Medium |
| German | In-house | pretr. | 1723 | 177 | Tel., Conv. | Medium | Medium |
| Vietnamese | In-house | pretr., finetu. | 2240 | 219 | Tel., Conv. | Medium | Medium |
| Vietnamese | HYKIST | adapt | 1 | 1 | Tel., Conv., Med. | High | Low |
| Vietnamese | HYKIST | dev | 3 | 3 | Tel., Conv., Med. | High | Low |
| Vietnamese | HYKIST | test | 2 | 2 | Tel., Conv., Med. | High | Low |
| Vietnamese | YouTube | pretr. | - | 1,204 | Read books | Low | - |
| Multi | In-house* | pretr. | 7342 | 1,182 | Tel., Conv. | Medium | - |
| Multi | XLSR-53 | pretr. | - | 56,000 | Various | Low | - |

Table 4.1: Data statistics for acoustic data. *The multilingual in-house training dataset is the combination of the Arabic, German and Vietnamese ones listed above. Domain: Telephone (Tel.), Conversational (Conv.), Medical (Med.).

#### YouTube

In addition to our annotated datasets, we collected Vietnamese audio data from _YouTube (YT)_ under Fair Use Policies1. The domain in question is purely read speech, such as podcasts, audiobooks, radio stories, or similar. Pre-processing was done manually by removing non-speech parts such as music and noise, leaving only speech. The audio files were then divided into 10-30 second segments. Table 4.1 displays the data statistics for the web-scraped data. During data collection, we aimed for a balance of accents and genders. The dataset is therefore divided into Northern and Southern accents, yielding four subsets: Northern Female (518h), Northern Male (213h), Southern Female (290h) and Southern Male (183h).

#### CommonVoice Vietnamese

We obtain the Vietnamese dataset from the massively-multilingual speech corpus [4]. We use the data version 9.02, which includes 17 hours of noisy read speech recorded by a large number of volunteer speakers. The dataset is split into train/dev/test sets. We evaluate our models by directly recognizing the dev and test sets.
Footnote 2: [https://commonvoice.mozilla.org/en/datasets](https://commonvoice.mozilla.org/en/datasets)

#### VIVOS

VIVOS [44] is a clean Vietnamese read speech corpus consisting of 15 hours of recordings. We obtain the dataset3, which is split into train/test sets, and evaluate our models by directly recognizing the test set. The test set includes 19 speakers and 48 minutes of audio in total.

Footnote 3: [https://ailab.hcmus.edu.vn/vivos](https://ailab.hcmus.edu.vn/vivos)

#### Monolingual text data

AppTek, our project partner, provided monolingual text data for all three languages. The data includes text from various sources. The number of running words is shown in Table 4.2.

#### Domain

As shown in Table 4.1, the data spans several domains. The HYKIST project's target domain is medical conversational telephone speech, which the training data does not cover; this highlights the domain mismatch in our data. By listening to the audio and comparing it to our target domain, we can determine the in-domain match and diversity levels.

### 4.2 Lexicon and language model

#### Lexicon

The Babel project4 provided the initial lexicon for the Vietnamese language. The training lexicon is then created by extending the initial lexicon with the toolkit Sequitur Grapheme-To-Phoneme5 [11]. In order to decode the HYKIST data, we supplement the lexicon with medical terms provided by our project partner Triaphon. The final recognition lexicon for Vietnamese has a size of 11k words, as shown in Table 4.2.

Footnote 4: [https://www.iarpa.gov/research-programs/babel](https://www.iarpa.gov/research-programs/babel)

Footnote 5: [https://github.com/sequitur-g2p/sequitur-g2p](https://github.com/sequitur-g2p/sequitur-g2p)

#### Language model

The _Language models (LMs)_ used are 4-gram models over full words. We create our _LMs_ using the training pipeline of the SRILM toolkit [66]. The first step is to create an _LM_ for each monolingual text corpus separately. Then, using a weighting procedure, we merge all _LMs_ into a single _LM_, producing one _LM_ for the Vietnamese language. Using the development text, interpolation weights can be determined by giving the highest weight to the source language models that have the lowest perplexity on the specified development set. Table 4.2 shows how the _LM_ performs: the Vietnamese _LM_ achieves a _Perplexity (PPL)_ of 67 and an _Out-of-vocabulary (OOV)_ rate of 0.1% on the dev set.

| # words in train | vocab size | dev OOV | dev PPL | test OOV | test PPL |
|---|---|---|---|---|---|
| 500M | 11k | 0.1% | 67 | 0.2% | 69 |

Table 4.2: 4-gram LM for Vietnamese.

### 4.3 Acoustic model

In this part, our experimental setups for the acoustic models are described. We use the toolkit RETURNN6 [15] for the supervised training experiments and Fairseq7 [52] for unsupervised _wav2vec 2.0_ training. The recognition is done using RASR8 [59]. We convert the Fairseq models to RETURNN models with an automatic conversion toolkit9. We will release all training and decoding configurations online1011.
Footnote 6: [https://github.com/rwth-i6/returnn](https://github.com/rwth-i6/returnn) Footnote 7: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq) Footnote 8: [https://github.com/rwth-i6/rasr](https://github.com/rwth-i6/rasr) Footnote 9: [https://github.com/rwth-i6/pytorch-to-returnn](https://github.com/rwth-i6/pytorch-to-returnn) Footnote 10: [https://github.com/rwth-i6/returnn-experiments](https://github.com/rwth-i6/returnn-experiments) #### Supervised-only models The training schedules for the basic Vietnamese systems are similar and differ only in the specifics. For all models, alignments generated with a _GMM_/_HMM_ procedure are used as labels for neural network training. All models are trained from scratch in a supervised setting using the _fCE_ loss. The labels used in _AM_ modeling are context-dependent phonemes, more specifically triphones. We use a _CART_ to tie the states, ending up with 4501 _CART_ labels. We employ 40-dimensional Gammatone features as the _AM_ input [61]. There is no pre-training, so fine-tuning begins with a random initialization. All fine-tunings from scratch take 33 epochs. We use two distinct neural _AM_ architectures: _Transformer_ [70] and _Bidirectional Long-Short Term Memory (BLSTM)_ [28]. **BLSTM**: We strictly adhere to the training recipe in [46] for the _BLSTM_ model. The _BLSTM_ uses 5 layers and 512 per-direction units. The following hyperparameters are used for fine-tuning: the initial learning rate is set to \(0.0005\), followed by a hold phase, and finally an exponential decay with a decay factor of 0.8, controlled by the CE scores on the development set. In addition, we use the Adam optimizer with Nesterov momentum (Nadam) [16]. Furthermore, a dropout of 10% is applied to all modules and batch shuffling is turned off. A batch size of 40000 frames is employed. The SpecAugment [54] algorithm is used for the entire model training, with masking of 50% in the time dimension and 10% in the feature dimension. This leads to a _BLSTM_ size of 25M parameters. **Transformer**: Our _Transformer_ training schedule was obtained from [73, 74]. The _Transformer_ has 12 blocks. The attention dimension of each Multi-Head Self-Attention module is 768 with 12 attention heads. The dimension of the feed-forward module is 1536 with _ReLU_ as the activation function. The following hyperparameters are used for fine-tuning: the initial learning rate is set to \(10^{-5}\) with a linear warm-up phase to \(10^{-4}\), followed by a hold phase, and finally an exponential decay with a decay factor of 0.9 until the minimum learning rate of \(10^{-7}\) is reached. In addition, we use the Adam optimizer with Nesterov momentum (Nadam) [16]. Furthermore, a dropout of 10% is applied to all layers of the encoder network and we use a batch size of 8000 frames. Batches are constructed from shuffled data. The SpecAugment [54] algorithm is used for the entire model training, with masking of 50% in the time dimension and 10% in the feature dimension. This leads to a _Transformer_ size of 90M parameters.
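Both encoders follow the same warm-up/hold/decay learning-rate pattern. The sketch below illustrates this schedule with the Transformer values quoted above; the function and its epoch-based granularity are our own simplification, not the actual toolkit configuration.

```python
def learning_rate(epoch, warmup_epochs=10, hold_epochs=10,
                  lr_init=1e-5, lr_peak=1e-4, lr_floor=1e-7, decay=0.9):
    """Warm-up -> hold -> exponential decay schedule (illustrative sketch)."""
    if epoch < warmup_epochs:
        # linear warm-up from lr_init to lr_peak
        return lr_init + (lr_peak - lr_init) * epoch / warmup_epochs
    if epoch < warmup_epochs + hold_epochs:
        # hold phase at the peak learning rate
        return lr_peak
    # exponential decay, clipped at the minimum learning rate
    steps = epoch - warmup_epochs - hold_epochs
    return max(lr_peak * decay ** steps, lr_floor)
```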
#### Models using unsupervised pre-training **XLSR-53**: We look into using a publicly accessible model, _XLSR-53_ [14], in addition to pre-training our own models on our specific data. We utilize the checkpoint that was not fine-tuned to any language12. It was pre-trained on 56k hours of speech data from 53 different languages for 19 epochs. Additionally, we explore initializing the _wav2vec 2.0_ pre-training on our custom data with the _XLSR-53_ model, followed by the corresponding fine-tuning. Note that _XLSR-53_ was trained on 16kHz data. Because we work with 8kHz telephone conversations, we halve the stride of one _CNN_ layer in the feature extractor. In this way, we obtain features at the desired frame rate while reducing the down-sampling factor from the waveform to the feature frames by a factor of 2. Footnote 12: [https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec](https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec)
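As an illustration of this 8kHz adaptation, the sketch below halves the stride of the last convolutional layer in the feature extractor. It assumes fairseq's `conv_feature_layers` configuration string (the value shown is fairseq's default 16kHz setting); which of the stride-2 layers is modified is our own illustrative choice.

```python
# fairseq's default wav2vec 2.0 feature extractor: 7 conv layers with a
# total stride of 5*2*2*2*2*2*2 = 320, i.e. 20 ms frames for 16 kHz audio
layers = eval("[(512,10,5)] + [(512,3,2)]*4 + [(512,2,2)]*2")

# halve the stride of the last layer: total stride 160, which again
# yields 20 ms feature frames for 8 kHz telephone audio
dim, kernel, stride = layers[-1]
layers[-1] = (dim, kernel, stride // 2)

conv_feature_layers_8khz = str(layers)  # string format expected by the config
print(conv_feature_layers_8khz)
```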
**Pretraining cases**: For each pretrained model we distinguish the following cases. 1. _XLSR-53_ is applied directly for fine-tuning, without any custom pre-training on our available datasets. 2. Pre-training on our available datasets from scratch. 3. The parameters are initialized with the _XLSR-53_ checkpoint and the pre-training is then done on our available datasets. We call this type of pre-training continued pretraining. **Wav2vec 2.0 architectures**: We use the topologies from _wav2vec 2.0_ [6] for the experiments with unsupervised pre-training and consider three variants: _Base_, _Large_ and _Large_\({}_{1\text{-}8}\). All architectures work on the raw audio waveform and have a feature extractor with 7 _CNN_ layers. The _Large_ architecture has an encoder stack of 24 _Transformer_ layers with a feed-forward dimension of 1024 and 16 attention heads, while _Base_ has only 12 _Transformer_ layers with a feed-forward dimension of 768 and 12 attention heads. The _XLSR-53_ model [14] is a _wav2vec 2.0_ _Large_ model trained on multilingual data. The 24 _Transformer_ layers of the _Large_ architecture place a heavy burden on GPU memory, and trading GPU memory for a smaller batch size dramatically increases training time. To mitigate this, we cut off the _wav2vec 2.0_ _Large_ network after the 8th _Transformer_ block and refer to the resulting model as _Large_\({}_{1\text{-}8}\). We found that 8 layers is a good trade-off between a sufficiently large batch size and a model size that still fits into memory. The cut-off reduces the model size from the 317M parameters of the full _Large_ architecture to 115M for _Large_\({}_{1\text{-}8}\), much closer to the 95M parameters of the _Base_ architecture. **Pretraining**: Apart from these architectural differences, all models share the same pre-training procedure: a _NN_ is pre-trained on unlabeled data using the contrastive loss and diversity loss described in [6] within the _wav2vec 2.0_ framework. We employ the hyperparameters proposed in the _XLSR-53_ paper [14] but apply the learning rate of _wav2vec 2.0_ for the monolingual pre-trainings. Unless mentioned otherwise, the pre-trainings run for 300 epochs. A linear warm-up is used during the first 30 epochs until the learning rate reaches 0.0005, after which a linear decay starts. The mini-batch size in the existing Fairseq implementation is specified in samples of the waveform. For both _Base_ and _Large_\({}_{1\text{-}8}\) we use a dropout of 10% in the feature extractor, 5% in the encoder and 10% in the latent representations between the feature extractor and encoder. We do not apply dropout to pre-trainings with the _Large_ architecture. **Finetuning**: To fine-tune the acoustic model, we use the training system described in [46] to create a baseline _GMM_/_HMM_ model for Vietnamese. This model is used to generate alignments of the speech data with the _CART_ labels for the _DNN_ system. The hybrid model's _NN_ is trained on these alignments in a supervised manner using the _Framewise Cross-entropy (fCE)_ loss. When unsupervised pre-training is used, a two-stage training configuration is applied: after pre-training, the _NN_ is fine-tuned by adding a softmax output layer, initializing with a checkpoint from pre-training, and training with the _fCE_ loss on labeled data, using the same alignment as in the fully supervised scenario. The following hyperparameters are used for fine-tuning: the learning rate uses a linear warm-up phase to \(10^{-4}\), followed by a hold phase, and ends with an exponential decay with factor 0.9. The _wav2vec 2.0_ SpecAugment variant introduced in [6] is used, where masks start at independent random points in the time/feature dimension and the subsequent \(10/64\) steps are masked. We employ a mini-batch size of 1875 frames with a length of 10ms each, corresponding to 18.75 seconds of audio. Furthermore, a gradient noise of 10% is used and a dropout of 5% is applied to all layers of both the feature extractor and the Transformer encoder. In addition to dropout [65], we also investigate the performance of other regularization techniques, namely the intermediate loss, L2 regularization and On-off Regularization described in Sections 4.3.4, 4.3.5 and 4.3.6. #### Data augmentation In this thesis, apart from the use of SpecAugment [54] stated above, we also use other data augmentation techniques in the pretraining stage. The augmentation was done exclusively with speed perturbation {90%, 110%, 115%} [38], random pitch perturbation {-350:-250; 250:350} [12] and reverberation perturbation [56]. We did not further analyze whether these augmentation settings are optimal. #### Intermediate loss Our intermediate loss setups are based on [68, 73]. We use two variants of the intermediate loss, namely the _Intermediate Cross-Entropy Loss (ICE Loss)_, which uses the _Cross-entropy (CE)_ loss, and the _Intermediate Focal Loss (IF Loss)_, which replaces the _CE_ loss with the focal loss [43]. _ICE Loss_: We conducted multiple experiments with intermediate loss scales in {0.1, 0.2, 0.3, 0.4, 0.5} and dropout [65] values in {0.05, 0.1}. The combination of loss scale 0.3 and dropout 0.1 yielded the best results for all pretrained models and architectures, so we take this as the default for all following experiments. _IF Loss_: We experimented with three ways of integrating the focal loss into the vanilla intermediate loss setup: only in the network's _CE_ output layer, only in the intermediate loss layer, and in both. We found that applying the focal loss in both positions yielded the best results. To find a good focal value, we conducted experiments with values in {1.5, 2.0, 2.5, 3.0}; the higher the value, the more the network is forced to "focus" on hard examples. The values {1.5, 2.0} did not make a difference in results, while for the higher values {2.5, 3.0} the model gained on the in-house training set but lost performance on the out-of-domain recognition test sets. 
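A minimal sketch of the focal-loss variant in plain PyTorch is shown below; the function names and the way the intermediate logits are passed in are our own illustration, not the actual RETURNN setup.

```python
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss [43]: down-weights well-classified frames by (1 - p)^gamma."""
    log_p = F.log_softmax(logits, dim=-1)                   # (frames, classes)
    log_pt = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    pt = log_pt.exp()                                       # target-class probability
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

def if_loss(final_logits, intermediate_logits, targets, scale=0.3, gamma=2.0):
    """IF Loss setup (sketch): focal loss at both the output and an intermediate layer."""
    return focal_loss(final_logits, targets, gamma) + \
           scale * focal_loss(intermediate_logits, targets, gamma)
```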
We highly recommend a focal value of 2.0 so that the model generalizes across all test sets. #### L2 regularization To find good values for L2 regularization [39], we used a grid search. We tested values in {0.01, 0.005, 0.001, 0.0005, 0.0001} and compared the resulting _Word-error-rate (WER)_. The best L2 value differs for each pretrained model and architecture. We apply L2 regularization to all linear layers in the network. #### On-off Regularization To further improve the accuracy of the _IF Loss_, we introduce a new regularization technique called "On-off Regularization". We turn off all regularizations (dropout, SpecAugment and _IF Loss_) in the first stage of training (the first 3-10 epochs); we call this stage "Off Regularization". We then reset the learning rate and turn all regularizations back on in the second stage of training, which we call "On Regularization". The second stage of training ends when the model has fully converged. ## 5 Experimental results ### 5.1 Supervised baselines The baseline for Vietnamese is trained on the relevant in-house 8kHz monolingual telephone speech data. The performance of the baseline _ASR_ systems is displayed in Table 5.1. The systems perform differently depending on the intrinsic difficulty of the language and the data, for several reasons. Due to the natural, spontaneous flow of the speakers, high-quality Vietnamese transcriptions are hard to obtain. Additionally, Vietnamese incorporates accented speech that is difficult even for native speakers to fully understand. Furthermore, the accent mismatch between the Vietnamese fine-tuning and recognition data is a major factor in the performance degradation: our Vietnamese in-house dataset contains predominantly two native accents, Northern and Central Vietnamese, and a very small fraction of Southern Vietnamese, while HYKIST, being a simulated dataset, has unique foreign-born accents from both the interpreter and patient sides. On the HYKIST data, switching from a _GMM_/_HMM_ framework to a hybrid _HMM_ framework with a _BLSTM_ reduces the _WER_ from 62.2% and 59.7% to 32.9% and 38.4% on the dev and test sets, respectively. The _WER_s decrease further to 31.0% and 35.1% when replacing the _BLSTM_ with a _Transformer_ encoder. \begin{table} \begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{AM} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{2-3} & Hykist dev & Hykist test \\ \hline GMM & 62.2 & 59.7 \\ \hline BLSTM & 32.9 & 38.4 \\ \hline Transformer & 31.0 & 35.1 \\ \hline \end{tabular} \end{table} Table 5.1: _WER_s [%] for supervised-only baselines on Vietnamese HYKIST data [45]. Models are trained on the monolingual in-house data. The labels are context-dependent phonemes (_CART_ state tying) with their size being 4501. ### Unsupervised Pre-training #### Monolingual pre-training Table 5.2 shows the results of models pretrained on monolingual data. The number of pre-training epochs is chosen based on the best downstream _WER_ on Vietnamese. Even though no additional data is included for pre-training here, pre-training on the monolingual in-house data for Vietnamese already reduces the _WER_s from 32.1% and 36.6% to 31.4% and 33.4% on the dev and test sets, respectively. This shows that, for the _wav2vec 2.0_ architecture, unsupervised pretraining improves the _WER_ performance. 
Next, when we pretrain on the augmented in-house data, we achieve a small further improvement to 31.0% and 32.3% on the dev and test sets, respectively. This shows that data augmentation is helpful for pretraining. We then examine the impact of pre-training on the _YouTube (YT)_ data for Vietnamese, which improves the results to 29.8% and 35.2%. Although there is much more _YT_ data than in-house data (1168h compared to 219h), both results are similar in terms of the average over the dev and test sets. This shows that more data is not always helpful, for two reasons. First, the domain of the in-house data is closer to that of HYKIST (both are telephone speech), while the domain mismatch between _YT_ and HYKIST is larger (read speech compared to telephone speech). Second, the _YT_ data has fewer speakers, leading to worse generalization during pretraining. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Pre-training & \multicolumn{2}{c|}{Fine-tuning} & \multicolumn{2}{c|}{WER [\%]} \\ \hline Data (hours) & Epochs & Epochs & Hykist dev & Hykist test \\ \hline None & None & 33 & 32.1 & 36.6 \\ \hline Viet. in-house (219h) & 100 & & 31.4 & 33.4 \\ \hline Aug. Viet. in-house (1168h) & & 26 & 31.0 & 32.3 \\ \hline Viet. YT (1168h) & & & 29.8 & 35.2 \\ \hline Viet. in-house + YT (1168h) & & & 25.3 & 27.2 \\ \hline \end{tabular} \end{table} Table 5.2: _WER_s [%] for models pretrained on monolingual data. All fine-tunings use the _Large_\({}_{1-8}\) architecture and are trained until full convergence on Vietnamese in-house data and the recognition is done on HYKIST. All pre-trainings have been done with random initialization. Pre-training data "None" in the 3rd row means fine-tuning from scratch with the _wav2vec 2.0_ _Large_\({}_{1-8}\) architecture. The greatest improvement is achieved by combining the in-house and _YT_ data, reducing the _WER_s to 25.3% and 27.2%. This is the best result produced using solely monolingual data. Because we substitute 200 hours of _YT_ with in-house data, the amount of pre-training data used here is comparable to that of the _YT_-only pre-training. This result shows that a diversity of domains and speakers in the pretraining stage is necessary for better performance on the test sets. #### Multilingual pre-training We then examine models that have been multilingually pre-trained in Table 5.3. For the Vietnamese dev/test sets, combining the Arabic, German, and Vietnamese in-house data into a custom multilingual pre-training significantly outperforms the no-pretraining baseline, with _WER_s of 26.8% and 28.7% on the dev and test sets, respectively. However, the monolingual combination of in-house and _YT_ data remains better for Vietnamese, at 25.3% and 27.2%. These results contradict the conclusion of [14], where multilingual pretraining was shown to outperform monolingual pretraining. Strong gains over the no-pretraining baseline can also be obtained by fine-tuning directly from the _XLSR-53\({}_{1\!-\!8}\)_ checkpoint, at _WER_s of 27.6% and 31.9%. With the exception of the Vietnamese test set, where it is up to 11% worse, it performs only slightly worse than the custom pre-training on the multilingual in-house data. This may be due to the absence of 8kHz data in the pre-training of _XLSR-53_. Nevertheless, adopting it in such a fast and simple manner can already yield considerable benefits. 
#### 5.2.3 _XLSR-53_ as pre-training initialization As an alternative, we can use _XLSR-53\({}_{\text{1-8}}\)_ as an initialization for a customized pre-training, as shown in Table 5.4. On the in-house Vietnamese data, the _WER_s drop from 31.4% and 33.4% (Table 5.2) to 27.6% and 29.5% on the dev and test sets, respectively, compared to 27.6% and 31.9% for direct fine-tuning with _XLSR-53\({}_{\text{1-8}}\)_. This shows that continued pretraining from the _XLSR-53_ model outperforms both pretraining from random initialization and direct fine-tuning of _XLSR-53_. A _Large_ model initialized with _XLSR-53_ is also pre-trained on the monolingual in-house data before being reduced to a smaller size for fine-tuning. This performs better than pre-training with the smaller _Large\({}_{\text{1-8}}\)_ (26.2% and 29.0% compared to 27.6% and 29.5% on the dev and test sets, respectively), but at the expense of increased resource usage during pre-training. Therefore, if resource usage is not a concern, the _Large_ model should be chosen for better _WER_. For the pretraining on the _YT_ data using _XLSR-53_ as initialization, the _WER_s drop from 29.8% and 35.2% (Table 5.2) to 24.3% and 28.1% on the dev and test sets, respectively. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{Pre-training} & \multicolumn{2}{c|}{WER [\%]} \\ \hline Architecture & Data (hours) & Epochs & Hykist dev & Hykist test \\ \hline \multirow{2}{*}{_Large\({}_{\text{1-8}}\)_} & None & None & 27.6 & 31.9 \\ \cline{2-5} & Viet. in-house & 25 & 27.6 & 29.5 \\ \hline _Large_ & (219h) & 100 & 26.2 & 29.0 \\ \hline \multirow{4}{*}{_Large\({}_{\text{1-8}}\)_} & Viet. YT & \multirow{2}{*}{100} & \multirow{2}{*}{24.3} & \multirow{2}{*}{28.1} \\ & (1168h) & & & \\ \cline{1-1} \cline{2-5} & Viet. in-house + YT & \multirow{2}{*}{24.5} & \multirow{2}{*}{27.2} \\ & (1168h) & & & \\ \cline{1-1} \cline{2-5} & Multilingual in-house & \multirow{2}{*}{50} & \multirow{2}{*}{23.9} & \multirow{2}{*}{27.4} \\ \cline{1-1} \cline{2-5} & (1168h) & & & \\ \hline \end{tabular} \end{table} Table 5.4: _WER_s [%] for models using unsupervised pre-training with the public _XLSR-53_ model as initialization. All fine-tunings use the _Large\({}_{\text{1-8}}\)_ architecture and are trained until full convergence on Vietnamese in-house data and the recognition is done on HYKIST. The 1st model is the direct finetuning on Vietnamese in-house data, and the remaining models use _XLSR-53_ as initialization for pretrainings (full model _Large_ or cut-off model _Large\({}_{\text{1-8}}\)_). The benefits of integrating _XLSR-53_ into the multilingual pretraining are substantially smaller, with _WER_s reduced from 26.8% and 28.7% (Table 5.3) to 23.9% and 27.4%. On the domain-diverse dataset (the combination of monolingual in-house and _YT_ data), the benefits of continued pretraining are also reduced, with _WER_s going from 25.3% and 27.2% (Table 5.2) to 24.5% and 27.2% on the dev and test sets, respectively. This shows that continued pretraining is beneficial in both the monolingual and the multilingual scenario, but that it helps most when the custom pretraining data is less diverse. #### Comparison to supervised baselines As shown in Table 5.5, fine-tuning _wav2vec 2.0 Large_\({}_{1\text{-}8}\) from scratch is worse than the supervised-only baseline (32.1% and 36.6% vs. 
31.0% and 35.1% on the dev and test sets, respectively). With monolingual pre-training on the identical data, there is still no clear advantage (31.4% and 33.4%). When we increase the pretraining data five-fold with less diverse data (the _YT_ data), the performance still does not clearly outperform the supervised-only baseline (29.8% and 35.2%). This shows that _wav2vec 2.0_ unsupervised pretraining does not always outperform the supervised-only _Transformer_ approach, especially when the pretraining data is not diverse enough. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{AM} & \multirow{2}{*}{Init} & Pre-training & \multicolumn{2}{c|}{WER [\%]} \\ \cline{3-5} & & Data (hours) & Hykist dev & Hykist test \\ \hline Transformer & & & 31.0 & 35.1 \\ \hline \multirow{4}{*}{wav2vec 2.0} & \multirow{2}{*}{random} & None & 32.1 & 36.6 \\ \cline{3-5} & & Viet. in-house & \multirow{2}{*}{31.4} & 33.4 \\ \cline{3-5} & & (219h) & & \\ \cline{3-5} & & Viet. YT & \multirow{2}{*}{29.8} & 35.2 \\ \cline{3-5} & & (1168h) & & \\ \cline{3-5} & & & & \\ \cline{3-5} & & (1168h) & & \\ \hline \end{tabular} \end{table} Table 5.5: _WER_s [%] for models using unsupervised pre-training and supervised-only training. All fine-tunings use the _Large_1-8 architecture and are trained until full convergence on Vietnamese in-house data and the recognition is done on HYKIST. The 1st model is the supervised-only training using Transformer. The 2nd and 3rd models are pretrained on specific data using random initialization. The 4th and 5th models are continued pretraining methods (using _XLSR-53_1-8 as initialization). However, we are able to significantly outperform the supervised baselines when applying continued pretraining. Compared to the best supervised-only baseline, the best continued-pretraining results reduce the _WER_s to 24.5% and 27.2% with monolingual data and to 23.9% and 27.4% with multilingual data. We therefore conclude that continued pretraining should be used to gain the most in terms of accuracy. ### Encoder and initialization comparison #### Encoder comparison We compare the performance of two encoder types: _Base_ and _Large_\({}_{1\text{-}8}\). As shown in Table 5.6, we obtain mixed results across the various pretraining schedules: no pretraining, pretraining on in-house data, and pretraining on multilingual data. For language modeling, [31] reports that the _Base_ architecture works better than the _Large_; for acoustic modeling in _ASR_, our results contradict this statement. Considering the number of parameters of _Base_ and _Large_\({}_{1\text{-}8}\), 97M vs. 118M, we recommend _Base_ in order to keep the performance competitive with _Large_\({}_{1\text{-}8}\) while reducing the number of trainable parameters. #### Initialization comparison In the case of extremely short pretraining (1 epoch on only 0.01h of data), the results outperform raw-waveform training from scratch for both the _Base_ and _Large_\({}_{1\text{-}8}\) architectures, as shown in Table 5.7. The improvement stems from the different initialization schemes: the parameters of the pretrained model are first initialized by Fairseq [52] using Kaiming Initialization [25] and then fed into RETURNN [15], while the parameters for raw-waveform training are initialized directly by RETURNN using Glorot (also known as Xavier) Initialization [20]. 
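For illustration, the sketch below contrasts the two initialization schemes in plain PyTorch; the exact layer-wise choices inside Fairseq and RETURNN may differ from this simplified version.

```python
import torch.nn as nn

def init_linear(layer: nn.Linear, scheme: str = "kaiming") -> None:
    """Initialize a linear layer with Kaiming [25] or Glorot/Xavier [20] init."""
    if scheme == "kaiming":
        nn.init.kaiming_uniform_(layer.weight, nonlinearity="relu")
    else:  # "glorot"
        nn.init.xavier_uniform_(layer.weight)
    nn.init.zeros_(layer.bias)

layer = nn.Linear(768, 768)
init_linear(layer, scheme="kaiming")
```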
We therefore recommend the use of Kaiming Initialization for the _wav2vec 2.0_ architecture. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Architecture} & Pretraining & \multicolumn{2}{c|}{WER [\%]} \\ \cline{2-4} & Data (hours) & Hykist dev & Hykist test \\ \hline _Base_ & \multirow{2}{*}{None} & 35.8 & 39.9 \\ \cline{1-1} \cline{3-4} \(\textit{Large}_{1\text{-}8}\) & & 35.0 & 40.7 \\ \hline _Base_ & Viet. in-house & 30.2 & 33.3 \\ \cline{1-1} \cline{3-4} \(\textit{Large}_{1\text{-}8}\) & (219h) & 31.5 & 33.4 \\ \hline _Base_ & Multilingual in-house & 26.2 & 28.8 \\ \cline{1-1} \cline{3-4} \(\textit{Large}_{1\text{-}8}\) & (1168h) & 26.8 & 28.7 \\ \hline \end{tabular} \end{table} Table 5.6: _WERs_ [%] for architecture _Base_ and \(\textit{Large}_{1\text{-}8}\) using different pretraining schedules: no pretraining, pretraining on in-house data and on multilingual data. All fine-tunings are done until full convergence on Vietnamese in-house data and the recognition is done on HYKIST. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Architecture} & \multirow{2}{*}{Init. scheme} & \multicolumn{2}{c|}{Pretraining} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{3-6} & & Data (hours) & Epochs & Hykist dev & Hykist test \\ \hline _Base_ & Kaiming Init. & Viet. in-house & 1 & 30.6 & 35.2 \\ \hline \end{tabular} \end{table} Table 5.7: _WER_s [%] for super short pretraining (1 epoch on 0.01h of data) comparing the initialization schemes. All fine-tunings are done until full convergence on Vietnamese in-house data and the recognition is done on HYKIST. ### Effectiveness of intermediate loss #### Effectiveness of Intermediate Cross-Entropy Loss **Improvement on HYKIST data**: In Table 5.8, when training on the in-house telephone dataset and transcribing the HYKIST dataset with the help of the _ICE Loss_, we see a total improvement in performance for the from-scratch experiment: the _WER_s decrease from 35.6% and 40.7% to 33.8% and 38.1% on the dev and test sets, respectively. For the _YT_ experiment, the _WER_s decrease from 29.8% and 35.2% to 27.3% and 31.5%. We also see a small improvement for the combination of Vietnamese in-house and _YT_ data, from 25.3% and 27.2% to 25.1% and 27.1%. **Degradation on HYKIST data**: As shown in Table 5.9, for the direct fine-tuning experiment with _XLSR-53_ preloaded, the performance degrades overall (the _WER_s on both dev and test sets increase). Besides, the continued pretrainings on Vietnamese in-house and on _YT_ data show partial improvements (only the _WER_s on the test sets increase slightly, while the _WER_s on the dev sets decrease). The remaining pretraining schedules in Table 5.9 also show partial improvements. **Improvement on CommonVoice and VIVOS data**: In the more out-of-domain recognition setting shown in Table 5.10, i.e. using the model fine-tuned on our in-house spontaneous telephone speech dataset to recognize read-speech datasets such as CommonVoice and VIVOS, we see total improvements in performance for _Large_\({}_{1\text{-}8}\) for the in-house pretraining, from-scratch and _YT_ experiments. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With ICE} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{3-4} & & Hykist dev & Hykist test \\ \hline \multirow{2}{*}{None} & No & 35.6 & 40.7 \\ \cline{2-4} & Yes & 33.8 & 38.1 \\ \hline Viet. YT & No & 29.8 & 35.2 \\ \cline{2-4} (1168h) & Yes & 27.3 & 31.5 \\ \hline Viet. in-house + YT & No & 25.3 & 27.2 \\ \cline{2-4} (1168h) & Yes & 25.1 & 27.1 \\ \hline \end{tabular} \end{table} Table 5.8: Improvements of _WER_s [%] on HYKIST data between pretraining schedules when applying _ICE Loss_. All models are finetuned until full convergence on Vietnamese in-house data. 
Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for _Large_\({}_{1\text{-}8}\) and 6 for the _Base_ architecture. Notable is the from-scratch training, where the _WER_s reduce from 20.8%, 44.7%, 34.9% to 18.6%, 42.1%, 33.1%, and the _YT_ pretraining, where the _WER_s reduce from 16.4%, 34.4%, 28.7% to 15.6%, 32.2%, 27.6% on the CommonVoice dev/test and VIVOS test sets, respectively. Together with the improvements on HYKIST reported in Table 5.8, we conclude that using the _ICE Loss_ for from-scratch training and for pretraining on _YT_ data improves recognition in both the telephone and read-speech domains. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With ICE} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{3-5} & & CV dev & CV test & Vivos \\ \hline \multirow{2}{*}{None} & No & 20.8 & 44.7 & 34.9 \\ \cline{3-5} & Yes & 18.6 & 42.1 & 33.1 \\ \hline \multirow{2}{*}{Viet. YT} & No & 16.4 & 34.4 & 28.7 \\ \cline{2-5} & Yes & 15.6 & 32.2 & 27.6 \\ \hline \multirow{2}{*}{Viet. in-house (219h)} & No & 16.4 & 35.6 & 31.3 \\ \cline{2-5} & Yes & 16.1 & 34.8 & 30.4 \\ \hline \end{tabular} \end{table} Table 5.10: Improvements of _WER_s [%] on CommonVoice and VIVOS between pretraining schedules when applying _ICE Loss_. All models are finetuned on Vietnamese in-house data. Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for _Large\({}_{1-8}\)_ and 6 for _Base_ architecture. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Arch.} & \multirow{2}{*}{Init.} & \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With ICE} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{3-5} & & & & \multicolumn{1}{c|}{Hykist dev} & \multicolumn{1}{c|}{Hykist test} \\ \hline \multirow{4}{*}{\(\begin{array}{c}\text{\emph{Large}}_{1-8}\end{array}\)} & \multirow{2}{*}{None} & No & 27.9 & 32.3 \\ \cline{3-5} & & & Yes & 28.4 & 33.3 \\ \cline{3-5} & None & & No & 30.4 & 33.4 \\ \cline{3-5} & None & & Yes & 29.1 & 33.7 \\ \cline{3-5} & \multirow{2}{*}{\(\begin{array}{c}\text{\emph{XLSR-53}}\end{array}\)} & \multirow{2}{*}{Viet. in-house (219h)} & No & 25.5 & 29.1 \\ \cline{3-5} & & & Yes & 25.2 & 29.2 \\ \hline \multirow{2}{*}{\(\begin{array}{c}\text{\emph{Base}}\end{array}\)} & \multirow{2}{*}{None} & No & 30.2 & 33.3 \\ \cline{3-5} & & & Yes & 29.7 & 33.4 \\ \hline \multirow{4}{*}{\(\begin{array}{c}\text{\emph{Large}}_{1-8}\end{array}\)} & \multirow{2}{*}{None} & Multiling. in-house (1168h) & No & 26.8 & 28.7 \\ \cline{3-5} & & (1168h) & Yes & 25.5 & 29.4 \\ \cline{3-5} & \multirow{2}{*}{\(\begin{array}{c}\text{\emph{XLSR-53}}\end{array}\)} & Viet. YT (1168h) & No & 24.3 & 28.1 \\ \cline{3-5} & & (1168h) & Yes & 23.7 & 28.2 \\ \hline \end{tabular} \end{table} Table 5.9: Degradations of _WER_s [%] on HYKIST data between pretraining schedules when applying _ICE Loss_. All models are finetuned until full convergence on Vietnamese in-house data. Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for _Large\({}_{1-8}\)_ and 6 for _Base_ architecture. 
#### 5.4.2 Effectiveness of Intermediate Focal Loss **Effectiveness on HYKIST data**: As shown in Tables 5.12 and 5.13 below, when using the _IF Loss_ the _WER_s on HYKIST improve over the baselines for various pretraining schedules (7/9 experiments show total improvements, compared to only 3/9 experiments with the _ICE Loss_; the _ICE Loss_ results are shown in Tables 5.8 and 5.9 above). In addition, all _WER_s of the _IF Loss_ experiments are lower than those of the _ICE Loss_ experiments, except on the HYKIST test set of the from-scratch training. We therefore conclude that, when fine-tuning and recognizing in the same telephone domain, the _IF Loss_ works better than the _ICE Loss_. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Arch.} & \multirow{2}{*}{Init.} & \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With ICE} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{5-6} & & & CV dev & CV test & Vivos \\ \hline \multirow{4}{*}{\(Large_{1-8}\)} & \multirow{4}{*}{\(XLSR\)-53} & \multirow{4}{*}{None} & No & 14.8 & 32.5 & 30.3 \\ \cline{3-6} & & & Yes & 15.8 & 33.9 & 30.0 \\ \cline{3-6} & & & No & 11.5 & 29.4 & 27.2 \\ \cline{3-6} & & & Yes & 12.3 & 29.8 & 27.7 \\ \hline \multirow{2}{*}{\(Base\)} & \multirow{4}{*}{\((219h)\)} & \multirow{2}{*}{\((219h)\)} & No & 16.6 & 35.4 & 30.9 \\ \cline{3-6} & & & Yes & 15.4 & 34.2 & 31.3 \\ \hline \multirow{4}{*}{\(Large_{1-8}\)} & \multirow{2}{*}{None} & Multiling. in-house & No & 15.2 & 29.7 & 29.5 \\ \cline{3-6} & & (1168h) & Yes & 14.8 & 30.5 & 28.8 \\ \cline{3-6} & & Viet. in-house + YT & No & 12.9 & 26.5 & 21.0 \\ \cline{1-1} \cline{2-6} & & (1168h) & Yes & 13.6 & 28.2 & 21.9 \\ \cline{1-1} \cline{2-6} & & Viet. YT & No & 11.8 & 28.4 & 25.6 \\ \cline{1-1} \cline{2-6} & & (1168h) & Yes & 12.3 & 28.3 & 25.0 \\ \hline \end{tabular} \end{table} Table 5.11: Degradations of _WERs_ [%] on CommonVoice and VIVOS between pretraining schedules when applying _ICE Loss_. All models are finetuned on Vietnamese in-house data. Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for \(Large_{1-8}\) and 6 for _Base_ architecture. Compared to our strongest continued-pretraining baseline, applying the _IF Loss_ to the combination of Vietnamese in-house and _YT_ data (24.5% and 27.1%) outperforms plain continued pretraining on the same combination (24.5% and 27.2% on the dev and test sets, respectively, as shown in Table 5.4). We believe the _IF Loss_ can further reduce the _WER_s for this continued pretraining schedule, as it does for continued pretraining on _YT_ data. Among the total improvements reported in Table 5.12, notable is the _WER_ reduction of the _YT_ experiment from 29.8% and 35.2% to 26.1% and 30.8% on the dev and test sets, respectively, a relative _WER_ reduction of around 12.5% on average. For more diverse pretraining data (the Vietnamese in-house data), the _WER_s drop from 30.4% and 33.4% to 28.6% and 33.0%, a relative reduction of around 3.6% on average. For even more diverse pretraining data (Vietnamese in-house + _YT_), the _WER_s drop from 25.3% and 27.2% to 24.5% and 27.1%, around 1.8% relative on average. We therefore conclude that the effectiveness of the _IF Loss_ decreases as the pretraining data becomes more diverse. 
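The relative numbers quoted above can be checked with a short calculation; the sketch below (the helper name is our own) reproduces the roughly 12.5% average relative WER reduction (WERR) of the _YT_ experiment.

```python
def werr(baseline: float, improved: float) -> float:
    """Relative WER reduction in percent."""
    return 100.0 * (baseline - improved) / baseline

# YT experiment with IF Loss (Table 5.12): dev 29.8 -> 26.1, test 35.2 -> 30.8
dev, test = werr(29.8, 26.1), werr(35.2, 30.8)
print(round(dev, 1), round(test, 1), round((dev + test) / 2, 1))  # 12.4 12.5 12.5
```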
As shown in Table 5.13, only for the case of direct fine-tuning with _XLSR-53_ does the _IF Loss_ increase the _WER_s on HYKIST compared to the baseline. Even then, the degradation is rather small, from 27.9% and 32.3% to 28.0% and 32.8% on the dev and test sets, respectively. Besides, a partial degradation is seen in the multilingual in-house experiment, where the average _WER_ over the dev and test sets (25.2% and 29.3%) is still lower than the baseline (26.8% and 28.7%). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Arch.} & \multirow{2}{*}{Init.} & \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With IF} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{5-6} & & & & \multicolumn{1}{c|}{Hykist dev} & \multicolumn{1}{c|}{Hykist test} \\ \hline \multirow{5}{*}{_Large\({}_{1\text{-}8}\)_} & \multirow{5}{*}{None} & No & 35.6 & 40.7 \\ \cline{3-6} & & & Yes & 33.0 & 38.8 \\ \cline{3-6} & & & No & 30.4 & 33.4 \\ \cline{3-6} & & & Yes & 28.6 & 33.0 \\ \cline{3-6} & _XLSR-53_ & \multirow{2}{*}{Viet. in-house (219h)} & No & 25.5 & 29.1 \\ \cline{3-6} & & & Yes & 24.7 & 29.1 \\ \cline{3-6} & & & No & 30.2 & 33.3 \\ \cline{3-6} & & & Yes & 29.0 & 33.0 \\ \hline \multirow{5}{*}{_Large\({}_{1\text{-}8}\)_} & \multirow{5}{*}{None} & Viet. & \multirow{2}{*}{No} & 29.8 & 35.2 \\ \cline{3-6} & & & (1168h) & Yes & 26.1 & 30.8 \\ \cline{3-6} & & & & No & 25.3 & 27.2 \\ \cline{3-6} & & & (1168h) & Yes & 24.5 & 27.1 \\ \cline{1-1} \cline{2-6} & \multirow{2}{*}{_XLSR-53_} & Viet. & \multirow{2}{*}{No} & 24.3 & 28.1 \\ \cline{1-1} \cline{3-6} & & & (1168h) & Yes & 23.4 & 28.1 \\ \hline \end{tabular} \end{table} Table 5.12: Improvements of _WER_s [%] on HYKIST data between pretraining schedules when applying _IF Loss_. All models are finetuned until full convergence on Vietnamese in-house data. Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for _Large\({}_{1\text{-}8}\)_ and 6 for _Base_ architecture. Hence, for a rapid deployment of an _ASR_ system, we recommend using the _IF Loss_ directly in training, without an additional baseline training run for comparison. **Effectiveness on CommonVoice and VIVOS data**: In the larger domain-shift recognition setting, we still obtain significant _WER_ reductions in multiple experiments, as shown in Table 5.14. The most notable reduction over the baselines is again on the _YT_ data, whose _WER_s decrease from 16.4%, 34.4% and 28.7% to 14.5%, 30.9% and 26.9% on the three read-speech sets, respectively, a _WERR_ of about 9.3% on average. The _ICE Loss_ in Table 5.10 yields total improvements in three experiments, while the _IF Loss_ yields four. Furthermore, the _WER_s of the _IF Loss_ on the three read-speech datasets are as competitive as those of the _ICE Loss_, and, as shown above, the _IF Loss_ works better than the _ICE Loss_ when fine-tuning and recognizing in the same telephone domain. We therefore conclude that the _IF Loss_ works better than the _ICE Loss_ across all domains. However, in the larger domain-shift setting we still observe performance degradations in experiments pretrained on diverse data, as shown in Table 5.15. We therefore recommend the _IF Loss_ only for less diverse pretraining data when the domains of the fine-tuning and recognition data differ strongly. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Init.} & \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With IF} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{3-5} & & & Hykist dev & Hykist test \\ \hline \multirow{2}{*}{_XLSR-53_} & \multirow{2}{*}{None} & No & 27.9 & 32.3 \\ \cline{3-5} & & Yes & 28.0 & 32.8 \\ \hline \multirow{2}{*}{None} & Multiling. in-house & No & 26.8 & 28.7 \\ \cline{3-5} & (1168h) & Yes & 25.2 & 29.3 \\ \hline \end{tabular} \end{table} Table 5.13: Degradations of _WERs_ [%] on HYKIST data between pretraining schedules when applying _IF Loss_. All models are finetuned until full convergence on Vietnamese in-house data. Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for _Large_\({}_{1-8}\) and 6 for _Base_ architecture. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Init.} & \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With IF} & \multicolumn{3}{c|}{WER [\%]} \\ \cline{3-6} & & & CV dev & CV test & Vivos \\ \hline \multirow{4}{*}{_XLSR-53_} & \multirow{2}{*}{None} & No & 14.8 & 32.5 & 30.3 \\ \cline{3-6} & & Yes & 15.4 & 33.6 & 30.0 \\ \cline{2-6} & Viet. in-house & No & 11.5 & 29.4 & 27.2 \\ \cline{2-6} & (219h) & Yes & 13.0 & 29.8 & 27.6 \\ \hline \multirow{4}{*}{None} & Multiling. in-house & No & 15.2 & 29.7 & 29.5 \\ \cline{2-6} & (1168h) & Yes & 14.5 & 30.6 & 28.1 \\ \cline{2-6} & Viet. in-house + YT & No & 12.9 & 26.5 & 21.0 \\ \cline{1-1} \cline{2-6} & (1168h) & Yes & 12.7 & 28.6 & 22.1 \\ \hline \multirow{2}{*}{_XLSR-53_} & Viet. YT & No & 11.8 & 28.4 & 25.6 \\ \cline{2-6} & (1168h) & Yes & 13.2 & 29.1 & 24.5 \\ \hline \end{tabular} \end{table} Table 5.15: Degradations of _WERs_ [%] on CommonVoice and VIVOS between pretraining schedules when applying _IF Loss_. All models are finetuned on Vietnamese in-house data. Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for _Large_\({}_{1-8}\) and 6 for _Base_ architecture. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Arch.} & \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{With IF} & \multicolumn{3}{c|}{WER [\%]} \\ \cline{3-6} & & & CV dev & CV test & Vivos \\ \hline \multirow{4}{*}{_Large_\({}_{1-8}\)} & Viet. in-house & No & 16.4 & 35.6 & 31.3 \\ \cline{2-6} & (219h) & Yes & 15.8 & 34.5 & 29.6 \\ \cline{2-6} & \multirow{2}{*}{None} & No & 20.8 & 44.7 & 34.9 \\ \cline{3-6} & & Yes & 19.7 & 43.1 & 33.9 \\ \hline \multirow{2}{*}{_Base_} & Viet. in-house & No & 16.6 & 35.4 & 30.9 \\ \cline{2-6} & (219h) & Yes & 15.9 & 34.4 & 30.5 \\ \hline \multirow{2}{*}{_Large_\({}_{1-8}\)} & Viet. YT & No & 16.4 & 34.4 & 28.7 \\ \cline{2-6} & (1168h) & Yes & 14.5 & 30.9 & 26.9 \\ \hline \end{tabular} \end{table} Table 5.14: Improvements of _WERs_ [%] on CommonVoice and VIVOS between pretraining schedules when applying _IF Loss_. All models are finetuned on Vietnamese in-house data. Only 1 intermediate layer is applied in the middle _Transformer_ block, e.g. position 4 for _Large_\({}_{1-8}\) and 6 for _Base_ architecture. ### Intermediate loss analysis #### Studies on Intermediate Focal Loss design In Table 5.16, we study variants of placing the _IF Loss_ at different layers. For the _Large_\({}_{1\text{-}8}\) model, we observe performance degradation when moving the single _IF Loss_ away from the middle layer, while the results are mixed for the _Base_ model. 
[41] also reports the same behavior when using the Intermediate CTC Loss on 12-layer, 24-layer and 48-layer models in a supervised-only scenario. When applying two intermediate layers, we observe performance degradation for both the _Base_ and _Large_\({}_{1\text{-}8}\) models. From these experimental results, we conclude that a single _IF Loss_ at the middle network layer yields the best result among the variants. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{Viet. in-house \(\textit{Large}_{1\text{-}8}\)} \\ \hline Layer & Hykist dev & Hykist test & CV dev & CV test & Vivos \\ \hline None & 30.4 & 33.4 & 16.4 & 35.6 & 31.3 \\ \hline 2 & 29.4 & 34.0 & **15.5** & 35.7 & 30.0 \\ \hline 4 & **28.6** & **33.0** & 15.8 & **34.5** & **29.6** \\ \hline 6 & 29.1 & **33.0** & 16.6 & 34.1 & 29.9 \\ \hline 2,6 & 29.1 & 34.1 & 15.6 & 35.5 & 30.7 \\ \hline 3,5 & 29.1 & 33.3 & 16.4 & 35.0 & 30.1 \\ \hline \multicolumn{6}{|c|}{Viet. in-house \(\textit{Base}\)} \\ \hline None & 30.2 & 33.3 & 16.6 & 35.4 & 30.9 \\ \hline 3 & **28.7** & 33.4 & 17.0 & 35.5 & **30.1** \\ \hline 6 & 29.0 & 33.0 & **15.9** & **34.4** & 30.5 \\ \hline 9 & 29.5 & **32.6** & 14.8 & 34.4 & 30.5 \\ \hline 4,8 & 29.3 & 33.5 & 16.2 & 35.0 & 30.3 \\ \hline \end{tabular} \end{table} Table 5.16: _WERs_ [%] comparison of _IF Loss_ on different layers between 2 architecture sizes: _Base_ and \(\textit{Large}_{1\text{-}8}\). All models are finetuned until full convergence on Vietnamese in-house data and recognized on HYKIST, CommonVoice and VIVOS dataset. Layer "None" means the baseline (no application of _IF Loss_). #### On-off Regularization technique To better exploit the _IF Loss_, we introduce the "On-off Regularization" technique and test it for raw-waveform from-scratch training. Experimental results in Table 5.17 show that, if we train without any regularization techniques ("Off Regularization" stage) for the first 3 epochs and then reset the learning rate and continue training with all regularizations turned on ("On Regularization" stage), the _WER_s drop from 33.0% and 38.8% to 32.5% and 38.4% on the dev and test sets, respectively, compared to the baseline. In future work, we plan to apply the On-off Regularization technique to other pretraining schedules. #### Combination of L2 regularization and Intermediate Focal Loss \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Arch.} & \multirow{2}{*}{Pre-training data} & \multirow{2}{*}{Reg.} & \multicolumn{2}{c|}{WER [\%]} \\ \cline{4-5} & & & Hykist dev & Hykist test \\ \hline \multirow{4}{*}{\(Large_{1-8}\)} & Viet. in-house & With IF & 28.6 & 33.0 \\ \cline{3-5} & (219h) & With IF + L2 & 28.6 & 32.9 \\ \cline{2-5} & \multirow{2}{*}{None} & With IF & 33.0 & 38.8 \\ \cline{3-5} & & With IF + L2 & 31.4 & 36.4 \\ \hline \end{tabular} \end{table} Table 5.18: _WER_s [%] when combining the _IF Loss_ with L2 regularization. All models are finetuned until full convergence on Vietnamese in-house data and the recognition is done on HYKIST. Due to time constraints and project requirements, we only tune the L2 regularization values in favor of the HYKIST data performance. Using a grid search for the L2 value selection, we are able to reduce the _WER_s of multiple pretraining schedules, as shown in Table 5.18. Notable results are seen for raw-waveform from-scratch training, where the _WER_s drop from 33.0% and 38.8% (_IF Loss_ only) to 31.4% and 36.4% (combined with L2 regularization) on the dev and test sets, respectively, a _WERR_ of 5.5% on average. 
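As a reference for the L2 setup discussed here, the sketch below adds an explicit L2 penalty over the linear layers only, as described in Section 4.3.5; the helper function is our own illustration, not the actual RETURNN configuration.

```python
import torch
import torch.nn as nn

def l2_penalty(model: nn.Module, l2: float) -> torch.Tensor:
    """Explicit L2 term over the weights of all linear layers (sketch)."""
    return l2 * sum(m.weight.pow(2).sum()
                    for m in model.modules()
                    if isinstance(m, nn.Linear))

# added to the training objective, e.g. with a grid-searched value:
# loss = fce_loss + l2_penalty(model, l2=0.001)
```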
We stick with the default parameters for the _ICE Loss_ and _IF Loss_ because the _WER_s do not fluctuate significantly for other parameter choices. The right parameters for L2 regularization, however, are chosen by grid search, and different values make the _WER_s vary greatly. We therefore recommend applying L2 regularization as the last step in the entire regularization pipeline, due to its higher sensitivity compared to the _ICE Loss_ and _IF Loss_. ## Chapter 6 Conclusion ### 6.1 Overall results In this thesis, we describe our efforts to develop HYKIST-related _ASR_ systems for Vietnamese conversational telephone speech in the medical field. Firstly, we present supervised-only baselines with various acoustic encoder topologies, deploying the hybrid _HMM_ framework. Secondly, we use unsupervised _wav2vec 2.0_ pretraining to improve system performance and analyze the effect of the pretraining data. The experimental findings demonstrate that pretraining is especially effective when diverse data is used, e.g. data from multiple domains, multi-speaker data, or augmented data. Also, multilingual pretraining does not always outperform monolingual pretraining. It is also shown that cost-effective model development is possible by utilizing the freely available _XLSR-53_ model. Comparing with the baselines, we show that _wav2vec 2.0_ unsupervised pretraining does not always outperform the supervised-only _Transformer_ approach, especially when the pretraining data is not diverse enough. Thirdly, our best method for further improving accuracy is the continued pretraining approach, where we pretrain on multiple 8kHz datasets with parameters initialized from the 16kHz multilingual _XLSR-53_ model. We show that continued pretraining is beneficial in both the monolingual and the multilingual scenario, although it benefits less diverse data more than diverse data. Fourthly, we compare the _wav2vec 2.0_ encoders and recommend the _Base_ architecture instead of _Large\({}_{1\_8}\)_ for the sake of both accuracy and inference performance. We also recommend Kaiming Initialization instead of Xavier Initialization for better accuracy with the _wav2vec 2.0_ architecture. Finally, we apply and analyze the intermediate loss - the _Intermediate Cross-Entropy Loss (ICE Loss)_ and the _Intermediate Focal Loss (IF Loss)_ - to make _wav2vec 2.0_ more robust across recognition domains. We show that the _IF Loss_ works better than the _ICE Loss_ in all data domains. In addition, the _IF Loss_ works well for small domain shifts, while for large domain shifts it should only be applied with less diverse pretraining data. To further improve accuracy, we integrate the _IF Loss_ with On-off Regularization and L2 regularization. ### 6.2 Future work During the work on this thesis, we have identified some promising directions for future work. First, Section 5.2 shows that system performance benefits from unsupervised pretraining on diverse data, but pretraining on in-domain data, in other words medical speech data, has not been compared yet. Second, we show that data augmentation in the pretraining stage is effective; however, such data augmentation for fine-tuning has not been investigated yet. Third, due to time constraints, the effectiveness of On-off Regularization for different pretraining schedules has not been studied. 
This raises the question of whether sequence discriminative training [19], which also uses a learning-rate reset, works well with _wav2vec 2.0_. Finally, the wav2vec 2.0 Conformer [50] has become popular lately; however, its effectiveness on Vietnamese has not been investigated yet.
2309.10911
Language-Conditioned Affordance-Pose Detection in 3D Point Clouds
Affordance detection and pose estimation are of great importance in many robotic applications. Their combination helps the robot gain an enhanced manipulation capability, in which the generated pose can facilitate the corresponding affordance task. Previous methods for affordance-pose joint learning are limited to a predefined set of affordances, thus limiting the adaptability of robots in real-world environments. In this paper, we propose a new method for language-conditioned affordance-pose joint learning in 3D point clouds. Given a 3D point cloud object, our method detects the affordance region and generates appropriate 6-DoF poses for any unconstrained affordance label. Our method consists of an open-vocabulary affordance detection branch and a language-guided diffusion model that generates 6-DoF poses based on the affordance text. We also introduce a new high-quality dataset for the task of language-driven affordance-pose joint learning. Intensive experimental results demonstrate that our proposed method works effectively on a wide range of open-vocabulary affordances and outperforms other baselines by a large margin. In addition, we illustrate the usefulness of our method in real-world robotic applications. Our code and dataset are publicly available at https://3DAPNet.github.io
Toan Nguyen, Minh Nhat Vu, Baoru Huang, Tuan Van Vo, Vy Truong, Ngan Le, Thieu Vo, Bac Le, Anh Nguyen
2023-09-19T20:10:01Z
http://arxiv.org/abs/2309.10911v1
# Language-Conditioned Affordance-Pose Detection in 3D Point Clouds ###### Abstract Affordance detection and pose estimation are of great importance in many robotic applications. Their combination helps the robot gain an enhanced manipulation capability, in which the generated pose can facilitate the corresponding affordance task. Previous methods for affordance-pose joint learning are limited to a predefined set of affordances, thus limiting the adaptability of robots in real-world environments. In this paper, we propose a new method for language-conditioned affordance-pose joint learning in 3D point clouds. Given a 3D point cloud object, our method detects the affordance region and generates appropriate 6-DoF poses for any unconstrained affordance label. Our method consists of an open-vocabulary affordance detection branch and a language-guided diffusion model that generates 6-DoF poses based on the affordance text. We also introduce a new high-quality dataset for the task of language-driven affordance-pose joint learning. Intensive experimental results demonstrate that our proposed method works effectively on a wide range of open-vocabulary affordances and outperforms other baselines by a large margin. In addition, we illustrate the usefulness of our method in real-world robotic applications. Our code and dataset are publicly available at [https://3DAPNet.github.io](https://3DAPNet.github.io). ## I Introduction In robotic research, affordance detection and pose estimation are among the most important and widely studied problems [1, 2]. Understanding object affordances helps robots determine the inherent possibilities and potential actions within an environment, while pose estimation is considered a prerequisite for robots to interact with and manipulate their surrounding objects effectively. Combining affordance detection and pose estimation holds the potential to help robots gain a more comprehensive understanding of their environment's possibilities and, at the same time, achieve enhanced manipulation abilities [3]. However, prior research has predominantly focused on solving these problems independently, while fewer works tackled both tasks simultaneously [4, 5, 6]. This is because the concept of affordance can be arbitrary, and without extra information (e.g., text input), it is challenging to detect the associated pose. Recently, with the availability of depth cameras, several works have addressed the task of affordance detection in 3D point clouds [7, 8, 9, 10, 5]. Most of them treated the problem as a supervised task of labeling predefined affordance labels for each point in the point cloud [5, 8]. Lately, the authors in [10] explored the open-vocabulary affordance detection task, a new research direction that removes the constraint of a predefined affordance label set by utilizing language models [11, 12]. The work in [10] increased the flexibility of the affordance learning process, getting closer to universal affordance detection; however, it does not provide 6-DoF poses that support the corresponding affordance. As a result, the task remains a pure perception problem, which currently hinders its practical application on real robots. Other works combine affordance detection and pose estimation [5, 13, 14, 15], yet their methods are still limited to a predefined set of affordance tasks. In this research, we take a step further by integrating the tasks of _open-vocabulary_ affordance detection and _language-driven pose estimation_. 
Given a 3D point cloud, our goal is to simultaneously detect the unconstrained affordance and generate poses based on the input text query. To realize that objective, we first establish a new dataset for the task of 3D Affordance-Pose joint learning, namely the 3DAP dataset. Our dataset is composed of triplets of a 3D point cloud, an affordance label in the form of natural text, and a set of 6-DoF poses associated with the affordance. We then present a joint learning framework consisting of a language-driven affordance detection branch and a pose estimation branch, which is a guided diffusion model that generates 6-DoF poses conditioned on the given point cloud object and the affordance text. Our choice of the diffusion model is motivated by its recent remarkable results in generating diverse data modalities from multiple conditions [16, 17, 18], yet its application to pose estimation remains little explored [19]. Our method is an end-to-end pipeline in which, via a text prompt, the robot can perform a manipulation task using the affordance and the detected pose. Figure 1 shows the main concept of our work. Fig. 1: Our framework allows the simultaneous detection of affordance region and corresponding supporting poses given the input point cloud object and an arbitrary affordance text. Our contributions are summarized as follows: * We introduce 3DAP, a new dataset of 3D point cloud objects with affordance language labels and affordance-specific 6-DoF poses. * We propose 3DAPNet, a new method that effectively tackles the task of affordance-pose joint learning. * We validate our method through intensive experiments and demonstrate the usefulness of 3DAPNet in several real-world robotic manipulation tasks. ## II Related Work **Affordance Detection.** Many works tackled the task of affordance detection in RGB images [20, 21, 22, 23, 24] and 3D point clouds [25, 5, 8, 9, 10]. In particular, Luo _et al._[22] leveraged affinity from human-object interaction to detect affordances of non-interactive objects in 2D images. Authors in [7] detected affordance maps on 3D point cloud scenes through interactive manipulation. Also working on point clouds, authors in [9] proposed a framework that detects affordance maps from object-object interaction. Most of these works focus on detecting a set of predefined affordances rather than the open-vocabulary setting. Lately, Nguyen _et al._[10] introduced a framework that allows the detection of arbitrary affordances given in the form of a text description. While achieving promising results, a common shortcoming of previous methods is that they solely detect the affordance regions while neglecting the corresponding poses that support the detected affordances. This limitation poses a challenge for the robot to effectively execute the necessary affordance tasks in real-world manipulation settings. **3D Pose Generation.** Given a single object or multiple objects in a cluttered environment, the goal of 3D pose generation algorithms is to find a pose configuration that can support manipulation tasks [2, 3]. Initial works addressed the problem by employing analytical approaches [26, 27], which are of limited practical use since they assume complete knowledge of object properties such as shape, geometry, and material. The rapid development of grasping simulators [28, 29, 30] in the following years led to the rise of data-driven approaches. 
Early data-driven methods primarily used either hand-crafted features [31, 32, 33] or traditional machine learning algorithms [34, 35, 36]. Recent years have witnessed the groundbreaking performance of deep learning methods for pose estimation [37, 38, 39, 40, 5]. In particular, Lou _et al._[39] presented a method that can generate collision-free poses in challenging environments, while the authors in [38] synthesized poses using a variational autoencoder network. With their recent remarkable results in various generation tasks, diffusion models have also been applied to the task of pose generation [42, 19]. Different from these approaches, which are affordance-agnostic, our proposed diffusion model tackles the task of affordance-specific pose generation. Some earlier works leveraged affordance learning for the problem of task-specific grasping [13, 5, 14]. Nonetheless, their methods are restricted to a predetermined set of affordance tasks. In comparison, our method focuses on the open-vocabulary setting. **Language-Conditioned Robotic Manipulation.** With the stunning advancements of large language models [43, 44, 45, 46, 47], several recent works [48, 49, 50, 51, 52, 53, 54] have utilized the rich semantics of language for robotic manipulation tasks. For instance, Ahn _et al._[50] proposed a method that constrains the language model to recommend actions that are both plausible to the robot and contextually appropriate. Silva _et al._[49] proposed to use language to support generalization in multi-task manipulation. The authors in [51] presented a framework that can learn meaningful skill abstractions from language-based expert demonstrations. More recently, Ren _et al._[53] introduced a language-conditioned meta-learning approach that learns efficient policies adaptable to novel tools from text descriptions. Different from these works, our method addresses the task of language-conditioned affordance-pose joint learning, where the affordance language simultaneously grounds the affordance region and the 6-DoF pose configurations. ## III The 3D Affordance-Pose Dataset We present the 3D Affordance-Pose dataset (3DAP) as a dataset for affordance-pose joint learning. To construct this dataset, we apply a semi-automatic pipeline in which we first collect affordance-annotated 3D point clouds from 3D AffordanceNet [8], a widely used dataset that is currently the largest for affordance detection in 3D point clouds. Next, we leverage the 6-DoF GraspNet method [38] to generate a large number of 6-DoF pose candidates. Afterwards, we manually select the affordance-specific poses for each affordance that the object affords. **Point Cloud Collection.** We collect affordance-annotated point clouds from the recent 3D AffordanceNet dataset [8]. Each point cloud represents a single object and is an unordered set of \(2,048\) points. Each point is represented by its Euclidean coordinates. The point coordinates of every point cloud are normalized to lie in \([0,1]\). To better represent real-world objects, we scale the point clouds by different scale factors so that the longest side of an axis-aligned bounding box for each object ranges from \(5\,\mathrm{cm}\) to \(30\,\mathrm{cm}\). The collected objects belong to categories commonly used in daily manipulation tasks, such as knife, bottle, and mug. We express affordance labels as natural language descriptors. This facilitates open-vocabulary affordance detection, so that methods trained on our 3DAP dataset can potentially generalize to unseen affordances.
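For concreteness, the preprocessing just described can be sketched as follows; this is a minimal illustration and the function name and argument layout are our own, not part of the 3DAP release:

```python
import numpy as np

def normalize_and_scale(points, target_longest_side):
    """Normalize a point cloud into [0, 1] and rescale it to a physical size.

    points: (N, 3) array of xyz coordinates.
    target_longest_side: desired longest side of the axis-aligned bounding
        box in meters (the paper samples this range as [0.05, 0.30]).
    """
    # Shift to the origin and divide by the largest extent so all
    # coordinates lie in [0, 1] while preserving the aspect ratio.
    mins = points.min(axis=0)
    extents = points.max(axis=0) - mins
    normalized = (points - mins) / extents.max()
    # Rescale so the longest bounding-box side matches the target size.
    return normalized * target_longest_side

# Example: a random cloud of 2,048 points scaled to a 20 cm object.
cloud = np.random.rand(2048, 3)
scaled = normalize_and_scale(cloud, target_longest_side=0.20)
```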
Fig. 2: Affordance-specific pose examples. **Poses Collection.** We utilize 6-DoF GraspNet [38] to automatically generate a large number of pose candidates for each collected point cloud. In particular, for each object, we pick \(1,000\) successful parallel-jaw poses with the highest evaluation scores. Following the Robotiq 2F-85 setting [55], the collected poses have a maximum grip aperture of \(85\,\mathrm{mm}\). From the generated poses, we manually select the affordance-specific poses for each object. Given an object and an affordance, we select among the \(1,000\) candidates the ones that best support the affordance task. For example, for a bottle and the affordance open, the poses whose contact points lie on the lid are curated. In total, our dataset contains 28K gripper poses for a wide variety of affordance tasks. Examples of affordance-specific poses in our dataset are presented in Figure 2. ## IV Affordance-Pose Joint Learning ### _Problem Formulation_ We present 3DAPNet, a new method for affordance-pose joint learning. Given the captured 3D point cloud of an object and a set of affordance labels conveyed through natural language texts, our objective is to jointly produce both the relevant affordance regions and the appropriate pose configurations that facilitate the affordances. Particularly, 3DAPNet takes as input a point cloud denoted by \(\mathbf{C}=\{\mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{n}\}\), containing \(n\) points in 3D Euclidean space, alongside \(m\) arbitrary affordance labels articulated in natural language. The desired output from our framework encompasses an affordance map \(\mathbf{A}=\{a_{1},a_{2},...,a_{n}\}\) that assigns an affordance label to each point, and \(m\) sets of 6-DoF poses that facilitate the corresponding affordances. We consider a 6-DoF pose as configured by \([\mathbf{g}_{\mathbf{qu}},\mathbf{g}_{\mathbf{tr}}]\), in which \(\mathbf{g}_{\mathbf{qu}}\) is a unit-norm quaternion representing the rotation and \(\mathbf{g}_{\mathbf{tr}}\) is a translation vector. The overview of our network is illustrated in Figure 3. ### _Open-Vocabulary Affordance Detection_ We follow the recent work [10] to detect affordances in the open-vocabulary setting. The input point cloud \(\mathbf{C}\) is fed into a PointNet++ model [56] to extract \(n\) point-wise feature vectors \(\mathbf{P}_{1},\mathbf{P}_{2},...,\mathbf{P}_{n}\). Next, the \(m\) affordance language labels are fed into a text encoder \(\mathcal{T}\) to extract \(m\) text embeddings \(\mathbf{t}_{1},\mathbf{t}_{2},...,\mathbf{t}_{m}\). As in other works [10, 57, 58], the choice of the text encoder is flexible. To enable open-vocabulary affordance detection, we compute the semantic relations between the point cloud and its potential affordance labels by correlating the text embeddings and point features using the cosine similarity. Concretely, the element \(F_{i,j}\) at the \(i\)-th row and \(j\)-th column of the correlation matrix \(\mathbf{F}\in\mathbb{R}^{n\times m}\), which is the correlation score of the point feature \(\mathbf{P}_{i}\) and the affordance text embedding \(\mathbf{t}_{j}\), is computed as: \[F_{i,j}=\frac{\mathbf{P}_{i}^{\top}\mathbf{t}_{j}}{\left\|\mathbf{P}_{i} \right\|\left\|\mathbf{t}_{j}\right\|}. \tag{1}\] During training, we optimize the PointNet++ to provide point embeddings that are close to the corresponding label text embeddings.
The point-wise softmax output of every point \(i\) is then computed as: \[S_{i,j}=\frac{\exp\left(F_{i,j}/\eta\right)}{\sum_{k=1}^{m}\exp(F_{i,k}/\eta)} \, \tag{2}\] where \(\eta\) is a learned temperature parameter. The loss function for affordance detection is computed as the negative log-likelihood of the softmax output over the entire point cloud: \[\mathcal{L}_{\text{aff}}=-\sum_{i=1}^{n}\log S_{i,a_{i}}. \tag{3}\]
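To make Equations (1)-(3) concrete, the following is a minimal PyTorch-style sketch of the affordance branch; the function and tensor names are our own illustration, not the released implementation:

```python
import torch
import torch.nn.functional as F

def affordance_loss(point_feats, text_embeds, labels, log_eta):
    """Cosine correlation (Eq. 1), temperature softmax (Eq. 2), NLL (Eq. 3).

    point_feats: (n, d) point-wise features from PointNet++.
    text_embeds: (m, d) affordance text embeddings from the text encoder.
    labels: (n,) ground-truth affordance index a_i for each point.
    log_eta: learnable scalar; eta = exp(log_eta) keeps the temperature positive.
    """
    # Eq. (1): cosine similarity between every point and every affordance text.
    P = F.normalize(point_feats, dim=-1)
    T = F.normalize(text_embeds, dim=-1)
    corr = P @ T.t()                       # (n, m) correlation matrix
    # Eq. (2): softmax over the m affordance labels with temperature eta.
    logits = corr / torch.exp(log_eta)
    log_S = F.log_softmax(logits, dim=-1)
    # Eq. (3): negative log-likelihood of the correct label at each point.
    return -log_S[torch.arange(labels.shape[0]), labels].sum()
```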
### _Language-Conditioned Pose Generation_ Our key contribution is a new guided diffusion model that addresses the task of affordance-specific pose generation. Our diffusion model is designed to produce poses that are not only based on the point cloud, but also facilitate the affordance task by conditioning on the input text. Fig. 3: The overview of our 3DAPNet. Our network includes two branches: one for affordance detection and one for pose generation. The unrestricted affordance label represented in natural language form enables the open-vocabulary setting. During inference, the predicted affordance map and the generated poses are combined to support the appropriate task. **Forward Process.** Given a pose from the dataset \(\mathbf{g}_{0}\sim q(\mathbf{g})\), in the forward process, we gradually add to the pose small amounts of Gaussian noise in \(T\) steps, creating a sequence of noisy poses \(\mathbf{g}_{1},\mathbf{g}_{2},\ldots,\mathbf{g}_{T}\). When \(T\rightarrow\infty\), \(\mathbf{g}_{T}\) is equivalent to \(\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\)[59]. The noise step sizes are specified by a predefined variance schedule \(\left\{\beta_{t}\in\left(0,1\right)\right\}_{t=1}^{T}\). From that, the forward process is formulated as \(q\left(\mathbf{g}_{t}\mid\mathbf{g}_{t-1}\right)=\mathcal{N}\left(\sqrt{1- \beta_{t}}\mathbf{g}_{t-1},\beta_{t}\mathbf{I}\right)\). The noisy sample at any arbitrary time step \(t\) can be obtained in the closed form: \[\mathbf{g}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{g}_{0}+\sqrt{1-\bar{\alpha}_{t} }\boldsymbol{\epsilon}, \tag{4}\] where \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\) with \(\alpha_{t}=1-\beta_{t}\) and \(\boldsymbol{\epsilon}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\). **Reverse Process.** The reverse process allows us to generate a pose from Gaussian noise by gradually denoising through \(T\) steps via the reverse probability \(q\left(\mathbf{g}_{t-1}\mid\mathbf{g}_{t},\mathbf{c},\mathbf{t}\right)\). In this probability, \(\mathbf{c}\) is the point cloud feature produced by the PointNet++ encoder and \(\mathbf{t}\) is the text embedding of the affordance of interest. \(\mathbf{c}\) and \(\mathbf{t}\) represent the two guidances that our model needs to condition on, i.e., the point cloud object and the affordance text. As \(q\left(\mathbf{g}_{t-1}\mid\mathbf{g}_{t},\mathbf{c},\mathbf{t}\right)\) is intractable [59], we approximate it with a neural network. More particularly, we approximate the noise \(\boldsymbol{\epsilon}\) at every timestep \(t\) by a denoising network \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}\left(\mathbf{g}_{t},\mathbf{c}, \mathbf{t},t\right)\). The network \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}\) is trained to minimize the difference between the real and approximated noises. The loss function for pose generation is therefore computed as: \[\mathcal{L}_{\text{pose}}=\mathbb{E}_{\boldsymbol{\epsilon},\mathbf{g}_{0},\mathbf{c}, \mathbf{t},t}\left[\left\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{ \boldsymbol{\theta}}\left(\mathbf{g}_{t},\mathbf{c},\mathbf{t},t\right) \right\|^{2}\right]. \tag{5}\] Following other works [60], to balance between the quality and the diversity of the generated poses, we randomly drop the conditions \(\mathbf{c}\) and \(\mathbf{t}\) to train unconditionally with a probability \(p_{\text{uncond}}\). Our design allows conditional and unconditional training to share a single network. Subsequently, we detail the design of our denoising network \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}\); see Figure 3 for an illustration. In particular, following other works on diffusion models [59], we employ a downscale-upscale U-Net architecture [61] for the network. The noisy pose \(\mathbf{g}_{t}\) at timestep \(t\) is first passed through three consecutive downscaling MLPs, and then three other consecutive MLPs are used in the upscaling phase. To form the input to each upscaling MLP, we combine the output of the preceding one, the feature from the skip connection, the time embedding \(\tau\) computed from the timestep \(t\), and the unified context \(\mathbf{u}\) combining the point cloud feature \(\mathbf{c}\) and the text feature \(\mathbf{t}\). The unified context \(\mathbf{u}\) is obtained via a ContextNet module. In this ContextNet, we first compute the point cloud influence mask \(\mathbf{m}_{\mathbf{c}}\) and the text influence mask \(\mathbf{m}_{\mathbf{t}}\) using two MLPs and a softmax layer. The influence masks have the same size as the two features. The unified context \(\mathbf{u}\) is then computed as: \[\mathbf{u}=\mathbf{c}\odot\mathbf{m}_{\mathbf{c}}+\mathbf{t}\odot\mathbf{m}_ {\mathbf{t}}, \tag{6}\] where \(\odot\) represents element-wise multiplication. **Pose Sampling.** Once the model is trained, we can sample poses from Gaussian noise by applying the reverse process from timestep \(T\) to \(0\) using the formulation: \[\mathbf{g}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{g}_{t}-\frac{1- \alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\bar{\boldsymbol{\epsilon}}_{\boldsymbol {\theta}}\left(\mathbf{g}_{t},\mathbf{c},\mathbf{t},t\right)\right)+\sqrt{ \beta_{t}}\mathbf{z}, \tag{7}\] where \(\mathbf{z}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\) if \(t>1\), else \(\mathbf{z}=\mathbf{0}\), and \(\bar{\boldsymbol{\epsilon}}_{\boldsymbol{\theta}}\left(\mathbf{g}_{t},\mathbf{c },\mathbf{t},t\right)\) is calculated as: \[\bar{\boldsymbol{\epsilon}}_{\boldsymbol{\theta}}\left(\mathbf{g}_{t},\mathbf{c },\mathbf{t},t\right)=\left(w+1\right)\boldsymbol{\epsilon}_{\boldsymbol{ \theta}}\left(\mathbf{g}_{t},\mathbf{c},\mathbf{t},t\right)-w\boldsymbol{ \epsilon}_{\boldsymbol{\theta}}\left(\mathbf{g}_{t},t\right), \tag{8}\] where \(w\) is a guidance scale hyperparameter and \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}\left(\mathbf{g}_{t},t\right)\) is the predicted noise when the conditions are dropped.
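The pieces above fit together as in the following sketch, which instantiates the forward noising of Equation (4), the training loss of Equation (5) with condition dropping, and the guided sampling of Equations (7) and (8). Here `model` stands for the U-Net denoiser \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}\), and all function names and the 7-dimensional pose layout (quaternion plus translation) are our assumptions rather than the released code:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear variance schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)      # \bar{alpha}_t

def q_sample(g0, t, eps):
    """Eq. (4): jump directly to the noisy pose g_t."""
    ab = alpha_bars[t]
    return ab.sqrt() * g0 + (1 - ab).sqrt() * eps

def training_step(model, g0, c, txt, p_uncond=0.05):
    """Eq. (5) with random condition dropping for classifier-free guidance."""
    t = torch.randint(0, T, (1,)).item()
    eps = torch.randn_like(g0)
    gt = q_sample(g0, t, eps)
    if torch.rand(1).item() < p_uncond:        # train unconditionally
        c, txt = None, None
    return ((eps - model(gt, c, txt, t)) ** 2).mean()

@torch.no_grad()
def sample_pose(model, c, txt, w=0.2, dim=7):
    """Eqs. (7)-(8): guided ancestral sampling from pure noise."""
    g = torch.randn(dim)                       # quaternion (4) + translation (3)
    for t in reversed(range(T)):
        eps_cond = model(g, c, txt, t)
        eps_uncond = model(g, None, None, t)
        eps_hat = (w + 1) * eps_cond - w * eps_uncond          # Eq. (8)
        g = (g - (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt() * eps_hat) \
            / alphas[t].sqrt()
        if t > 0:                              # Eq. (7): add noise except at t=0
            g = g + betas[t].sqrt() * torch.randn_like(g)
    return g
```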
### _Training and Inference_ We define the overall loss function as \(\mathcal{L}=\mathcal{L}_{\text{aff}}+\mathcal{L}_{\text{pose}}\). The number of points in each point cloud is fixed to \(n=2,048\). We utilize the state-of-the-art CLIP text encoder [11] and freeze it during training. For the diffusion model, we set \(T=1,000\), and set the forward diffusion variances to constants increasing linearly from \(\beta_{1}=10^{-4}\) to \(\beta_{T}=0.02\). The unconditional training probability is set to \(p_{\text{uncond}}=0.05\). The whole network is trained end-to-end over 200 epochs on a 24GB-RAM NVIDIA GeForce RTX 3090 Ti with a batch size of 32. The Adam optimizer [62] with a learning rate of \(10^{-3}\) and a weight decay of \(10^{-4}\) is used. When sampling poses, we set the guidance scale to \(w=0.2\). Our framework takes \(180\,\mathrm{ms}\) to detect affordances and generate \(2,000\) corresponding 6-DoF poses for one point cloud. ## V Experiments In this section, we conduct several experiments to demonstrate the effectiveness of our proposed 3DAPNet trained on our 3DAP dataset. We start by comparing our method with other baselines. Second, we present 3DAPNet's notable qualitative results. Third, we provide different ablation studies for a more in-depth investigation of our method. Finally, we validate our framework in real robotic experiments. ### _Quantitative Comparisons_ **Baselines.** We compare our 3DAPNet with the following methods: 6D-TGD [5], OpenAD [10], and 6D-GraspNet [38]. Note that OpenAD does not support pose estimation, while 6D-GraspNet does not support affordance detection. We tailor 6D-GraspNet [38] to the open-vocabulary setting by incorporating the affordance text branch into the network input. All methods are trained on our dataset with a training, evaluation, and testing split ratio of 7:1:2. **Metrics.** For affordance detection, following [10], we evaluate the methods using three metrics, i.e., mIoU (mean IoU over all affordance classes), Acc (overall accuracy of all points), and mAcc (mean accuracy over all affordances). For pose generation, we use two metrics as in recent works: the mean evaluated similarity metric (mESM) [5] and the mean coverage rate (mCR) [38]. To validate the pose estimation results, we generate 200 poses for each object-affordance pair during testing. **Result.** Table I illustrates the performance of our 3DAPNet across both tasks, consistently achieving the highest scores across all five metrics when compared to alternative approaches. Specifically, in the affordance detection task, 3DAPNet exhibits improvements of 0.8% on mIoU, 1.63% on Acc, and 1.04% on mAcc relative to its closest competitors. Regarding pose estimation, 3DAPNet is nearly twice as good as the runner-up 6D-TGD on the mESM metric (0.120 compared to 0.219). Furthermore, 3DAPNet significantly outperforms competitors on the mCR metric, with a score of 44.63% compared to the second-best 6D-TGD's 30.10%. ### _Qualitative Results_ **Qualitative Comparison.** We present qualitative results to compare our 3DAPNet with other methods in terms of pose generation capability. Particularly, we select poses generated by our method, 6D-GraspNet [38], and 6D-TGD [5]. The example poses are shown in Figure 4. We observe that our method produces more poses that directly support the affordance tasks, while in contrast, the other two baselines generate a large number of poses that do not facilitate them. This result further highlights the enhanced effectiveness of our approach. **Generalization to Unseen Affordances.** We present several examples demonstrating 3DAPNet's ability to generalize to unseen affordances in Figure 5. For affordances in the training set, 3DAPNet yields high-quality results both in affordance detection and pose generation. Using seen affordances as a reference, our method still succeeds on unseen affordances in detecting the associated regions and generating the corresponding appropriate poses, even though those affordances do not appear in the training set. **Generalization to Unseen Objects.** We extend our assessment to the broader context of unseen object categories.
Concretely, we curate new objects from the ShapeNetCore dataset [63] and feed both seen and unseen affordances to the model. The reasonable outcomes shown in Figure 6 reaffirm the generalization of our 3DAPNet. Fig. 4: Qualitative comparison. Green denotes poses that are related to the input text, while red indicates poses not related to the input text. Fig. 5: Qualitative results of 3DAPNet's generalization to unseen affordances. The unseen affordances are shown in orange. Fig. 6: Qualitative results of 3DAPNet's generalization to unseen object categories. The unseen affordances are shown in orange. ### _Ablation Study_ **Single branch vs. jointly learned network.** We investigate the performance of each branch in our 3DAPNet on its corresponding task while excluding the other. The results are detailed in Table II. In the pose-generation-only case, we retain the use of the point cloud and text embeddings by keeping the PointNet++ encoder and the CLIP text encoder, while removing the PointNet++ decoder and the correlation head. The results indicate that the combination of the two branches yields the highest performance on both tasks when compared to each branch operating individually. This further validates the efficacy of our design, where the learning processes of the two tasks mutually benefit each other. **Effectiveness of ContextNet.** We further validate the effectiveness of our framework design by performing an ablation experiment on the ContextNet. Specifically, we report the performance of our framework with and without the ContextNet in Table III. When the ContextNet is removed, we combine the point cloud and affordance text conditions naively by adding them. The empirical results show that our framework with ContextNet clearly outperforms the variant without it, with a very large gap on the task of pose generation, which is the main target of our design. **Text Encoder.** As the text encoder is critical in our framework, we conduct an extensive study to investigate the performance of different text encoders. In particular, we use three state-of-the-art text encoders: BERT [12], RoBERTa [64], and CLIP [11]. The results are shown in Table IV. We observe that the CLIP encoder significantly outperforms its counterparts on both tasks, especially on pose generation. This result demonstrates the superiority of CLIP in language-vision understanding. ### _Robotic Demonstration_ The experiment setup, shown in Figure 7, comprises three main modules, i.e., the inference module, the ROS module, and the real-time controller module. In the inference module, after receiving point cloud data of the environment from the RealSense D435i camera, we utilize the state-of-the-art object localization method [66] to identify the object, then perform point sampling to get \(2,048\) points. We then feed this point cloud together with a text affordance command into our proposed 3DAPNet. The generated affordance pose from 3DAPNet is then sent to ROS for planning and trajectory generation. Analytical inverse kinematics [65] and trajectory optimization [67] are employed to compute optimal trajectories for the robot to reach the pose provided by our network. Note that, using our 3DAPNet, we can issue a general input command and are not restricted to a predefined affordance label set. Several demonstrations can be found in our Demonstration Video.
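In outline, the inference module chains these steps as in the schematic sketch below. This is a sketch only: every helper here is a stub standing in for the corresponding component, and none of the names come from the paper or from an actual ROS API.

```python
import numpy as np

def localize_object(scene_cloud):
    # Stub for the object localization method of [66]; here we simply
    # assume the scene already contains only the object of interest.
    return scene_cloud

def sample_points(points, n=2048):
    # Subsample (or resample with replacement) to a fixed-size input cloud.
    idx = np.random.choice(len(points), n, replace=len(points) < n)
    return points[idx]

def run_inference_module(model, scene_cloud, text_command):
    """Schematic inference module: point cloud + text -> affordance + pose."""
    obj = localize_object(scene_cloud)
    obj = sample_points(obj, n=2048)
    affordance_map, poses = model(obj, text_command)
    # One generated pose is handed to ROS for IK and trajectory optimization.
    return affordance_map, poses[0]
```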
### _Discussion_ Despite promising results, it is important to acknowledge that our method does not yet achieve universal affordance detection and pose estimation. There are cases where our method shows its limitations, which are presented in Figure 8. Particularly, on the left are two cases of failed and false-positive detection of unseen affordances. In the two cases on the right, we show examples where our method generates poses that do not facilitate the corresponding affordances. Furthermore, due to the dataset limitation, our method can only detect affordances of single objects. Consequently, it is not straightforward to perform evaluations on real robots against other methods. Therefore, having a large-scale dataset with cluttered point cloud scenes would enable more qualitative comparisons and applications. ## VI Conclusions We have tackled the task of open-vocabulary affordance detection and pose estimation in 3D point clouds. In particular, we have presented the 3DAP dataset for affordance-pose joint learning and proposed the 3DAPNet method that can simultaneously detect open-vocabulary affordances and generate affordance-specific 6-DoF poses. Experimental results show that our approach outperforms other methods by a large margin on both tasks. We have extensively demonstrated the effectiveness of 3DAPNet in real-world robotic manipulation applications. We hope that the promising results of our 3DAPNet will encourage future research on this important yet challenging problem. Our code and trained model will be made publicly available. Fig. 8: Failure cases of our method. Fig. 7: The overview of the robot experiment setup. More qualitative results can be found in our Demonstration Video.
2307.00076
Clonoids between modules
Clonoids are sets of finitary functions from an algebra $\mathbb{A}$ to an algebra $\mathbb{B}$ that are closed under composition with term functions of $\mathbb{A}$ on the domain side and with term functions of $\mathbb{B}$ on the codomain side. For $\mathbb{A},\mathbb{B}$ (polynomially equivalent to) finite modules we show: If $\mathbb{A},\mathbb{B}$ have coprime order and the congruence lattice of $\mathbb{A}$ is distributive, then there are only finitely many clonoids from $\mathbb{A}$ to $\mathbb{B}$. This is proved by establishing for every natural number $k$ a particular linear equation that all $k$-ary functions from $\mathbb{A}$ to $\mathbb{B}$ satisfy. Else if $\mathbb{A},\mathbb{B}$ do not have coprime order, then there exist infinite ascending chains of clonoids from $\mathbb{A}$ to $\mathbb{B}$ ordered by inclusion. Consequently any extension of $\mathbb{A}$ by $\mathbb{B}$ has countably infinitely many $2$-nilpotent expansions up to term equivalence.
Peter Mayr, Patrick Wynne
2023-06-30T18:30:32Z
http://arxiv.org/abs/2307.00076v3
# Clonoids between modules ###### Abstract. Clonoids are sets of finitary functions from an algebra \(\mathbf{A}\) to an algebra \(\mathbf{B}\) that are closed under composition with term functions of \(\mathbf{A}\) on the domain side and with term functions of \(\mathbf{B}\) on the codomain side. For \(\mathbf{A},\mathbf{B}\) (polynomially equivalent to) finite modules we show: If \(\mathbf{A},\mathbf{B}\) have coprime order and the congruence lattice of \(\mathbf{A}\) is distributive, then there are only finitely many clonoids from \(\mathbf{A}\) to \(\mathbf{B}\). This is proved by establishing for every natural number \(k\) a particular linear equation that all \(k\)-ary functions from \(\mathbf{A}\) to \(\mathbf{B}\) satisfy. Else if \(\mathbf{A},\mathbf{B}\) do not have coprime order, then there are infinitely many clonoids from \(\mathbf{A}\) to \(\mathbf{B}\) and not all of them are finitely generated. Consequently any extension of \(\mathbf{A}\) by \(\mathbf{B}\) has countably infinitely many \(2\)-nilpotent expansions up to term equivalence. Key words and phrases: closed function classes, clones, abelian Mal'cev algebras, linear functional equations 2020 Mathematics Subject Classification: 08A40, 08A02 The research of the first author was partially supported by the NSF under grant no. DMS-1500254 and by the Austrian Science Fund (FWF): P33878 ## 1. Introduction **Definition 1.1**.: Let \(\mathbf{A}\) and \(\mathbf{B}\) be algebras. A set \(C\) of finitary functions from \(A\) to \(B\) is a _clonoid_ from \(\mathbf{A}\) to \(\mathbf{B}\) if 1. \(f(t_{1},\ldots,t_{k})\in C\) for all \(k,m\in\mathbb{N}\), all \(k\)-ary \(f\in C\) and all \(m\)-ary term functions \(t_{1},\ldots,t_{k}\) of \(\mathbf{A}\), and 2. \(s(f_{1},\ldots,f_{k})\in C\) for all \(k,m\in\mathbb{N}\), every \(k\)-ary term function \(s\) of \(\mathbf{B}\) and all \(m\)-ary \(f_{1},\ldots,f_{k}\in C\). For algebras \(\mathbf{A}_{1},\mathbf{A}_{2}\) on the same universe, \(\mathbf{A}_{1}\) is an _expansion_ of \(\mathbf{A}_{2}\) if \(\operatorname{Clo}(\mathbf{A}_{1})\supseteq\operatorname{Clo}(\mathbf{A}_{2})\); we say \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\) are _term equivalent_ if \(\operatorname{Clo}(\mathbf{A}_{1})=\operatorname{Clo}(\mathbf{A}_{2})\). Many function classes studied in the literature are instances of clonoids from \(\mathbf{A}\) to \(\mathbf{B}\) in the sense of Definition 1.1 for the appropriate choice of algebras. In this paper we collect basic properties and background on clonoids in Section 2. Then we mainly investigate clonoids between algebras that are polynomially equivalent to modules in the vein of [6, 11]. Recall that an algebra \(\mathbf{A}\) is polynomially equivalent to a module if all basic operations of \(\mathbf{A}\) are affine functions on that module (see Section 2.4). In particular we consider the following problem: **Question 1.2**.: Given finite (algebras that are polynomially equivalent to) modules \(\mathbf{A}\) and \(\mathbf{B}\), how many clonoids are there from \(\mathbf{A}\) to \(\mathbf{B}\)? Are they all finitely generated? From work of Aichinger and the first author of this paper in [1] it follows that for \(\mathbf{B}\) a finite algebra with few subpowers (in particular, for \(\mathbf{B}\) polynomially equivalent to a module), all clonoids from a finite algebra \(\mathbf{A}\) to \(\mathbf{B}\) are finitely related and hence there are at most countably infinitely many (see Theorem 2.9). Kreinecker [11] showed that there are countably infinitely many clonoids from \((\mathbb{Z}_{p},+)\) to \((\mathbb{Z}_{p},+)\) for any prime \(p\), which he then used to construct infinitely many non-finitely generated clones on \(\mathbb{Z}_{p}^{2}\) that contain \(+\).
We generalize his result to obtain the following in Section 5. **Theorem 1.3**.: _Let \(\mathbf{A},\mathbf{B}\) be finite modules whose orders are not coprime. Then there are countably infinitely many clonoids from \(\mathbf{A}\) to \(\mathbf{B}\) and not all of them are finitely generated._ Together with Lemma 2.6 below and the subsequent remark, this yields that any finite module \(\mathbf{A}\) with a submodule \(B\) and \(\gcd(|A/B|,|B|)>1\) has countably infinitely many 2-nilpotent expansions up to term equivalence. Fioravanti proved for \(\mathbf{A}\) and \(\mathbf{B}\) finite coprime regular modules over products of finite fields that every clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) is generated by its unary functions (and so there are finitely many of them) [6, Theorem 1.2]. Note that under his assumptions the lattices of submodules of \(\mathbf{A}\) and \(\mathbf{B}\) are distributive (in fact, Boolean). As a consequence, he then showed that a finite abelian group has finitely many expansions up to term equivalence if and only if the order of that group is squarefree in [7]. Recall that a module is _distributive_ if its lattice of submodules is distributive. Our main result generalizes Fioravanti's result and settles Question 1.2 for \(\mathbf{A}\) (polynomially equivalent to) a distributive module. **Theorem 1.4**.: _Let \(\mathbf{A}\) be polynomially equivalent to a finite distributive \(\mathbf{R}\)-module, let \(n\) be the nilpotence degree of the Jacobson radical of \(\mathbf{R}\), and let \(\mathbf{B}\) be polynomially equivalent to an \(\mathbf{S}\)-module such that \(|A|\) is invertible in \(\mathbf{S}\)._ 1. _Then every clonoid from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\) _is generated by its_ \(n+1\)_-ary functions (by its_ \(n\)_-ary functions if_ \(\mathbf{A}\) _is an_ \(\mathbf{R}\)_-module)._ 2. _If_ \(\mathbf{B}\) _is finite, then there are only finitely many clonoids from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\)_._ For example, let \(m\in\mathbb{N}\) and let \(n\) be the exponent of the largest power of a prime \(p\) such that \(p^{n}\) divides \(m\). Then \(\mathbf{A}:=(\mathbb{Z}_{m},x-y+z)\) is polynomially equivalent to \(\mathbf{A}_{0}:=(\mathbb{Z}_{m},+)\), the regular module of the ring \(\mathbf{R}:=(\mathbb{Z}_{m},+,\cdot,1)\). Clearly \(\mathbf{A}_{0}\) is distributive and the Jacobson radical \(J(\mathbf{R})\) satisfies \(J(\mathbf{R})^{n}=0\). Let \(\mathbf{B}\) be a finite module of order coprime to \(m\). By Theorem 1.4 every clonoid from \(\mathbf{A}_{0}\) into \(\mathbf{B}\) is generated by \(n\)-ary functions and every clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) is generated by \(n+1\)-ary functions. We first prove Theorem 1.4 for modules \(\mathbf{A},\mathbf{B}\) in Section 3. Our main tool for this is that under the given assumptions, for every \(k\in\mathbb{N}\) we find a specific linear equation that is satisfied by every \(k\)-ary function from \(\mathbf{A}\) to \(\mathbf{B}\) (Theorem 3.1). As it turns out, every finitary function from \(\mathbf{A}\) to \(\mathbf{B}\) can be uniformly generated by its \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors (see Sections 2.1, 2.7). In Section 4 we then extend these results from modules to algebras that are polynomially equivalent to modules. The main difference there is that if \(\mathbf{A}\) does not have a constant term function \(0\), we have to replace that by a projection, i.e., an additional variable \(z\), in our interpolation arguments. 
This additional variable yields that the clonoids in the general case are generated by \(n+1\)-ary functions instead of \(n\)-ary as for modules. Finally we investigate necessary conditions such that every clonoid from algebras \(\mathbf{A}\) to \(\mathbf{B}\) is generated by \(n\)-ary functions in Section 5. We show that every subalgebra of \(\mathbf{A}\) has to be generated by \(n\) elements in Lemma 5.1. Further, if every clonoid from a regular \(\mathbf{R}\)-module \(\mathbf{A}\) to \(\mathbf{B}\) is generated by unary functions, then \(\mathbf{R}\) is semisimple (Lemma 5.2). Hence for a ring \(\mathbf{R}\) with \(J(\mathbf{R})\neq 0\) and \(J(\mathbf{R})^{2}=0\), the assertion of Theorem 1.4 that all clonoids from distributive \(\mathbf{R}\)-modules \(\mathbf{A}\) to coprime modules \(\mathbf{B}\) are generated by binary functions cannot be strengthened to unary functions. As a consequence and partial converse to Theorem 1.4 we can characterize the pairs of finite modules \(\mathbf{A}\) over commutative rings and modules \(\mathbf{B}\) such that every clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) is generated by unary functions. **Theorem 1.5**.: _For a finite faithful \(\mathbf{R}\)-module \(\mathbf{A}\) over a commutative ring \(\mathbf{R}\) and a finite \(\mathbf{S}\)-module \(\mathbf{B}\) the following are equivalent:_ 1. _Every clonoid from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\) _is generated by unary functions._ 2. _The orders of_ \(\mathbf{A}\) _and_ \(\mathbf{B}\) _are coprime,_ \(\mathbf{R}\) _is a direct product of finite fields and_ \(\mathbf{A}\) _is isomorphic to the regular_ \(\mathbf{R}\)_-module._ The proof is given in Section 5. Note that item (2) in Theorem 1.5 implies that \(\mathbf{A}\) is a distributive module. In general it remains open whether the condition that \(\mathbf{A}\) is distributive is necessary so that the number of clonoids from \(\mathbf{A}\) into any coprime module is finite. We do not even know the answer to the following. **Question 1.6**.: For a prime \(p\), how many clonoids are there from the group \((\mathbb{Z}_{p}^{2},+)\) into a finite module of order coprime to \(p\)? ## 2. Preliminaries and Notation ### General We write \(\mathbb{N}:=\{1,2,\dots\}\) for the set of positive integers and \([n]:=\{1,\dots,n\}\) for \(n\in\mathbb{N}\). Algebras are pairs \(\mathbf{A}:=(A,F)\) with _universe_ (domain) \(A\) and a set \(F\) of finitary operations on \(A\) (the _basic operations_ of \(\mathbf{A}\)). **Definition 2.1**.: Let \(\mathbf{A},\mathbf{B}\) be algebras with universes \(A,B\), respectively. 1. Let \(F(A,B):=\bigcup_{k\in\mathbb{N}}B^{A^{k}}\) denote the clonoid of finitary functions from \(\mathbf{A}\) to \(\mathbf{B}\). 2. For \(F\subseteq F(A,B)\), we denote the smallest clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) that contains \(F\) by \(\langle F\rangle_{\mathbf{A},\mathbf{B}}\) and call it the clonoid _generated_ by \(F\). For \(f\in F(A,B)\), we also write \(\langle f\rangle_{\mathbf{A},\mathbf{B}}\) instead of \(\langle\{f\}\rangle_{\mathbf{A},\mathbf{B}}\). 3. A clonoid \(C\) from \(\mathbf{A}\) to \(\mathbf{B}\) is _finitely generated_ if it is generated by some finite set \(F\subseteq F(A,B)\). 4. If \(g\in\langle f\rangle_{\mathbf{A},\mathbf{B}}\) for \(f\in F(A,B)\), we call \(g\) an \(\mathbf{A},\mathbf{B}\)-_minor_ of \(f\).
If \(\mathbf{A}\) and \(\mathbf{B}\) are clear from context, we also write \(\langle F\rangle\) and \(\langle f\rangle\) instead of \(\langle F\rangle_{\mathbf{A},\mathbf{B}}\) and \(\langle f\rangle_{\mathbf{A},\mathbf{B}}\). **Definition 2.2**.: For \(C\subseteq F(A,B)\) and \(k\in\mathbb{N}\), let \(C^{(k)}:=C\cap B^{A^{k}}\) denote the set of \(k\)-ary functions in \(C\). ### The lattice of clonoids For algebras \(\mathbf{A}\) and \(\mathbf{B}\), we denote the set of all clonoids from \(\mathbf{A}\) to \(\mathbf{B}\) by \(\mathcal{C}_{\mathbf{A},\mathbf{B}}\). Then \(\mathcal{C}_{\mathbf{A},\mathbf{B}}\) forms a lattice with meet given by intersection and the join of clonoids \(C,D\) given by the clonoid generated by the union \(\langle C\cup D\rangle_{\mathbf{A},\mathbf{B}}\). This is a bounded lattice with bottom element \(\emptyset\) and top element \(F(A,B)\). We note some straightforward relations for clonoids between algebras, their expansions, quotients and subalgebras. **Lemma 2.3**.: _Let \(\mathbf{A},\mathbf{B}\) be algebras with expansions \(\mathbf{A}^{+},\mathbf{B}^{+}\), respectively. Then \(\mathcal{C}_{\mathbf{A}^{+},\mathbf{B}^{+}}\) is a meet-subsemilattice of \(\mathcal{C}_{\mathbf{A},\mathbf{B}}\)._ Proof.: Every clonoid from \(\mathbf{A}^{+}\) to \(\mathbf{B}^{+}\) is also a clonoid from \(\mathbf{A}\) to \(\mathbf{B}\). Clearly the intersection of clonoids in \(\mathcal{C}_{\mathbf{A}^{+},\mathbf{B}^{+}}\) is the same as in \(\mathcal{C}_{\mathbf{A},\mathbf{B}}\). Since joins in \(\mathcal{C}_{\mathbf{A}^{+},\mathbf{B}^{+}}\) require closure under \(\operatorname{\mathrm{Clo}}(\mathbf{A}^{+})\) and \(\operatorname{\mathrm{Clo}}(\mathbf{B}^{+})\), they may properly contain the joins in \(\mathcal{C}_{\mathbf{A},\mathbf{B}}\). For any equivalence relation \(\alpha\) on \(A\), any function \(f\colon(A/\alpha)^{k}\to B\) can be lifted to a function \(f^{\prime}\colon A^{k}\to B\) that is constant on \(\alpha\)-blocks via \[f^{\prime}(x_{1},\ldots,x_{k}):=f(x_{1}/\alpha,\ldots,x_{k}/\alpha).\] **Lemma 2.4**.: _Let \(\mathbf{A}\) and \(\mathbf{B}\) be algebras, let \(\alpha\) be a congruence on \(\mathbf{A}\) and let \(\mathbf{B}^{\prime}\) be a subalgebra of \(\mathbf{B}\). Then_ \[\varphi\colon\mathcal{C}_{\mathbf{A}/\alpha,\mathbf{B}^{\prime}}\to\mathcal{C }_{\mathbf{A},\mathbf{B}},\ C\mapsto\{f^{\prime}\ :\ f\in C\},\] _is a lattice embedding._ Proof.: Let \(C\in\mathcal{C}_{\mathbf{A}/\alpha,\mathbf{B}^{\prime}}\). To show \(\varphi(C)\in\mathcal{C}_{\mathbf{A},\mathbf{B}}\), let \(f\in C^{(k)}\), \(s_{1},\ldots,s_{k}\in\operatorname{\mathrm{Clo}}(\mathbf{A})^{(m)}\), and \((x_{1},\ldots,x_{m})\in A^{m}\).
Then \[[f^{\prime}(s_{1},\ldots,s_{k})](x_{1},\ldots,x_{m}) =f^{\prime}(s_{1}(x_{1},\ldots,x_{m}),\ldots,s_{k}(x_{1},\ldots,x_ {m}))\] \[=f(s_{1}(x_{1},\ldots,x_{m})/\alpha,\ldots,s_{k}(x_{1},\ldots,x_{m} )/\alpha)\] \[=f(s_{1}(x_{1}/\alpha,\ldots,x_{m}/\alpha),\ldots,s_{k}(x_{1}/ \alpha,\ldots,x_{m}/\alpha))\] \[=[f(s_{1},\ldots,s_{k})]^{\prime}(x_{1},\ldots,x_{m})\in\varphi(C).\] Also for \(f_{1},\ldots,f_{k}\in C^{(m)}\), \(t\in\operatorname{\mathrm{Clo}}(\mathbf{B})^{(k)}\) and \((x_{1},\ldots,x_{m})\in A^{m}\), \[[t(f_{1}^{\prime},\ldots,f_{k}^{\prime})](x_{1},\ldots,x_{m}) =t(f_{1}^{\prime}(x_{1},\ldots,x_{m}),\ldots,f_{k}^{\prime}(x_{1},\ldots,x_{m}))\] \[=t(f_{1}(x_{1}/\alpha,\ldots,x_{m}/\alpha),\ldots,f_{k}(x_{1}/ \alpha,\ldots,x_{m}/\alpha))\] \[=[t(f_{1},\ldots,f_{k})]^{\prime}(x_{1},\ldots,x_{m})\in\varphi(C).\] Hence \(\varphi(C)\) is a clonoid from \(\mathbf{A}\) to \(\mathbf{B}\). That \(\varphi\) is injective and preserves meets and joins is immediate. **Lemma 2.5**.: _Let \(\mathbf{A}\) and \(\mathbf{B}\) be algebras and let \(\mathbf{A}^{\prime}\) be a subalgebra of \(\mathbf{A}\). Then restriction of functions from \(A\) to \(A^{\prime}\) yields a lattice homomorphism from \(\mathcal{C}_{\mathbf{A},\mathbf{B}}\) to \(\mathcal{C}_{\mathbf{A}^{\prime},\mathbf{B}}\)._ Proof.: Straightforward since term functions of \(\mathbf{A}^{\prime}\) are restrictions of term functions of \(\mathbf{A}\). We give one easy example of how clonoids arise in the description of expansions of algebras. Let \(\mathbf{A}\) be a module with a submodule \(B\). For \(f\colon A^{k}\to A\) and \(g\colon(A/B)^{k}\to B\) write \[f+g^{\prime}\colon A^{k}\to A,\ x\mapsto f(x)+g(x+B^{k}).\] For a clonoid \(C\) from \(\mathbf{A}/B\) to \(\mathbf{B}\) let \[\operatorname{Clo}(\mathbf{A})+C:=\{f+g^{\prime}\ :\ k\in\mathbb{N},f\in \operatorname{Clo}(\mathbf{A})^{(k)},g\in C^{(k)}\}.\] **Lemma 2.6**.: _Let \(\mathbf{A}\) be a module with a submodule \(\mathbf{B}\) and let \(\mathcal{D}\) be the lattice of clones on \(A\) that contain \(\operatorname{Clo}(\mathbf{A})\). Then_ \[\varphi\colon\mathcal{C}_{\mathbf{A}/B,\mathbf{B}}\to\mathcal{D},\ C\mapsto \operatorname{Clo}(\mathbf{A})+C,\] _is a lattice embedding._ Proof.: Let \(C\) be a clonoid from \(\mathbf{A}/B\) to \(\mathbf{B}\). Since \(C\) contains the constant \(0\) function, \(\operatorname{Clo}(\mathbf{A})\subseteq\operatorname{Clo}(\mathbf{A})+C\) and the latter contains all projections on \(A\). To see that \(\operatorname{Clo}(\mathbf{A})+C\) is closed under composition, let \(u\in\operatorname{Clo}(\mathbf{A}),v\in C\) be \(n\)-ary and let \(f_{1},\ldots,f_{n}\in\operatorname{Clo}(\mathbf{A}),g_{1},\ldots,g_{n}\in C\) be \(k\)-ary. Then \[[u+v^{\prime}]\left(f_{1}+g_{1}^{\prime},\ldots,f_{n}+g_{n}^{ \prime}\right)\] \[= u(f_{1}+g_{1}^{\prime},\ldots,f_{n}+g_{n}^{\prime})+v^{\prime}(f _{1}+g_{1}^{\prime},\ldots,f_{n}+g_{n}^{\prime})\] \[= u(f_{1},\ldots,f_{n})+[u(g_{1},\ldots,g_{n})]^{\prime}+[v(f_{1},\ldots,f_{n})]^{\prime}\] since \(u\) is linear on \(\mathbf{A}\) and \(v^{\prime}\) is constant on \(B\)-cosets. Note that \(u(f_{1},\ldots,f_{n})\in\operatorname{Clo}(\mathbf{A})\) and \(u(g_{1},\ldots,g_{n})+v(f_{1},\ldots,f_{n})\in C\) by the respective closure properties. Hence \([u+v^{\prime}]\left(f_{1}+g_{1}^{\prime},\ldots,f_{n}+g_{n}^{\prime}\right)\) is in \(\operatorname{Clo}(\mathbf{A})+C\) and the latter is a clone. That \(\varphi\) is a lattice embedding is straightforward.
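For a concrete illustration of Lemma 2.6, let \(\mathbf{A}:=(\mathbb{Z}_{4},+)\) with submodule \(B:=2\mathbb{Z}_{4}=\{0,2\}\), so that \(\mathbf{A}/B\cong\mathbb{Z}_{2}\). Then \[g\colon(A/B)^{2}\to B,\ (x+B,y+B)\mapsto 2xy,\] is well defined since \(2(x+2)y=2xy+4y=2xy\) in \(\mathbb{Z}_{4}\). For \(C:=\langle g\rangle_{\mathbf{A}/B,\mathbf{B}}\) we obtain \(\operatorname{Clo}(\mathbf{A})+C=\operatorname{Clo}(\mathbb{Z}_{4},+,2xy)\), the clone of an expansion of \((\mathbb{Z}_{4},+)\) that is not term equivalent to \((\mathbb{Z}_{4},+)\) since \(2xy\) is not a term function of \((\mathbb{Z}_{4},+)\).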
We note that all the expansions \(\mathbf{A}_{C}:=(A,\operatorname{Clo}(\mathbf{A})+C)\) in Lemma 2.6 are \(2\)-nilpotent with respect to the term condition commutator [8, Chapter 7]. In particular the congruence modulo \(B\) is central in \(\mathbf{A}_{C}\). ### Relational description of clonoids Pippenger [14] extended the Galois theory between clones and relations to the setting of clonoids between sets and pairs of relations. Couceiro and Foldes [5] further expanded this to our setting of clonoids between algebras. **Definition 2.7**.: Let \(A\) and \(B\) be nonempty sets and let \(n\in\mathbb{N}\). For \(R\subseteq A^{n},S\subseteq B^{n}\), let \[\operatorname{Pol}(R,S):=\bigcup_{k\in\mathbb{N}}\{f\colon A^{k}\to B\mid f(R,\ldots,R)\subseteq S\}\] denote the set of _polymorphisms_ of the relational pair \((R,S)\). **Theorem 2.8**.: _[_5_, cf. Theorem 2]_ _Let \(\mathbf{A}\) and \(\mathbf{B}\) be algebras with \(|A|\) finite. For \(C\subseteq\bigcup_{n\in\mathbb{N}}B^{A^{n}}\) the following are equivalent:_ 1. \(C\) _is a clonoid from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\)_._ 2. _There exist subalgebras_ \(R_{i},S_{i}\) _of_ \(\mathbf{A}^{m_{i}},\mathbf{B}^{m_{i}}\)_, respectively, for_ \(m_{i}\in\mathbb{N}\) _and_ \(i\) _in a set_ \(I\) _such that_ \(C=\bigcap_{i\in I}\operatorname{Pol}(R_{i},S_{i})\)_._ In [1] Aichinger and the first author of this paper showed that in certain settings the number of relational pairs needed to determine a clonoid is finite. A finite algebra \(\mathbf{B}\) has _few subpowers_ if there exists a polynomial \(p\) such that for every \(n\in\mathbb{N}\) the number of subalgebras of \(\mathbf{B}^{n}\) is at most \(2^{p(n)}\). All finite algebras with (quasi)group or lattice operations have few subpowers. Hence the following holds in particular for \(\mathbf{B}\) a group or module. **Theorem 2.9**.: _[_1_, Theorem 5.3]_ _Let \(\mathbf{A},\mathbf{B}\) be finite algebras such that \(\mathbf{B}\) has few subpowers._ 1. _Then the lattice of clonoids from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\) _satisfies the descending chain condition._ 2. _Every clonoid from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\) _is finitely related (i.e. the polymorphism clonoid of a single relational pair)._ 3. _The number of clonoids from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\) _is finite or countably infinite._ We note that between finite algebras there are only finitely many clonoids if and only if they are all generated by functions of some bounded arity. **Lemma 2.10**.: _For finite algebras \(\mathbf{A},\mathbf{B}\) the following are equivalent:_ 1. _The number of clonoids from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\) _is finite._ 2. _There exists some_ \(n\in\mathbb{N}\) _such that every clonoid from_ \(\mathbf{A}\) _to_ \(\mathbf{B}\) _is generated by_ \(n\)_-ary functions._ 3. _There exists some_ \(n\in\mathbb{N}\) _such that for all_ \(k\in\mathbb{N}\) _every function_ \(f\colon A^{k}\to B\) _is generated by its_ \(n\)_-ary_ \(\mathbf{A},\mathbf{B}\)_-minors._ Proof.: Straightforward.
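For small finite sets \(A\) and \(B\), the polymorphism condition of Definition 2.7 can be checked by exhaustive search. The following Python sketch is our own illustration; the encoding of \(f\) as a dictionary and of \(R,S\) as sets of tuples is an arbitrary choice:

```python
from itertools import product

def is_polymorphism(f, k, R, S):
    """Check whether the k-ary function f maps (R, ..., R) into S.

    f: dict mapping k-tuples over A to elements of B.
    R: set of n-tuples over A;  S: set of n-tuples over B.
    f is applied coordinatewise to k rows r_1, ..., r_k taken from R.
    """
    n = len(next(iter(R)))
    for rows in product(R, repeat=k):
        image = tuple(f[tuple(row[j] for row in rows)] for j in range(n))
        if image not in S:
            return False
    return True

# Toy check over Z_2 with R = S = the graph of addition modulo 2:
Z2 = (0, 1)
R = {(x, y, (x + y) % 2) for x in Z2 for y in Z2}
f_id = {(x,): x for x in Z2}        # identity map
f_neg = {(x,): 1 - x for x in Z2}   # x -> x + 1
print(is_polymorphism(f_id, 1, R, R))   # True:  x + y = x + y
print(is_polymorphism(f_neg, 1, R, R))  # False: (x+1) + (y+1) != (x+y) + 1
```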
### Abelian Mal'cev algebras Algebras \((A,F_{1})\) and \((A,F_{2})\) on the same universe are _polynomially equivalent_ if they have the same clone of polynomial functions. A ternary operation \(m\) on a set \(A\) is _Mal'cev_ if \[m(x,y,y)=x=m(y,y,x)\text{ for all }x,y\in A.\] An algebra \(\mathbf{A}\) is _Mal'cev_ if it has a term operation which is Mal'cev. Every expansion of a group \((A,+,-,0)\) has a Mal'cev term operation \(m(x,y,z)=x-y+z\). A Mal'cev algebra is abelian (with respect to the term condition commutator) if and only if it is polynomially equivalent to a module [8]. We will need the following explicit characterization of abelian Mal'cev algebras in Section 4. **Lemma 2.11**.: _[_8_, Chapters 5, 9]_ _Let \(\mathbf{A}\) be an abelian algebra with Mal'cev term operation \(m\). Then_ 1. _the set of rank-_\(1\)_-commutator terms_ \(R_{\mathbf{A}}:=\{r\in\operatorname{\mathrm{Clo}}(\mathbf{A})^{(2)}\ :\ r(z,z)=z\ \forall z\in A\}\) _of_ \(\mathbf{A}\) _forms a ring_ \(\mathbf{R_{A}}\) _under_ \[r(x,z)+s(x,z):=m(r(x,z),z,s(x,z)),\] \[-r(x,z):=m(z,r(x,z),z)\] \[r(x,z)\cdot s(x,z):=r(s(x,z),z)\] _for_ \(r,s\in R_{\mathbf{A}}\) _and with neutral elements_ \(z,x\) _for_ \(+,\cdot\)_, respectively._ 2. _Fixing any_ \(0\in A\) _as neutral element,_ \(A\) _forms an_ \(\mathbf{R_{A}}\)_-module_ \(\mathbf{A}_{0}\) _under_ \[a+b:=m(a,0,b),\quad-a:=m(0,a,0),\] \[ra:=r(a,0)\] _for_ \(a,b\in A,r\in R_{\mathbf{A}}\)_._ _The expansion of_ \(\mathbf{A}\) _with the constant operation_ \(0\) _is term equivalent to the expansion of_ \(\mathbf{A}_{0}\) _with_ \(\operatorname{\mathrm{Clo}}(\mathbf{A})^{(1)}\)_. In particular,_ \(\mathbf{A}\) _and_ \(\mathbf{A}_{0}\) _are polynomially equivalent._ 3. _For_ \(0,z\in A\)_,_ \[h\colon A\to A,\ x\mapsto m(x,0,z),\] _is an_ \(\mathbf{R_{A}}\)_-module isomorphism from_ \(\mathbf{A}_{0}\) _to_ \(\mathbf{A}_{z}\)_._ Let \(\mathbf{A}\) be an abelian Mal'cev algebra. For \(r=(r_{ij})_{i,j\in[k]}\) a \(k\times k\)-matrix over \(R_{\mathbf{A}}\), \(x=(x_{1},\ldots,x_{k})\) a \(k\)-tuple of variables and \(z\) another variable, define \[r*_{z}x:=\left(\sum_{j=1}^{k}r_{ij}(x_{j},z)\right)_{i\in[k]}\] with term functions added by \(s(x,z)+t(x,z):=m(s(x,z),z,t(x,z))\). Then \(r*_{z}x\) is a \(k\)-tuple of \(k+1\)-ary term functions over \(\mathbf{A}\). Since the ring \(\mathbf{R_{A}}\) acts faithfully on any abelian Mal'cev algebra \(\mathbf{A}\), the nilpotence degree of the Jacobson radical of \(\mathbf{R_{A}}\) is at most the height of the congruence lattice of \(\mathbf{A}\).
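For a concrete illustration of Lemma 2.11, consider \(\mathbf{A}:=(\mathbb{Z}_{m},x-y+z)\) for some \(m\in\mathbb{N}\). The binary term operations of \(\mathbf{A}\) are exactly \(r_{a}(x,z):=ax+(1-a)z\) for \(a\in\mathbb{Z}_{m}\), and each of them satisfies \(r_{a}(z,z)=z\), so \(R_{\mathbf{A}}=\{r_{a}\ :\ a\in\mathbb{Z}_{m}\}\). The operations from Lemma 2.11(1) evaluate to \[r_{a}+r_{b}=r_{a+b},\quad-r_{a}=r_{-a},\quad r_{a}\cdot r_{b}=r_{ab},\] so \(\mathbf{R_{A}}\cong\mathbb{Z}_{m}\), with \(r_{0}(x,z)=z\) and \(r_{1}(x,z)=x\) the neutral elements for \(+\) and \(\cdot\). Fixing \(0\in\mathbb{Z}_{m}\) as in Lemma 2.11(2) then recovers the regular module \((\mathbb{Z}_{m},+)\), in accordance with the example following Theorem 1.4.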
### Distributive modules A ring \(\mathbf{R}\) is _semiperfect_ if it has a set \(e_{1},\ldots,e_{\ell}\) of orthogonal idempotents with \(e_{1}+\cdots+e_{\ell}=1\) such that \(e_{i}Re_{i}\) is a local ring for all \(i\in[\ell]\). Then the idempotents \(e_{1},\ldots,e_{\ell}\) are called local. All left Artinian (in particular all finite) rings are semiperfect. A module is _uniserial_ if its submodules are linearly ordered by inclusion. In particular, simple modules and \(\mathbb{Z}_{p^{n}}\) for any prime \(p\) and \(n\in\mathbb{N}\) are uniserial. For an example over a noncommutative ring, let \(F\) be a field and \[R:=\left\{\begin{pmatrix}a&b\\ 0&a\end{pmatrix}\ :\ a,b\in F\right\}.\] Then \(A:=F^{2}\) with the usual action of matrices on column vectors forms a uniserial \(\mathbf{R}\)-module with submodules \(0\leq F\times 0\leq F^{2}\). Distributive modules over semiperfect rings can be decomposed as sums of uniserial modules over local subrings as follows. **Lemma 2.12**.: _Let \(\mathbf{R}\) be a semiperfect ring with \(1=\sum_{i=1}^{\ell}e_{i}\) for local orthogonal idempotents \(e_{1},\ldots,e_{\ell}\) and let \(\mathbf{A}\) be a distributive \(\mathbf{R}\)-module. Then_ 1. [10, Lemma 4] _[_17_, 1.28]__\(e_{i}A\) _is a uniserial_ \(e_{i}Re_{i}\)_-module for all_ \(i\in[\ell]\)_._ 2. \(A=\sum_{i=1}^{\ell}e_{i}A\) _is a distributive module over the subring_ \(R^{\prime}:=\sum_{i=1}^{\ell}e_{i}Re_{i}\) _of_ \(\mathbf{R}\) _with the induced action from_ \(\mathbf{R}\)_._ 3. \(J(\mathbf{R}^{\prime})\subseteq J(\mathbf{R})\)_._ Proof.: (1) We include the short proof from [10, Lemma 4] for the convenience of the reader. For \(i\in[\ell]\), write \(e:=e_{i}\) and \(J:=J(\mathbf{R})\). We first show that the set of \(\mathbf{R}\)-modules \(\{Rex\ :\ x\in A\}\) is linearly ordered under inclusion. Seeking a contradiction, suppose that there are \(x,y\in A\) such that \(Rex\) and \(Rey\) are not comparable. Since \(Je\) is the unique maximal \(\mathbf{R}\)-submodule of \(Re\) by the assumptions, \(Rex\) and \(Rey\) have unique maximal submodules \(Jex\) and \(Jey\), respectively. Hence \(Rex\cap Rey=Jex\cap Jey\). It follows that \[(Rex+Rey)/(Jex+Jey)\cong Re/Je\times Re/Je,\] which contradicts the distributivity of \(\mathbf{A}\). Thus \(\{Rex\ :\ x\in A\}\) is linearly ordered. So for each \(x,y\in A\) either \(ex\in eRey\) or \(ey\in eRex\). Therefore \(eA\) is a uniserial \(eRe\)-module. (2) Using the orthogonality of the idempotents, we see that every \(\mathbf{R}^{\prime}\)-submodule of \(\mathbf{A}\) is a direct sum of \(\mathbf{R}^{\prime}\)-submodules of \(e_{i}A\) for \(i\in[\ell]\). Further the \(\mathbf{R}^{\prime}\)-submodules of \(e_{i}A\) coincide with the \(e_{i}Re_{i}\)-submodules. Since the latter are linearly ordered by (1), it follows that \(\sum_{i=1}^{\ell}e_{i}A\) is a distributive \(\mathbf{R}^{\prime}\)-module. For (3), let \(s\in J(e_{1}Re_{1})\) and \(r\in R\). Then \(s=e_{1}se_{1}\) and so \[1-rs=1-\sum_{i=1}^{\ell}e_{i}re_{1}se_{1}=1-e_{1}re_{1}se_{1}-\sum_{i>1}e_{i}re_ {1}se_{1}.\] Since \(e_{1}re_{1}se_{1}\) is in the Jacobson radical of the local ring \(e_{1}Re_{1}\) with identity \(e_{1}\), we see that \(e_{1}-e_{1}re_{1}se_{1}\) is a unit in \(e_{1}Re_{1}\). Since the quotient of \(e_{1}Re_{1}\) by \(J(e_{1}Re_{1})\) is a division ring, the inverse of \(e_{1}-e_{1}re_{1}se_{1}\) has the form \(e_{1}-u\) for some \(u\in J(e_{1}Re_{1})\). By the orthogonality of idempotents again, \[(1-u)(1-rs) =\underbrace{(1-u)(1-e_{1}re_{1}se_{1})}_{=1}-(1-\underbrace{u}_ {=ue_{1}})(\sum_{i>1}e_{i}re_{1}se_{1})\] \[=1-\sum_{i>1}e_{i}re_{1}se_{1}.\] Since \((\sum_{i>1}e_{i}re_{1}se_{1})^{2}=0\), we see that \(1-\sum_{i>1}e_{i}re_{1}se_{1}\) has inverse \(1+\sum_{i>1}e_{i}re_{1}se_{1}.\) Hence \(1-rs\) is a unit for every \(r\in R\), so \(s\in J(\mathbf{R})\). A similar argument shows that \(J(e_{i}Re_{i})\subseteq J(\mathbf{R})\) for each \(i\in[\ell]\). Hence \(J(\mathbf{R}^{\prime})=\sum_{i=1}^{\ell}J(e_{i}Re_{i})\subseteq J(\mathbf{R})\). Let \(\mathbf{A}\) be a module over a ring \(\mathbf{R}\), let \(k\in\mathbb{N}\) and \(T=(t_{ij})_{1\leq i,j\leq k}\) a \(k\times k\)-matrix over \(\mathbf{R}\). Then \(T\) acts on \(x:=(x_{1},\ldots,x_{k})\) in \(\mathbf{A}^{k}\) by \[Tx:=(\sum_{j=1}^{k}t_{1j}x_{j},\ldots,\sum_{j=1}^{k}t_{kj}x_{j}).\] Here \(A^{k}\to A^{k},\ x\mapsto Tx\), can be considered as a \(k\)-tuple of \(k\)-ary term functions of \(\mathbf{A}\) but is not necessarily an \(\mathbf{R}\)-homomorphism if \(\mathbf{R}\) is noncommutative. For interpolating functions with domain \(\mathbf{A}^{k}\) in Section 3, it will be useful to cover \(\mathbf{A}^{k}\) by its \(\mathbf{R}\)-submodules that are isomorphic to \(\mathbf{A}\times(J(\mathbf{R})A)^{k-1}\). We collect some information about these submodules in the following.
**Lemma 2.13**.: _Let \(\mathbf{R}\) be a ring with Jacobson radical \(J:=J(\mathbf{R})\), let \(\mathbf{A}\) be a finite distributive \(\mathbf{R}\)-module, let \(k\in\mathbb{N}\) and let_ \[V:=\{M\leq\mathbf{A}^{k}\ :\ (JA)^{k}\leq M,M/(JA)^{k}\cong\mathbf{A}/JA\}.\] _Then_ 1. _for every_ \(M\leq\mathbf{A}^{k}\) _such that_ \((JA)^{k}\leq M\) _and_ \(M/(JA)^{k}\) _embeds into_ \(\mathbf{A}/JA\) _there exists an invertible_ \(k\times k\)_-matrix_ \(T\) _over_ \(\mathbf{R}\) _such that_ \(TM\leq A\times(JA)^{k-1}\)_; in particular,_ \(\operatorname{GL}_{k}(\mathbf{R})\) _acts transitively on_ \(V\)_._ 2. _For all distinct_ \(M,N\in V\) _there exists_ \(L<\mathbf{A}\) _such that_ \(M\cap N\leq L^{k}\)_._ 3. \(A^{k}=\bigcup V\cup\bigcup\{L^{k}\ :\ L<\mathbf{A}\}\)_._ Proof.: Factoring \(\mathbf{R}\) by the annihilator of \(\mathbf{A}\) if necessary, we may assume that \(\mathbf{R}\) is faithful on \(\mathbf{A}\), in particular that \(\mathbf{R}\) is finite. We first prove (1) for the case that \(\mathbf{R}\) is commutative and semisimple. Then \(J=0\) and \(\mathbf{R}\) is the direct sum of finite fields \(\mathbf{K}_{1},\ldots,\mathbf{K}_{\ell}\) for some \(\ell\in\mathbb{N}\) by the Wedderburn-Artin Theorem. Since \(\mathbf{A}\) is distributive and semisimple, it is a direct sum of pairwise non-isomorphic simple \(\mathbf{R}\)-modules. It follows that \(\mathbf{A}\) is isomorphic to the regular \(\mathbf{R}\)-module \(\sum_{i=1}^{\ell}K_{i}\). Let \(M\leq\mathbf{A}^{k}\) such that \(M\) embeds into \(\mathbf{A}\). Then we have \(S\subseteq[\ell]\) such that \(M\cong\sum_{i\in S}K_{i}\). More precisely, we also have \(j_{i}\in[k]\) for \(i\in S\) such that \[M=\sum_{i\in S}0\times\cdots\times 0\times K_{i}\times 0\times\cdots\times 0\leq\mathbf{A}^{k},\] where \(K_{i}\) occupies the \(j_{i}\)-th component. Clearly there exists an invertible \(k\times k\) matrix \(T\) over \(\mathbf{R}\) that moves \(K_{i}\) from the \(j_{i}\)-th component of \(A^{k}\) to the first component for all \(i\in S\). Hence \(TM=\sum_{i\in S}K_{i}\times 0^{k-1}\leq\mathbf{A}\times 0^{k-1}\). This proves (1) for \(\mathbf{R}\) commutative and semisimple. Next assume that \(\mathbf{R}\) is semisimple. By the Wedderburn-Artin Theorem \(\mathbf{R}\cong\sum_{i=1}^{\ell}\mathbf{F}_{i}^{n_{i}\times n_{i}}\) for some \(\ell,n_{1},\ldots,n_{\ell}\in\mathbb{N}\) and finite fields \(\mathbf{F}_{1},\ldots,\mathbf{F}_{\ell}\). Since \(\mathbf{A}\) is distributive and semisimple, it is a direct sum of pairwise non-isomorphic simple \(\mathbf{R}\)-modules, hence \(\mathbf{A}\cong\sum_{i=1}^{\ell}F_{i}^{n_{i}}\). Note that \(\mathbf{F}_{i}^{n_{i}\times n_{i}}\) (and hence \(\mathbf{R}\)) has a subring \(\mathbf{K}_{i}\), which is a field of order \(|F_{i}|^{n_{i}}\) for every \(i\in[\ell]\).
Let \(\hat{\mathbf{R}}:=\sum_{i=1}^{\ell}\mathbf{K}_{i}\) and let \(\hat{\mathbf{A}}\) denote the \(\hat{\mathbf{R}}\)-module reduct of \(\mathbf{A}\). Since \(|A|=|\hat{R}|\), we see that \(\hat{\mathbf{A}}\) is isomorphic to the regular \(\hat{\mathbf{R}}\)-module. Further \(\mathbf{R}\)-submodules of \(\mathbf{A}\) and \(\hat{\mathbf{R}}\)-submodules of \(\hat{\mathbf{A}}\) coincide and \(\hat{\mathbf{A}}\) is distributive as well. Applying (1) for \(\hat{\mathbf{A}}\) over the commutative ring \(\hat{\mathbf{R}}\), we obtain that for every \(M\leq\mathbf{A}^{k}\) that embeds into \(\mathbf{A}\), there exists some \(T\in\operatorname{GL}_{k}(\hat{\mathbf{R}})\leq\operatorname{GL}_{k}(\mathbf{R})\) such that \(TM\leq\mathbf{A}\times 0^{k-1}\). This proves (1) for semisimple \(\mathbf{R}\). Finally, for the general case let \(\mathbf{R}\) be arbitrary, let \(M\leq\mathbf{A}^{k}\) such that \((JA)^{k}\leq M\) and \(\overline{M}:=M/(JA)^{k}\) embeds into \(\overline{\mathbf{A}}:=\mathbf{A}/JA\). Applying (1) for the distributive module \(\overline{\mathbf{A}}\) over the semisimple ring \(\overline{\mathbf{R}}:=\mathbf{R}/J\), we obtain \(T\in R^{k\times k}\) whose projection \(\overline{T}\) in \(\overline{R}^{k\times k}\) is invertible and satisfies \(\overline{TM}\leq\overline{\mathbf{A}}\times 0^{k-1}\). Since \(\overline{T}\) is invertible over \(\overline{\mathbf{R}}\), we have \(S\in R^{k\times k}\) such that \(ST=I_{k}-U\) for \(I_{k}\) the \(k\times k\) identity matrix over \(\mathbf{R}\) and some \(U\in J^{k\times k}\). Since \(J\) is nilpotent, it follows that \(ST\in\operatorname{GL}_{k}(\mathbf{R})\) and further \(T\in\operatorname{GL}_{k}(\mathbf{R})\). Since \(\overline{TM}=N/JA\times 0^{k-1}\) for some \(JA\leq N\leq\mathbf{A}\), we obtain \(TM\subseteq N\times(JA)^{k-1}\). Equality follows since \(|M|=|N\times(JA)^{k-1}|\) and \(T\) is invertible. Thus (1) is finally proved in full generality. For (2) let \(M,N\in V\) be distinct. By (1) we have \(T\in\operatorname{GL}_{k}(\mathbf{R})\) such that \(T(M\cap N)=L\times(JA)^{k-1}\) for some \(JA\leq L<\mathbf{A}\). Hence \(M\cap N\leq T^{-1}L^{k}\leq L^{k}\). For (3) let \(a\in A^{k}\) and \(M:=Ra+(JA)^{k}\). Again we use the notation \(\overline{\mathbf{R}}:=\mathbf{R}/J\), \(\overline{\mathbf{A}}:=\mathbf{A}/JA\) and \(\overline{M}:=M/(JA)^{k}\). Then \(\overline{M}\) is a cyclic \(\overline{\mathbf{R}}\)-submodule of \(\overline{\mathbf{A}}^{k}\). Hence \(\overline{M}\) is a sum of pairwise non-isomorphic simple \(\overline{\mathbf{R}}\)-modules and embeds into \(\overline{\mathbf{A}}\). By (1) we have \(T\in\operatorname{GL}_{k}(\mathbf{R})\) such that \(TM=L\times(JA)^{k-1}\) for some \(JA\leq L\leq\mathbf{A}\). If \(L=A\), then \(M\in V\) and \(a\in\bigcup V\). Else if \(L<\mathbf{A}\), then \(M\leq T^{-1}L^{k}\leq L^{k}\) and \(a\in\bigcup\{L^{k}\ :\ L<\mathbf{A}\}\). Thus \(A^{k}\subseteq\bigcup V\cup\bigcup\{L^{k}\ :\ L<\mathbf{A}\}\). The converse inclusion is trivial. We note that Lemma 2.13 does not generalize to arbitrary cyclic \(\mathbf{R}\)-modules. For example, let \(\mathbf{R}:=\mathbf{F}^{2\times 2}\) be the \(2\times 2\)-matrix ring over a field \(\mathbf{F}\) and let \(\mathbf{A}\) be the regular \(\mathbf{R}\)-module. Then \[M:=\left\{\begin{bmatrix}a&0\\ b&0\end{bmatrix}\ :\ a,b\in F\right\}^{2}\leq\mathbf{A}^{2}\] is isomorphic to \(\mathbf{A}\) and fixed by \(\mathbf{R}^{2\times 2}\). Hence there exists no \(T\in\mathbf{R}^{2\times 2}\) such that \(TM=A\times 0\).
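On the positive side, a minimal worked instance of Lemma 2.13 may be helpful (this illustration is ours and is not used later). Let \(\mathbf{R}=\mathbf{A}=\mathbb{Z}_{4}\) (the regular module), so \(J=JA=2\mathbb{Z}_{4}\), and let \(k=2\). The set \(V\) consists of the three submodules of \(\mathbb{Z}_{4}^{2}\) above \((2\mathbb{Z}_{4})^{2}\) whose images modulo \((2\mathbb{Z}_{4})^{2}\) are the three one-dimensional subspaces of \(\mathbb{Z}_{2}^{2}\), namely \(\mathbb{Z}_{4}(1,0)+(2\mathbb{Z}_{4})^{2}\), \(\mathbb{Z}_{4}(0,1)+(2\mathbb{Z}_{4})^{2}\) and \(\mathbb{Z}_{4}(1,1)+(2\mathbb{Z}_{4})^{2}\). For item (1), the invertible matrix \(T=\begin{pmatrix}1&0\\ 3&1\end{pmatrix}\) maps the third of these onto \(\mathbb{Z}_{4}\times 2\mathbb{Z}_{4}\) since \(3x+y\) is even whenever \(x\equiv y\pmod 2\); analogous matrices work for the other two. For item (3), every \((a,b)\in\mathbb{Z}_{4}^{2}\) with \(a\) or \(b\) odd reduces to a nonzero vector of \(\mathbb{Z}_{2}^{2}\) and hence lies in a member of \(V\), while \((a,b)\) with both entries even lies in \(L^{2}\) for the proper submodule \(L=2\mathbb{Z}_{4}\).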
### Inner rank of matrices The rank of matrices over fields can be generalized to arbitrary rings as follows (see [4, Proposition 5.4.3]). Let \(\mathbf{R}\) be a ring, \(m,n\in\mathbb{N}\) and \(A\in R^{m\times n}\). The _inner rank_ of \(A\neq 0\) is defined as \[\operatorname{rk}(A):=\min\{r\in\mathbb{N}\ :\ A=BC\text{ for some }B\in R^{m\times r},C\in R^{r\times n}\}\] and \(\operatorname{rk}(0):=0\). Then \(\operatorname{rk}(A)\) is the least \(r\) such that the right \(\mathbf{R}\)-submodule of \(R^{m}\) that is generated by the columns of \(A\) is contained in an \(r\)-generated module (equivalently, the least \(r\) such that the left \(\mathbf{R}\)-submodule of \(R^{n}\) that is generated by the rows of \(A\) is contained in an \(r\)-generated module). Note that in particular \(\operatorname{rk}(A)\leq\min(m,n)\). If \(\mathbf{R}\) is a field, the inner rank is just the usual row or column rank. ### Uniformly generated functions Let \(\mathbf{A},\mathbf{B}\) be algebras. For \(k,\ell,n\in\mathbb{N}\), let \[R_{n}^{\ell,k}:=\{r\in(\operatorname{Clo}(\mathbf{A})^{(\ell)})^{k}\ :\\ r(x)=(v_{1}(w_{1}(x),\ldots,w_{n}(x)),\ldots,v_{k}(w_{1}(x),\ldots,w_{n}(x)))\\ \text{for }v_{1},\ldots,v_{k}\in\operatorname{Clo}(\mathbf{A})^{(n)},w_{1},\ldots,w_{n}\in\operatorname{Clo}(\mathbf{A})^{(\ell)}\}\] denote the set of \(k\)-tuples of \(\ell\)-ary term functions on \(\mathbf{A}\) that factor through \(n\)-ary term functions. For an \(\mathbf{R}\)-module \(\mathbf{A}\) we simply have \[R_{n}^{\ell,k}=\{A^{\ell}\to A^{k},x\mapsto ax\ :\ a\in R^{k\times\ell},\operatorname{rk}(a)\leq n\}.\] For \(U\subseteq F(A,B)^{(k)}\), \(o\in\mathbb{N},r_{1},\ldots,r_{o}\in R_{n}^{\ell,k}\) and \(s\in\operatorname{Clo}(\mathbf{B})^{(o)}\), the operator \[d\colon U\to F(A,B)^{(\ell)},\ f\mapsto s(f(r_{1}(x)),\ldots,f(r_{o}(x))),\] is defined by composing \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors with term functions of \(\mathbf{A}\) and \(\mathbf{B}\) on the respective sides. We will express this more briefly by saying that the \(\ell\)-ary function \(f^{\prime}(x):=s(f(r_{1}(x)),\ldots,f(r_{o}(x)))\) is _uniformly generated_ by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors for the \(k\)-ary functions \(f\in U\). For example, for every \(f\in F(A,B)^{(2)}\) the minor \(f^{\prime}(x_{1}):=f(x_{1},x_{1})\) is uniformly generated by the unary \(\mathbf{A},\mathbf{B}\)-minors of \(f\) (in fact by \(f(x_{1},x_{1})\) itself). Less trivially, for groups \(\mathbf{A}=\mathbf{B}=(\mathbb{Z},+,-,0)\) and \(k=\ell=2,n=1\) the binary function \[f^{\prime}(x_{1},x_{2}):=f(0,0)-f(x_{1},-x_{1})+2f(x_{1}+x_{2},2(x_{1}+x_{2}))\] is uniformly generated by the unary \(\mathbf{A},\mathbf{B}\)-minors \(f(0x,0x),f(x,-x),f(x,2x)\) of \(f\in F(A,B)^{(2)}\). We will use the following properties of uniformly generated functions between modules in Section 3. **Lemma 2.14**.: _Let \(\mathbf{A}\) be an \(\mathbf{R}\)-module, \(\mathbf{B}\) an \(\mathbf{S}\)-module, \(k,\ell,n\in\mathbb{N}\), \(U\subseteq F(A,B)^{(k)}\) and \(U\to F(A,B)^{(\ell)},\ f\mapsto f^{\prime}\)._ 1. _Then_ \(f^{\prime}\) _is uniformly generated by_ \(n\)_-ary_ \(\mathbf{A},\mathbf{B}\)_-minors of_ \(f\in U\) _if and only if there exists_ \(s\colon\{r\in R^{k\times\ell}\ :\ \operatorname{rk}(r)\leq n\}\to S\) _with finite support such that for all_ \(f\in U\) _and all_ \(x\in A^{\ell}\)__ \[f^{\prime}(x)=\sum_{r\in R^{k\times\ell},\operatorname{rk}(r)\leq n}s(r)f(rx).\] 2.
_Assume_ \(k=\ell\)_, that_ \(f^{\prime}\) _is uniformly generated by_ \(n\)_-ary_ \(\mathbf{A},\mathbf{B}\)_-minors of_ \(f\in U\) _and that_ \(f-f^{\prime}\) _is uniformly generated by its_ \(n\)_-ary_ \(\mathbf{A},\mathbf{B}\)_-minors for every_ \(f\in U\)_. Then every_ \(f\in U\) _is uniformly generated by its_ \(n\)_-ary_ \(\mathbf{A},\mathbf{B}\)_-minors._ Proof.: (1) is straightforward from the definition. For (2) let \(s,t\colon\{r\in R^{k\times k}\ :\ \mathrm{rk}(r)\leq n\}\to S\) such that for all \(f\in U\) and all \(x\in A^{k}\) \[f^{\prime}(x) =\sum_{r\in R^{k\times k},\mathrm{rk}(r)\leq n}s(r)f(rx),\] \[(f-f^{\prime})(x) =\sum_{u\in R^{k\times k},\mathrm{rk}(u)\leq n}t(u)(f-f^{\prime})(ux).\] Since \(\mathrm{rk}(ru)\leq\min(\mathrm{rk}(r),\mathrm{rk}(u))\), it follows that \[(f-f^{\prime})(x)=\sum_{u\in R^{k\times k},\mathrm{rk}(u)\leq n}t(u)\left(f(ux)-\sum_{r\in R^{k\times k},\mathrm{rk}(r)\leq n}s(r)f(rux)\right)\] is uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\). Together with the assumption on \(f^{\prime}\) we see that \(f=(f-f^{\prime})+f^{\prime}\) is also uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\). ### Sums Let \(S\) be a subset of an abelian group \((A,+)\). To simplify notation we will also write \(\sum S\) for \(\sum_{s\in S}s\). ## 3. Clonoids from distributive modules This section consists of the proof that every function from a finite distributive \(\mathbf{R}\)-module \(\mathbf{A}\) into a coprime module \(\mathbf{B}\) is uniformly generated by its \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors, where \(n\) is the nilpotence degree of the Jacobson radical of \(\mathbf{R}\). This is the basis of all our finite generation results for clonoids between coprime abelian Mal'cev algebras. More precisely we show the following. **Theorem 3.1**.: _Let \(\mathbf{A}\) be a finite distributive \(\mathbf{R}\)-module, let \(n\in\mathbb{N}\) such that \(J(\mathbf{R})^{n}=0\), and let \(\mathbf{B}\) be an \(\mathbf{S}\)-module such that \(|A|\) is invertible in \(\mathbf{S}\)._ _For all \(k\in\mathbb{N}\) there exists \(s\colon\{r\in R^{k\times k}\ :\ \mathrm{rk}(r)\leq n\}\to S\) such that for all \(f\colon A^{k}\to B\) and all \(x\in A^{k}\)_ \[f(x)=\sum_{r\in R^{k\times k},\mathrm{rk}(r)\leq n}s(r)f(rx).\] **Example 3.2**.: We illustrate Theorem 3.1 for binary functions from \(\mathbf{A}=(\mathbb{Z}_{2},+)\) to an \(\mathbf{S}\)-module \(\mathbf{B}\) such that \(2\) is a unit in \(\mathbf{S}\). Then every \(f\colon A^{2}\to B\) is uniformly generated by its unary \(\mathbf{A},\mathbf{B}\)-minors \(f(0,0)\), \(f(x,0)\), \(f(0,x),f(x,x)\) via \[f(x_{1},x_{2})= f(0,0)\] \[+2^{-1}[f(x_{1},0)+f(x_{1}+x_{2},0)-f(0,0)-f(x_{2},0)\] \[+f(0,x_{2})+f(0,x_{1}+x_{2})-f(0,0)-f(0,x_{1})\] \[+f(x_{1},x_{1})+f(x_{2},x_{2})-f(0,0)-f(x_{1}+x_{2},x_{1}+x_{2})].\] This identity can be derived from the interpolation arguments for Theorem 3.1 below but can be verified more elementarily by the following case analysis. If \(x_{2}=0\), then lines 1 and 2 on the right hand side add up to \(f(x_{1},0)\) while lines 3 and 4 each cancel. If \(x_{1}=0\), then similarly lines 1 and 3 add up to \(f(0,x_{2})\) while lines 2 and 4 each cancel. Finally if \(x_{1}=x_{2}\), then lines 1 and 4 add up to \(f(x_{1},x_{2})\) while lines 2 and 3 each cancel. Hence in each case the right hand side of the formula yields \(f(x_{1},x_{2})\) and the claim is proved.
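Example 3.2 can also be checked mechanically. The following Python sketch (our illustration, not from the paper) verifies the displayed identity for every function \(f\colon\mathbb{Z}_{2}^{2}\to\mathbb{Z}_{5}\); here \(\mathbf{B}=(\mathbb{Z}_{5},+)\), so \(2\) is a unit with \(2^{-1}=3\).

```python
from itertools import product

# Verify the interpolation identity of Example 3.2 for A = Z_2, B = Z_5:
# every f: Z_2^2 -> Z_5 is recovered from its unary A,B-minors.
A, B = 2, 5
inv2 = pow(2, -1, B)  # 2^{-1} = 3 in Z_5

points = list(product(range(A), repeat=2))
for values in product(range(B), repeat=len(points)):
    f = dict(zip(points, values))
    for x1, x2 in points:
        s = (x1 + x2) % A
        expl = (f[(x1, 0)] + f[(s, 0)] - f[(0, 0)] - f[(x2, 0)]
                + f[(0, x2)] + f[(0, s)] - f[(0, 0)] - f[(0, x1)]
                + f[(x1, x1)] + f[(x2, x2)] - f[(0, 0)] - f[(s, s)])
        rhs = (f[(0, 0)] + inv2 * expl) % B
        assert rhs == f[(x1, x2)], (f, x1, x2)
print("identity verified for all", B ** len(points), "functions")
```

Running the sketch confirms the identity for all \(5^{4}=625\) functions, matching the case analysis above.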
By Lemma 2.12 it suffices to prove Theorem 3.1 for \(\mathbf{R}=\sum_{i=1}^{\ell}R_{i}\) a direct product of finite local rings \(R_{i}\) with maximal ideal \(J_{i}\) and uniserial module \(A_{i}\) for \(i\in[\ell]\). Further \(\mathbf{A}=\sum_{i=1}^{\ell}A_{i}\) is a faithful distributive \(\mathbf{R}\)-module and \(J(\mathbf{R})=\sum_{i=1}^{\ell}J_{i}=:J\). Because of its faithfulness, \(\mathbf{R}\) is finite and \(|R|\) has the same prime divisors as \(|A|\). Since \(|A|\) is invertible in \(\mathbf{S}\), so is \(|R|\). We fix these assumptions and notation throughout this section for the proof of Theorem 3.1. In the following let \(k\in\mathbb{N}\) and \(f\colon A^{k}\to B\). We will show that \(f\) is generated in a uniform way by its \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors via induction on \(\mathbf{A}\) and a series of reductions and interpolations. The base case of Theorem 3.1 for trivial \(\mathbf{A}\) is clear. Next the induction assumption yields the following. **Lemma 3.3**.: _Let \(M\) be a proper submodule of \(\mathbf{A}\). There exists \(f^{\prime}\colon A^{k}\to B\) such that \(f^{\prime}|_{M^{k}}=f|_{M^{k}}\) and \(f^{\prime}\) is uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\)._ Note that in Lemma 3.3 the difference of \(f\) and \(f^{\prime}\) is \(0\) on \(M^{k}\). So by Lemma 2.14(2) it suffices to prove Theorem 3.1 for functions satisfying \(f(M^{k})=0\). If \(f(N^{k})=0\) for some submodule \(N\) of \(\mathbf{A}\), then also \(f^{\prime}(N^{k})=0\). Hence by repeating this reduction for all maximal submodules \(M\) of \(\mathbf{A}\), we may assume that \(f\) vanishes on all proper submodules of \(\mathbf{A}\). First we show that for any maximal \(M\) every such \(f\) uniformly generates a function that is equal to \(f\) on \(A\times M^{k-1}\) and \(0\) else. **Lemma 3.4**.: _Let \(M\) be a maximal submodule of \(\mathbf{A}\) and assume \(f(M^{k})=0\). Then_ \[g_{M}\colon A^{k}\to B,\ x\mapsto\begin{cases}f(x)&\text{ if }x\in A\times M^{k-1},\\ 0&\text{ else},\end{cases}\] _is uniformly generated by \(\mathbf{A},\mathbf{B}\)-minors of \(f\)._ Proof.: From the decomposition of \(\mathbf{A}\) we may assume that \(M=J_{1}A_{1}+\sum_{i=2}^{\ell}A_{i}\). We use induction on the number of submodules of \(\mathbf{A}_{1}\). For the base case, \(\mathbf{A}_{1}\) is simple, \(J_{1}=0\) and \(\mathbf{R}_{1}\) is a simple ring. Let \(e\) be the multiplicative identity of \(\sum_{i=2}^{\ell}R_{i}\). We claim that for all \(x_{1},\ldots,x_{k}\in A\) \[g_{M}(x_{1},\ldots,x_{k})= |R_{1}|^{1-k}\left(\sum_{a_{2},\ldots,a_{k}\in R_{1}}f(x_{1}+\sum_{i=2}^{k}a_{i}x_{i},x_{2},\ldots,x_{k})\right.\] \[\left.-\sum_{a_{2},\ldots,a_{k}\in R_{1}}f(ex_{1}+\sum_{i=2}^{k}a_{i}x_{i},x_{2},\ldots,x_{k})\right). \tag{3.1}\] If \(x_{2},\ldots,x_{k}\in M\), this follows since \(\sum_{i=2}^{k}a_{i}x_{i}=0\) and the right hand side of (3.1) simplifies to \[f(x_{1},x_{2},\ldots,x_{k})-\underbrace{f(ex_{1},x_{2},\ldots,x_{k})}_{=0\ \text{since}\ f(M^{k})=0}=f(x_{1},x_{2},\ldots,x_{k}).\] Else if \((x_{2},\ldots,x_{k})\not\in M^{k-1}\), then the maximality of \(M\) in \(\mathbf{A}\) implies \[\left\{\sum_{i=2}^{k}a_{i}x_{i}\ :\ a_{2},\ldots,a_{k}\in R_{1}\right\}=\sum_{i=2}^{k}R_{1}x_{i}=A_{1}\] where each element in \(A_{1}\) is attained with the same multiplicity \(\frac{|R_{1}|^{k-1}}{|A_{1}|}\).
So the right hand side of (3.1) yields \[|A_{1}|^{-1}\left(\sum f(x_{1}+A_{1},x_{2},\ldots,x_{k})-\sum f(\underbrace{ex_{1}+A_{1}}_{=x_{1}+A_{1}\ \text{since}\ A=A_{1}\times M},x_{2},\ldots,x_{k})\right)=0.\] This shows (3.1) and the base case of the induction. For the induction step let \(M=J_{1}A_{1}+\sum_{i=2}^{\ell}A_{i}\) and \(J_{1}A_{1}\neq 0\) in the following. Let \(N\) be the smallest nonzero \(\mathbf{R}\)-submodule of \(\mathbf{A}_{1}\). Then \(N\leq M\) and \(N=IA_{1}\) for \(I\) a power of \(J_{1}\). In order to apply the induction assumption, we will take averages of functions over arguments shifted by elements of \(\sum_{j}Ix_{j}\). For this we note that by the minimality of \(N\), \[Ix=\begin{cases}0&\text{ for all }x\in M,\\ N&\text{ for all }x\in A\setminus M.\end{cases}\] First we claim that \[\bar{f}(x_{1},\dots,x_{k}):=|I|^{-k^{2}}\sum_{a_{1},\dots,a_{k}\in I^{k}}f(x_{1}+\sum_{j=1}^{k}a_{1j}x_{j},x_{2}+\sum_{j=1}^{k}a_{2j}x_{j},\dots,x_{k}+\sum_{j=1}^{k}a_{kj}x_{j})\] satisfies \[\bar{f}(x)=|N|^{-k}\sum f(x+N^{k})\text{ for all }x\in A^{k}.\] For \(x\in M^{k}\), this follows from the assumption that \(f(M^{k})=0\) and \(N\subseteq M\). Else if \(x\not\in M^{k}\), it follows from \[\left\{\sum_{j=1}^{k}a_{ij}x_{j}\ :\ a_{i}\in I^{k}\right\}=\sum_{j=1}^{k}Ix_{j}=N,\] where each element in \(N\) is attained with the same multiplicity \(\frac{|I|^{k}}{|N|}\). Since \(\bar{f}\) is constant on blocks modulo \(N\), we may view it as a function from \(\mathbf{A}/N\) to \(\mathbf{B}\). By the induction assumption on \(\mathbf{A}/N\) with maximal submodule \(M/N\), \[\bar{g}_{M}(x):=\begin{cases}\bar{f}(x)&\text{ if }x\in A\times M^{k-1},\\ 0&\text{ else},\end{cases}\] is uniformly generated by \(\mathbf{A}/N,\mathbf{B}\)-minors of \(\bar{f}\), equivalently uniformly generated by \(\mathbf{A},\mathbf{B}\)-minors of \(f\). Similarly as for \(\bar{f}\) we see that \[\hat{f}(x_{1},\dots,x_{k}):=|I|^{-k(k-1)}\sum_{a_{1},\dots,a_{k}\in I^{k-1}}f(x_{1}+\sum_{j=2}^{k}a_{1,j-1}x_{j},x_{2}+\sum_{j=2}^{k}a_{2,j-1}x_{j},\dots,x_{k}+\sum_{j=2}^{k}a_{k,j-1}x_{j})\] satisfies \[\hat{f}(x)=\begin{cases}f(x)&\text{ if }x\in A\times M^{k-1},\\ |N|^{-k}\sum f(x+N^{k})=\bar{f}(x)&\text{ else}.\end{cases}\] Now \(g_{M}=\hat{f}-(\bar{f}-\bar{g}_{M})\) is uniformly generated by \(\mathbf{A},\mathbf{B}\)-minors of \(f\). We note that the formulas for \(\hat{f},\bar{f},\bar{g}_{M}\) and \(g_{M}\) are independent of \(f\). Hence the induction step and the lemma are proved. Next we obtain a function that coincides with \(f\) on \(A\times(JA)^{k-1}\) and is uniformly generated by the \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\). **Lemma 3.5**.: _Assume \(f(M^{k})=0\) for all proper submodules \(M\) of \(\mathbf{A}\). Then_ \[g\colon A^{k}\to B,\ x\mapsto\begin{cases}f(x)&\text{ if }x\in A\times(JA)^{k-1},\\ 0&\text{ else},\end{cases}\] _is uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\)._ Proof.: That \(g\) can be uniformly generated from \(\mathbf{A},\mathbf{B}\)-minors of \(f\) follows by repeated application of Lemma 3.4 for all maximal submodules \(M\) of \(\mathbf{A}\) since \(JA\) is the intersection of these submodules. It remains to show that \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors suffice. For this we induct on the nilpotence degree \(n\) of \(J\). The base case \(n=1\) and \(J=0\) holds since \(g\) can clearly be uniformly generated by \(f(x_{1},0,\ldots,0)\). Assume \(J\neq 0\) and let \(I\) be the annihilator of \(J\) in \(\mathbf{R}\) in the following.
Then the Jacobson radical of \(\mathbf{R}/I\) has nilpotence degree \(n-1\). For \(x_{1}\in A\), define \[f_{x_{1}}\colon A^{k-1}\to B,\ (x_{2},\ldots,x_{k})\mapsto f(x_{1},x_{2},\ldots,x_{k}).\] Since \(JA\) is a distributive \(\mathbf{R}/I\)-module, the induction assumption of Theorem 3.1 yields that the restriction of \(f_{x_{1}}\) to \((JA)^{k-1}\) can be interpolated by its \(n-1\)-ary \(JA,\mathbf{B}\)-minors in a uniform way (independently of \(x_{1}\)). More precisely we have \(s\colon\{r\in R^{(k-1)\times(k-1)}\ :\ \operatorname{rk}(r)\leq n-1\}\to S\) such that \[f(x_{1},x_{2},\ldots,x_{k})=\sum_{r\in R^{(k-1)\times(k-1)},\operatorname{rk}(r)\leq n-1}s(r)f(x_{1},r(x_{2},\ldots x_{k}))\] for all \(x_{1}\in A,x_{2},\ldots,x_{k}\in JA.\) Here \((x_{1},r(x_{2},\ldots,x_{k}))\) denotes the concatenation of the \(1\)-tuple \(x_{1}\) and the \(k-1\)-tuple \(r(x_{2},\ldots,x_{k})\). For \(r\in R^{(k-1)\times(k-1)}\) the \(k\times k\) block diagonal matrix \(r^{\prime}:=\begin{pmatrix}1&0\\ 0&r\end{pmatrix}\) has inner rank at most \(\operatorname{rk}(r)+1\) and satisfies \(r^{\prime}(x_{1},\ldots,x_{k})=(x_{1},r(x_{2},\ldots,x_{k}))\). Then \[f^{\prime}(x):=\sum_{r\in R^{(k-1)\times(k-1)},\operatorname{rk}(r)\leq n-1}s(r)f(r^{\prime}x)\] is equal to \(f\) on \(A\times(JA)^{k-1}\) and uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\). By the argument in the first paragraph of this proof, \(g\) is uniformly generated by \(\mathbf{A},\mathbf{B}\)-minors of \(f^{\prime}\). Hence \(g\) is uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\). Extending Lemma 3.5 we show that \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\) generate functions that are equal to \(f\) on any submodule \(N\) isomorphic to \(A\times(JA)^{k-1}\) and \(0\) else. **Lemma 3.6**.: _Assume \(f(M^{k})=0\) for all proper submodules \(M\) of \(\mathbf{A}\). Let \((JA)^{k}\leq N\leq\mathbf{A}^{k}\) such that \(N/(JA)^{k}\cong\mathbf{A}/JA\). Then_ \[f_{N}\colon A^{k}\to B,\ x\mapsto\begin{cases}f(x)&\text{if }x\in N,\\ 0&\text{else},\end{cases}\] _is uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\)._ Proof.: By Lemma 2.13 we have some invertible \(k\times k\)-matrix \(T\) over \(\mathbf{R}\) such that \(TN=A\times(JA)^{k-1}\). Let \(g\) be as in Lemma 3.5, uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f(T^{-1}x)\), that is, \[g\colon A^{k}\to B,\ x\mapsto\begin{cases}f(T^{-1}x)&\text{if }x\in A\times(JA)^{k-1},\\ 0&\text{else}.\end{cases}\] We claim that \[f_{N}(x)=g(Tx)\text{ for all }x\in A^{k}. \tag{3.2}\] If \(x\in N\), then \(Tx\in A\times(JA)^{k-1}\) and \[g(Tx)=f(T^{-1}Tx)=f(x).\] Else if \(x\not\in N\), then \(Tx\not\in A\times(JA)^{k-1}\) and so \(g(Tx)=0\). This proves (3.2) and in particular that \(f_{N}\) is uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\). Summing up the restrictions \(f_{N}\) of \(f\) from the previous lemma we now obtain the main result of this section, which we repeat in its short form. **Theorem 3.1**.: \(f\) _is uniformly generated by its \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors._ Proof.: Recall that by repeated application of Lemma 3.3 we have \(f^{\prime}\) generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\) such that \(f-f^{\prime}\) is \(0\) on \(M^{k}\) for all proper submodules \(M\) of \(\mathbf{A}\). By Lemma 2.14(2) we may assume \(f(M^{k})=0\) for all \(M<\mathbf{A}\) in the following.
Let \[V:=\{N\leq\mathbf{A}^{k}\ :\ (JA)^{k}\leq N,N/(JA)^{k}\cong\mathbf{A}/JA\}\] and let \(N\in V\). By Lemma 3.6, \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\) uniformly generate \(f_{N}\), which is \(f\) on \(N\) and \(0\) else. Lemma 2.13 and \(f(M^{k})=0\) for \(M<\mathbf{A}\) imply that the supports of the functions \(f_{N}\) partition the support of \(f\). Hence \[f=\sum_{N\in V}f_{N}.\] Thus \(f\) is uniformly generated by \(n\)-ary \(\mathbf{A},\mathbf{B}\)-minors of \(f\). Theorem 1.4 for modules is an easy consequence of Theorem 3.1. Proof of Theorem 1.4 for modules \(\mathbf{A},\mathbf{B}\).: Let \(C\) be a clonoid from \(\mathbf{A}\) to \(\mathbf{B}\). Then \(C=\langle C^{(n)}\rangle\) by Theorem 3.1. If \(\mathbf{B}\) is finite, then there are finitely many \(n\)-ary functions from \(A\) to \(B\) and so there are finitely many clonoids from \(\mathbf{A}\) to \(\mathbf{B}\). ## 4. Clonoids from distributive abelian Mal'cev algebras Extending Theorem 3.1 we show that every function from a finite abelian Mal'cev algebra \(\mathbf{A}\) with distributive congruence lattice into some coprime abelian Mal'cev algebra \(\mathbf{B}\) is generated by its \(n+1\)-ary \(\mathbf{A},\mathbf{B}\)-minors where \(n\) is the nilpotence degree of the Jacobson radical of the ring \(\mathbf{R_{A}}\) (see the definition in Lemma 2.11). The increase in the arity compared to the result for modules is due to the fact that we have to compensate for the missing constant term function \(0\) with a projection, i.e., an additional variable \(z\). More precisely we have the following. **Theorem 4.1**.: _Let \(\mathbf{A}\) be a finite abelian Mal'cev algebra with distributive congruence lattice, let \(n\in\mathbb{N}\) be the nilpotence degree of the Jacobson radical of \(\mathbf{R_{A}}\), and let \(\mathbf{B}\) be an abelian Mal'cev algebra such that \(|A|\) is invertible in \(\mathbf{R_{B}}\)._ _For all \(k\in\mathbb{N}\) there exists \(s\colon\{r\in R_{\mathbf{A}}^{k\times k}\ :\ \operatorname{rk}(r)\leq n\}\to R_{\mathbf{B}}\) such that for all \(f,b\colon A^{k+1}\to B\) and all \(x\in A^{k},z\in A\)_ \[f(x,z)=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)\ast_{b(x,z)}f(r\ast_{z}x,z) \tag{4.1}\] _where the sum is taken pointwise with respect to \(+_{b(x,z)}\) in \(\mathbf{B}\)._ Proof.: Let \(k\in\mathbb{N},a\in A,b\in B\). Since \(\mathbf{A}\) and the \(\mathbf{R_{A}}\)-module \(\mathbf{A}_{a}\) are polynomially equivalent by Lemma 2.11(2), they have the same congruences. Since the congruence lattice of \(\mathbf{A}\) is distributive by assumption, so is the congruence lattice of \(\mathbf{A}_{a}\). Now the distributive \(\mathbf{R_{A}}\)-module \(\mathbf{A}_{a}\) and the \(\mathbf{R_{B}}\)-module \(\mathbf{B}_{b}\) satisfy the assumptions of Theorem 3.1. Thus we have some \(s\colon\{r\in R_{\mathbf{A}}^{k\times k}\ :\ \operatorname{rk}(r)\leq n\}\to R_{\mathbf{B}}\) such that for all \(g\colon A^{k}\to B\) and all \(x\in A^{k}\) \[g(x)=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{b}g(r*_{a}x). \tag{4.2}\]
Note that this formula depends on the constants \(a,b\) via the action \(*_{a}\) of \(R_{\mathbf{A}}^{k\times k}\) on \(A^{k}\) and the underlying addition \(u+_{a}v=m(u,a,v)\) of \(\mathbf{A}_{a}\) as well as the operations \(*_{b},+_{b}\) of \(\mathbf{B}_{b}\) for the outer sum. We claim that the function \(s\) is actually independent of the choices of \(a,b\) since the modules \(\mathbf{A}_{a},\mathbf{B}_{b}\) are isomorphic for all \(a\in A,b\in B\), respectively. To see that, first let \(g\colon A^{k}\to B\), \(z\in A\) and \(p\colon\mathbf{A}_{a}^{k}\to\mathbf{A}_{z}^{k}\) be the \(\mathbf{R}_{\mathbf{A}}\)-module isomorphism which exists by Lemma 2.11(3). Applying (4.2) to \(gp\) and using that \(p\) is a homomorphism, we obtain for all \(x\in A^{k}\) \[g(p(x))=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{b}g(p(r*_{a}x))=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{b}g(r*_{z}p(x)).\] Since \(p\) is a bijection on \(A^{k}\), this yields \[g(x)=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{b}g(r*_{z}x) \tag{4.3}\] for all \(x\in A^{k}\). Hence \(s\) is independent of the choice of the neutral element in the module \(\mathbf{A}_{z}\). For \(f\colon A^{k+1}\to B\) and \(z\in A\), we set \(g(x_{1},\dots,x_{k}):=f(x_{1},\dots,x_{k},z)\) in (4.3) to obtain \[f(x,z)=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{b}f(r*_{z}x,z) \tag{4.4}\] for all \(x\in A^{k},z\in A\). Next let \(c\in B\) and let \(h\colon\mathbf{B}_{c}\to\mathbf{B}_{b}\) be the \(\mathbf{R}_{\mathbf{B}}\)-module isomorphism given in Lemma 2.11(3). Applying (4.4) to \(hf\) and using that \(h\) is a homomorphism, we obtain for all \(x\in A^{k},z\in A\) that \[hf(x,z)=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{b}hf(r*_{z}x,z)=h\left(\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{c}f(r*_{z}x,z)\right).\] Note that here the sum outside of \(h\) is computed in \(\mathbf{B}_{b}\) and inside of \(h\) in \(\mathbf{B}_{c}\). Since \(h\) is a bijection on \(B\), it follows that \[f(x,z)=\sum_{r\in R_{\mathbf{A}}^{k\times k},\operatorname{rk}(r)\leq n}s(r)*_{c}f(r*_{z}x,z) \tag{4.5}\] for all \(x\in A^{k},z\in A,c\in B\). Hence \(s\) is also independent of the choice of the neutral element in the module \(\mathbf{B}_{c}\). In particular by choosing \(c=b(x,z)\) independently for any point \(x\in A^{k},z\in A\) in equation (4.5), we obtain (4.1). Theorem 1.4 is an easy consequence of Theorem 4.1. Proof of Theorem 1.4.: Let \(\mathbf{A}\) be polynomially equivalent to a distributive \(\mathbf{R_{A}}\)-module with \(n\) the nilpotence degree of \(J(\mathbf{R_{A}})\), let \(\mathbf{B}\) be polynomially equivalent to an \(\mathbf{R_{B}}\)-module such that \(|A|\) is invertible in \(\mathbf{R_{B}}\). Then every clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) is generated by its \(n+1\)-ary functions by Theorem 4.1. Hence, if \(\mathbf{B}\) is finite, then there are only finitely many clonoids from \(\mathbf{A}\) to \(\mathbf{B}\). We point out one important special case of Theorem 1.4.
**Corollary 4.2**.: _Let \(\mathbf{A}=\mathbf{A}_{1}\times\cdots\times\mathbf{A}_{\ell}\) for pairwise non-isomorphic finite abelian simple Mal'cev algebras \(\mathbf{A}_{1},\ldots,\mathbf{A}_{\ell}\) and let \(\mathbf{B}\) be a finite abelian Mal'cev algebra such that \(|A|\) and \(|B|\) are coprime._ _Then every clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) is generated by binary functions._ Proof.: Since the factors \(\mathbf{A}_{i}\) are pairwise non-isomorphic and simple, every congruence of \(\mathbf{A}\) is a product congruence. That is, the congruence lattice of \(\mathbf{A}\) is Boolean, in particular, distributive. Since \(\mathbf{A}\) is polynomially equivalent to a direct product of simple \(\mathbf{R_{A}}\)-modules, \(\mathbf{R_{A}}\) is semisimple. Thus \(\mathbf{A}\) satisfies the assumptions of Theorem 1.4 with \(n=1\) and the result follows. ## 5. Lower bounds on the number of clonoids from \(\mathbf{A}\) to \(\mathbf{B}\) In this section we give necessary conditions on the algebras \(\mathbf{A}\) and \(\mathbf{B}\) such that there are only finitely many clonoids between them. We start by showing that if there exists \(n\in\mathbb{N}\) such that every clonoid from \(\mathbf{A}\) to a nontrivial \(\mathbf{B}\) is generated by \(n\)-ary functions, then all subalgebras of \(\mathbf{A}\) are generated by \(n\) elements. **Lemma 5.1**.: _Let \(n\in\mathbb{N}\), let \(\mathbf{A}\) be an algebra not all of whose subalgebras are generated by \(n\) elements, and let \(\mathbf{B}\) be a nontrivial algebra. Then not every clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) is generated by \(n\)-ary functions._ Proof.: By assumption \(\mathbf{A}\) has some subalgebra \(U\) that is generated by \(n+1\) elements but not by \(n\). We construct clonoids \(C\) and \(D\) from \(\mathbf{A}\) to \(\mathbf{B}\) with \(C^{(k)}=D^{(k)}\) for all \(k\leq n\) but \(C^{(n+1)}\neq D^{(n+1)}\). For any subalgebra \(M\) of \(\mathbf{A}\), let \(\rho_{M}:=M^{2}\). Then \[C:=\operatorname{Pol}(\rho_{U},=_{B})\] is the clonoid of all functions from \(A\) to \(B\) that are constant on \(U\). For \(a,b\in A^{n}\), let \(\sigma_{a,b}\) be the subalgebra of \(\mathbf{A}^{2}\) that is generated by \((a_{1},b_{1}),\ldots,(a_{n},b_{n})\). Let \[D:=\bigcap_{a,b\in U^{n}}\operatorname{Pol}(\sigma_{a,b},=_{B}).\] Then \(f\colon A^{k}\to B\) is in \(D\) iff \(f(g_{1}(a),\ldots,g_{k}(a))=f(g_{1}(b),\ldots,g_{k}(b))\) for all \(a,b\in U^{n}\) and all \(n\)-ary term operations \(g_{1},\ldots,g_{k}\) of \(\mathbf{A}\). Clearly \(C\subseteq D\). To show \(D^{(k)}\subseteq C^{(k)}\) for \(k\leq n\), let \(f\in D^{(k)}\) and \(a,b\in U^{k}\). Repeating the last entry of \(a,b\), respectively, we obtain \(n\)-tuples \(a^{\prime}:=(a_{1},\ldots,a_{k},\ldots a_{k}),b^{\prime}:=(b_{1},\ldots,b_{k},\ldots b_{k})\) over \(U\). Since \((a_{1},b_{1}),\ldots,(a_{k},b_{k})\) are in \(\sigma_{a^{\prime},b^{\prime}}\) and \(f\in\operatorname{Pol}(\sigma_{a^{\prime},b^{\prime}},=_{B})\), we have \(f(a)=f(b)\). Thus \(f\in C\) and \(C^{(k)}=D^{(k)}\) for all \(k\leq n\). To show that \(C^{(n+1)}\neq D^{(n+1)}\) let \(0,1\) be distinct elements in \(B\) and define \[g\colon A^{n+1}\to B,\ (x_{1},\ldots,x_{n+1})\mapsto\begin{cases}1&\text{ if }x_{1}, \ldots,x_{n+1}\text{ generate }U,\\ 0&\text{ else.}\end{cases}\] Since there exist \(n+1\) elements that generate \(U\) by assumption, \(g\) is not constant on \(U\), hence not in \(C\). 
However \(g\) is constant \(0\) on all proper subalgebras of \(U\) and in particular on all \(n\)-generated subalgebras. Thus \(g\in D^{(n+1)}\setminus C^{(n+1)}\). Even if \(\mathbf{A}\) is a cyclic module, not all clonoids from \(\mathbf{A}\) to \(\mathbf{B}\) need be generated by unary functions, as the following lemma shows. **Lemma 5.2**.: _Let \(\mathbf{R}\) be a finite ring with nonzero Jacobson radical, let \(\mathbf{A}\) be the left regular module over \(\mathbf{R}\), and let \(\mathbf{B}\) be a nontrivial algebra. Then not every clonoid from \(\mathbf{A}\) to \(\mathbf{B}\) is generated by unary functions._ Proof.: Since \(\mathbf{R}\) is a finite ring, its Jacobson radical \(J:=J(\mathbf{R})\) is nilpotent. It follows that \(\mathbf{R}\) has a nonzero ideal \(I\leq J\) such that \(I^{2}=0\). Note that \(I\) is contained in every maximal left ideal of \(\mathbf{R}\) since \(J\) is the intersection of all maximal left ideals of \(\mathbf{R}\). We construct clonoids \(C,D\) from \(\mathbf{A}\) to \(\mathbf{B}\) such that \(C^{(1)}=D^{(1)}\) but \(C^{(2)}\neq D^{(2)}\). For any submodule \(M\) of \(\mathbf{A}\), let \(\rho_{M}:=M^{2}\). Then \[C:=\operatorname{Pol}(\equiv_{I},=)\cap\bigcap_{M<\mathbf{A}}\operatorname{Pol}(\rho_{M},=)\] is the clonoid of functions from \(\mathbf{A}\) to \(\mathbf{B}\) that are constant modulo \(I\) and constant on every proper submodule of \(\mathbf{A}\). For \(a\in I\), let \(\sigma_{a}:=R(1,1+a)\) be a submodule of \(\mathbf{A}^{2}\). Note that \(\sigma_{a}\) is contained in \(\equiv_{I}\). Let \[D:=\bigcap_{a\in I}\operatorname{Pol}(\sigma_{a},=)\cap\bigcap_{M<\mathbf{A}}\operatorname{Pol}(\rho_{M},=).\] Then \(f\in D^{(k)}\) iff \(f(x_{1},\ldots,x_{k})=f(x_{1}+x_{1}a,\ldots,x_{k}+x_{k}a)\) for all \(x_{1},\ldots,x_{k}\in R\) and all \(a\in I\) and \(f\) is constant on every proper submodule of \(\mathbf{A}\). Since all elements \(x_{i}a\) in the previous identity are in \(I\), it follows that \[C\subseteq D. \tag{5.1}\] We claim that \[C^{(1)}=D^{(1)}. \tag{5.2}\] For the proof let \(f\in D^{(1)}\) and \(x\in A\). First consider the case that \(Rx\neq R\). Then \(x\) is contained in some maximal submodule \(M\) of \(\mathbf{A}\) and \(I\leq M\) since \(I\) is contained in every maximal submodule. Hence \(x+I\subseteq M\) implies \(f(x+I)=f(x)\). Next suppose that \(Rx=R\). Then \(x\) has a multiplicative left inverse in \(R\). Since \(I\) is finite, \(xI=I\). Since \(f(x)=f(x+xI)\) by assumption, we see that \(f(x)=f(x+I)\). Hence \(f\) is constant on \(I\)-cosets and \(f\in C\). Together with (5.1) this implies (5.2). Next we claim that \[C^{(2)}\neq D^{(2)}. \tag{5.3}\] To see this let \(0,1\) be distinct elements in \(B\) and define \[g\colon A^{2}\to B,\,(x,y)\mapsto\begin{cases}1&\text{ if }x=y\equiv_{I}1,\\ 0&\text{ else.}\end{cases}\] Clearly \(g\notin C\) as for nonzero \(a\in I\) we have \((1,1)\equiv_{I}(1+a,1)\) but \(g(1,1)=1\neq 0=g(1+a,1)\). For \(g\in D\) we first show that \(g\in\operatorname{Pol}(\sigma_{a},=)\) for all \(a\in I\). Let \((x,y)\in A^{2}\) and \(a\in I\). If \(x\not\equiv_{I}1\), then also \(x(1+a)=x+xa\not\equiv_{I}1\) and \(g(x,y)=0=g(x(1+a),y(1+a))\). The symmetric argument works if \(y\not\equiv_{I}1\). Hence we assume \(x,y\in 1+I\) in the following, say \(x=1+u\) and \(y=1+v\) for some \(u,v\in I\). If \(u\neq v\), then \(g(x,y)=0\) and also \[g(x(1+a),y(1+a))=g(1+u+a+\underbrace{ua}_{=0},1+v+a+\underbrace{va}_{=0})=0.\] Here we used the assumption that \(I^{2}=0\).
If \(u=v\), then similarly \(g(x,y)=1\) and \[g(x(1+a),y(1+a))=g(1+u+a,1+u+a)=1.\] Hence \(g\in\operatorname{Pol}(\sigma_{a},=)\). Now let \(M\) be a proper submodule of \(\mathbf{A}\); we show that \(g\in\operatorname{Pol}(\rho_{M},=)\). Note that by the nilpotence of \(J\), every element in \(1+J\) is a unit in \(\mathbf{R}\). Hence in particular \(1+I\) and \(M\) are disjoint. Thus \(g(M,M)=0\). Therefore \(g\in D^{(2)}\setminus C^{(2)}\), proving (5.3) and that \(D\) is not generated by its unary functions. Next we prove Theorem 1.5 which characterizes the finite modules over commutative rings for which all clonoids are generated by their unary functions. Proof of Theorem 1.5.: The implication \((2)\Rightarrow(1)\) follows from Theorem 1.4. For \((1)\Rightarrow(2)\), assume that every clonoid from the \(\mathbf{R}\)-module \(\mathbf{A}\) to \(\mathbf{B}\) is generated by unary functions. By Lemma 5.1, \(\mathbf{A}\) is a cyclic module. So \(\mathbf{A}\) is isomorphic to \(R/L\) for some left ideal \(L\) of \(\mathbf{R}\). Since \(\mathbf{R}\) is commutative, \(L\) is a (two-sided) ideal of \(\mathbf{R}\). Now \(L(\mathbf{R}/L)=0\) yields \(LA=0\) and \(L=0\) since \(\mathbf{A}\) is a faithful \(\mathbf{R}\)-module. Hence \(\mathbf{A}\) is isomorphic to the regular \(\mathbf{R}\)-module. Since the Jacobson radical \(J(\mathbf{R})=0\) by Lemma 5.2 and \(\mathbf{R}\) is finite, the Wedderburn-Artin Theorem yields that \(\mathbf{R}\) is isomorphic to a direct product of matrix rings over finite fields. Since \(\mathbf{R}\) is commutative, each matrix ring has dimension \(1\times 1\) and \(\mathbf{R}\) is a direct product of finite fields. Finally we construct an infinite ascending chain of clonoids between any two finite modules whose orders have a nontrivial common divisor to prove Theorem 1.3. So in this case we attain the upper bound guaranteed by Theorem 2.9. Proof of Theorem 1.3.: Let \(p\) be a common prime divisor of \(|A|\) and \(|B|\). Then \(\mathbf{A}\) has a simple quotient \(\mathbf{A}^{\prime}\) of \(p\)-power order and \(\mathbf{B}\) has a simple subalgebra \(\mathbf{B}^{\prime}\) of \(p\)-power order. By the Jacobson Density Theorem \(\mathbf{A}^{\prime}\) and \(\mathbf{B}^{\prime}\) are modules over full matrix rings over (not necessarily the same) finite fields of characteristic \(p\). In particular we can expand \(\mathbf{A}^{\prime}\) to a module \(\mathbf{A}^{+}=\mathbb{Z}_{p}^{m}\) over the matrix ring \(\mathbf{R}:=\mathbb{Z}_{p}^{m\times m}\) and \(\mathbf{B}^{\prime}\) to a module \(\mathbf{B}^{+}=\mathbb{Z}_{p}^{n}\) over \(\mathbf{S}:=\mathbb{Z}_{p}^{n\times n}\) for some natural numbers \(m\) and \(n\). By Lemmas 2.3 and 2.4 it suffices to construct infinitely many clonoids from \(\mathbf{A}^{+}\) to \(\mathbf{B}^{+}\). For ease of notation assume \(\mathbf{A}=\mathbf{A}^{+}\) and \(\mathbf{B}=\mathbf{B}^{+}\) in the following. For \(x=(x_{1},\ldots,x_{m})\) and \(y=(y_{1},\ldots,y_{m})\) in \(A\) define \[xy:=(x_{1}y_{1},\underbrace{0,\ldots,0}_{m-1})\in A\quad\text{ and }\quad x^{\prime}:=(x_{1},\underbrace{0,\ldots,0}_{n-1})\in B.\] Here \(x_{1}y_{1}\) denotes the usual product in \(\mathbb{Z}_{p}\). For \(k\in\mathbb{N}\) let \[f_{k}\colon A^{k}\to B,\ (x_{1},\ldots,x_{k})\mapsto(x_{1}\cdots x_{k})^{\prime}.\] We claim that \[\langle f_{1}\rangle\subsetneq\langle f_{1},f_{2}\rangle\subsetneq\langle f_{1},f_{2},f_{3}\rangle\subsetneq\cdots \tag{5.4}\] is an infinite strictly ascending sequence of clonoids from \(\mathbf{A}\) to \(\mathbf{B}\).
For the proof first note that \(f_{k}\) is linear in each argument and in particular zero-absorbing for all \(k\in\mathbb{N}\). We show that \(\langle f_{1},\ldots,f_{k}\rangle\) does not contain any zero-absorbing functions of arity \(k+1\) except for the constant zero function. To this end let \(g\in\langle f_{1},\ldots,f_{k}\rangle^{(k+1)}\). By the multilinearity of the \(f_{i}\) we can expand \(g\) into the form \[g(x_{1},\ldots,x_{k+1})=\sum_{\emptyset\neq I\subsetneq[k+1]}\left(\sum_{j=1} ^{\ell_{I}}s_{I,j}\left(\prod_{i\in I}x_{i}^{e_{I,j,i}}\right)^{\prime}\right)\] for some \(\ell_{I}\in\mathbb{N},s_{I,j}\in\mathbb{Z}_{p}^{n\times n}\) and \(e_{I,j,i}\in[p]\). Note that no product in \(A\) has more than \(k\) factors. We rewrite the summands of the outer sum of \(g\) as \[g(x_{1},\ldots,x_{k+1})=\sum_{\emptyset\neq I\subsetneq[k+1]}a_{I}(x_{1}, \ldots,x_{k+1})\left(\prod_{i\in I}x_{i}\right)^{\prime} \tag{5.5}\] for some \(a_{I}\colon A^{k+1}\to\mathbb{Z}_{p}^{n\times n}\) that do not depend on \(x_{j}\) for any \(j\in[k+1]\setminus I\). Assume that \(g\) is zero-absorbing. We show that then every summand \(a_{I}(x_{1},\ldots,x_{k+1})\left(\prod_{i\in I}x_{i}\right)^{\prime}\) in (5.5) is \(0\) by induction on \(I\subsetneq[k+1]\). The base case \(a_{\emptyset}(x)=0\) holds by the very form of \(g\). For \(I\subsetneq[k+1]\) and for \(x\in A^{k+1}\), define \(x^{I}=(x_{1}^{I},\ldots,x_{k+1}^{I})\) in \(A^{k+1}\) by \[x_{i}^{I}:=\begin{cases}x_{i}&\text{if }i\in I,\\ 0&\text{else}.\end{cases}\] Then \[0 =g(x^{I})\quad\text{since $g$ is zero-absorbing and $|I|\leq k$},\] \[=\sum_{\emptyset\neq J\subseteq I}a_{J}(x^{I})\left(\prod_{i\in J}x _{i}\right)^{\prime}\quad\text{since $\prod_{i\in J}x_{i}^{I}=0$ unless $J\subseteq I$},\] \[=\sum_{\emptyset\neq J\subseteq I}a_{J}(x)\left(\prod_{i\in J}x_{i }\right)^{\prime}\quad\text{since $a_{J}$ does not depend on $x_{i}$ for $i\not\in I$},\] \[=a_{I}(x)\left(\prod_{i\in I}x_{i}\right)^{\prime}\quad\text{by induction assumption}.\] Thus the induction step is proved and \(g=0\) is the only \(k+1\)-ary zero-absorbing function in \(\langle f_{1},\dots,f_{k}\rangle\). In particular \(f_{k+1}\) is not in \(\langle f_{1},\dots,f_{k}\rangle\). This proves (5.4). Clearly the union of clonoids in (5.4) is not finitely generated.
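To make the first step of the chain (5.4) tangible in the smallest case \(p=2\), \(m=n=1\) (so \(\mathbf{A}^{+}=\mathbf{B}^{+}=\mathbb{Z}_{2}\) and \(f_{k}(x_{1},\ldots,x_{k})=x_{1}\cdots x_{k}\)), the following Python sketch (our illustration, not part of the proof) enumerates the binary members of \(\langle f_{1}\rangle\) and confirms that \(f_{2}\) is not among them.

```python
from itertools import product

# Smallest case of the chain (5.4): A = B = Z_2, f_k(x_1,...,x_k) = x_1*...*x_k.
# The binary part of the clonoid <f_1> consists of outer Z_2-combinations of
# minors f_1(r(x)) with r a linear term function, i.e. of Z_2-linear maps.
points = list(product(range(2), repeat=2))

minors = set()
for a, b in product(range(2), repeat=2):   # inner substitutions r(x) = a*x1 + b*x2
    for c in range(2):                     # outer scaling in B
        g = tuple(c * ((a * x1 + b * x2) % 2) % 2 for x1, x2 in points)
        minors.add(g)
# close under pointwise addition in B (sums of minors)
closed = set(minors)
while True:
    new = {tuple((u + v) % 2 for u, v in zip(g, h)) for g in closed for h in closed}
    if new <= closed:
        break
    closed |= new

f2 = tuple((x1 * x2) % 2 for x1, x2 in points)
print(sorted(closed))   # the four linear functions 0, x1, x2, x1+x2
print(f2 in closed)     # False: f_2 is zero-absorbing but not linear
```

The output lists exactly the four linear functions, so \(f_{2}\notin\langle f_{1}\rangle\) in this instance, in line with the zero-absorption argument above.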
2302.14805
Optimization of a three-phase Induction Motor for Electric Vehicles Based on Hook-Jews Optimization Method
In this paper, the Hook-Jews (HJ) optimization method is used to optimize a 3-phase Squirrel-Cage Induction Motor (SCIM) as an Electric Vehicle's (EV) motor. Optimal designs with different numbers of poles, different nominal and maximum speeds, and different numbers of grooves are compared and the best one is selected. The optimization method used has advantages such as simple programming, omission of gradients, short convergence times, and the possibility of changing individual parameters. Design parameter variations for optimal designs with rated speeds for 2-pole and 4-pole motors are shown and explained. The results show that the 2-pole motor with the rectangular stator and rotor slots and a rated speed of 1800 rpm has the highest efficiency.
Arash Mousaei, Sahar Aziz Mohammadabadi
2023-02-28T17:55:22Z
http://arxiv.org/abs/2302.14805v1
Optimization of a three-phase Induction Motor for Electric Vehicles Based on Hook-Jews Optimization Method ###### Abstract In this paper, the Hook-Jews (HJ) optimization method is used to optimize a 3-phase Squirrel-Cage Induction Motor (SCIM) as an Electric Vehicle's (EV) motor. Optimal designs with different numbers of poles, different nominal and maximum speeds, and different numbers of grooves are compared and the best one is selected. The optimization method used has advantages such as simple programming, omission of gradients, short convergence times, and the possibility of changing individual parameters. Design parameter variations for optimal designs with rated speeds for 2-pole and 4-pole motors are shown and explained. The results show that the 2-pole motor with the rectangular stator and rotor slots and a rated speed of 1800 rpm has the highest efficiency. Electric Vehicles (EVs), Squirrel-Cage Induction Motor (SCIM), Optimization ## I Introduction Car traffic is the main cause of respiratory and noise pollution in big cities. Using Electric Vehicles (EVs) can be the solution to these two problems. Among the advantages of EVs is the possibility of accessing electric energy through the electric energy distribution system. The big disadvantage of these cars is the low power density and long battery charging time. Therefore, in addition to proper vehicle control and proper energy management, optimal design of its various parts, in particular the motor, is also necessary. The topic of EVs goes back to the beginning of the 20th century (around 1916). But due to the abundance of fossil energy, the lack of awareness of the problem of air pollution, and the limitation of electrical energy resources, it was forgotten for a long time. Around the year 1980, the crisis of fossil energies due to their reduction and the sharp increase in the price of fuel, together with noise and respiratory air pollution, drew attention to EVs again. In this way, different EVs with different Electric Motors (EMs) were designed and built. Among EMs, DC motors are considered for their simple control system, and Switched Reluctance Motors (SRMs) for their simple construction and easy control. The problems caused by the commutator and brush system of DC motors and the high noise of SR motors drew more attention to AC motors. In recent years, Induction Motors (IMs) have become very popular due to their simple construction, low price, and high energy density. The main requirements of EV's motors are [2]: high torque-to-inertia ratio and high power-to-weight ratio, high breakdown torque (up to 400 percent), high speed, low noise level, no need for maintenance, small size and light weight, simple control, reasonable price, high efficiency at different speeds (low and high), and the ability to return energy when braking or moving downhill. It should be mentioned that the squirrel-cage induction motor meets most of these requirements. Different optimization methods have been used for the optimal design of induction motors [3] to [16]. These methods are Random Search (RS), Direct Search (DS) or Hook-Jews (HJ), Simplex (S), Paved (P), the Davidon-Fletcher-Powell (DFP) method, Steepest Descent (SD), direct search based on the Rosenbrock method, First-Order Gradient (FOG), Monica (M), the Niching Genetic Algorithm, and optimization based on Neural Networks (NN). Table I compares the results obtained from five different optimization methods with the objective function of the price of consumables, in the case of a 7.5 kW IM with four poles [6] and [7].
The comparison of the results in Table I shows that the HJ method has a better performance. Based on this, and also the recommendations of various authors [3] to [5], and [10], the HJ method is used in this paper. The nominal power is determined based on the intended efficiency of the EV. The system's voltage is selected based on the available possibilities. Choosing a higher voltage provides the possibility of better performance; however, some problems such as the limitation of the voltage of the batteries limit this possibility. The number of grooves can be calculated based on design standards. The nominal speed, the number of poles, and the shape of the grooves should be examined more precisely; based on these, the best design can be obtained. In this article, the effect of the number of poles, the nominal speed, and the shape of the grooves on the efficiency of the three-phase EV's IM is studied and evaluated. The 15-hp three-phase IM for the EV is optimally designed with different values of these parameters and finally the optimal design is introduced. The article consists of six sections. In the second section, the effect of the number of poles, the nominal speed, and the shape of the grooves on the efficiency of the motor is studied. In the third section, the method of calculating the motor efficiency is presented. In the fourth section, the optimal design is studied. In the fifth section, a 15 hp sample motor is designed and the effects of the abovementioned parameters are discussed and investigated. Finally, the conclusion is made in the last section. ## II The Effect of the Number of Poles, Grooves Type, and Nominal Speed on the Efficiency of the Motor The EV's IM must work in a wide range of speeds (zero to several thousand rotations per minute). In addition, other requirements of EVs, such as sufficient torque in the acceleration mode or when working at a constant speed, a small mechanical time constant (fast dynamics), and so on, should be provided by the IM.
Due to the limitation of battery energy, the available energy should be used in the best way, so the motor should have high efficiency, and its design should be such that the torque fluctuations (caused by the harmonics of the power source) are reduced within this speed range and the noise is low. In order to have fast dynamics, the system should be as light as possible; alternatively, one can consider the goal of minimum volume so that the motor occupies the least space. Therefore, its design is different from the design of standard industrial IMs. With a Voltage Source Inverter (VSI) supply, the leakage inductance should be high in order to limit the current harmonics, but not so high that the motor enters the unstable operating range at different speeds. To reduce the switching frequency, and ultimately the cost and the switching losses, and also to reduce the skin effect and the related losses, deep grooves should not be used. Because the motor with an inverter power supply will not have the problem of starting torque, there is no need to use deep rotor grooves [13], [17] to [20]. In order to increase the acceleration (both positive and during braking), the nominal values of the elements of the power converter (inverter) and the radius of the rotor should be as small as possible, and the length of the rotor should instead be selected large. Because the power supply voltage range is limited due to the use of a battery, for a specific current density and conductor thickness, a large groove area is necessary. Considering the high speed of the motor, the diameter of the rotor must be small, so the number of grooves per pole is small. Accordingly, the number of motor poles is chosen between 2 and 4 [13] and [20]. Considering the high supply frequency of the motor, the skin effect and the losses caused by it can be reduced by choosing an appropriate shape for the rotor groove. For this reason, usually in high-speed IMs (such as EV motors) the shape of the rotor groove is rectangular, with its length and width very close to each other. In addition, the voltage can be increased to better distribute the flux in different parts of the groove and to reduce the harmonic losses. The maximum speed of the motor depends on the selected power transmission system and the maximum speed of the vehicle, but the nominal and base speed (the base speed is selected equal to the nominal speed) also depend on the number of poles, the type of groove, the losses, and the other operating conditions of the motor. Choosing the nominal speed, with regard to the working frequency of the machine at this speed, has a direct effect on the switching losses and harmonics. On the other hand, it also affects the moment of inertia of the motor, so choosing the optimal nominal speed can at the same time provide suitable transients for the motor and ultimately for the EV. ## III Calculation of Efficiency Characteristics In the inverter feeding of the IM, in addition to the first (main) harmonic, individual harmonics also appear, and hence the efficiency characteristics of this motor are different from those with sinusoidal feeding. The most important losses in the IM with inverter feeding can be classified and calculated as follows (of course, there are other losses that are considered in the design software, but they are omitted here for the sake of simplicity).
### _Core losses_ The total core losses in an inverter-fed IM can be calculated using equation 1: \[P_{c}=\sum_{m}P_{cm}=\sum_{m}(P_{hm}+P_{em}) \tag{1}\] The eddy-current (Foucault) and hysteresis losses of each harmonic are first calculated per unit weight for the different parts of the magnetic circuit of the motor (core) according to the following relations; multiplying by the weight of the relevant part then gives the harmonic losses of each part separately for each harmonic: \[P_{hm}=\sum_{i}P_{hmi}=\sum_{i}G_{i}\,p_{hmi}=\sum_{i}G_{i}K_{h}\sigma_{h}f_{m}B_{mmi}^{k} \tag{2}\] \[P_{em}=\sum_{i}P_{emi}=\sum_{i}G_{i}\,p_{emi}=\sum_{i}G_{i}K_{e}t^{2}f_{m}^{2}B_{mmi}^{2}/\rho_{i} \tag{3}\] ### _Resistive losses of the stator and rotor_ The total resistive losses of the stator and rotor can be calculated as the sum of the losses of the different current harmonics: \[P_{\Omega}=\sum_{m}3(R_{sm}I_{sm}^{2}+R_{rm}I_{rm}^{2}) \tag{4}\] ### _Mechanical losses (friction, ventilation, and air resistance)_ These losses can be calculated at different speeds with the help of the following relationship: \[P_{fw}=8D_{r}(L+0.15)v_{a}^{2} \tag{5}\] Finally, the total losses, input power, and efficiency are: \[P_{loss}=P_{\Omega}+P_{c}+P_{fw}+(P_{p}+P_{K}+P_{Z}+P_{bl}) \tag{6}\] \[P_{in}=\sum_{m}3V_{sm}I_{sm}\cos\varphi_{m} \tag{7}\] \[\eta=(P_{in}-P_{loss})/P_{in} \tag{8}\] ### _Nominal torque and breakdown torques at the nominal and maximum speeds_ The values of these torques are obtained from the following relations: \[T_{n}=\sum_{m}1.5PR_{rm}I_{rm}^{2}/(2\pi f_{m}s_{m}) \tag{9}\] \[T_{pb}=1.5E_{s1}^{2}/(X_{r1}\omega_{s})=T_{n}R_{r1}/X_{r1} \tag{10}\] \[T_{pm}=(f_{b}/f_{max})^{2}T_{pb} \tag{11}\] ### _Inertia coefficient_ This coefficient, which determines the positive and negative accelerations of the motor, is calculated as follows: \[H=0.5J\omega_{r}^{2}/Q \tag{12}\] ## IV Optimum Design of the Motor Various methods have been presented for the optimal design of IMs [3]-[16], and they can also be used for the optimal design of an EV's IM. In this article, the HJ method has been used for this purpose [3]-[5] and [10]-[12]. In this section, only the objective functions, constraints, and optimization variables of the EV's IM are mentioned. There are different points of view on the optimal design of a motor: minimum cost, maximum efficiency, minimum volume or weight, good performance (such as low slip, high power factor, etc.), and a multi-faceted view (a combination of different views). In an EV, considering the limited energy and power of the battery, these must be used in the best way, so efficiency is certainly important. In addition, to reduce the weight of the whole car, in other words to reduce the energy consumption, the motor should be lightweight; therefore, for the optimal design of an EV's motor, a multi-faceted objective function combining weight and efficiency can be chosen. Of course, it is viable to make designs with different combinations of these points of view and to choose the one that leads to the most suitable result as the optimal design. In this article, considering that the main goal is to compare the effect of the number of poles, the type of groove, and the nominal speed on the steady-state and transient performance of the motor, only the efficiency objective function is considered.
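To make the loss bookkeeping of Eqs. (1)-(8) concrete before discussing the optimization constraints, the following minimal Python sketch (our own illustration; all numerical values are placeholders, not data from the paper's design software, and the material factors \(\sigma_{h}\) and \(t^{2}/\rho_{i}\) are folded into the coefficients \(K_{h}\) and \(K_{e}\)) sums the harmonic core, resistive, and mechanical losses and returns the efficiency.

```python
# Placeholder per-harmonic electrical data: (R_s, R_r, I_s, I_r, V_s, cos_phi).
harmonics = [
    (0.05, 0.04, 80.0, 75.0, 55.0, 0.86),   # fundamental
    (0.06, 0.05, 7.0, 6.5, 11.0, 0.30),     # 5th harmonic
]
freqs = [60.0, 300.0]                        # Hz of each harmonic (placeholder)
G_parts = [12.0, 18.0]                       # kg, masses of the core parts
B_parts = [[1.1, 0.9], [0.08, 0.06]]         # T, peak flux density B_mmi per part

K_h, K_e, k = 0.02, 5e-5, 2.0                # material coefficients (placeholder)

def core_losses(f_m, B_mi):
    """Eqs. (2)-(3): per-harmonic hysteresis + eddy losses, summed over parts i."""
    P_h = sum(G * K_h * f_m * B ** k for G, B in zip(G_parts, B_mi))
    P_e = sum(G * K_e * f_m ** 2 * B ** 2 for G, B in zip(G_parts, B_mi))
    return P_h + P_e

P_c = sum(core_losses(f, B) for f, B in zip(freqs, B_parts))          # Eq. (1)
P_ohm = sum(3 * (Rs * Is ** 2 + Rr * Ir ** 2)
            for Rs, Rr, Is, Ir, _, _ in harmonics)                    # Eq. (4)
D_r, L_core, v_a = 0.12, 0.20, 25.0    # rotor diameter (m), length (m), speed (m/s)
P_fw = 8 * D_r * (L_core + 0.15) * v_a ** 2                           # Eq. (5)

P_in = sum(3 * Vs * Is * pf for _, _, Is, _, Vs, pf in harmonics)     # Eq. (7)
P_loss = P_ohm + P_c + P_fw            # Eq. (6), stray-load terms omitted here
eta = (P_in - P_loss) / P_in           # Eq. (8)
print(f"efficiency = {eta:.3f}")
```

Each additional supply harmonic simply contributes one more entry to the lists.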
In the design optimization, other goals are also pursued, such as improving, or at least maintaining, the desired performance of the motor; these can be regarded as secondary goals. Enlarging the objective function into a multidimensional function with many different components slows the optimization process, narrows the search scope of the optimization software, and would probably yield rather unfavorable results. Therefore, these secondary objectives are not included directly in the objective function but are applied as constraints on the optimization process. The most important constraints considered in this article are: minimum power factor (0.85), maximum temperature rise (75 \({}^{\circ}\)C), minimum produced torque, minimum ratio of breakdown torque to nominal torque at the nominal speed (1.5), minimum breakdown torque at the maximum speed (3.5 Nm), maximum rotor peripheral speed (120 m/s), maximum rotor time constant (4 s), maximum flux density of the stator tooth (1.2 T), maximum total cost (if necessary), and maximum weight or volume (if it is not part of the objective function). There are many design variables for an IM. Often, a larger number of optimization variables gives a better optimization result, but the speed of convergence is greatly reduced and it becomes difficult to control the optimization process. Therefore, we try as much as possible to avoid increasing the number of variables and to use, as the main optimization variables, those that have the greatest effect on the optimal design. Based on this, in this article these variables are: the inner diameter of the stator, the length of the core, the width and depth of the stator groove, the width and depth of the rotor groove, the depths of the stator and rotor rings, the length of the air gap, the cross-sectional area of the rings on the outer part of the rotor, and the air gap's flux density. In addition to these variables, other quantities, including the working voltage, the nominal and maximum speeds, the numbers of stator and rotor grooves, the types of stator and rotor grooves, and the number of poles, can also be considered. In this article, the working voltage of the motor has been chosen equal to 96 V due to the limited battery voltage and other restrictions. The maximum speed of the motor has been selected according to the maximum speed requested for the vehicle, 900 rpm. The numbers of stator and rotor slots, after repeated optimization runs, are 18 and 13 for the 2-pole motor, and 24 and 18 for the 4-pole motor, respectively. The nominal speed, the type of grooves, and the number of poles are the parameters discussed at greater length in this article: different optimal designs with different values of the nominal speed, the type of groove, and the number of poles are examined, their efficiencies are compared, and finally, based on the comparisons, the best design is presented. ## V Optimal Design of the Sample Motor In this part, a squirrel-cage IM for an EV is designed, and the effect of choosing the number of poles, the type of stator and rotor slots, and the nominal speed on the performance characteristics of the motor is examined and analyzed. In the design, the first five harmonics of the supply voltage (the 5th, 7th, 11th, 13th, and 17th harmonics) are taken into account with values of 0.972, 0.088, 0.019, 0.015, and 0.050, respectively. The nominal power of the motor is 15 hp and the maximum speed of the motor is selected as 900 rpm.
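Because the HJ (Hooke-Jeeves) pattern search drives all of the designs reported below, a generic sketch of the method may be helpful. This is a minimal illustration of ours, not the paper's design code: the objective is a stand-in for the negated efficiency with a quadratic penalty enforcing one power-factor-like constraint.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Generic Hooke-Jeeves pattern search (minimization). Exploratory moves
    probe each coordinate; a pattern move extrapolates along the direction
    of the last improvement; the mesh shrinks when no move helps."""
    x_base = np.asarray(x0, dtype=float)
    f_base = f(x_base)
    for _ in range(max_iter):
        x_new, f_new = x_base.copy(), f_base      # exploratory search
        for i in range(len(x_new)):
            for delta in (step, -step):
                trial = x_new.copy(); trial[i] += delta
                f_trial = f(trial)
                if f_trial < f_new:
                    x_new, f_new = trial, f_trial
                    break
        if f_new < f_base:
            x_pattern = x_new + (x_new - x_base)  # pattern move
            x_base, f_base = x_new, f_new
            if f(x_pattern) < f_base:
                x_base, f_base = x_pattern, f(x_pattern)
        else:
            step *= shrink                        # refine the mesh
            if step < tol:
                break
    return x_base, f_base

def objective(x):
    """Toy stand-in: maximize an 'efficiency' (minimize its negative) with a
    penalty when a toy power factor falls below 0.85."""
    eff = 1.0 - 0.02 * (x[0] - 3.0) ** 2 - 0.01 * (x[1] - 1.5) ** 2
    pf = 0.8 + 0.05 * x[1]
    penalty = 100.0 * max(0.0, 0.85 - pf) ** 2
    return -eff + penalty

x_opt, f_opt = hooke_jeeves(objective, x0=[1.0, 1.0])
print(x_opt, -f_opt)
```

In the actual design program, the objective would evaluate the full loss model of Section III and the penalty terms would cover all of the constraints listed in Section IV.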
According to the comparisons in Section II, the choice of the number of poles is limited to 2 and 4. To compare the effect of the grooves, two limiting cases, rectangular and round grooves, are studied. Considering that increasing the nominal speed leads to a decrease in the efficiency of the system, the nominal speed range is chosen from 1600 rpm to 2000 rpm. The numbers of stator and rotor slots are 18 and 13, respectively, for the 2-pole design, and 24 and 18 for the 4-pole design. Tables II and III show the results of the different optimal designs for rectangular stator and rotor grooves, and for rectangular stator grooves with round rotor grooves, respectively. It is observed that for rectangular stator and rotor grooves: * The volume of the 2-pole motor for the three nominal speeds is significantly less than that of the 4-pole motor (43 % on average). * The inner diameter of the stator of the 2-pole motor is smaller and, on the contrary, the outer diameter of the stator is larger than those of the 4-pole motor. * Contrary to its larger volume, the weight of the 4-pole motor is smaller. * The moment of inertia of the rotor of the 2-pole motor is significantly less than that of the 4-pole motor (49 % on average). * The efficiency of the 2-pole motor is 0.96 % higher than that of the 4-pole motor. The average power factor of the 2-pole motor is 17 % higher than that of the 4-pole motor; the power-factor constraint is not met by the 4-pole motor but is met by the 2-pole motor. * The breakdown torque at the maximum speed of the 2-pole motor is higher (2.85 % on average), so the 2-pole motor can carry more overload at the maximum speed. * The breakdown torque at the nominal speed of the 2-pole motor is on average 3.04 % higher, so the 2-pole motor can carry a larger additional load at the nominal speed and also achieve a larger dynamic acceleration. * The temperature-rise constraint is satisfied in all 2-pole designs, but among the 4-pole designs only at the nominal speed of 1600 rpm. * The price of the 2-pole designs is higher (on average 14.35 %). Almost similar results are obtained for the designs with rectangular stator grooves and round rotor grooves (Table III). In general, it can be concluded that the designs with rectangular stator and rotor grooves perform better than those with rectangular stator grooves and round rotor grooves. In order to compare the effect of the nominal speed on the performance of the motors more thoroughly, a number of the parameters in Table II are plotted as curves against the nominal speed. Figures 1A to 1E show, in order, the curves of the stator inner diameter, the core length, the core volume, the weight of the core and windings, the moment of inertia, the cost, the breakdown torques at the maximum and nominal speeds, and the efficiency and power factor, as the nominal speed varies from 1400 rpm to 2800 rpm for the 2-pole designs. Figures 2A to 2E show the same curves for the 4-pole designs in the nominal speed range of 1400 rpm to 2100 rpm. For both the 2-pole and 4-pole designs, increasing the nominal speed on average decreases the core length, the stator inner diameter, the volume and weight, the moment of inertia, the cost, the breakdown torque at the nominal speed, the efficiency, and the power factor, whereas the breakdown torque at the maximum speed increases.
Considering that the objective function is the efficiency, it can be said that increasing the nominal speed decreases the design quality or, in other words, reducing the nominal speed improves the design. On the other hand, increasing the nominal speed is desirable from the point of view of volume, weight, moment of inertia, and price, and these matters are very important in the selection of motors for electric vehicles. So, an option should be selected according to the above requirements and the intended prioritization for the motor, so that the main goal and the secondary goals are met as far as possible. In the optimal design of the motor, only the losses of the motor itself are taken into account and the switching losses are neglected. If the switching losses were also taken into account, then, because increasing the nominal speed increases the switching losses as well as the volume and nominal ratings of the power elements, this would be a further factor limiting the nominal speed. ## VI Conclusion Examining and analyzing the different designs with their different combinations gives the following results: * Despite choosing a low supply voltage (owing to the limited voltage of the batteries) and considering five harmonics in the voltage waveform, the optimal design results are desirable. * The advantages of the 2-pole motor compared to the 4-pole motor are: small size and moment of inertia, high efficiency, better power factor, and high breakdown torque at the maximum and nominal speeds. * The disadvantages of the 2-pole motor compared to the 4-pole motor are: higher weight and price. * Rectangular grooves for the stator and rotor in the 2-pole designs lead to better motor efficiency compared to rectangular stator grooves with round rotor grooves; for the 4-pole designs, the opposite holds. * For the 2- and 4-pole designs, increasing the nominal speed in the design phase decreases the volume, weight, moment of inertia, price, breakdown torque at the nominal speed, efficiency, and power factor, whereas the breakdown torque at the maximum speed increases. Considering the objective function (efficiency), these results show that increasing the nominal speed leads to an unfavorable design, while the secondary objectives (volume, weight, and moment of inertia) call for the nominal speed to be increased; the switching losses are another factor limiting the nominal speed. * Summarizing the results of the constant-duty operating mode, the 2-pole motor with a nominal speed of 1800 rpm is the most suitable design for an electric car. ### List of symbols \begin{tabular}{l l} P\({}_{hm}\), P\({}_{em}\), P\({}_{cm}\), P\({}_{c}\) & Hysteresis, eddy-current (Foucault), and core losses of the m-th harmonic, and total core losses \\ p\({}_{hmi}\), p\({}_{emi}\) & Hysteresis and eddy-current losses per unit weight of the i-th part for the m-th harmonic \\ K\({}_{h}\), K\({}_{e}\) & Hysteresis and eddy-current coefficients \\ \(\sigma_{h}\) & For steel sheets with a thickness of 0.35 mm, it is equal to 3
\\ \end{tabular} \begin{tabular}{l l} s\({}_{m}\), f\({}_{m}\) & Slip and frequency corresponding to the m-th harmonic \\ B\({}_{mmi}\) & Maximum magnetic flux density of the m-th harmonic in the i-th part, in tesla \\ k & For today's magnetic materials, it is about 2 \\ \(\rho_{i}\), t & Specific resistance in ohm-centimeters and thickness of the steel sheets in millimeters \\ G\({}_{i}\) & Weight of the i-th part in kilograms \\ K\({}_{Em}\) & Permeability coefficient for the m-th harmonic \\ P\({}_{\Omega}\), P\({}_{fw}\) & Ohmic losses and mechanical losses in watts \\ R\({}_{sm}\), R\({}_{rm}\), X\({}_{r1}\) & Stator and rotor resistances corresponding to the m-th harmonic, and the leakage reactance of the rotor corresponding to the first harmonic \\ I\({}_{rm}\), I\({}_{sm}\), V\({}_{sm}\), \(\cos\varphi_{m}\) & Rotor and stator currents, input voltage, and power factor of the motor corresponding to the m-th harmonic, in amperes and volts \\ D\({}_{r}\), L, D\({}_{o}\), D\({}_{i}\) & Diameter and length of the rotor core, and the outer and inner diameters of the stator \\ W\({}_{s}\), d\({}_{s}\), W\({}_{r}\), d\({}_{r}\) & Width and depth of the stator and rotor grooves \\ v\({}_{a}\) & Linear (peripheral) speed of the rotor in meters per second \\ P\({}_{p}\) & Losses caused by the tooth leakage flux, in watts \\ P\({}_{K}\) & Losses caused by the leakage flux due to the skewing of the grooves \\ P\({}_{Z}\) & Losses caused by the zigzag leakage flux, in watts \\ P\({}_{bl}\) & Losses caused by the leakage flux of the rotor bars, in watts \\ T\({}_{n}\), T\({}_{pb}\), T\({}_{pm}\) & Nominal torque and breakdown torques at the nominal and maximum speeds \\ Z\({}_{t1}\), Z\({}_{s1}\) & Total impedance and stator impedance corresponding to the first harmonic, in ohms \\ f\({}_{b}\), f\({}_{max}\) & Nominal (base) and maximum frequencies \\ \(\eta\), Pf & Efficiency and power factor \\ G, V, C\({}_{t}\) & Weight, volume, and price of the core and windings \\ \(\Delta\)T, J, H & Temperature rise, moment of inertia, and inertia constant of the motor \\ \end{tabular}
2302.14811
qSWIFT: High-order randomized compiler for Hamiltonian simulation
Hamiltonian simulation is known to be one of the fundamental building blocks of a variety of quantum algorithms such as its most immediate application, that of simulating many-body systems to extract their physical properties. In this work, we present qSWIFT, a high-order randomized algorithm for Hamiltonian simulation. In qSWIFT, the required number of gates for a given precision is independent of the number of terms in Hamiltonian, while the systematic error is exponentially reduced with regards to the order parameter. In this respect, our qSWIFT is a higher-order counterpart of the previously proposed quantum stochastic drift protocol (qDRIFT), in which the number of gates scales linearly with the inverse of the precision required. We construct the qSWIFT channel and establish a rigorous bound for the systematic error quantified by the diamond norm. qSWIFT provides an algorithm to estimate given physical quantities using a system with one ancilla qubit, which is as simple as other product-formula-based approaches such as regular Trotter-Suzuki decompositions and qDRIFT. Our numerical experiment reveals that the required number of gates in qSWIFT is significantly reduced compared to qDRIFT. Particularly, the advantage is significant for problems where high precision is required; for example, to achieve a systematic relative propagation error of $10^{-6}$, the required number of gates in third-order qSWIFT is 1000 times smaller than that of qDRIFT.
Kouhei Nakaji, Mohsen Bagherimehrab, Alan Aspuru-Guzik
2023-02-28T18:02:39Z
http://arxiv.org/abs/2302.14811v2
# qSWIFT: High-order randomized compiler for Hamiltonian simulation ###### Abstract Hamiltonian simulation is known to be one of the fundamental building blocks of a variety of quantum algorithms such as its most immediate application, that of simulating many-body systems to extract their physical properties. In this work, we present qSWIFT, a high-order randomized algorithm for Hamiltonian simulation. In qSWIFT, the required number of gates for a given precision is independent of the number of terms in the Hamiltonian, while the systematic error is exponentially reduced with regard to the order parameter. In this respect, our qSWIFT is a higher-order counterpart of the previously proposed quantum stochastic drift protocol (qDRIFT), whose number of gates scales linearly with the inverse of the precision required. We construct the qSWIFT channel and establish a rigorous bound for the systematic error by using the diamond norm. qSWIFT provides an algorithm to estimate given physical quantities by using a system with one ancilla qubit, which is as simple as other product-formula-based approaches such as regular Trotter-Suzuki decompositions and qDRIFT. Our numerical experiment reveals that the required number of gates in qSWIFT is significantly reduced compared to qDRIFT. Particularly, the advantage is significant for problems where high precision is required; for example, to achieve a systematic relative propagation error of \(10^{-6}\), the required number of gates in third-order qSWIFT is 1000 times smaller than that of qDRIFT. ## I Introduction Hamiltonian simulation is a key subroutine of quantum algorithms for simulating quantum systems. Given a Hamiltonian \(H=\sum_{\ell=1}^{L}h_{\ell}H_{\ell}\), where \(h_{\ell}\geq 0\) and \(L\) is the number of terms, the task in Hamiltonian simulation is to construct a quantum circuit that approximately emulates the time evolution \(U(t):=\exp(-\mathrm{i}Ht)\) of the system for time \(t\). Several approaches have been established for this task. The conventional approach uses the Trotter-Suzuki decompositions that provide a deterministic way for Hamiltonian simulation [1; 2; 3]. The gate count of this approach scales at least linearly with the number of terms \(L\) in \(H\) [3]; we note that the gate count in [2] scales at least quadratically with \(L\). Although this scaling is formally efficient, it is impractical for many applications of interest, particularly for the electronic-structure problem in quantum chemistry, where the number of terms in a Hamiltonian is prohibitively large. An alternative approach is randomly permuting the order of terms in the Trotter-Suzuki decompositions [4]. This randomized compilation provides a slightly better scaling for gate count over Ref. [2], but the gate count still depends on the number of Hamiltonian terms quadratically. The quantum stochastic drift protocol (qDRIFT) [5] is another randomized Hamiltonian simulation approach, but its gate count is independent of the number of terms. In qDRIFT, gates of the form \(\exp(-\mathrm{i}H_{\ell}\tau)\) with small interval \(\tau\) are applied randomly with a probability proportional to the strength \(h_{\ell}\) of the corresponding term in the Hamiltonian. qDRIFT improves upon the Trotter-Suzuki approach in that its gate count is independent of \(L\) and \(\Lambda:=\max_{\ell}h_{\ell}\) (magnitude of the strongest term in the Hamiltonian), and instead depends on \(\lambda:=\sum_{\ell=1}^{L}h_{\ell}\).
However, qDRIFT has poor scaling with respect to the precision \(\varepsilon\) in contrast to that in the Trotter-Suzuki approach. We note that there are other approaches to Hamiltonian simulation with asymptotically better performance as a function of various parameters [6; 7; 8; 9; 10; 11]. Still, the approaches based on product formulae, e.g., Trotter-Suzuki decompositions and qDRIFT, are preferred for their superior performance in practice [12] and predominant usage in experimental implementations [13; 14; 15] due to their simplicity and the fact that they do not require any ancilla qubits. From this perspective, we focus on an approach based on product formulae while having better gate scaling than the previous methods. In this paper, we propose the _quantum swift protocol (qSWIFT)_, a high-order randomized algorithm having (i) better scaling with respect to the precision \(\varepsilon\) and (ii) the same scaling with respect to \(\lambda\) compared to qDRIFT. Specifically, the gate count of qSWIFT scales as \(\mathcal{O}((\lambda t)^{2}/\varepsilon^{\frac{1}{K}})\) with \(K\) as the order parameter, while that of qDRIFT scales as \(\mathcal{O}((\lambda t)^{2}/\varepsilon)\). For example, with respect to the precision \(\varepsilon\), the gate count of qSWIFT scales as \(\mathcal{O}(1/\sqrt{\varepsilon})\) for the second order and as \(\mathcal{O}(1/\varepsilon^{\frac{1}{3}})\) for the third order. Our qSWIFT algorithm shares its simplicity with the other approaches based on product formulae. It works in a system with one ancilla qubit (we refer to the qubits other than the ancilla qubit simply as the system qubits). We can construct all gate operations with \(\exp(\mathrm{i}H_{\ell}\tau)\) and the _swift operators_ \(\tilde{\mathcal{S}}^{(b)}_{\ell}:\rho\to S^{(b)}_{\ell}\rho S^{(b)\dagger}_{\ell}\), where \(S^{(b)}_{\ell}\) is a unitary transformation; the swift operators can be constructed if we can efficiently implement controlled-\(H_{\ell}\) gates, as shown in Fig. 1. In the case of qDRIFT, the entire time evolution is divided into segments, and a sampled time evolution \(\exp(\mathrm{i}H_{\ell}\tau)\) is performed in each segment (see Fig. 2(a)). In qSWIFT, we utilize the _swift circuit_ in addition to the circuit for qDRIFT. The swift circuit also has segments; in most segments, a sampled time evolution \(\exp(\mathrm{i}H_{\ell}\tau)\) is applied to the system qubits, but in the other segments, a sequence of the swift operators is performed (see Fig. 2(b)). The number of swift operators is upper bounded by about twice the order parameter. Therefore, the qSWIFT algorithm can be performed with almost no additional resources compared to qDRIFT. We will now describe in more detail how qSWIFT is carried out. First, we build the qSWIFT channel that simulates the ideal time evolution. We then establish a bound for the distance between the qSWIFT channel and the ideal channel, quantified with the diamond norm, which decreases exponentially with the order parameter. The established bound yields the desired scaling for the gate count of qSWIFT. It should be noted that the qSWIFT channel itself is not physical in the sense that it is not a completely-positive and trace-preserving (CPTP) map. Nevertheless, we can employ the qSWIFT channel to develop a procedure for measuring a physical quantity of interest, i.e., computing the expectation value of some given observable, that is exponentially more precise than the original qDRIFT with respect to the order parameter.
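To see what these exponents imply at face value, one can tabulate the asymptotic gate-count ratio between qDRIFT and \(K\)th-order qSWIFT, \(\varepsilon^{1/K}/\varepsilon=\varepsilon^{-(1-1/K)}\); the short Python snippet below (our own illustration, with all constant prefactors dropped) does this for \(\varepsilon=10^{-6}\). The constants tracked in the actual bounds reduce these ratios; the experiments summarized below find factors of about \(10^{3}\) (third order) and \(10^{4}\) (sixth order) at this precision.

```python
# Asymptotic gate counts N ~ (lam * t)**2 / eps**(1/K); prefactors are dropped,
# so these ratios are order-of-magnitude figures only.
eps = 1e-6
for K in (1, 2, 3, 6):                   # K = 1 recovers the qDRIFT scaling
    ratio = eps ** (1.0 / K) / eps       # N_qDRIFT / N_qSWIFT(K)
    print(f"K = {K}: gate-count ratio ~ {ratio:.1e}")
```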
Our numerical analysis also reveals the advantage of our qSWIFT algorithm. We show the asymptotic behavior of qSWIFT by using electronic molecular Hamiltonians with the number of qubits \(\sim 50\) and compare the performance with the other approaches based on the product formulae. Specifically, we compute the required number of gates to approximate the time evolution with the molecular Hamiltonians within a given systematic error \(\varepsilon\). We show that the number of gates in the third-order version of qSWIFT is \(10\) times smaller than that of qDRIFT when \(\varepsilon=0.001\) for every time region. A significant reduction of the number of gates is observed when \(\varepsilon=10^{-6}\); the required number of gates in third-order (sixth-order) qSWIFT is \(1,000\) (\(10,000\)) times smaller than that of qDRIFT. We also simulate our qSWIFT algorithm and the other product-formulae-based algorithms by using a quantum circuit simulator with a small-size (eight-qubit) molecular Hamiltonian. Its result is consistent with the result of the asymptotic behavior analysis. The rest of the paper is organized as follows. In Section II, we briefly review approaches for the Hamiltonian simulation based on the product formula. Section III and Section IV are dedicated to proposing and analyzing our qSWIFT algorithm. In Section III, we introduce the way of constructing the second-order qSWIFT algorithm. Then we generalize the algorithm to the higher order in Section IV. In Section V, we validate our algorithm by numerical experiments. Finally, in Section VI, we conclude with some discussions. Figure 1: Quantum circuits for implementing the swift operators \(\tilde{\mathcal{S}}^{(b)}_{\ell}:\rho\to S^{(b)}_{\ell}\rho S^{(b)\dagger}_{\ell}\) (\(b\in\{0,1\}\)). The top line corresponds to an ancilla qubit, and the bottom line corresponds to the qubits in the system. Figure 2: Examples of quantum circuits used for qDRIFT and qSWIFT. ## II Background This section covers the key background pertinent to the following sections. We begin with a brief description of Hamiltonian simulation and Trotter-Suzuki formulae in Section II.1. Then we review the qDRIFT algorithm for Hamiltonian simulation in Section II.2. ### Hamiltonian simulation by Trotter-Suzuki formulae We begin with a brief description of the Hamiltonian simulation. For a given time-independent Hamiltonian of the form \(H=\sum_{\ell=1}^{L}h_{\ell}H_{\ell}\), where \(h_{\ell}>0\) and \(H_{\ell}\) are Hermitian operators with \(\|H_{\ell}\|=1\), the task in Hamiltonian simulation is to find a good approximation of the transformation \[\mathcal{U}(t):\rho\to U(t)\rho U^{\dagger}(t), \tag{1}\] with \(U(t):=\mathrm{e}^{\mathrm{i}Ht}\) and \(t\) as a real parameter. We assume we can efficiently implement each \(\mathrm{e}^{\mathrm{i}H_{\ell}t^{\prime}}\) by quantum gates with \(t^{\prime}\) as a real number. For example, if we decompose \(H\) into a sum of tensor products of Pauli operators, we can efficiently implement each \(\mathrm{e}^{\mathrm{i}H_{\ell}t^{\prime}}\). The conventional approach for the Hamiltonian simulation is the Trotter-Suzuki decomposition [1; 2].
In the first-order Trotter-Suzuki decomposition for Hamiltonian simulation, the entire simulation for time \(t\) is divided into \(r\) segments of simulations for time \(t/r\) as \(U(t)=(U(t/r))^{r}\) and \(U(t/r)\) is approximated as \(U(t/r)\approx U_{\textsc{ts}}^{(1)}(t/r)\) with \[U_{\textsc{ts}}^{(1)}(t):=\prod_{\ell=1}^{L}\mathrm{e}^{\mathrm{i}h_{\ell}H_{\ell}t}, \tag{2}\] which yields the approximation \(U(t)\approx(U_{\textsc{ts}}^{(1)}(t/r))^{r}\) for the entire simulation. The second-order Trotter-Suzuki decomposition is given by \(U(t)\approx(U_{\textsc{ts}}^{(2)}(t/r))^{r}\) with \[U_{\textsc{ts}}^{(2)}(t):=\prod_{\ell^{\prime}=L}^{1}\mathrm{e}^{\mathrm{i}h_{\ell^{\prime}}H_{\ell^{\prime}}t/2}\prod_{\ell=1}^{L}\mathrm{e}^{\mathrm{i}h_{\ell}H_{\ell}t/2}, \tag{3}\] which serves as the base case for the recursive formula \[U_{\textsc{ts}}^{(2k)}(t):=\left[U_{\textsc{ts}}^{(2k-2)}(p_{k}t)\right]^{2}U_{\textsc{ts}}^{(2k-2)}((1-4p_{k})t)\left[U_{\textsc{ts}}^{(2k-2)}(p_{k}t)\right]^{2} \tag{4}\] for the \(2k\)th-order decomposition, where \(p_{k}:=1/(4-4^{1/(2k-1)})\). Let us discuss the Trotter-Suzuki decomposition in the channel representation. The \(2k\)th-order Trotter-Suzuki channel \(\mathcal{U}_{\textsc{ts}}^{(2k)}(t):\rho\to U_{\textsc{ts}}^{(2k)}(t)\rho U_{\textsc{ts}}^{(2k)\dagger}(t)\) is used to approximate the channel as \(\mathcal{U}(t)\approx(\mathcal{U}_{\textsc{ts}}^{(2k)}(t/r))^{r}\). For a given channel \(\mathcal{C}\), we denote by \(\mathcal{C}^{r^{\prime}}\) the \(r^{\prime}\)-repetition of \(\mathcal{C}\). Previous analytic work [2; 5] shows that \[\|\mathcal{U}(t)-(\mathcal{U}_{\textsc{ts}}^{(2k)}(t/r))^{r}\|_{\diamond}\leq\varepsilon \tag{5}\] for \(r\in\mathcal{O}(\alpha L\Lambda t(\alpha L\Lambda t/\varepsilon)^{1/2k})\) with \(\alpha:=2\cdot 5^{k-1}\), where \(\|\cdot\|_{\diamond}\) is the diamond norm. We note that \(\alpha\) here is defined so that \(r\alpha L\) is the number of gates used in the \(2k\)th-order decomposition. Hence the gate count for the \(2k\)th-order Trotter-Suzuki decomposition, denoted by \(G_{\textsc{ts}}\), is \[G_{\textsc{ts}}=r\alpha L\in\mathcal{O}\left(\frac{\alpha^{2}L^{2}\Lambda t\,(\alpha L\Lambda t)^{\frac{1}{2k}}}{\varepsilon^{\frac{1}{2k}}}\right). \tag{6}\] Notice that the gate count approaches \(\mathcal{O}(L^{2}\Lambda t)\) as the order parameter \(2k\) increases, but the prefactor scales exponentially with \(2k\). Because of this rapidly growing prefactor, Trotter-Suzuki decompositions of finite orders, typically second (\(k=1\)) or fourth order (\(k=2\)), are used in practice [5]. We note that [3] improves the gate-count scaling to linear with respect to \(L\), though it depends at least linearly on \(\lambda=\sum_{\ell=1}^{L}h_{\ell}\). However, we use the gate-count scaling in Eq. (6) to compare qSWIFT against qDRIFT [5], particularly for comparing the numerical experiments. ### Hamiltonian simulation by qDRIFT Developed by Campbell [5], qDRIFT is an algorithm for Hamiltonian simulation using a randomized procedure. While the procedure is randomized, with many repetitions the evolution stochastically drifts towards the target unitary.
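Before detailing the randomized protocol, the following self-contained Python sketch (our own toy two-qubit example, not code from the paper) realizes the first- and second-order product formulas of Eqs. (2) and (3) with dense matrices and shows the expected \(\mathcal{O}(1/r)\) and \(\mathcal{O}(1/r^{2})\) error decay.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-qubit Hamiltonian H = sum_l h_l H_l with Pauli-string terms.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
terms = [np.kron(X, I2), np.kron(I2, X), np.kron(Z, Z)]
h = [1.0, 0.7, 0.5]
H = sum(c * P for c, P in zip(h, terms))

def trotter1(t):
    """Eq. (2): product of exp(i h_l H_l t) over all terms."""
    U = np.eye(4, dtype=complex)
    for c, P in zip(h, terms):
        U = expm(1j * c * P * t) @ U
    return U

def trotter2(t):
    """Eq. (3): half-steps in one order, then half-steps in reverse order."""
    U = np.eye(4, dtype=complex)
    for c, P in zip(h, terms):
        U = expm(1j * c * P * t / 2) @ U
    for c, P in zip(reversed(h), reversed(terms)):
        U = expm(1j * c * P * t / 2) @ U
    return U

t = 1.0
U_exact = expm(1j * H * t)
for r in (4, 16, 64):
    e1 = np.linalg.norm(np.linalg.matrix_power(trotter1(t / r), r) - U_exact, 2)
    e2 = np.linalg.norm(np.linalg.matrix_power(trotter2(t / r), r) - U_exact, 2)
    print(f"r={r:3d}  first-order error={e1:.2e}  second-order error={e2:.2e}")
```

Doubling \(r\) roughly halves the printed first-order error and quarters the second-order one, consistent with the scalings above. We now return to the randomized qDRIFT protocol.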
Specifically, the exact time evolution is approximated by \(N\) repetitions of the qDRIFT channel \(\mathcal{E}_{N}\) as \(\mathcal{U}\approx\mathcal{E}_{N}^{N}\), with the qDRIFT channel defined as \[\mathcal{E}_{N}(\rho):=\sum_{\ell=1}^{L}p_{\ell}\mathcal{T}_{\ell}(\rho), \tag{7}\] where \[\mathcal{T}_{\ell}(\rho)=\mathrm{e}^{\mathrm{i}H_{\ell}\tau}\rho\mathrm{e}^{-\mathrm{i}H_{\ell}\tau}, \tag{8}\] is the unitary channel that we call the _time operator_, and \[p_{\ell}:=h_{\ell}/\lambda,\quad\lambda=\sum_{\ell}h_{\ell},\quad\tau:=\lambda t/N, \tag{9}\] are three variables used throughout the paper. To realize the qDRIFT channel \(\mathcal{E}_{N}\), the index \(\ell\) is sampled according to the probability \(p_{\ell}\) and the quantum state \(\rho\) is evolved through the channel associated with the operator \(\mathrm{e}^{\mathrm{i}H_{\ell}\tau}\). For evaluating the systematic error of the approximation, they define the exact short-time evolution \(\mathcal{U}_{N}\) as \[\mathcal{U}_{N}(\rho):=\mathrm{e}^{\mathrm{i}Ht/N}\rho\mathrm{e}^{-\mathrm{i}Ht/N}. \tag{10}\] Then, they show \[d_{\circ}\left(\mathcal{U}_{N},\mathcal{E}_{N}\right)\leq\frac{2(\lambda t)^{2}}{N^{2}}e^{2\lambda t/N}, \tag{11}\] where the diamond distance is defined as \[d_{\circ}\left(\mathcal{U}^{\prime},\mathcal{E}^{\prime}\right):=\frac{1}{2}||\mathcal{U}^{\prime}-\mathcal{E}^{\prime}||_{\diamond}. \tag{12}\] They utilize the diamond distance \(d_{\circ}\left(\mathcal{U},\mathcal{E}_{N}^{N}\right)\) as the measure of the systematic error. By using the subadditivity of the diamond distance, they obtain the bound \[d_{\circ}\left(\mathcal{U},\mathcal{E}_{N}^{N}\right)\leq Nd_{\circ}\left(\mathcal{U}_{N},\mathcal{E}_{N}\right)\leq\frac{2(\lambda t)^{2}}{N}e^{2\lambda t/N}\in\mathcal{O}\left(\frac{(\lambda t)^{2}}{N}\right). \tag{13}\] In other words, to reduce the systematic error to within \(\varepsilon\), we need to set \(N\in\mathcal{O}\big{(}(\lambda t)^{2}/\varepsilon\big{)}\). In most applications of Hamiltonian simulation, what we are interested in is computing the expectation value of an observable after applying the time evolution operator \(\mathcal{U}\). Let us write the expectation value as \[q:=\mathrm{Tr}(Q\mathcal{U}(\rho_{\mathrm{init}})), \tag{14}\] where \(Q\) is an observable and \(\rho_{\mathrm{init}}\) is an input quantum state. By using the qDRIFT algorithm, we can approximately compute the value of \(q\) as \[q^{(1)}:=\mathrm{Tr}\left(Q\mathcal{E}_{N}^{N}(\rho_{\mathrm{init}})\right), \tag{15}\] where the systematic error is bounded as \[|q-q^{(1)}|\leq 2||Q||_{\infty}d_{\circ}\left(\mathcal{U},\mathcal{E}_{N}^{N}\right)\in\mathcal{O}\left(||Q||_{\infty}\frac{(\lambda t)^{2}}{N}\right). \tag{16}\] ## III Second order qSWIFT In this section, we describe our second-order qSWIFT as a preparation for introducing the general high-order qSWIFT in Section IV. To elucidate our algorithm, we use a "mixture function" in our second- and higher-order qSWIFT. We begin by describing this function in Section III.1. Next, we construct the second-order qSWIFT channel and discuss its error bound in Section III.2. Finally, in Section III.3, we explain how to apply the constructed qSWIFT channel for computing physical quantities. ### Mixture function In constructing our qSWIFT channels, we make use of a mixture function. As a preparation, let us first define the following _sorting function_.
**Definition III.1** (Sorting function).: Let \(S_{N}\) be the permutation group. For positive integers \(k,N\) with \(k<N\), let \(\vec{\mathcal{A}}:=(\mathcal{A}_{1},\ldots,\mathcal{A}_{k})\) and \(\vec{\mathcal{B}}:=(\mathcal{B}_{1},\ldots,\mathcal{B}_{N-k})\). We define the sorting function as \[f_{\sigma,k,N-k}(\vec{\mathcal{A}},\vec{\mathcal{B}}):=\mathcal{X}_{\sigma(1)}\mathcal{X}_{\sigma(2)}\cdots\mathcal{X}_{\sigma(N)}, \tag{17}\] where \(\sigma\in S_{N}\) and \[\mathcal{X}_{j}=\begin{cases}\mathcal{A}_{j}&j\leq k,\\ \mathcal{B}_{j-k}&j\geq k+1.\end{cases} \tag{18}\] The mixture function is defined by using the sorting function as follows. **Definition III.2** (Mixture function).: For positive integers \(k,N\) with \(k<N\), let \(\vec{\mathcal{A}}:=(\mathcal{A}_{1},\ldots,\mathcal{A}_{k})\) and \(\vec{\mathcal{B}}:=(\mathcal{B}_{1},\ldots,\mathcal{B}_{N-k})\). Then we define the mixture function as \[M_{k,N-k}(\vec{\mathcal{A}},\vec{\mathcal{B}})=\sum_{\sigma\in S_{N,k}^{\rm sub}}f_{\sigma,k,N-k}(\vec{\mathcal{A}},\vec{\mathcal{B}}), \tag{19}\] where the set \(S_{N,k}^{\rm sub}\) is the subset of the permutation group \(S_{N}\) comprised of all elements \(\sigma\in S_{N}\) that satisfy the following condition: if both \(\mathcal{X}_{i},\mathcal{X}_{j}\in\vec{\mathcal{A}}\) or both \(\mathcal{X}_{i},\mathcal{X}_{j}\in\vec{\mathcal{B}}\), then \(\sigma(i)<\sigma(j)\) for any \(i<j\). We remark that the number of elements in \(S_{N,k}^{\rm sub}\) is \(\binom{N}{k}\). For simplicity, if elements of \(\vec{\mathcal{B}}\) are identical, we denote the sorting and mixture functions as \(f_{\sigma,k,N-k}(\vec{\mathcal{A}},\mathcal{B})\) and \(M_{k,N-k}(\vec{\mathcal{A}},\mathcal{B})\), respectively. In this case \(\mathcal{X}_{j}=\mathcal{B}\) for \(j\geq k+1\). Similarly, we use the notation \(f_{\sigma,k,N-k}(\mathcal{A},\mathcal{B})\) and \(M_{k,N-k}(\mathcal{A},\mathcal{B})\) if elements of \(\vec{\mathcal{A}}\), and also elements of \(\vec{\mathcal{B}}\), are identical. In this case, \(\mathcal{X}_{j}=\mathcal{A}\) for \(j\leq k\) and \(\mathcal{X}_{j}=\mathcal{B}\) for \(j\geq k+1\). Note that the sorting function in Eq. (17) and the mixture function in Eq. (19) are bilinear functions. For example, if the \(\ell\)th element of \(\vec{\mathcal{A}}\) is a linear combination of elements of another vector \(\vec{\mathcal{F}}\), i.e., if \(\mathcal{A}_{\ell}=\sum_{n}c_{n}\mathcal{F}_{n}\) for \(c_{n}\in\mathbb{C}\), then we have \[M_{k,N-k}\left((\mathcal{A}_{1},\ldots,\mathcal{A}_{\ell-1},\sum_{n}c_{n}\mathcal{F}_{n},\mathcal{A}_{\ell+1},\ldots,\mathcal{A}_{k}),\vec{\mathcal{B}}\right)=\sum_{n}c_{n}M_{k,N-k}\left((\mathcal{A}_{1},\ldots,\mathcal{A}_{\ell-1},\mathcal{F}_{n},\mathcal{A}_{\ell+1},\ldots,\mathcal{A}_{k}),\vec{\mathcal{B}}\right). \tag{20}\] In general, if \(\mathcal{A}_{\ell}=\sum_{n_{\ell}}c_{n_{\ell}}\mathcal{F}_{n_{\ell}}\) for any \(\ell\), then the identity \[M_{k,N-k}\left(\vec{\mathcal{A}},\vec{\mathcal{B}}\right)=\sum_{n_{1}}c_{n_{1}}\sum_{n_{2}}c_{n_{2}}\cdots\sum_{n_{k}}c_{n_{k}}M_{k,N-k}\left((\mathcal{F}_{n_{1}},\ldots,\mathcal{F}_{n_{k}}),\vec{\mathcal{B}}\right) \tag{21}\] holds. ### Second-order qSWIFT channel To construct the qSWIFT channel, let us define \[\mathcal{L}_{\ell}(\rho):=\mathrm{i}[H_{\ell},\rho], \tag{22}\] \[\mathcal{L}(\rho):=\frac{\mathrm{i}}{\lambda}[H,\rho]=\sum_{\ell}p_{\ell}\mathcal{L}_{\ell}(\rho), \tag{23}\] where the variables \(\lambda,p_{\ell}\), and \(\tau\) are defined in Eq. (9).
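As a side illustration of Definition III.2 (our own, not from the paper), the sketch below enumerates \(S_{N,k}^{\rm sub}\) by choosing which \(k\) of the \(N\) slots receive the elements of \(\vec{\mathcal{A}}\), with symbolic labels standing in for the channels; the assertion checks the \(\binom{N}{k}\) count.

```python
from itertools import combinations
from math import comb

def mixture_terms(A, B):
    """Enumerate f_{sigma,k,N-k}(A, B) over sigma in S_{N,k}^sub (Def. III.2):
    all interleavings of the ordered tuples A and B that preserve the
    internal order of each tuple."""
    N, k = len(A) + len(B), len(A)
    for pos in combinations(range(N), k):    # slots occupied by A-elements
        pos = set(pos)
        term, ia, ib = [], 0, 0
        for slot in range(N):
            if slot in pos:
                term.append(A[ia]); ia += 1
            else:
                term.append(B[ib]); ib += 1
        yield tuple(term)

A = ("L1", "L2")          # stand-ins for the channels in A-vec
B = ("E",) * 3            # N - k identical copies of the qDRIFT channel E_N
terms = list(mixture_terms(A, B))
assert len(terms) == comb(5, 2)              # |S_{N,k}^sub| = binom(N, k)
print(len(terms), "terms, e.g.", terms[0])   # 10 terms, e.g. ('L1','L2','E','E','E')
```

With these combinatorial ingredients in place, we return to the construction of the second-order channel.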
We then have \[\mathcal{U}_{N}=\mathrm{e}^{\mathcal{L}\tau}=\mathbb{I}+\tau\mathcal{L}+\Delta^{(2)}\mathcal{U}_{N}, \tag{24}\] \[\mathcal{E}_{N}=\sum_{\ell}p_{\ell}\mathrm{e}^{\mathcal{L}_{\ell}\tau}=\mathbb{I}+\tau\mathcal{L}+\Delta^{(2)}\mathcal{E}_{N}, \tag{25}\] for the ideal time-evolution channel in Eq. (10) and the qDRIFT channel in Eq. (7), where \[\Delta^{(k)}\mathcal{U}_{N}=\sum_{n=k}^{\infty}\frac{\tau^{n}}{n!}\mathcal{L}^{n}, \tag{26}\] \[\Delta^{(k)}\mathcal{E}_{N}=\sum_{n=k}^{\infty}\frac{\tau^{n}}{n!}\sum_{\ell=1}^{L}p_{\ell}\mathcal{L}_{\ell}^{n}. \tag{27}\] Let \(\Delta_{k}:=\Delta^{(k)}\mathcal{U}_{N}-\Delta^{(k)}\mathcal{E}_{N}\); then \[\Delta_{k}=\sum_{n=k}^{\infty}\frac{\tau^{n}}{n!}\mathcal{L}^{(n)},\quad\mathcal{L}^{(n)}:=\mathcal{L}^{n}-\sum_{\ell=1}^{L}p_{\ell}\mathcal{L}_{\ell}^{n}. \tag{28}\] Using the definition of \(\Delta_{k}\) and Eqs. (24) and (25), we have \(\mathcal{U}_{N}=\mathcal{E}_{N}+\Delta_{2}\), which we use to expand \(\mathcal{U}=\mathcal{U}_{N}^{N}\) as \[\mathcal{U}=(\mathcal{E}_{N}+\Delta_{2})^{N} \tag{29}\] \[=\mathcal{E}_{N}^{N}+\sum_{k=1}^{N}M_{k,N-k}\left(\Delta_{2},\mathcal{E}_{N}\right) \tag{30}\] \[=\mathcal{E}_{N}^{N}+\frac{\tau^{2}}{2}M_{1,N-1}\left(\mathcal{L}^{(2)},\mathcal{E}_{N}\right)+M_{1,N-1}\left(\Delta_{3},\mathcal{E}_{N}\right)+\sum_{k=2}^{N}M_{k,N-k}\left(\Delta_{2},\mathcal{E}_{N}\right), \tag{31}\] where we used \(\Delta_{2}=(\tau^{2}/2)\mathcal{L}^{(2)}+\Delta_{3}\) and the linearity of \(M_{1,N-1}\) to obtain the last equality. Let us denote the first two terms as \[\mathcal{E}^{(2)}:=\mathcal{E}_{N}^{N}+\frac{\tau^{2}}{2}M_{1,N-1}\left(\mathcal{L}^{(2)},\mathcal{E}_{N}\right). \tag{32}\] We refer to \(\mathcal{E}^{(2)}\) as the _second-order qSWIFT_ channel. In the following lemma, we provide a bound for the error in approximating the ideal channel \(\mathcal{U}\) in Eq. (1) by the second-order qSWIFT channel \(\mathcal{E}^{(2)}\), where the error is quantified as the diamond norm of their difference. **Lemma III.3**.: _Let \(\mathcal{U}\) be the ideal channel in Eq. (1) and let \(\mathcal{E}^{(2)}\) be the second-order qSWIFT channel in Eq. (32). Then, in the region \(\lambda t\geq 1\),_ \[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(2)}\right)\in\mathcal{O}\Bigg{(}\bigg{(}\frac{(\lambda t)^{2}}{N}\bigg{)}^{2}\Bigg{)}, \tag{33}\] _as far as \(N\leq 2\sqrt{2}e(\lambda t)^{2}\)._ We provide the proof in Appendix A.1. By this lemma, as far as we consider the reasonable parameter region \(\lambda t\geq 1\), if \(N\in\mathcal{O}\big{(}(\lambda t)^{2}/\sqrt{\varepsilon}\big{)}\) for \(\varepsilon>0\), then \(d_{\circ}\left(\mathcal{U},\mathcal{E}^{(2)}\right)\leq\varepsilon\). This provides a quadratic improvement over the original qDRIFT with respect to \(\varepsilon\). ### Implementation of the second-order qSWIFT channel The second-order qSWIFT channel we constructed is not a physical channel, as it is not a CPTP map. Therefore, we cannot directly implement the qSWIFT channel itself. However, in most applications of the Hamiltonian simulation, our interest is in computing physical quantities, i.e., the expectation value of an observable after applying the time evolution operator, as described in Section II.2. Thus, in the following, we focus on how to compute \(q^{(2)}:=\mathrm{Tr}(Q\mathcal{E}^{(2)}(\rho_{\mathrm{init}}))\), for a given observable \(Q\) and the input \(\rho_{\mathrm{init}}\).
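Since \(q^{(2)}\) combines the plain qDRIFT estimate \(q^{(1)}\) with a correction term, it is worth fixing a concrete baseline first. The following sketch (a toy two-qubit example of ours, with the trace evaluated exactly rather than from measurement shots) estimates \(q^{(1)}\) of Eq. (15) by sampling qDRIFT circuits.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
terms = [np.kron(X, I2), np.kron(I2, X), np.kron(Z, Z)]  # H_l with ||H_l|| = 1
h = np.array([1.0, 0.7, 0.5])                            # strengths h_l > 0

t, N = 1.0, 100
lam = h.sum()            # lambda = sum_l h_l
p = h / lam              # Eq. (9): p_l = h_l / lambda
tau = lam * t / N        # Eq. (9): tau = lambda t / N

Q = np.kron(Z, I2)                                 # observable Q
rho0 = np.zeros((4, 4), dtype=complex); rho0[0, 0] = 1.0   # |00><00|
gates = [expm(1j * tau * P) for P in terms]        # time operators e^{i H_l tau}

def qdrift_once():
    """One random qDRIFT circuit: N segments, each a sampled time operator."""
    rho = rho0
    for _ in range(N):
        l = rng.choice(len(terms), p=p)
        rho = gates[l] @ rho @ gates[l].conj().T
    return np.trace(Q @ rho).real

q1 = np.mean([qdrift_once() for _ in range(200)])  # estimate of q^(1), Eq. (15)
H = sum(c * P for c, P in zip(h, terms))
U = expm(1j * H * t)
q_exact = np.trace(Q @ U @ rho0 @ U.conj().T).real # q of Eq. (14)
print(f"qDRIFT estimate {q1:.4f}  vs exact {q_exact:.4f}")
```

The sample average converges to \(q^{(1)}\), whose deviation from \(q\) is bounded by Eq. (16).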
By using \(q^{(2)}\), the systematic error is reduced to \[|q-q^{(2)}|\leq 2||Q||_{\infty}d_{\circ}\left(\mathcal{U},\mathcal{E}^{(2)}\right)\in\mathcal{O}\left(||Q||_{\infty}\left(\frac{(\lambda t)^{2}}{N}\right)^{2}\right), \tag{34}\] where we use (33); this is a quadratic improvement in terms of \((\lambda t)^{2}/N\) over the qDRIFT bound (16). Here we provide a way of computing \(q^{(2)}\) by quantum circuits. We can expand \(q^{(2)}\) as \[q^{(2)}=q^{(1)}+\delta q, \tag{35}\] where \[\delta q=\frac{\tau^{2}}{2}\mathrm{Tr}\left(QM_{1,N-1}\left(\mathcal{L}^{(2)},\mathcal{E}_{N}\right)(\rho_{\mathrm{init}})\right). \tag{36}\] For computing \(q^{(1)}\), we just need to apply the original qDRIFT channel and compute the expectation value. Thus, we focus on how to compute \(\delta q\) in the following. More specifically, we will show that \(\delta q\) is computable by using the swift circuits, composed of the time evolution \(\exp\left(\mathrm{i}H_{\ell}\tau\right)\) and the swift operators shown in Fig. 1. By using \[M_{1,N-1}\left(\mathcal{L}^{(2)},\mathcal{E}_{N}\right)=\sum_{r=0}^{N-1}\mathcal{E}_{N}^{N-1-r}\mathcal{L}^{(2)}\mathcal{E}_{N}^{r}, \tag{37}\] which can be derived from the definition (19), we obtain \[\delta q=\frac{\tau^{2}}{2}\sum_{r=0}^{N-1}\delta q_{r}, \tag{38}\] where \(\delta q_{r}:=\mathrm{Tr}\left(Q\mathcal{E}_{N}^{N-1-r}\mathcal{L}^{(2)}\mathcal{E}_{N}^{r}(\rho_{\mathrm{init}})\right)\). Now let us move on to the evaluation of \(\delta q_{r}\). We transform \(\delta q_{r}\) as \[\delta q_{r}=\mathrm{Tr}\left(Q\mathcal{E}_{N}^{N-1-r}\mathcal{L}^{(2)}\mathcal{E}_{N}^{r}(\rho_{\mathrm{init}})\right) \tag{39}\] \[=\sum_{\ell,k=1}^{L}p_{\ell}p_{k}\mathrm{Tr}\left(Q\mathcal{E}_{N}^{N-1-r}\mathcal{L}_{\ell}\mathcal{L}_{k}\mathcal{E}_{N}^{r}(\rho_{\mathrm{init}})\right)-\sum_{\ell=1}^{L}p_{\ell}\mathrm{Tr}\left(Q\mathcal{E}_{N}^{N-1-r}\mathcal{L}_{\ell}^{2}\mathcal{E}_{N}^{r}(\rho_{\mathrm{init}})\right), \tag{40}\] where we use (28) in the second equality. Let the two probability distributions be \[P_{0}^{(2)}(\vec{\ell})=p_{\ell_{2}}p_{\ell_{1}},\ P_{1}^{(2)}(\vec{\ell})=\begin{cases}p_{\ell}&\ell_{1}=\ell_{2}=\ell,\\ 0&\ell_{1}\neq\ell_{2},\end{cases} \tag{41}\] where the input \(\vec{\ell}\) is a vector with two elements \((\ell_{1},\ell_{2})\). Also, let \[\mathcal{L}_{2}(\vec{\ell}):=\mathcal{L}_{\ell_{2}}\mathcal{L}_{\ell_{1}}. \tag{42}\] Then it holds that \[\delta q_{r}=\sum_{s=0}^{1}(-1)^{s}\sum_{\vec{\ell}}P_{s}^{(2)}(\vec{\ell})\mathrm{Tr}\left(Q\mathcal{K}(r,\vec{\ell})(\rho_{\mathrm{init}})\right), \tag{43}\] where we define the channel \(\mathcal{K}(r,\vec{\ell})\) as \[\mathcal{K}(r,\vec{\ell}):=\mathcal{E}_{N}^{N-1-r}\mathcal{L}_{2}(\vec{\ell})\mathcal{E}_{N}^{r}. \tag{44}\] To evaluate each term of (43), we utilize a system with one ancilla qubit. Let us write the density matrix for a given system with one ancilla qubit in the matrix form: \[\left(\begin{array}{cc}\rho_{00}&\rho_{01}\\ \rho_{10}&\rho_{11}\end{array}\right):=|0\rangle\langle 0|\otimes\rho_{00}+|0\rangle\langle 1|\otimes\rho_{01}+|1\rangle\langle 0|\otimes\rho_{10}+|1\rangle\langle 1|\otimes\rho_{11}.
\tag{45}\] We define the operation of a quantum channel \(\tilde{\mathcal{K}}(r,\vec{\ell})\) that transforms the initial state \(\tilde{\rho}_{\mathrm{init}}=|+\rangle\langle+|\otimes\rho_{\mathrm{init}}\) into a final state as \[\tilde{\rho}_{\mathrm{init}}=\left(\begin{array}{cc}\rho_{\mathrm{init}}/2&\rho_{\mathrm{init}}/2\\ \rho_{\mathrm{init}}/2&\rho_{\mathrm{init}}/2\end{array}\right)\xrightarrow{\tilde{\mathcal{K}}(r,\vec{\ell})}\left(\begin{array}{cc}\cdot&\mathcal{K}(r,\vec{\ell})(\rho_{\mathrm{init}})/2\\ \mathcal{K}(r,\vec{\ell})(\rho_{\mathrm{init}})/2&\cdot\end{array}\right), \tag{46}\] where the dot \([\cdot]\) in a diagonal element denotes a matrix that is not of interest here. Then it holds that \[\mathrm{Tr}(Q\mathcal{K}(r,\vec{\ell})(\rho_{\mathrm{init}}))=\mathrm{Tr}\left(\bar{Q}\tilde{\mathcal{K}}(r,\vec{\ell})(\tilde{\rho}_{\mathrm{init}})\right), \tag{47}\] and consequently, \[\delta q_{r}=\sum_{s=0}^{1}(-1)^{s}\sum_{\vec{\ell}}P_{s}^{(2)}(\vec{\ell}){\rm Tr}\left(\bar{Q}\tilde{\mathcal{K}}(r,\vec{\ell})(\tilde{\rho}_{\rm init})\right), \tag{48}\] where \[\bar{Q}=X\otimes Q, \tag{49}\] with \(X\) as the Pauli-X observable for the ancilla qubit. Therefore, we can evaluate \(\delta q_{r}\) if we implement the channel \(\tilde{\mathcal{K}}(r,\vec{\ell})\) by using quantum circuits. We can specify the channel \(\tilde{\mathcal{K}}(r,\vec{\ell})\) as follows: \[\tilde{\mathcal{K}}(r,\vec{\ell}):=\tilde{\mathcal{E}}_{N}^{N-1-r}\tilde{\mathcal{L}}_{\ell_{2}}\tilde{\mathcal{L}}_{\ell_{1}}\tilde{\mathcal{E}}_{N}^{r}, \tag{50}\] where we define \(\tilde{\mathcal{E}}_{N}:=\mathbf{1}\otimes\mathcal{E}_{N}\) and \(\vec{\ell}=(\ell_{1},\ell_{2})\), and where the operation of the channel \(\tilde{\mathcal{L}}_{\ell}\) is specified for an input having identical off-diagonal elements as follows: \[\left(\begin{array}{cc}\cdot&\rho\\ \rho&\cdot\end{array}\right)\xrightarrow{\tilde{\mathcal{L}}_{\ell}}\left(\begin{array}{cc}\cdot&\mathcal{L}_{\ell}(\rho)\\ \mathcal{L}_{\ell}(\rho)&\cdot\end{array}\right). \tag{51}\] The channel \(\tilde{\mathcal{L}}_{\ell}\) can be written as the sum of the two swift operators \(\tilde{\mathcal{S}}_{\ell}^{(0)}\) and \(\tilde{\mathcal{S}}_{\ell}^{(1)}\) introduced in Fig. 1 as \[\tilde{\mathcal{L}}_{\ell}=\tilde{\mathcal{S}}_{\ell}^{(0)}+\tilde{\mathcal{S}}_{\ell}^{(1)}, \tag{52}\] which is easily checked by using \[\left(\begin{array}{cc}\cdot&\rho\\ \rho&\cdot\end{array}\right)\xrightarrow{\tilde{\mathcal{S}}_{\ell}^{(0)}}\left(\begin{array}{cc}\cdot&-{\rm i}\rho H_{\ell}\\ {\rm i}H_{\ell}\rho&\cdot\end{array}\right),\quad\left(\begin{array}{cc}\cdot&\rho\\ \rho&\cdot\end{array}\right)\xrightarrow{\tilde{\mathcal{S}}_{\ell}^{(1)}}\left(\begin{array}{cc}\cdot&{\rm i}H_{\ell}\rho\\ -{\rm i}\rho H_{\ell}&\cdot\end{array}\right).
\tag{53}\] It can be pedagogically shown that the channel (50) reproduces the transformation in (46); with \(\rho^{\prime}_{\rm init}=\rho_{\rm init}/2\), it holds that \[\left(\begin{array}{cc}\rho^{\prime}_{\rm init}&\rho^{\prime}_{\rm init}\\ \rho^{\prime}_{\rm init}&\rho^{\prime}_{\rm init}\end{array}\right)\xrightarrow{\tilde{\mathcal{E}}_{N}^{r}}\left(\begin{array}{cc}\mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})&\mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})\\ \mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})&\mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})\end{array}\right) \tag{54}\] \[\xrightarrow{\tilde{\mathcal{L}}_{\ell_{1}}}\left(\begin{array}{cc}\cdot&\mathcal{L}_{\ell_{1}}\mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})\\ \mathcal{L}_{\ell_{1}}\mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})&\cdot\end{array}\right) \tag{55}\] \[\xrightarrow{\tilde{\mathcal{L}}_{\ell_{2}}}\left(\begin{array}{cc}\cdot&\mathcal{L}_{\ell_{2}}\mathcal{L}_{\ell_{1}}\mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})\\ \mathcal{L}_{\ell_{2}}\mathcal{L}_{\ell_{1}}\mathcal{E}_{N}^{r}(\rho^{\prime}_{\rm init})&\cdot\end{array}\right) \tag{56}\] \[\xrightarrow{\tilde{\mathcal{E}}_{N}^{N-1-r}}\left(\begin{array}{cc}\cdot&\mathcal{K}(r,\vec{\ell})(\rho^{\prime}_{\rm init})\\ \mathcal{K}(r,\vec{\ell})(\rho^{\prime}_{\rm init})&\cdot\end{array}\right). \tag{57}\] By substituting (52) into (50), we obtain \[\tilde{\mathcal{K}}(r,\vec{\ell})=\sum_{b_{1},b_{2}=0}^{1}\tilde{\mathcal{E}}_{N}^{N-1-r}\tilde{\mathcal{S}}_{\ell_{2}}^{(b_{2})}\tilde{\mathcal{S}}_{\ell_{1}}^{(b_{1})}\tilde{\mathcal{E}}_{N}^{r}. \tag{58}\] Finally, from (38), (48), and (58), \[\delta q=\frac{\tau^{2}}{2}\sum_{s=0}^{1}(-1)^{s}\sum_{b_{1},b_{2}=0}^{1}\sum_{r=0}^{N-1}\sum_{\vec{\ell}}P_{s}^{(2)}(\vec{\ell}){\rm Tr}\left(\bar{Q}\tilde{\mathcal{E}}_{N}^{N-1-r}\tilde{\mathcal{S}}_{\ell_{2}}^{(b_{2})}\tilde{\mathcal{S}}_{\ell_{1}}^{(b_{1})}\tilde{\mathcal{E}}_{N}^{r}(\tilde{\rho}_{\rm init})\right) \tag{59}\] \[=\frac{(\lambda t)^{2}}{2N}\sum_{s=0}^{1}(-1)^{s}\sum_{b_{1},b_{2}=0}^{1}\delta q(s,b_{1},b_{2}), \tag{60}\] where in the second equality we define \[\delta q(s,b_{1},b_{2}):=\frac{1}{N}\sum_{r=0}^{N-1}\sum_{\vec{\ell}}P_{s}^{(2)}(\vec{\ell}){\rm Tr}\left(\bar{Q}\tilde{\mathcal{E}}_{N}^{N-1-r}\tilde{\mathcal{S}}_{\ell_{2}}^{(b_{2})}\tilde{\mathcal{S}}_{\ell_{1}}^{(b_{1})}\tilde{\mathcal{E}}_{N}^{r}(\tilde{\rho}_{\rm init})\right). \tag{61}\] We can evaluate (61) using Monte Carlo sampling. To this end, let \[P_{\textsc{mqdrift}}(\vec{k},n)=p_{k_{1}}p_{k_{2}}\cdots p_{k_{n}} \tag{62}\] be the probability distribution given by the product of multiple qDRIFT probability distributions, where \(n\in\mathbb{Z}^{+}\) specifies the number of qDRIFT distributions and \(k_{j}\), for each \(j\), runs from \(1\) to \(L\). Then \[\tilde{\mathcal{E}}_{N}^{n}=\sum_{k_{1},k_{2},\ldots k_{n}}^{L}P_{\textsc{mqdrift}}(\vec{k},n)\tilde{\mathcal{T}}_{n}(\vec{k}), \tag{63}\] where \(\tilde{\mathcal{T}}_{n}(\vec{k})\) is the unitary channel defined as the sequence of time operators: \[\tilde{\mathcal{T}}_{n}(\vec{k}):=(\mathbf{1}\otimes\mathcal{T}_{k_{n}}\cdots\mathcal{T}_{k_{2}}\mathcal{T}_{k_{1}}). \tag{64}\] We obtain an unbiased estimator of \(\delta q(s,b_{1},b_{2})\) as follows: 1. Sample \(r\) uniformly from \(\{0,1,\cdots,N-1\}\). 2. With probability \(P_{s}^{(2)}(\vec{\ell})\), sample \(\vec{\ell}=(\ell_{1},\ell_{2})\). With probabilities \(P_{\textsc{mqdrift}}(\vec{k},r)\) and \(P_{\textsc{mqdrift}}(\vec{k}^{\prime},N-1-r)\), sample \(\vec{k}\) and \(\vec{k}^{\prime}\). 3.
Estimate \[\operatorname{Tr}\left(\bar{Q}\tilde{\mathcal{T}}_{N-1-r}(\vec{k}^{\prime})\tilde{\mathcal{S}}_{\ell_{2}}^{(b_{2})}\tilde{\mathcal{S}}_{\ell_{1}}^{(b_{1})}\tilde{\mathcal{T}}_{r}(\vec{k})(\tilde{\rho}_{\textsc{init}})\right) \tag{65}\] with \(N_{\text{shot}}\) measurements and set the resulting value to \(\delta\hat{q}(s,b_{1},b_{2})\), which is an unbiased estimator of \(\delta q(s,b_{1},b_{2})\). The value (65) can be evaluated by applying a swift circuit composed of the time evolution and the swift operators and estimating the expectation value of \(\bar{Q}\) by measurements. We repeat the above process \(N_{\text{sample}}\) times, and the estimate of \(\delta q(s,b_{1},b_{2})\) is computed as the sample average. By substituting each estimate of \(\delta q(s,b_{1},b_{2})\) into (60), we obtain the estimate of \(\delta q\) as \(\delta\hat{q}\). It should be noted that the swift circuit for evaluating each term of (65) has the structure that two swift operators are tucked in between \(N-1\) time operators, as in Fig. 2(b). Therefore, the number of gates in the swift circuit is almost the same as in the original qDRIFT circuit, which requires \(N\) time operators.
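The three-step estimator above can be mirrored classically by tracking only the upper off-diagonal block \(B\) of the ancilla-extended state: by Eqs. (51)-(53), \(B\) starts as \(\rho_{\mathrm{init}}/2\), each time operator conjugates it by \(\mathrm{e}^{\mathrm{i}H_{\ell}\tau}\), a swift operator with \(b=1\) left-multiplies by \(\mathrm{i}H_{\ell}\) while \(b=0\) right-multiplies by \(-\mathrm{i}H_{\ell}\), and \(\mathrm{Tr}(\bar{Q}\tilde{\rho})=2\,\mathrm{Re}\,\mathrm{Tr}(QB)\). The self-contained sketch below (our own toy continuation of the earlier qDRIFT example, again evaluating traces exactly instead of with \(N_{\rm shot}\) shots) assembles \(\delta q\) via Eq. (60).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
terms = [np.kron(X, I2), np.kron(I2, X), np.kron(Z, Z)]   # H_l, ||H_l|| = 1
h = np.array([1.0, 0.7, 0.5])
t, N = 1.0, 100
lam, p = h.sum(), h / h.sum()
tau = lam * t / N
gates = [expm(1j * tau * P) for P in terms]               # time operators
Q = np.kron(Z, I2)
rho0 = np.zeros((4, 4), dtype=complex); rho0[0, 0] = 1.0  # |00><00|

def delta_q_estimate(s, b1, b2, n_samples=200):
    """Monte Carlo estimate of delta q(s, b1, b2), Eq. (61)."""
    vals = []
    for _ in range(n_samples):
        r = int(rng.integers(N))                  # step 1: uniform segment index
        if s == 0:                                # step 2: P_0^(2), independent draws
            l1 = rng.choice(len(terms), p=p); l2 = rng.choice(len(terms), p=p)
        else:                                     # P_1^(2): l1 = l2 = l ~ p_l
            l1 = l2 = rng.choice(len(terms), p=p)
        B = rho0 / 2                              # upper off-diagonal block
        for _ in range(r):                        # r sampled time operators
            l = rng.choice(len(terms), p=p)
            B = gates[l] @ B @ gates[l].conj().T
        for l, b in ((l1, b1), (l2, b2)):         # the two swift operators, Eq. (53)
            B = 1j * terms[l] @ B if b == 1 else -1j * B @ terms[l]
        for _ in range(N - 1 - r):                # remaining time operators
            l = rng.choice(len(terms), p=p)
            B = gates[l] @ B @ gates[l].conj().T
        vals.append(2 * np.trace(Q @ B).real)     # Tr(Qbar rho~) = 2 Re Tr(Q B)
    return float(np.mean(vals))

# Eq. (60): combine the eight pieces into the second-order correction delta q.
delta_q = (lam * t) ** 2 / (2 * N) * sum(
    (-1) ** s * delta_q_estimate(s, b1, b2)
    for s in (0, 1) for b1 in (0, 1) for b2 in (0, 1))
print(f"second-order correction delta q = {delta_q:.5f}")
```

Adding this \(\delta q\) to the qDRIFT estimate \(q^{(1)}\) yields the second-order estimate \(q^{(2)}\) of Eq. (35).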
## IV Higher-order qSWIFT In this section, we generalize the second-order qSWIFT channel introduced in Section III.2 to an arbitrary high-order channel. First, we construct the higher-order qSWIFT channel and discuss the error bound in Section IV.1. Then, in Section IV.2, we construct an algorithm to apply the qSWIFT channel for computing physical quantities. ### Higher-order qSWIFT channel To construct a high-order qSWIFT channel, we retain higher orders of \(\tau\) in the right-hand side of \[\mathcal{U}=(\mathcal{E}_{N}+\Delta_{2})^{N}=\mathcal{E}_{N}^{N}+\sum_{k=1}^{N}M_{k,N-k}(\Delta_{2},\mathcal{E}_{N}), \tag{66}\] where \(M_{k,N-k}\) is the mixture function defined in Eq. (19). To this end, we note that \(\Delta_{2}\) is a linear combination of \(\mathcal{L}^{(n)}\) as per Eq. (28). Thus by Eq. (21) we obtain \[M_{k,N-k}(\Delta_{2},\mathcal{E}_{N})=\sum_{n_{1}=2}^{\infty}\frac{\tau^{n_{1}}}{n_{1}!}\sum_{n_{2}=2}^{\infty}\frac{\tau^{n_{2}}}{n_{2}!}\cdots\sum_{n_{k}=2}^{\infty}\frac{\tau^{n_{k}}}{n_{k}!}M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right), \tag{67}\] \[=\sum_{n_{1},n_{2},\ldots,n_{k}=2}^{\infty}\frac{\tau^{\sum_{j=1}^{k}n_{j}}}{n_{1}!n_{2}!\cdots n_{k}!}\sum_{\xi=2}^{\infty}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right), \tag{68}\] \[=\sum_{\xi=2}^{\infty}\tau^{\xi}\sum_{n_{1},n_{2},\ldots,n_{k}=2}^{\xi}\frac{1}{n_{1}!n_{2}!\cdots n_{k}!}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right), \tag{69}\] where \(\delta[i,j]\) here is the Kronecker \(\delta\). To show the second equality, we use the identity \[\sum_{\xi=2}^{\infty}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]=1, \tag{70}\] which holds for a fixed set of integers \(\{n_{j}\}_{j=1}^{k}\) with \(n_{j}\geq 2\). In the last equality, we use the fact that the Kronecker \(\delta\) is zero if any of \(n_{1},n_{2},\cdots,n_{k}\) is larger than \(\xi\). Truncating the upper limit of \(\xi\) yields a high-order qSWIFT channel. Specifically, we define the high-order qSWIFT channel as follows. **Definition IV.1** (Higher-order qSWIFT).: We define the \(K\)-th order qSWIFT channel (\(K\leq N\)) as \[\mathcal{E}^{(K)}:=\mathcal{E}_{N}^{N}+\sum_{\xi=2}^{2K-2}\tau^{\xi}\sum_{k=1}^{N}\sum_{n_{1},\ldots,n_{k}=2}^{\xi}\frac{1}{n_{1}!n_{2}!\cdots n_{k}!}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right). \tag{71}\] Here we note that the upper limit \(N\) in the second summation can be replaced with \(K\) for \(K\leq N\) because the terms with \(k>K\) become zero by the Kronecker \(\delta\). We remark that setting \(K=2\) yields the second-order qSWIFT channel in Eq. (32). Also, by setting \(K=1\), we reproduce the qDRIFT channel. We now provide a bound for the error in approximating the ideal channel \(\mathcal{U}\) in Eq. (1) by the high-order qSWIFT channel \(\mathcal{E}^{(K)}\) in the following lemma. **Lemma IV.2**.: _Let \(\mathcal{U}\) be the ideal channel in Eq. (1) and let \(\mathcal{E}^{(K)}\) be the \(K\)-th order qSWIFT channel. Then, in the region \(\lambda t\geq 1\),_ \[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)\in\mathcal{O}\Bigg{(}\bigg{(}\frac{(\lambda t)^{2}}{N}\bigg{)}^{K}\Bigg{)}, \tag{72}\] _as far as \(N\leq 2\sqrt{2}e(\lambda t)^{2}\)._ The proof is given in Appendix A.2. As in the second-order case, as far as we consider the reasonable parameter region \(\lambda t\geq 1\), if \(N\in\mathcal{O}\big{(}(\lambda t)^{2}/\sqrt{\varepsilon}\big{)}\) for \(\varepsilon>0\), then \(d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)\leq\varepsilon\). ### Implementation of the higher-order qSWIFT channel As we discussed in Section III.3, the qSWIFT channel is not a physical channel, but we can apply it to computing physical quantities. Specifically, we discuss how to compute \[q^{(K)}:=\mathrm{Tr}\left(Q\mathcal{E}^{(K)}(\rho_{\mathrm{init}})\right). \tag{73}\] Then, the systematic error is bounded as \[|q-q^{(K)}|\leq 2||Q||_{\infty}d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)\in\mathcal{O}\left(||Q||_{\infty}\left(\frac{(\lambda t)^{2}}{N}\right)^{K}\right), \tag{74}\] which can be exponentially small with the order parameter. Here, we provide the way to compute \(q^{(K)}\). We can expand \(q^{(K)}\) as \[q^{(K)}=q^{(1)}+\sum_{\xi=2}^{2K-2}\sum_{k=1}^{N}\sum_{n_{1},\cdots,n_{k}=2}^{\xi}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]\delta q^{(k)}(\vec{n}), \tag{75}\] \[=q^{(1)}+\sum_{\xi=2}^{2K-2}\sum_{k=1}^{K}\sum_{\vec{n}\in G_{2}(k,\xi)}\delta q^{(k)}(\vec{n}), \tag{76}\] where \[\delta q^{(k)}(\vec{n}):=\frac{\tau^{\sum_{j=1}^{k}n_{j}}}{n_{1}!n_{2}!\cdots n_{k}!}\mathrm{Tr}\left(QM_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right)(\rho_{\mathrm{init}})\right), \tag{77}\] with \(\vec{n}=\{n_{1},n_{2},\cdots,n_{k}\}\), and where we replace \(N\) in the second summation with \(K\) in the second equality (since the terms with \(k>K\) become zero by the Kronecker \(\delta\)). To obtain (76), we define the set \(G_{2}(k,\xi)\) composed of all vectors with \(k\) integer elements \(\{n_{j}\}_{j=1}^{k}\) that satisfy \(n_{j}\geq 2\) and \(\sum_{j=1}^{k}n_{j}=\xi\). As an example, when using the second-order (\(K=2\)) qSWIFT, the summation of the second term in (76) includes only one term: \[q^{(2)}=q^{(1)}+\delta q^{(1)}\left(\{2\}\right). \tag{78}\] We see that \(\delta q\) in (35) corresponds to \(\delta q^{(1)}\left(\{2\}\right)\); here we write \(\{a_{1},\cdots,a_{k}\}\) for the vector having elements \(a_{1},\cdots,a_{k}\).
As another example, when using the third-order (\(K=3\)) qSWIFT,
\[q^{(3)}=q^{(1)}+\delta q^{(1)}\left(\{2\}\right)+\delta q^{(1)}\left(\{3\}\right)+\delta q^{(1)}\left(\{4\}\right)+\delta q^{(2)}\left(\{2,2\}\right). \tag{79}\]
Since \(q^{(1)}\) is again evaluable by using the original qDRIFT, we focus on the evaluation of \(\delta q^{(k)}(\vec{n})\) in the following. Similar to the second-order case, we will transform \(\delta q^{(k)}(\vec{n})\) into a sum of terms evaluable with quantum circuits. The mixture function \(M_{k,N-k}(\cdot)\) included in (77) is written as the summation of \(\binom{N}{k}\) terms as
\[\delta q^{(k)}(\vec{n})=\frac{\tau^{\sum_{j=1}^{k}n_{j}}}{n_{1}!n_{2}!\cdots n_{k}!}\sum_{\sigma\in S_{N,k}^{\mathrm{wh}}}\delta q_{\sigma}^{(k)}(\vec{n}), \tag{80}\]
where
\[\delta q_{\sigma}^{(k)}(\vec{n}):=\mathrm{Tr}\left(Qf_{\sigma,k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right)(\rho_{\mathrm{init}})\right), \tag{81}\]
with \(f_{\sigma,k,N-k}\) as the sorting function defined in (17). Now we move on to the calculation of \(\delta q_{\sigma}^{(k)}\left(\vec{n}\right)\). To this end, let
\[\mathcal{D}_{0}^{(n)}:=\mathcal{L}^{n},\ \mathcal{D}_{1}^{(n)}:=\sum_{\ell}p_{\ell}\mathcal{L}_{\ell}^{n}. \tag{82}\]
We can write \(\mathcal{L}^{(n)}\) as a linear combination of \(\mathcal{D}_{s}^{(n)}\):
\[\mathcal{L}^{(n)}=\sum_{s=0}^{1}(-1)^{s}\mathcal{D}_{s}^{(n)} \tag{83}\]
as per (28). Define the two probability distributions
\[P_{0}^{(n)}(\vec{\ell})=P_{\textsc{qdrift}}(\vec{\ell},n),\ \ P_{1}^{(n)}(\vec{\ell})=\begin{cases}p_{\ell}&\ell_{1}=\cdots=\ell_{n}=\ell,\\ 0&\ell_{a}\neq\ell_{b}\ \text{for some}\ (a,b),\end{cases} \tag{84}\]
where \(n\) specifies the number of elements in \(\vec{\ell}\) and we write the \(a\)-th element of \(\vec{\ell}\) as \(\ell_{a}\). We see that for \(n=2\), (84) is consistent with (41). Then it holds that
\[\mathcal{D}_{s}^{(n)}=\sum_{\vec{\ell}}P_{s}^{(n)}(\vec{\ell})\mathcal{L}_{n}(\vec{\ell}), \tag{85}\]
with
\[\mathcal{L}_{n}(\vec{\ell}):=\mathcal{L}_{\ell_{n}}\cdots\mathcal{L}_{\ell_{1}}, \tag{86}\]
which is consistent with (42) for \(n=2\). By using the bilinearity of the sorting function, we obtain
\[f_{\sigma,k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right) \tag{87}\]
\[=\sum_{s_{1},\cdots,s_{k}=0}^{1}(-1)^{\sum_{c}s_{c}}f_{\sigma,k,N-k}\left(\left(\mathcal{D}_{s_{1}}^{(n_{1})},\cdots,\mathcal{D}_{s_{k}}^{(n_{k})}\right),\mathcal{E}_{N}\right), \tag{88}\]
\[=\sum_{s_{1},\cdots,s_{k}=0}^{1}(-1)^{\sum_{c}s_{c}}\sum_{\vec{\ell}_{1},\cdots,\vec{\ell}_{k}}P_{s_{1}}^{(n_{1})}(\vec{\ell}_{1})\cdots P_{s_{k}}^{(n_{k})}(\vec{\ell}_{k})f_{\sigma,k,N-k}\left(\left(\mathcal{L}_{n_{1}}(\vec{\ell}_{1}),\cdots,\mathcal{L}_{n_{k}}(\vec{\ell}_{k})\right),\mathcal{E}_{N}\right), \tag{89}\]
where we use (83) in the first equality and (85) in the second equality.
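Both distributions in (84) are straightforward to sample. A small sketch (ours; the weights \(p_{\ell}\) are placeholders for the qDRIFT weights \(|h_{\ell}|/\lambda\)):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])  # example weights p_l (assumed, stands in for |h_l|/lambda)

def sample_P0(n):
    """P_0^{(n)}: n indices drawn i.i.d. from p, i.e., the qDRIFT product distribution."""
    return rng.choice(len(p), size=n, p=p)

def sample_P1(n):
    """P_1^{(n)}: one index l drawn from p and repeated n times, as in Eq. (84)."""
    l = rng.choice(len(p), p=p)
    return np.full(n, l)

print(sample_P0(4), sample_P1(4))
```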
Substituting (89) into (81), we obtain
\[\delta q_{\sigma}^{(k)}\left(\vec{n}\right)=\sum_{s_{1},\cdots,s_{k}=0}^{1}(-1)^{\sum_{c}s_{c}}\sum_{\vec{\ell}_{1},\cdots,\vec{\ell}_{k}}P_{s_{1}}^{(n_{1})}(\vec{\ell}_{1})\cdots P_{s_{k}}^{(n_{k})}(\vec{\ell}_{k})\delta q_{\sigma}^{(k)}\left(\vec{n},\left(\vec{\ell}_{1},\cdots,\vec{\ell}_{k}\right)\right), \tag{90}\]
where
\[\delta q_{\sigma}^{(k)}\left(\vec{n},\left(\vec{\ell}_{1},\cdots,\vec{\ell}_{k}\right)\right):=\operatorname{Tr}\left(Qf_{\sigma,k,N-k}\left(\left(\mathcal{L}_{n_{1}}(\vec{\ell}_{1}),\cdots,\mathcal{L}_{n_{k}}(\vec{\ell}_{k})\right),\mathcal{E}_{N}\right)\left(\rho_{\text{init}}\right)\right). \tag{91}\]
As in Section III.3, we compute the right-hand side of (91) by using the system with one ancilla qubit. Let
\[\tilde{\mathcal{L}}_{n}(\vec{\ell}):=\tilde{\mathcal{L}}_{\ell_{n}}\cdots\tilde{\mathcal{L}}_{\ell_{1}}. \tag{92}\]
By repeatedly applying (51) with \(j=\ell_{1},\cdots,\ell_{n}\), we obtain the operation of \(\tilde{\mathcal{L}}_{n}(\vec{\ell})\) as
\[\left(\begin{array}{cc}\cdot&\rho\\ \rho&\cdot\end{array}\right)\xrightarrow{\tilde{\mathcal{L}}_{n}(\vec{\ell})}\left(\begin{array}{cc}\cdot&\mathcal{L}_{n}(\vec{\ell})(\rho)\\ \mathcal{L}_{n}(\vec{\ell})(\rho)&\cdot\end{array}\right) \tag{93}\]
for a given density operator \(\rho\). Recall the definition of the sorting function (17):
\[f_{\sigma,k,N-k}\left(\left(\mathcal{L}_{n_{1}}(\vec{\ell}_{1}),\cdots,\mathcal{L}_{n_{k}}(\vec{\ell}_{k})\right),\mathcal{E}_{N}\right)=\mathcal{X}_{\sigma(1)}\cdots\mathcal{X}_{\sigma(N)}, \tag{94}\]
with
\[\mathcal{X}_{a}=\begin{cases}\mathcal{L}_{n_{a}}(\vec{\ell}_{a})&a\leq k,\\ \mathcal{E}_{N}&a\geq k+1.\end{cases} \tag{95}\]
Then in the system with one ancilla qubit, it holds that
\[f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{L}}_{n_{1}}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{L}}_{n_{k}}(\vec{\ell}_{k})\right),\tilde{\mathcal{E}}_{N}\right)=\tilde{\mathcal{X}}_{\sigma(1)}\cdots\tilde{\mathcal{X}}_{\sigma(N)}, \tag{96}\]
with
\[\tilde{\mathcal{X}}_{a}=\begin{cases}\tilde{\mathcal{L}}_{n_{a}}(\vec{\ell}_{a})&a\leq k,\\ \tilde{\mathcal{E}}_{N}&a\geq k+1.\end{cases} \tag{97}\]
Since it holds that
\[\left(\begin{array}{cc}\cdot&\rho\\ \rho&\cdot\end{array}\right)\xrightarrow{\tilde{\mathcal{X}}_{a}}\left(\begin{array}{cc}\cdot&\mathcal{X}_{a}(\rho)\\ \mathcal{X}_{a}(\rho)&\cdot\end{array}\right), \tag{98}\]
the operation of \(\tilde{\mathcal{X}}_{\sigma(1)}\cdots\tilde{\mathcal{X}}_{\sigma(N)}\) applied to the input state \(\tilde{\rho}_{\text{init}}=|+\rangle\langle+|\otimes\rho_{\text{init}}\) reproduces the operation of \(\mathcal{X}_{\sigma(1)}\cdots\mathcal{X}_{\sigma(N)}\) as
\[\tilde{\rho}_{\text{init}}=\left(\begin{array}{cc}\rho_{\text{init}}/2&\rho_{\text{init}}/2\\ \rho_{\text{init}}/2&\rho_{\text{init}}/2\end{array}\right)\xrightarrow{\tilde{\mathcal{X}}_{\sigma(1)}\cdots\tilde{\mathcal{X}}_{\sigma(N)}}\left(\begin{array}{cc}\cdot&\mathcal{X}_{\sigma(1)}\cdots\mathcal{X}_{\sigma(N)}(\rho_{\text{init}})/2\\ \mathcal{X}_{\sigma(1)}\cdots\mathcal{X}_{\sigma(N)}(\rho_{\text{init}})/2&\cdot\end{array}\right). \tag{99}\]
Therefore, the expectation value of the observable \(\tilde{Q}=X\otimes Q\) in the final state of (99) gives \(\delta q_{\sigma}^{(k)}\left(\vec{n},\left(\vec{\ell}_{1},\cdots,\vec{\ell}_{k}\right)\right)\), i.e.,
\[\delta q_{\sigma}^{(k)}\left(\vec{n},\left(\vec{\ell}_{1},\cdots,\vec{\ell}_{k}\right)\right)=\operatorname{Tr}\left(\tilde{Q}f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{L}}_{n_{1}}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{L}}_{n_{k}}(\vec{\ell}_{k})\right),\tilde{\mathcal{E}}_{N}\right)\left(\tilde{\rho}_{\text{init}}\right)\right). \tag{100}\]
Next, we discuss how to evaluate the right-hand side of (100). By substituting (52) into the definition (92), we obtain
\[\tilde{\mathcal{L}}_{n}(\vec{\ell})=\sum_{\vec{b}}\tilde{\mathcal{S}}_{n}^{(\vec{b})}(\vec{\ell}), \tag{101}\]
with \(\vec{b}\in\{0,1\}^{\otimes n}\), where
\[\tilde{\mathcal{S}}_{n}^{(\vec{b})}(\vec{\ell})=\tilde{\mathcal{S}}_{\ell_{n}}^{(b_{n})}\cdots\tilde{\mathcal{S}}_{\ell_{1}}^{(b_{1})}. \tag{102}\]
Then with \(\vec{b}_{j}\in\{0,1\}^{\otimes n_{j}}\ (j=1,\cdots,k)\), we obtain
\[f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{L}}_{n_{1}}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{L}}_{n_{k}}(\vec{\ell}_{k})\right),\tilde{\mathcal{E}}_{N}\right) \tag{103}\]
\[=\sum_{\vec{b}_{1}\cdots\vec{b}_{k}}f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{S}}_{n_{1}}^{(\vec{b}_{1})}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{S}}_{n_{k}}^{(\vec{b}_{k})}(\vec{\ell}_{k})\right),\tilde{\mathcal{E}}_{N}\right) \tag{104}\]
\[=\sum_{\vec{b}_{1}\cdots\vec{b}_{k}}\sum_{\vec{\ell}}P_{\textsc{qdrift}}(\vec{\ell},N-k)f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{S}}_{n_{1}}^{(\vec{b}_{1})}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{S}}_{n_{k}}^{(\vec{b}_{k})}(\vec{\ell}_{k})\right),\left(\tilde{\mathcal{T}}_{\ell_{1}},\cdots,\tilde{\mathcal{T}}_{\ell_{N-k}}\right)\right), \tag{105}\]
where we use (101) in the first equality, and in the second equality we use
\[\tilde{\mathcal{E}}_{N}:=\sum_{\ell=1}^{L}p_{\ell}\tilde{\mathcal{T}}_{\ell} \tag{106}\]
and
\[P_{\textsc{qdrift}}(\vec{\ell},N-k)=p_{\ell_{1}}\cdots p_{\ell_{N-k}}, \tag{107}\]
with \(\vec{\ell}=\{\ell_{1}\cdots\ell_{N-k}\}\).
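The ancilla readout behind (99)-(100) can be checked classically on small matrices. The numpy sketch below (ours, not from the paper) verifies that for a block matrix whose off-diagonal blocks both equal \(M/2\), measuring \(\tilde{Q}=X\otimes Q\) returns \(\operatorname{Tr}(QM)\); the diagonal blocks are set to zero here, but they would not contribute in any case since \(X\otimes Q\) is block off-diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# a random Hermitian operator standing in for X_{sigma(1)}...X_{sigma(N)}(rho_init)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M = (M + M.conj().T) / 2

# ancilla-plus-system state with off-diagonal blocks M/2, mimicking Eq. (99)
rho_tilde = np.block([[np.zeros((d, d)), M / 2],
                      [M / 2, np.zeros((d, d))]])

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Q = np.diag(rng.normal(size=d))          # some Hermitian observable on the system

lhs = np.trace(np.kron(X, Q) @ rho_tilde)  # Tr( (X tensor Q) rho_tilde )
rhs = np.trace(Q @ M)                      # Tr( Q M ): the target quantity
print(np.allclose(lhs, rhs))               # True
```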
By substituting (105) into (100), we obtain
\[\delta q_{\sigma}^{(k)}\left(\vec{n},\left(\vec{\ell}_{1},\cdots,\vec{\ell}_{k}\right)\right) \tag{108}\]
\[=\sum_{\vec{b}_{1}\cdots\vec{b}_{k}}\sum_{\vec{\ell}}P_{\textsc{qdrift}}(\vec{\ell},N-k)\mathrm{Tr}\left(\tilde{Q}f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{S}}_{n_{1}}^{(\vec{b}_{1})}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{S}}_{n_{k}}^{(\vec{b}_{k})}(\vec{\ell}_{k})\right),\left(\tilde{\mathcal{T}}_{\ell_{1}},\cdots,\tilde{\mathcal{T}}_{\ell_{N-k}}\right)\right)\left(\tilde{\rho}_{\mathrm{init}}\right)\right).\]
Finally, combining (80), (81), and (90) with (108),
\[\delta q^{(k)}(\vec{n})=\binom{N}{k}\frac{\tau^{\sum_{j=1}^{k}n_{j}}}{n_{1}!n_{2}!\cdots n_{k}!}\frac{1}{\binom{N}{k}}\sum_{\sigma\in S^{\mathrm{wh}}_{N,k}}\delta q_{\sigma}^{(k)}(\vec{n}) \tag{109}\]
\[=c^{(k)}(\vec{n})\sum_{\vec{b}_{1}\cdots\vec{b}_{k}}\sum_{\vec{s}}(-1)^{\sum_{c=1}^{k}s_{c}}\delta q^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right), \tag{110}\]
where
\[\delta q^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right):=\frac{1}{\binom{N}{k}}\sum_{\sigma\in S^{\mathrm{wh}}_{N,k}}\sum_{\vec{\ell}_{1},\cdots,\vec{\ell}_{k}}P_{s_{1}}^{(n_{1})}(\vec{\ell}_{1})\cdots P_{s_{k}}^{(n_{k})}(\vec{\ell}_{k})\sum_{\vec{\ell}}P_{\textsc{qdrift}}(\vec{\ell},N-k)\,\mathrm{Tr}\left(\tilde{Q}f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{S}}_{n_{1}}^{(\vec{b}_{1})}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{S}}_{n_{k}}^{(\vec{b}_{k})}(\vec{\ell}_{k})\right),\left(\tilde{\mathcal{T}}_{\ell_{1}},\cdots,\tilde{\mathcal{T}}_{\ell_{N-k}}\right)\right)\left(\tilde{\rho}_{\mathrm{init}}\right)\right), \tag{111}\]
and we define the coefficient as
\[c^{(k)}(\vec{n}):=\binom{N}{k}\frac{\tau^{\sum_{j=1}^{k}n_{j}}}{n_{1}!n_{2}!\cdots n_{k}!}. \tag{112}\]
We can get an unbiased estimator of (111) using Monte Carlo sampling. More specifically, we repeat the following procedure and compute the average of the output; the \(p\)-th iteration works as follows:

1. Sample \(\sigma\) uniformly from all elements of \(S^{\mathrm{wh}}_{N,k}\).
2. Sample \(\vec{\ell}_{1},\cdots,\vec{\ell}_{k}\) according to \(P_{s_{1}}^{(n_{1})}(\vec{\ell}_{1})\cdots P_{s_{k}}^{(n_{k})}(\vec{\ell}_{k})\). Sample \(\vec{\ell}\) according to \(P_{\textsc{qdrift}}(\vec{\ell},N-k)\).
3. Evaluate
\[\mathrm{Tr}\left(\tilde{Q}f_{\sigma,k,N-k}\left(\left(\tilde{\mathcal{S}}_{n_{1}}^{(\vec{b}_{1})}(\vec{\ell}_{1}),\cdots,\tilde{\mathcal{S}}_{n_{k}}^{(\vec{b}_{k})}(\vec{\ell}_{k})\right),\left(\tilde{\mathcal{T}}_{\ell_{1}},\cdots,\tilde{\mathcal{T}}_{\ell_{N-k}}\right)\right)\left(\tilde{\rho}_{\mathrm{init}}\right)\right) \tag{113}\]
with \(N_{\mathrm{shot}}\) measurements. We set the result to \(\delta\hat{q}_{\sigma_{p}}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\).

As in the second-order case, we repeat the above process \(N_{\mathrm{sample}}\) times and compute the average of \(\delta\hat{q}_{\sigma_{p}}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\) as \(\delta\hat{q}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\), which gives the estimate of \(\delta q^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\). Substituting these estimates into (110), we obtain the estimate of \(\delta q^{(k)}(\vec{n})\) as
\[\delta\hat{q}^{(k)}(\vec{n})=c^{(k)}(\vec{n})\sum_{\vec{b}_{1}\cdots\vec{b}_{k}}\sum_{\vec{s}}(-1)^{\sum_{c=1}^{k}s_{c}}\delta\hat{q}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right). \tag{114}\]
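The three steps above map onto a simple classical control loop. The sketch below (ours) shows one way to organize it; `run_circuit` is a placeholder, not the paper's notation, and must return the \(N_{\mathrm{shot}}\)-averaged estimate of the trace in Eq. (113) for the sampled indices.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_Ps(s, n, p):
    """Draw an l-vector from P_0^{(n)} (s = 0, i.i.d.) or P_1^{(n)} (s = 1, repeated)."""
    if s == 0:
        return rng.choice(len(p), size=n, p=p)
    ell = rng.choice(len(p), p=p)
    return np.full(n, ell)

def estimate_dq(k, n_vec, s_vec, N, p, N_sample, run_circuit):
    """Monte Carlo estimate of delta q^{(k)}((b_1..b_k), s) for fixed (b, s)."""
    acc = 0.0
    for _ in range(N_sample):
        # step 1: a uniformly random placement of the k special slots among N positions
        sigma = np.sort(rng.choice(N, size=k, replace=False))
        # step 2: sample the l-vectors for the swift blocks and the N-k time operators
        ells = [sample_Ps(s, n, p) for s, n in zip(s_vec, n_vec)]
        ell_rest = rng.choice(len(p), size=N - k, p=p)
        # step 3: run the swift circuit and accumulate the shot-averaged estimate
        acc += run_circuit(sigma, ells, ell_rest)
    return acc / N_sample

# usage with a dummy "circuit" returning random +/-1 measurement outcomes
dummy = lambda sigma, ells, ell_rest: rng.choice([-1.0, 1.0])
print(estimate_dq(k=2, n_vec=(2, 2), s_vec=(0, 1), N=50,
                  p=np.array([0.5, 0.3, 0.2]), N_sample=100, run_circuit=dummy))
```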
We write the above algorithm to estimate \(\delta q^{(k)}(\vec{n})\) as **Evalcorrection** and summarize it in **Algorithm 1**. By using **Evalcorrection**, we can construct the algorithm to compute \(q^{(K)}\) according to (75). We write the algorithm as **qSWIFT** and summarize it in **Algorithm 2**. Since we can tune \(N_{\text{sample}}\) and \(N_{\text{shot}}\) for each \(\delta\hat{q}^{(k)}(\vec{n})\), we parameterize them as \(N_{\text{sample}}(\vec{n})\) and \(N_{\text{shot}}(\vec{n})\). We also write the number of circuits sampled and the number of runs of each circuit for calculating \(q^{(1)}\) as \(N_{\text{sample}}^{0}\) and \(N_{\text{shot}}^{0}\).

```
Input: k, n, N, N_sample, N_shot
 1: for s in {0,1}^{x k} do
 2:   for (b_1, ..., b_k) in ({0,1}^{x n_1}, ..., {0,1}^{x n_k}) do
 3:     for p = 1 to N_sample do
 4:       Sample sigma uniformly from S^{wh}_{N,k}.
 5:       Sample l_1, ..., l_k according to P^{(n_1)}_{s_1}(l_1) ... P^{(n_k)}_{s_k}(l_k).
          Sample l according to P_qDRIFT(l, N-k).
 6:       With N_shot measurements, estimate
          Tr( Q~ f_{sigma,k,N-k}( (S~^{(b_1)}_{n_1}(l_1), ..., S~^{(b_k)}_{n_k}(l_k)),
              (T~_{l_1}, ..., T~_{l_{N-k}}) )( rho~_init ) )
          by the quantum circuit and set it to dq^{(k)}_{sigma_p}((b_1, ..., b_k), s).
 7:     end for
 8:     Set dq^{(k)}((b_1, ..., b_k), s) = (1/N_sample) * sum_p dq^{(k)}_{sigma_p}((b_1, ..., b_k), s).
 9:   end for
10: end for
11: Set dq^{(k)}(n) = c^{(k)}(n) * sum_{b_1..b_k} sum_s (-1)^{sum_c s_c} dq^{(k)}((b_1, ..., b_k), s).
Output: dq^{(k)}(n)
```
**Algorithm 1** **Evalcorrection**

It should be noted that there is a statistical error in \(\hat{q}^{(K)}\) due to the use of Monte Carlo sampling and the limited number of measurements. To reduce the statistical error, we need to increase \(N_{\text{sample}}(\vec{n})\), \(N_{\text{shot}}(\vec{n})\), \(N_{\text{sample}}^{0}\), and \(N_{\text{shot}}^{0}\). We show how the total number of quantum circuit runs scales with the order in Appendix B, even though our focus in this paper is the reduction of the systematic error rather than the statistical error. In that discussion, we show that the dominant source of the quantum circuit runs comes from the estimation of \(\delta\hat{q}^{(k)}(\vec{n})\) with a limited set of \(\vec{n}\), and that the cost for \(\delta\hat{q}^{(k)}(\vec{n})\) with other \(\vec{n}\) is negligible when \(N\) is large; consequently, the total number of quantum circuit runs scales less than quadratically with \(K\).

## V Numerical experiments

In this section, we present two numerical simulations of qSWIFT. In Section V.1, we show the asymptotic behavior of qSWIFT and compare it with the Trotter-Suzuki decomposition and qDRIFT.
In Section V.2, we perform a numerical experiment with the hydrogen molecule Hamiltonian and compare its performance with those of the previous algorithms.

### Asymptotic behaviour

We compute the required number of gates \(N\) to achieve a given systematic error for each evolution time and each algorithm: qSWIFT, the Trotter-Suzuki decomposition, and qDRIFT. To clarify the difference from the original qDRIFT, we use the three molecules used in the numerical experiment of the original paper [5]: propane, ethane, and carbon dioxide. For the qSWIFT algorithm, we utilize the third-order (\(K=3\)) qSWIFT. We also use the sixth-order qSWIFT (\(K=6\)) as a reference. For the Trotter-Suzuki decomposition, we use both the deterministic and the randomized methods of the first, second, and fourth order. For calculating the systematic error of qSWIFT, we use the bound derived in Appendix A.2. For the error of qDRIFT and the Trotter-Suzuki decompositions, we utilize the same upper-bound formulae as in the literature [5] (see Appendix B and Appendix C of [5]).

Fig. 3 shows the asymptotic behavior of \(N\) for each time to achieve the systematic error \(\varepsilon=0.001\); it includes three subfigures corresponding to each molecule: propane with the STO-3G basis (left), ethane with the 6-31G basis (center), and carbon dioxide with the 6-31G basis (right). To generate each molecular Hamiltonian, we use OpenFermion [16]. In each subfigure, we show the third-order qSWIFT by the pink line (qSWIFT-3), the sixth-order qSWIFT by the pink dotted line (qSWIFT-6), and qDRIFT by the black line (qDRIFT). For the Trotter-Suzuki decomposition, the best of the deterministic methods among the first, second, and fourth order is shown by the gray dotted line (TS (Best)), and the best of the randomized methods is shown by the green dotted line (RTS (Best)).

Figure 3: The asymptotic behaviors of \(N\) for each time for each molecule to achieve the systematic error \(\varepsilon=0.001\). Three subfigures correspond to each molecule: propane with the STO-3G basis (left), ethane with the 6-31G basis (center), and carbon dioxide with the 6-31G basis (right). We show the third-order qSWIFT by the pink line (qSWIFT-3), the sixth-order qSWIFT by the pink dotted line (qSWIFT-6), and qDRIFT by the black line (qDRIFT). For the Trotter-Suzuki decomposition, the best of the deterministic methods among the first, second, and fourth order is shown by the gray dotted line (TS (Best)), and the best of the randomized methods is shown by the green dotted line (RTS (Best)).

Figure 4: The asymptotic behavior to achieve \(\varepsilon=10^{-6}\). Other settings are the same as in Fig. 3.

We see that the qSWIFT algorithms outperform qDRIFT for all \(t\), in the sense that the required number of gates \(N\) is more than 10 times smaller for the qSWIFT algorithms than for qDRIFT. While the original qDRIFT reduces the number of gates relative to the Trotter-Suzuki decompositions in the region \(t\lesssim 10^{8}\), the qSWIFT algorithms realize a further reduction of the gates. Note that the improvement from the third-order qSWIFT to the sixth-order qSWIFT is not as large as that from qDRIFT to the third-order qSWIFT. Since there is a trade-off between the order and the number of quantum circuit runs, as we discuss in Appendix B, it may be better to utilize the third-order qSWIFT rather than the sixth-order qSWIFT, though this depends on the features of the quantum devices.
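To make the trends in Figs. 3 and 4 concrete, the following sketch (ours) inverts the error bounds to obtain required gate counts: for qDRIFT we use the familiar \(N\approx 2(\lambda t)^{2}/\varepsilon\) scaling from [5], and for qSWIFT we solve \((3/2)\big((2e\lambda t)^{2}/N\big)^{K}\leq\varepsilon\) for \(N\), with constants taken from our Appendix A bound; the absolute numbers are therefore indicative only.

```python
import numpy as np

def N_qdrift(lam_t, eps):
    # qDRIFT gate count, N ~ 2 (lambda t)^2 / eps, as in [5]
    return 2 * lam_t**2 / eps

def N_qswift(lam_t, eps, K):
    # invert the K-th order bound (3/2) * ((2e lam_t)^2 / N)^K <= eps for N
    return (2 * np.e * lam_t) ** 2 / (2 * eps / 3) ** (1 / K)

lam_t = 1e3
for eps in (1e-3, 1e-6):
    print(eps, N_qdrift(lam_t, eps), N_qswift(lam_t, eps, 3), N_qswift(lam_t, eps, 6))
# for eps = 1e-6 the third-order qSWIFT count is roughly three orders of magnitude
# below the qDRIFT count, consistent with the discussion of Fig. 4 below
```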
Fig. 4 is the same as Fig. 3, except that the required systematic error is \(\varepsilon=10^{-6}\). In this case, where more precise time evolution is necessary, the merit of using qSWIFT is much clearer. In qSWIFT-3 (qSWIFT-6), the required number of gates for each time is almost 1,000 (10,000) times smaller than that of qDRIFT. The region where qDRIFT has an advantage over the Trotter-Suzuki decompositions is very limited (\(t\lesssim 10^{5}\sim 10^{6}\)) due to the bad scaling of qDRIFT in terms of \(\varepsilon\). In contrast, the region where qSWIFT has the advantage over the Trotter-Suzuki decompositions does not change much from the case of \(\varepsilon=0.001\) (\(t\lesssim 10^{9}\sim 10^{10}\)). This result shows the merit of our algorithm: in the case of qDRIFT, we need to increase the number of gates to reduce the systematic error, but in the case of our qSWIFT, the error can be reduced just by increasing the order parameter of the algorithm.

### Simulation of the hydrogen molecule

We estimate \(\mathrm{Tr}(Q\mathcal{U}(\rho_{\mathrm{init}}))\) by using the qSWIFT algorithm described in **Algorithm 2** with an observable \(Q\), time evolution \(\mathcal{U}\), and an input state \(\rho_{\mathrm{init}}\). For \(\mathcal{U}\), we implement the time evolution with the hydrogen molecule Hamiltonian with the 6-31G basis and \(t=1\). Again we use OpenFermion [16] to generate the molecular Hamiltonian. We utilize the Bravyi-Kitaev transformation [17] for transforming the fermionic operators to Pauli operators. The number of terms in the Hamiltonian is \(L=184\). The generated Hamiltonian acts on eight qubits, and therefore, we use a system with nine qubits (including one ancilla qubit). As for the observable \(Q\), we choose \(Q=ZIIIIIII\), and as for the input state, we choose \(\rho_{\mathrm{init}}=|+\rangle^{\otimes 8}\). For comparison, we also estimate \(\mathrm{Tr}(Q\mathcal{U}(\rho_{\mathrm{init}}))\) by using qDRIFT and the Trotter-Suzuki decomposition. As the input parameters of qSWIFT, we set \(N^{0}_{\rm shot}=N_{\rm shot}(\vec{n})=100\), \(N^{0}_{\rm sample}=400{,}000\), and \(N_{\rm sample}(\vec{n})=C(\vec{n})\times N^{0}_{\rm sample}\). For qDRIFT and the randomized Trotter-Suzuki decomposition, we sample \(N^{0}_{\rm sample}\) quantum circuits and perform \(N^{0}_{\rm shot}\) measurements for each circuit. For the deterministic Trotter-Suzuki decomposition, we perform \(N^{0}_{\rm shot}\times N^{0}_{\rm sample}\) measurements.

Figure 5: The estimation error of \(\mathrm{Tr}(Q\mathcal{U}(\rho_{\rm init}))\) for each number of gates \(N\) for each method with \(Q=ZIIIIIII\) and \(\rho_{\rm init}=|+\rangle^{\otimes 8}\). The time evolution is performed by the hydrogen molecule Hamiltonian with the 6-31G basis transformed by the Bravyi-Kitaev transformation. The evolution time is set to \(t=1\). In plotting each point, we perform six trials and show the mean value and the standard deviation of the mean. For qSWIFT, we show the result of the second order (qSWIFT-2) with the purple line and the third order (qSWIFT-3) with the pink line. The result of the qDRIFT algorithm (qDRIFT) is plotted with the black line. For the deterministic Trotter-Suzuki decomposition, the first-order result (TS-1st) is plotted with the dotted gray line, and the second-order result (TS-2nd) is plotted with the dotted green line. For the randomized Trotter-Suzuki decomposition, the first-order result (RTS-1st) is plotted with the dotted yellow line, and the second-order result (RTS-2nd) is plotted with the dotted blue line.
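The quantity estimated in this experiment has an exact classical reference value for small systems. The toy numpy sketch below (ours; a 3-qubit stand-in with arbitrary Pauli terms, not the actual 184-term hydrogen Hamiltonian) computes \(\mathrm{Tr}(Q\,\mathcal{U}(\rho_{\mathrm{init}}))\) exactly via matrix exponentiation, which is how the sampled estimates can be validated.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3                                       # toy size (the paper uses 8 system qubits)
paulis = [kron_all([Z, I2, I2]), kron_all([I2, X, X]), kron_all([Z, Z, I2])]
h = rng.normal(size=len(paulis))            # toy coefficients h_l (assumed)
H = sum(c * P for c, P in zip(h, paulis))

t = 1.0
plus = np.full((2**n, 1), 1 / np.sqrt(2**n))    # |+>^{tensor n}
rho = plus @ plus.T.conj()
U = expm(-1j * H * t)
Q = kron_all([Z, I2, I2])                   # Q = ZII, analogous to Q = ZIIIIIII
print(np.trace(Q @ U @ rho @ U.conj().T).real)  # exact Tr(Q U(rho_init))
```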
For the quantum circuit simulation, we use Qulacs [18]. Fig. 5 shows the estimation error of \(\mathrm{Tr}(Q\mathcal{U}(\rho_{\rm init}))\) for each number of gates \(N\) for each method. In plotting each point, we perform six trials and show the mean value and the standard deviation of the mean. For qSWIFT, we show the result of the second order (qSWIFT-2) with the purple line and the third order (qSWIFT-3) with the pink line. The result of qDRIFT (qDRIFT) is plotted with the black line. For the Trotter-Suzuki decomposition, we show the first- and second-order results. The minimum \(N\) for the first- and second-order Trotter-Suzuki decompositions is 184 and 368, respectively (1840 for the fourth order). For the deterministic Trotter-Suzuki decomposition, the first-order result (TS-1st) is plotted with the dotted gray line, and the second-order result (TS-2nd) is plotted with the dotted green line. For the randomized Trotter-Suzuki decomposition, the first-order result (RTS-1st) is plotted with the dotted yellow line, and the second-order result (RTS-2nd) is plotted with the dotted blue line. We see that the qSWIFT algorithms outperform the other methods. In particular, the required \(N\) for the third-order qSWIFT to achieve \(\varepsilon\sim 0.001\) is almost 10 times smaller than that for qDRIFT, which is consistent with the asymptotic behavior shown in Fig. 3. Consequently, even in the region where the Trotter-Suzuki decomposition works better than qDRIFT, the qSWIFT algorithms outperform the Trotter-Suzuki decompositions.

## VI Conclusion and discussion

Hamiltonian simulation is a crucial subroutine of various quantum algorithms. Approaches based on product formulae are practically favored due to their simplicity and ancilla-free nature. There are two representative product-formula-based methods: the Trotter-Suzuki decompositions and qDRIFT [5]. The Trotter-Suzuki decompositions have the issue that the number of gates depends at least linearly on the number of terms in the Hamiltonian. In contrast, qDRIFT avoids the dependency on the number of terms but has the issue that the number of gates depends on the systematic error \(\varepsilon\) as \(O(1/\varepsilon)\). In this paper, we propose qSWIFT, a high-order randomized algorithm combining the advantages of the Trotter-Suzuki decompositions and qDRIFT, in the sense that its gate count is independent of the number of terms and is asymptotically optimal with respect to the precision. We construct the qSWIFT channel and bound its systematic error in the diamond norm, proving that the error decreases exponentially with the order parameter. We then construct an algorithm that applies the qSWIFT channel to estimate given physical quantities. The algorithm requires a system as simple as qDRIFT: just one ancilla qubit, and only the time-evolution operators and the swift operators, constructible with the gates in Fig. 2b, are necessary to build the quantum circuits. Our numerical demonstration shows that qSWIFT outperforms qDRIFT with respect to the number of gates for a required precision. Particularly when high precision is required, there is a significant advantage in using qSWIFT; the number of gates in the third-order (sixth-order) qSWIFT is 1000 (10000) times smaller than in qDRIFT to achieve the systematic error of \(\varepsilon=10^{-6}\).
As a future direction, it is beneficial to perform case studies to investigate the performance of qSWIFT in specific problems. In particular, the literature [19] points out that qDRIFT does not perform well in phase estimation problems, contrary to original expectations [5], due to the relatively large systematic error for a given number of gates. In contrast, since qSWIFT can successfully reduce the systematic error by increasing the order parameter, we expect that the performance in phase estimation is significantly improved with qSWIFT. Also, as we note in Section I, there are algorithms using many ancilla qubits [6; 7; 8; 9; 10; 11] that achieve a better asymptotic gate scaling with respect to \(\lambda\) and \(t\), though their constant prefactors are relatively large compared to qDRIFT and qSWIFT. Comparing qSWIFT with those algorithms and discussing the advantages of each algorithm in specific problems is a promising direction for future work.

## Acknowledgement

We thank Nathan Wiebe for helpful discussions. K.N. acknowledges the support of Grant-in-Aid for JSPS Research Fellow 22J01501. A.A.-G. acknowledges support from the Canada 150 Research Chairs program and CIFAR. A.A.-G. acknowledges the generous support of Anders G. Froseth.

## Appendix A Bound for the systematic errors in qSWIFT channels

### Error bound for the second-order qSWIFT channel

In this subsection, we prove the error bound given in Lemma III.3 for our second-order qSWIFT channel. To this end, we utilize the triangle and submultiplicative properties of the diamond norm. Namely, we utilize
\[||\mathcal{A}+\mathcal{B}||_{\diamond}\leq||\mathcal{A}||_{\diamond}+||\mathcal{B}||_{\diamond}, \tag{10}\]
\[||\mathcal{A}\mathcal{B}||_{\diamond}\leq||\mathcal{A}||_{\diamond}||\mathcal{B}||_{\diamond}, \tag{11}\]
for given channels \(\mathcal{A}\) and \(\mathcal{B}\). The diamond distance between the exact time evolution and the qSWIFT channel can be evaluated as
\[\begin{split} d_{\diamond}\left(\mathcal{U},\mathcal{E}^{(2)}\right)&=\frac{1}{2}||M_{1,N-1}\left(\Delta_{3},\mathcal{E}_{N}\right)+\sum_{k=2}^{N}M_{k,N-k}\left(\Delta_{2},\mathcal{E}_{N}\right)||_{\diamond}\\ &\leq\frac{1}{2}||M_{1,N-1}\left(\Delta_{3},\mathcal{E}_{N}\right)||_{\diamond}+\frac{1}{2}\sum_{k=2}^{N}||M_{k,N-k}\left(\Delta_{2},\mathcal{E}_{N}\right)||_{\diamond}\\ &\leq\frac{1}{2}\sum_{n=3}^{\infty}\frac{\tau^{n}}{n!}||M_{1,N-1}(\mathcal{L}^{(n)},\mathcal{E}_{N})||_{\diamond}+\frac{1}{2}\sum_{k=2}^{N}\sum_{n_{1},\cdots,n_{k}=2}^{\infty}\frac{\tau^{\sum_{j=1}^{k}n_{j}}}{n_{1}!\cdots n_{k}!}||M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right)||_{\diamond},\end{split} \tag{12}\]
where we use (10) and (11) to show the inequality. By the definition of the mixture function \(M_{k,N-k}\) in Eq. (19)
and using the triangle and submultiplicative properties of the diamond norm, we obtain
\[\begin{split}||M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right)||_{\diamond}&\leq\sum_{\sigma\in S_{N,k}^{\mathrm{wh}}}||f_{\sigma,k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right)||_{\diamond}\\ &\leq\binom{N}{k}\prod_{j=1}^{k}||\mathcal{L}^{(n_{j})}||_{\diamond}||\mathcal{E}_{N}||_{\diamond}^{N-k}\\ &\leq\binom{N}{k}2^{k+\sum_{j=1}^{k}n_{j}}.\end{split} \tag{13}\]
To show the last inequality we use
\[||\mathcal{L}^{(n)}||_{\diamond}\leq||\mathcal{L}||_{\diamond}^{n}+\sum_{\ell=1}^{L}p_{\ell}||\mathcal{L}_{\ell}||_{\diamond}^{n}\leq 2^{n+1}, \tag{14}\]
which holds since \(||\mathcal{L}||_{\diamond}\leq 2\) and \(||\mathcal{L}_{\ell}||_{\diamond}\leq 2\). By using (13),
\[\begin{split}d_{\diamond}\left(\mathcal{U},\mathcal{E}^{(2)}\right)&\leq\frac{1}{2}\sum_{n=3}^{\infty}\frac{\tau^{n}}{n!}\binom{N}{1}2^{n+1}+\frac{1}{2}\sum_{k=2}^{N}\sum_{n_{1},\cdots,n_{k}=2}^{\infty}\frac{\tau^{\sum_{j=1}^{k}n_{j}}}{n_{1}!\cdots n_{k}!}\binom{N}{k}2^{k+\sum_{j=1}^{k}n_{j}}\\ &=\frac{1}{2}\sum_{n=3}^{\infty}\frac{N\tau^{n}}{n!}2^{n+1}+\frac{1}{2}\sum_{k=2}^{N}\sum_{\xi=4}^{\infty}\sum_{n_{1},\cdots,n_{k}=2}^{\xi}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]\frac{\tau^{\xi}}{n_{1}!\cdots n_{k}!}\binom{N}{k}2^{k+\xi},\end{split} \tag{15}\]
where in the second line, we use
\[\sum_{\xi=4}^{\infty}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]=1, \tag{16}\]
which holds for a fixed set of integers \(\{n_{j}\}_{j=1}^{k}\) with \(n_{j}\geq 2\) and \(k\geq 2\). By using \(1/n!\leq 1/2\) in the summand of the first term and \(1/n_{j}!\leq 1/2\) in the summand of the second term, we obtain
\[\begin{split}d_{\circ}\left(\mathcal{U},\mathcal{E}^{(2)}\right)&\leq\frac{1}{2}\sum_{n=3}^{\infty}(2\tau)^{n}N+\frac{1}{2}\sum_{\xi=4}^{\infty}\sum_{k=2}^{N}(2\tau)^{\xi}\binom{N}{k}\sum_{n_{1},\cdots,n_{k}=2}^{\xi}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]\\ &=\frac{1}{2}\sum_{n=3}^{\infty}(2\tau)^{n}N+\frac{1}{2}\sum_{\xi=4}^{\infty}\sum_{k=2}^{\lfloor\xi/2\rfloor}(2\tau)^{\xi}\binom{N}{k}\sum_{n_{1},\cdots,n_{k}=2}^{\xi}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right],\end{split} \tag{100}\]
with \(\lfloor x\rfloor\) as the largest integer that does not exceed a real value \(x\), where we use the fact that the summand of the second term vanishes if \(k>\lfloor\xi/2\rfloor\). Using
\[\sum_{n_{1},\cdots,n_{k}=2}^{\xi}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]\leq\xi^{k}, \tag{101}\]
we obtain
\[\begin{split}d_{\circ}\left(\mathcal{U},\mathcal{E}^{(2)}\right)&\leq\frac{1}{2}\sum_{n=3}^{\infty}(2\tau)^{n}N+\frac{1}{2}\sum_{\xi=4}^{\infty}\sum_{k=2}^{\lfloor\xi/2\rfloor}(2\tau)^{\xi}\binom{N}{k}\xi^{k}\\ &\leq\frac{1}{2}\sum_{n=3}^{\infty}(2\tau)^{n}N+\frac{1}{2}\sum_{\xi=4}^{\infty}(2\tau)^{\xi}N^{\lfloor\xi/2\rfloor}\sum_{k=2}^{\lfloor\xi/2\rfloor}\frac{\xi^{k}}{k!}\\ &\leq\frac{1}{2}\sum_{n=3}^{\infty}(2\tau)^{n}N+\frac{1}{2}\sum_{\xi=4}^{\infty}(2e\tau)^{\xi}N^{\lfloor\xi/2\rfloor}\\ &\leq\frac{1}{2}\sum_{\xi=3}^{\infty}(2e\tau)^{\xi}N^{\lfloor\xi/2\rfloor},\end{split} \tag{102}\]
where to show the third inequality, we use
\[\sum_{k=1}^{\lfloor\xi/2\rfloor}\frac{\xi^{k}}{k!}\leq\sum_{k=0}^{\infty}\frac{\xi^{k}}{k!}=e^{\xi}. \tag{103}\]
Now let us evaluate \(\frac{1}{2}\sum_{\xi=2K-1}^{\infty}(2e\tau)^{\xi}N^{\lfloor\xi/2\rfloor}\) (in the current case, \(K=2\)).
It can be evaluated depending on whether \(\xi\) is even (\(\xi=2m\) with \(m\) an integer) or odd (\(\xi=2m-1\)) as
\[\begin{split}\sum_{\xi=2K-1}^{\infty}\frac{1}{2}(2e\tau)^{\xi}N^{\lfloor\xi/2\rfloor}&\leq\frac{1}{2}\sum_{m=K}^{\infty}(2e\tau)^{2m}N^{m}+\frac{1}{2}\sum_{m=K}^{\infty}(2e\tau)^{2m-1}N^{m-1}\\ &=\frac{1}{2}\left(1+\frac{1}{2eN\tau}\right)\sum_{m=K}^{\infty}(2e\tau)^{2m}N^{m}\\ &=\eta(\lambda t,N)\left(\frac{(2e\lambda t)^{2}}{N}\right)^{K},\end{split} \tag{104}\]
as far as
\[\frac{(2e\lambda t)^{2}}{N}<1, \tag{105}\]
where
\[\eta(x,N):=\frac{1}{2}\left(1+\frac{1}{2ex}\right)\frac{1}{1-(2ex)^{2}/N}. \tag{106}\]
By setting \(K=2\), we obtain the bound
\[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(2)}\right)\leq\eta(\lambda t,N)\left(\frac{(2e\lambda t)^{2}}{N}\right)^{2}. \tag{107}\]
In the reasonable parameter range \(1\leq\lambda t\leq\sqrt{N}/(2\sqrt{2}e)\) (where the latter inequality holds by choosing suitable \(N\)), it holds that \(\eta(\lambda t,N)\leq 3/2\), and therefore
\[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(2)}\right)\leq\frac{3}{2}\left(\frac{(2e\lambda t)^{2}}{N}\right)^{2}\in\mathcal{O}\left(\left(\frac{(\lambda t)^{2}}{N}\right)^{2}\right). \tag{100}\]

### Error bound for the higher-order qSWIFT channels

We now prove the error bound given in Lemma IV.2 for the \(K\)-th order qSWIFT channel. By the definition of this channel in Eq. (71), we have
\[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)=\frac{1}{2}||\mathcal{U}-\mathcal{E}^{(K)}||_{\circ} \tag{101}\]
\[=\frac{1}{2}\left|\left|\sum_{\xi=2K-1}^{\infty}\tau^{\xi}\sum_{k=1}^{N}\sum_{n_{1},\ldots,n_{k}=2}^{\xi}\frac{1}{n_{1}!n_{2}!\cdots n_{k}!}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right)\right|\right|_{\circ} \tag{102}\]
\[\leq\frac{1}{2}\sum_{\xi=2K-1}^{\infty}\tau^{\xi}\sum_{k=1}^{N}\sum_{n_{1},\ldots,n_{k}=2}^{\xi}\frac{1}{n_{1}!n_{2}!\cdots n_{k}!}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]||M_{k,N-k}\left(\left(\mathcal{L}^{(n_{1})},\ldots,\mathcal{L}^{(n_{k})}\right),\mathcal{E}_{N}\right)||_{\circ}, \tag{103}\]
where we use the triangle inequality (10). By substituting the bound (13) into (103), we obtain
\[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)\leq\frac{1}{2}\sum_{\xi=2K-1}^{\infty}\tau^{\xi}\sum_{k=1}^{N}\sum_{n_{1},\ldots,n_{k}=2}^{\xi}\frac{1}{n_{1}!n_{2}!\cdots n_{k}!}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]\binom{N}{k}2^{\xi+k} \tag{104}\]
\[\leq\frac{1}{2}\sum_{\xi=2K-1}^{\infty}(2\tau)^{\xi}\sum_{k=1}^{N}\binom{N}{k}\sum_{n_{1},\ldots,n_{k}=2}^{\xi}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right]\]
\[=\frac{1}{2}\sum_{\xi=2K-1}^{\infty}(2\tau)^{\xi}\sum_{k=1}^{\lfloor\xi/2\rfloor}\binom{N}{k}\sum_{n_{1},\ldots,n_{k}=2}^{\xi}\delta\left[\xi,\sum_{j=1}^{k}n_{j}\right].\]
We use \(1/n_{j}!\leq 1/2\) for \(n_{j}\geq 2\) in the second inequality, and to show the third equality, we use the fact that summands with \(k>\lfloor\xi/2\rfloor\) vanish. Bounding the number of tuples satisfying the Kronecker \(\delta\) constraint by \(\xi^{k}\), as in the second-order case, we obtain
\[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)\leq\frac{1}{2}\sum_{\xi=2K-1}^{\infty}(2\tau)^{\xi}\sum_{k=1}^{\lfloor\xi/2\rfloor}\binom{N}{k}\xi^{k} \tag{105}\]
\[\leq\frac{1}{2}\sum_{\xi=2K-1}^{\infty}(2\tau)^{\xi}N^{\lfloor\xi/2\rfloor}\sum_{k=1}^{\lfloor\xi/2\rfloor}\frac{\xi^{k}}{k!}\]
\[\leq\frac{1}{2}\sum_{\xi=2K-1}^{\infty}(2e\tau)^{\xi}N^{\lfloor\xi/2\rfloor},\]
where to show the last inequality, we use \(\sum_{k=1}^{\lfloor\xi/2\rfloor}\xi^{k}/k!\leq e^{\xi}\).
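Before completing the proof, the claim \(\eta(\lambda t,N)\leq 3/2\) on the range \(1\leq\lambda t\leq\sqrt{N}/(2\sqrt{2}e)\), used above and again below, is easy to check numerically. A small snippet (ours):

```python
import numpy as np

def eta(x, N):
    # eta(x, N) as defined in Eq. (106)
    return 0.5 * (1 + 1 / (2 * np.e * x)) / (1 - (2 * np.e * x) ** 2 / N)

N = 10**6
x_max = np.sqrt(N) / (2 * np.sqrt(2) * np.e)   # upper end of the stated range
for x in np.linspace(1.0, x_max, 50):
    assert eta(x, N) <= 1.5
print("eta <= 3/2 holds on the sampled range")
```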
Evaluating the remaining sum as in the second-order case (Eq. (104) of Appendix A.1), we obtain
\[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)\leq\eta(\lambda t,N)\left(\frac{(2e\lambda t)^{2}}{N}\right)^{K}. \tag{106}\]
In the reasonable parameter range \(1\leq\lambda t\leq\sqrt{N}/(2\sqrt{2}e)\), it again holds that \(\eta(\lambda t,N)\leq 3/2\), and therefore
\[d_{\circ}\left(\mathcal{U},\mathcal{E}^{(K)}\right)\leq\frac{3}{2}\left(\frac{(2e\lambda t)^{2}}{N}\right)^{K}\in\mathcal{O}\left(\left(\frac{(\lambda t)^{2}}{N}\right)^{K}\right). \tag{107}\]
Note that when we draw Fig. 3 and Fig. 4, we use the formula (106).

## Appendix B Statistical error

Due to the sampling error and the shot noise, there is a statistical error in \(\delta\hat{q}^{(k)}(\vec{n})\). Let us estimate the statistical error in the following. For simplicity, we set \(N_{\text{shot}}(\vec{n})=1\). Let \(\Delta_{q}\left(\left(\vec{b}_{1},\vec{b}_{2},\cdots,\vec{b}_{k}\right),\vec{s}\right)\) be the statistical error of \(\delta\hat{q}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\). Then, from the central limit theorem, it behaves as
\[\Delta_{q}\left(\left(\vec{b}_{1},\vec{b}_{2},\cdots,\vec{b}_{k}\right),\vec{s}\right)\sim\sqrt{\frac{\text{Var}\left(\delta\hat{q}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\right)}{N_{\text{sample}}(\vec{n})}}, \tag{10}\]
where \(\text{Var}(A)\) denotes the variance of the variable \(A\). In the rest of the section, we use '\(\sim\)' with the same meaning. Since
\[-1\leq\delta\hat{q}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\leq 1, \tag{11}\]
from Popoviciu's inequality on variances, it holds that \(\text{Var}\left(\delta\hat{q}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\right)\leq 1\), and
\[\Delta_{q}\left(\left(\vec{b}_{1},\vec{b}_{2},\cdots,\vec{b}_{k}\right),\vec{s}\right)\lesssim\frac{1}{\sqrt{N_{\text{sample}}(\vec{n})}}. \tag{12}\]
Using the fact that each \(\delta\hat{q}^{(k)}\left(\left(\vec{b}_{1},\cdots,\vec{b}_{k}\right),\vec{s}\right)\) is independent, we can estimate the statistical error of \(\delta\hat{q}^{(k)}(\vec{n})\) as
\[\begin{split}|\delta q^{(k)}(\vec{n})-\delta\hat{q}^{(k)}(\vec{n})|&\leq c^{(k)}(\vec{n})\sqrt{\sum_{\vec{b}_{1}\cdots\vec{b}_{k}}\sum_{\vec{s}}\left[\Delta_{q}\left(\left(\vec{b}_{1},\vec{b}_{2},\cdots,\vec{b}_{k}\right),\vec{s}\right)\right]^{2}}\\ &\lesssim c^{(k)}(\vec{n})\sqrt{\frac{2^{\sum_{j}n_{j}+k}}{N_{\text{sample}}(\vec{n})}},\end{split} \tag{13}\]
where to show the second inequality, we use \(\sum_{\vec{b}_{1}\cdots\vec{b}_{k}}\sum_{\vec{s}}1=2^{\sum_{j}n_{j}+k}\). In other words, the statistical error is bounded as
\[|\delta q^{(k)}(\vec{n})-\delta\hat{q}^{(k)}(\vec{n})|\leq\varepsilon \tag{14}\]
if we set
\[N_{\text{sample}}(\vec{n})\sim\frac{[c^{(k)}(\vec{n})]^{2}\times 2^{\sum_{j}n_{j}+k}}{\varepsilon^{2}}. \tag{15}\]
Let \(N_{\text{total}}^{(k)}(\vec{n})\) be the total number of circuit runs for calculating \(\delta\hat{q}^{(k)}(\vec{n})\). Then it holds that
\[N_{\text{total}}^{(k)}(\vec{n})=2^{\sum_{j}n_{j}+k}\times N_{\text{sample}}(\vec{n})\sim\frac{[c^{(k)}(\vec{n})]^{2}\times 2^{2\sum_{j}n_{j}+2k}}{\varepsilon^{2}}. \tag{16}\]
Let us further clarify the implication of (16). Recall that in the qSWIFT algorithm, we compute \(\delta\hat{q}^{(k)}(\vec{n})\) only if
\[\sum_{j=1}^{k}n_{j}=\xi,\ \ n_{j}\geq 2\ (\forall j). \tag{17}\]
When the condition (17) is satisfied, it holds that
\[c^{(k)}(\vec{n})\leq N^{k}\left(\frac{\tau^{\xi}}{2^{k}}\right), \tag{18}\]
where \(\xi\geq 2k\) (the equality holds when \(n_{j}=2\) for all \(j\)). Consequently,
\[c^{(k)}(\vec{n})\leq\left\{\begin{array}{cc}\left(\frac{(\lambda t)^{2}}{2N}\right)^{\frac{\xi}{2}}&\text{if $n_{j}=2$ for all $j$},\\ \sqrt{\frac{2}{N}}\left(\frac{(\lambda t)^{2}}{2N}\right)^{\frac{\xi}{2}}&\text{otherwise},\end{array}\right. \tag{19}\]
meaning that \(c^{(k)}(\vec{n})\) is suppressed by the factor \(\sqrt{2/N}\) unless \(n_{j}=2\) is satisfied for all \(j\). Therefore, the dominant source of the quantum circuit runs comes from the calculation of \(\delta\hat{q}^{(k)}(\vec{n})\) with \(n_{j}=2\) for all \(j\), and the number of measurements for calculating \(\delta\hat{q}^{(k)}(\vec{n})\) with other \(\vec{n}\) asymptotically becomes negligible as \(N\) becomes large. In the asymptotic limit, the total number of quantum circuit runs \(N_{\rm total}\) for computing all terms in \(q^{(K)}\) within the error \(\varepsilon\) in the \(2K\)-th order qSWIFT can be estimated as
\[N_{\rm total}\approx N_{\rm sample}^{0}+N_{\rm total}^{(1)}\left(\{2\}\right)+N_{\rm total}^{(2)}\left(\{2,2\}\right)+\cdots+N_{\rm total}^{(K)}\left(\{2,\cdots,2\}\right) \tag{111}\]
\[\sim\frac{1}{\varepsilon^{2}}\sum_{k=0}^{K}\left(\frac{(2\lambda t)^{2}}{N}\right)^{2k}, \tag{112}\]
which includes the number of samples \(N_{\rm sample}^{0}\) to compute \(q^{(1)}\). The approximation in the first line denotes the asymptotic limit. To obtain the last expression, we use (16) and (19), and also use that if \(N_{\rm shot}^{0}=1\), the required number of samples to reduce the statistical error of \(q^{(1)}\) within \(\varepsilon\) is
\[N_{\rm sample}^{0}\sim\frac{1}{\varepsilon^{2}} \tag{113}\]
from the central limit theorem. To reduce the statistical error of \(q^{(K)}\) below \(\varepsilon_{\rm total}\), we should set \(\varepsilon=\varepsilon_{\rm total}/\sqrt{K+1}\), and therefore,
\[N_{\rm total}\sim\frac{K+1}{\varepsilon_{\rm total}^{2}}\sum_{k=0}^{K}\left(\frac{(2\lambda t)^{2}}{N}\right)^{2k}. \tag{114}\]
Thus, if we fix \(\varepsilon_{\rm total}\), \(N_{\rm total}\) scales less than quadratically with \(K\) as far as \((2\lambda t)^{2}/N<1\).
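The cost model of Eqs. (15)-(16) can be evaluated directly. The sketch below (ours) sums the dominant all-\(n_{j}{=}2\) terms as in Eq. (111) and illustrates the slow growth of \(N_{\rm total}\) with \(K\) claimed above; the parameter values are arbitrary.

```python
import math

def c_coeff(N, lam_t, n_vec):
    # c^{(k)}(n) from Eq. (112) of Section IV, with tau = lambda t / N
    tau = lam_t / N
    out = math.comb(N, len(n_vec)) * tau ** sum(n_vec)
    for n in n_vec:
        out /= math.factorial(n)
    return out

def N_total_term(N, lam_t, n_vec, eps):
    # circuit runs for one delta q^{(k)}(n), combining Eqs. (15) and (16)
    S, k = sum(n_vec), len(n_vec)
    N_sample = (c_coeff(N, lam_t, n_vec) ** 2) * 2 ** (S + k) / eps**2
    return 2 ** (S + k) * N_sample

N, lam_t, eps = 10**5, 10.0, 1e-2
for K in (2, 3, 4, 6):
    total = 1 / eps**2 + sum(N_total_term(N, lam_t, (2,) * k, eps)
                             for k in range(1, K + 1))
    print(K, total)   # dominated by the q^{(1)} cost 1/eps^2 when (2 lam_t)^2/N < 1
```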
2304.04754
Comparison of radio spectrum occupancy detection methods using federated learning with and without a central node
Dynamic spectrum access systems typically require information about the spectrum occupancy and thus the presence of other users in order to make a spectrum allocation decision for a new device. Simple methods of spectrum occupancy detection are often far from reliable, hence spectrum occupancy detection algorithms supported by machine learning or artificial intelligence are often and successfully used. To protect the privacy of user data and to reduce the amount of control data, an interesting approach is to use federated machine learning. This paper compares two approaches to system design using federated machine learning: with and without a central node.
Łukasz Kułacz
2023-03-31T05:30:19Z
http://arxiv.org/abs/2304.04754v1
# Comparison of radio spectrum occupancy detection methods using federated learning with and without a central node

###### Abstract

Dynamic spectrum access systems typically require information about the spectrum occupancy and thus the presence of other users in order to make a spectrum allocation decision for a new device. Simple methods of spectrum occupancy detection are often far from reliable, hence spectrum occupancy detection algorithms supported by machine learning or artificial intelligence are often and successfully used. To protect the privacy of user data and to reduce the amount of control data, an interesting approach is to use federated machine learning. This paper compares two approaches to system design using federated machine learning: with and without a central node.

federated machine learning, spectrum occupancy detection, wireless networks.

## 1 Introduction

One of the main challenges facing wireless telecommunication networks is the effective use of the available frequency resources. Measurements carried out at the turn of the century in various regions of the world revealed the considerable inefficiency of statically assigning frequencies to a given service or set of services. This observation was one of the reasons behind the concept of cognitive radio, introduced in the 1999 doctoral dissertation of J. Mitola III, in which access to the radio spectrum is realized dynamically and adaptively. Over the last 20 years, numerous studies have been carried out worldwide aimed at developing effective methods of determining spectrum occupancy (so-called sensing), spectrum management, and flexible...

This research was funded in whole or in part by National Science Centre (2021/41/N/ST7/01298). For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
2309.03979
Separable Self and Mixed Attention Transformers for Efficient Object Tracking
The deployment of transformers for visual object tracking has shown state-of-the-art results on several benchmarks. However, the transformer-based models are under-utilized for Siamese lightweight tracking due to the computational complexity of their attention blocks. This paper proposes an efficient self and mixed attention transformer-based architecture for lightweight tracking. The proposed backbone utilizes the separable mixed attention transformers to fuse the template and search regions during feature extraction to generate superior feature encoding. Our prediction head performs global contextual modeling of the encoded features by leveraging efficient self-attention blocks for robust target state estimation. With these contributions, the proposed lightweight tracker deploys a transformer-based backbone and head module concurrently for the first time. Our ablation study testifies to the effectiveness of the proposed combination of backbone and head modules. Simulations show that our Separable Self and Mixed Attention-based Tracker, SMAT, surpasses the performance of related lightweight trackers on GOT10k, TrackingNet, LaSOT, NfS30, UAV123, and AVisT datasets, while running at 37 fps on CPU, 158 fps on GPU, and having 3.8M parameters. For example, it significantly surpasses the closely related trackers E.T.Track and MixFormerV2-S on GOT10k-test by a margin of 7.9% and 5.8%, respectively, in the AO metric. The tracker code and model is available at https://github.com/goutamyg/SMAT
Goutam Yelluru Gopal, Maria A. Amer
2023-09-07T19:23:02Z
http://arxiv.org/abs/2309.03979v1
# Separable Self and Mixed Attention Transformers for Efficient Object Tracking

###### Abstract

The deployment of transformers for visual object tracking has shown state-of-the-art results on several benchmarks. However, the transformer-based models are underutilized for Siamese lightweight tracking due to the computational complexity of their attention blocks. This paper proposes an efficient self and mixed attention transformer-based architecture for lightweight tracking. The proposed backbone utilizes the separable mixed attention transformers to fuse the template and search regions during feature extraction to generate superior feature encoding. Our prediction head performs global contextual modeling of the encoded features by leveraging efficient self-attention blocks for robust target state estimation. With these contributions, the proposed lightweight tracker deploys a transformer-based backbone and head module concurrently for the first time. Our ablation study testifies to the effectiveness of the proposed combination of backbone and head modules. Simulations show that our Separable Self and Mixed Attention-based Tracker, SMAT, surpasses the performance of related lightweight trackers on GOT10k, TrackingNet, LaSOT, NfS30, UAV123, and AVisT datasets, while running at 37 fps on CPU, 158 fps on GPU, and having 3.8M parameters. For example, it significantly surpasses the closely related trackers E.T.Track and MixFormerV2-S on GOT10k-test by a margin of 7.9% and 5.8%, respectively, in the AO metric. The tracker code and model is available at [https://github.com/goutamyg/SMAT](https://github.com/goutamyg/SMAT)

## 1 Introduction

The Siamese Network-based (SN) architecture is prevalent in visual object tracking due to its simplicity and high speed [34, 39]. The SN architecture consists of a backbone to generate robust feature representation of the target template and search regions, a localization head module for target state estimation, and an optional feature fusion module for relation modeling [37]. In recent years, the transformer-based [11, 30, 33] tracking methods have unified feature extraction and relation modeling by deploying self and mixed attention blocks in their backbone [8, 34] to simplify the SN architecture further. Enabled by the computational power of GPUs, these transformer-based SN trackers achieve high frames-per-second (_fps_) during inference. However, the high computational complexity of these transformer-based tracking algorithms severely impacts the _fps_ on constrained hardware, _e.g._, CPUs, limiting the utility of these algorithms for applications with hardware constraints. The lightweight SN-based trackers [1, 2, 35], which are specifically designed for resource-constrained environments, adopt efficient building blocks to maintain real-time speed, i.e., \(\geq 30\)_fps_. Therefore, these trackers cannot fully leverage the modeling power of transformers in their architecture because of the computational complexity of standard transformers, especially the expensive matrix multiplication while computing attention. Mehta _et al._[23] addressed this issue by replacing the costly matrix-to-matrix multiplication with separable elementwise operations to present an efficient Mobile Vision Transformer (ViT) block for vision-related tasks. Leveraging these advances, we propose a separable self and mixed attention transformer-based lightweight architecture for real-time tracking. The architecture of the proposed tracker is shown in Figure 1.

Figure 1: Proposed _SMAT_ architecture. The separable mixed attention Vision Transformer-based backbone jointly performs feature extraction and fusion of template and search regions. The separable transformer-based head models long-range dependencies within the fused features to predict accurate bounding boxes.

We employ a cascaded arrangement of convolutional neural network (CNN) and ViT blocks in the proposed tracker backbone. Such a hybrid design [22] combines the merits of convolutions (i.e., learning the spatially-local representations) and transformers (i.e., modeling the long-range dependencies) with fewer parameters compared
We employ a cascaded arrangement of convolutional neural network (CNN) and ViT blocks in the proposed tracker backbone. Such a hybrid design [22] combines the merits of convolutions (i.e., learning the spatially-local representations) and transformers (i.e., modeling the long-range dependencies) with fewer parameters compared Figure 1: Proposed _SMAT_ architecture. The separable mixed attention Vision Transformer-based backbone jointly performs feature extraction and fusion of template and search regions. The separable transformer-based head models long-range dependencies within the fused features to predict accurate bounding boxes. to the fully transformer-based backbone architecture, such as [8, 37]. Apart from generating a robust feature representation, the proposed backbone facilitates the exchange of information between the target template and search region by computing mixed attention [8] in the ViT block without bloating the backbone latency. Our prediction head efficiently performs global contextual modeling of encoded features using separable self-attention units. Such transformer-based global modeling of encoded features improves the localization accuracy compared to the fully convolution-based methods, as shown in [1]. With these contributions, we propose a lightweight self and mixed attention transformers-based tracker, _SMAT_, running beyond real-time speed on a CPU. ## 2 Related Work The introduction of transformer-based modeling has significantly improved the performance of SN-based trackers in recent years [17]. These SN trackers [34, 38, 6, 31] exploited the global contextual modeling capabilities of the transformer layers for relation modeling [37], i.e., to fuse the features extracted from the target template and the search region. The deployment of computationally expensive transformer-based backbones for SN tracking [37, 37, 8, 13, 19, 5, 13] has improved the tracker performance further and achieved state-of-the-art results on various challenging benchmarks [12, 16, 25]. In recent years, there have been several SN-based lightweight algorithms proposed for efficient object tracking. LightTrack [35] used neural architectural search [7] to design an efficient backbone and head modules suitable for resource-constrained environments. FEAR [2] presented a compact and energy-efficient SN-based tracking method running at real-time speed on a smartphone. It uses the dual-template representation with a dynamic update scheme to model the target appearance variations. Stark-Lightning [34] proposed an efficient tracking method with a lightweight transformer-based feature fusor module. HiFT [3] used hierarchical feature transformers to achieve real-time speed on an embedded processor for aerial tracking. SiamHFFT [10] introduced a hierarchical transformer-based feature fusion module for efficient tracking on CPUs. HCAT [4] deployed a feature sparsification module and a hierarchical cross-attention transformer-based architecture to achieve real-time on edge devices. It should be noted that the transformer-based tracker [34, 34, 4, 10] employ CNN-based backbones for feature extraction and utilize the transformer layers only for relation modeling, i.e., to fuse the feature representations generated by their backbones. Fewer lightweight SN trackers use transformer modules in their backbone or head architecture. MixFormerV2 [9] presented a fully transformer-based [11] backbone for efficient tracking, based on knowledge-distillation [15] and progressive model-depth pruning. 
E.T.Track [1] proposed an efficient Exemplar Transformer-based prediction head for visual tracking. It utilized a single instance-level attention layer in the transformer block to achieve real-time speed on a CPU. Compared to the other related trackers, [9] and [1] are the closest to our work. Unlike the two-stream encoding approach of [1, 2, 34, 35, 10] (i.e., the template and search region features are extracted independently), the proposed backbone facilitates the exchange of information between template and search regions during feature extraction. Different from [2, 34, 3, 4, 9], we use a transformer-based head for target localization. In contrast to the fully transformer-based backbone of [9], we use a cascade of CNN [29] and ViT [23] blocks in our backbone. Also, the iterative nature of knowledge distillation and model pruning in [9] requires multiple rounds of model training, whereas our tracker model needs to be trained only once. Compared to the exemplar transformer-based head module of E.T.Track [1], our separable self-attention prediction head is \(3\times\) more compact in terms of parameters (5.7 Million for E.T.Track versus 1.8 Million for our _SMAT_). As a post-processing step, the related trackers [35] and [1] refine the predicted bounding boxes by penalizing large changes in target size and aspect ratio of the predicted boxes between consecutive frames. We do not employ any bounding-box refinement techniques during post-processing. To summarize our contributions, we are the first to propose:

* A separable mixed attention ViT-based backbone for joint feature extraction and information fusion between the template and search regions. Our approach combines the merits of CNNs and efficient ViTs to learn superior feature encoding for accurate tracking without bloating the backbone latency.
* A separable self-attention transformer-based prediction head to model the global dependencies within the fused feature encoding for accurate bounding-box prediction.

Compared to related work, for the first time, our method concurrently deploys an efficient transformer-based backbone and prediction head for lightweight tracking.

## 3 The Proposed Method

This section discusses the architecture and training details of the proposed _SMAT_ tracker. Section 3.1 presents our tracker backbone, and Section 3.2 describes the architecture of the proposed head module. Section 3.3 has details of the loss function used during training.

### Proposed Mixed Attention ViT-based Backbone

The backbone of the proposed tracker receives two images as its input; one is the target template \(Z_{in}\), and the other is the search region for target localization, \(X_{in}\). First, we apply the CNN-based Inverted Residual (IR) [29] blocks on \(Z_{in}\) and \(X_{in}\). These IR blocks generate spatially local feature representations of \(Z_{in}\) and \(X_{in}\) while being efficient compared to the regular convolutional blocks [29]. In addition, these IR blocks reduce the spatial dimensionality of the input images by the pooling operation to generate low-dimensional feature representations for our separable mixed attention ViT block. The architecture of the proposed mixed attention ViT block is shown in Figure 2. Let \(Z\in R^{W_{z}\times H_{z}\times C}\) and \(X\in R^{W_{x}\times H_{x}\times C}\) denote the template and search region feature representations, respectively.
Inside the proposed ViT block, we initially pass \(Z\) and \(X\) through a series of \(3\times 3\) and \(1\times 1\) CNN layers with shared weights to project the number of channels in \(Z\) and \(X\) from \(C\) to \(d\). We tokenize [22] the output of the CNN blocks and concatenate them to generate a total of \(k\) tokens to learn mixed attention [8] between the template and search regions. Inside the transformer layer, we first apply a set of three \(1\times 1\) convolutional filters (denoted as _qkv-proj_ in Figure 2) to generate the query \(\mathcal{Q}\in R^{k\times 1}\), the key \(\mathcal{K}\in R^{k\times d}\), and the value \(\mathcal{V}\in R^{k\times d}\). Then, we apply the softmax operation on the query vector \(\mathcal{Q}\) and broadcast along its column (i.e., the element in the \(i^{th}\) row is repeated \(d\) times along the column dimension) to generate \(\tilde{\mathcal{Q}}\in R^{k\times d}\). Using \(\tilde{\mathcal{Q}}\) and \(\mathcal{K}\), the context vector \(\mathcal{A}\in R^{1\times d}\) is computed as
\[\mathcal{A}=\sum_{k}\tilde{\mathcal{Q}}\odot\mathcal{K}, \tag{1}\]
where \(\odot\) denotes the element-wise multiplication and \(\sum_{k}\) indicates summation across the rows. The context vector \(\mathcal{A}\) is broadcasted along its rows to create \(\tilde{\mathcal{A}}\in R^{k\times d}\), which is used to compute the mixed attention \(\mathcal{M}\in R^{k\times d}\) as
\[\mathcal{M}=\tilde{\mathcal{A}}\odot\text{ReLU}(\mathcal{V}). \tag{2}\]
The elementwise multiplication operations in Eq. 1 and Eq. 2 reduce the latency of the separable transformer layer, shown in Figure 2, when compared to the dense matrix-to-matrix multiplication-based attention computation in standard transformers [30]. Also, computing the mixed attention on the concatenated features concurrently models the global interactions _within_ (i.e., self) and _between_ (i.e., cross) the target template and the search area. Therefore, mixed attention requires fewer transformer block evaluations than separately computing the self and cross-attention. Similar to [30], we employ a residual connection [14] around the attention computation block. We pass the output of the residual connection through a \(1\times 1\) convolutional feedforward network (denoted as _ffn-out_ in Figure 2) to generate the output of the separable transformer layer. We split the resulting feature map to separate the template and search region features with \(d\) channels. Finally, we re-project the number of channels from \(d\) to \(C\) by applying a shared \(1\times 1\) convolutional filter on the separated feature maps to generate \(\tilde{X}\) and \(\tilde{Z}\) as the output. The computation of mixed attention in the ViT block facilitates implicit relation modeling during feature extraction, thereby generating superior features compared to the two-stream encoding approach. Such feature fusion also avoids needing a parameter-heavy module for the subsequent relation modeling between the template and search region features. For a parameter-efficient relation modeling (or feature fusion) between the features \(\tilde{X}\) and \(\tilde{Z}\) generated by the proposed backbone, we use the _parameter-free_ pixel-wise cross-correlation [36] operation (_cf._ Figure 1).

Figure 2: Proposed Separable Mixed Attention ViT block. The _qkv-proj_ denotes the set of three \(1\times 1\) convolutional filters to generate the _Query_, _Key_, and _Value_ for attention computation. The mixed attention output is passed through a \(1\times 1\) convolutional _ffn-out_ block to generate the output of the transformer layer.
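For concreteness, a minimal PyTorch sketch of the separable attention in Eqs. (1)-(2) is given below (our reconstruction; the actual SMAT block uses \(1\times 1\) convolutions for the projections and adds the residual connection and _ffn-out_ stage shown in Figure 2).

```python
import torch
import torch.nn as nn

class SeparableMixedAttention(nn.Module):
    """Sketch of Eqs. (1)-(2): separable attention over the k concatenated
    template + search tokens. Linear projections stand in for the 1x1 convs."""

    def __init__(self, d):
        super().__init__()
        self.to_q = nn.Linear(d, 1)    # query scores, Q in R^{k x 1}
        self.to_k = nn.Linear(d, d)    # key,   K in R^{k x d}
        self.to_v = nn.Linear(d, d)    # value, V in R^{k x d}

    def forward(self, tokens):                       # tokens: (B, k, d)
        q = torch.softmax(self.to_q(tokens), dim=1)  # softmax over the k tokens
        k = self.to_k(tokens)
        v = self.to_v(tokens)
        ctx = (q * k).sum(dim=1, keepdim=True)       # Eq. (1): A, shape (B, 1, d)
        return ctx * torch.relu(v)                   # Eq. (2): M, shape (B, k, d)

# 8x8 = 64 template tokens and 16x16 = 256 search tokens, as in the paper
z_tokens, x_tokens = torch.randn(2, 64, 96), torch.randn(2, 256, 96)
attn = SeparableMixedAttention(96)
mixed = attn(torch.cat([z_tokens, x_tokens], dim=1))  # mixed attention over both
z_out, x_out = mixed.split([64, 256], dim=1)          # split back, as in the ViT block
print(z_out.shape, x_out.shape)
```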
The mixed attention output is passed through a \(1\times 1\) convolutional _ffn-out_ block to generate the output of the transformer layer. The fused feature encoding \(F\) is computed as \[F=PWCorr(\tilde{X},\tilde{Z}), \tag{3}\] where \(PWCorr\) denotes the pixel-wise cross-correlation operation. We apply a \(1\times 1\) convolution filter on the fused encoding to transform the number of channels to \(C_{h}\). To perform target localization, we pass the resulting encoding \(F\) to the proposed prediction head described in Section 3.2. ### Proposed Self-Attention Prediction Head The pipeline of the proposed predictor head is shown in Figure 3. Our prediction head has two branches: target classification and bounding-box regression. Unlike a fully CNN-based approach, we implement these branches via a cascade of convolutional and transformer layers. The CNN layers are suitable for modeling the local relationships within the fused feature representation but are limited in effectively capturing the non-local associations. On the other hand, the transformer layers explicitly model the long-range global interactions within the features by processing the tokenized feature encoding. Such a global modeling scheme is beneficial for localization under drastic target shape variations and heavy occlusion [34]. The cascade of convolutional and transformer-based layers combines the best of their modeling strengths to produce high-quality tracking results on challenging videos. First, in the proposed predictor head, we apply a \(3\times 3\) convolutional filter on the fused encoding \(F\) from Eq. 3 to extract spatially local feature representations. We then tokenize the filter output and pass it through a stack of \(L_{cls}\) and \(L_{reg}\) separable self-attention transformer layers in the classification and regression branches, respectively. For these transformer blocks, the pipeline for computing the attention is similar to that of the transformer layer in the ViT block from Section 3.1, except that here we calculate self-attention to model long-range dependencies _within_ the fused feature encoding \(F\) for robust target state estimation. The output of the self-attention transformer layer is passed through a \(3\times 3\) CNN layer to generate a score map \(\mathcal{R}\) for the classification branch, and the local offset and the normalized bounding-box size for the regression branch, as in [37]. The combination of local and long-range contextual modeling by our predictor head improves tracker performance without significantly increasing the overall model latency. ### Loss function for Training While training our model, we apply loss functions to the classification and regression outputs generated by the proposed head module. We use the weighted focal loss for the output of the classification branch; for the output of the regression branch, we use the \(\ell_{1}\) loss and the generalized Intersection-over-Union (\(IoU\)) loss, as in [37]. The overall loss function \(\mathcal{L}_{total}\) is defined as \[\mathcal{L}_{total}=\mathcal{L}_{focal}+\lambda_{\ell_{1}}\cdot\mathcal{L}_{\ell_{1}}+\lambda_{IoU}\cdot\mathcal{L}_{IoU}, \tag{4}\] where \(\mathcal{L}_{focal}\), \(\mathcal{L}_{\ell_{1}}\), and \(\mathcal{L}_{IoU}\) represent the focal loss, the \(\ell_{1}\) loss and the generalized \(IoU\) loss functions, respectively. \(\lambda_{\ell_{1}}\) and \(\lambda_{IoU}\) are hyperparameters determining the relative impact of the respective loss functions. 
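To make the computation in Eqs. 1 and 2 concrete, the following is a minimal PyTorch-style sketch of the separable mixed attention over the concatenated template and search tokens. This is an illustrative re-implementation rather than the authors' code: the class and argument names are ours, and the \(1\times 1\) convolutional projections are written as equivalent `nn.Linear` layers acting on tokens.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableMixedAttention(nn.Module):
    """Sketch of the separable attention in Eqs. 1-2, applied to the
    concatenated template + search tokens (mixed attention)."""
    def __init__(self, d):
        super().__init__()
        # qkv-proj: the paper uses 1x1 convolutions; on tokens these are
        # equivalent to per-token linear maps.
        self.q_proj = nn.Linear(d, 1)   # query Q in R^{k x 1}
        self.k_proj = nn.Linear(d, d)   # key   K in R^{k x d}
        self.v_proj = nn.Linear(d, d)   # value V in R^{k x d}
        self.ffn_out = nn.Linear(d, d)  # ffn-out block

    def forward(self, tokens):
        # tokens: (B, k, d), template and search tokens concatenated along k
        q = torch.softmax(self.q_proj(tokens), dim=1)   # softmax over tokens
        k = self.k_proj(tokens)
        v = self.v_proj(tokens)
        # Eq. 1: context vector A = sum_k (Q * K); broadcasting q over the
        # channel dimension replaces the explicit column-wise repetition.
        a = (q * k).sum(dim=1, keepdim=True)            # (B, 1, d)
        # Eq. 2: mixed attention M = A * ReLU(V), broadcast over tokens
        m = a * F.relu(v)                               # (B, k, d)
        # residual connection around the attention, then ffn-out
        return self.ffn_out(tokens + m)
```

Because Eq. 1 collapses the query-key interaction into a single \(1\times d\) context vector, the computation scales linearly in the number of tokens \(k\), instead of the quadratic cost of standard softmax attention; this is the source of the latency reduction discussed above.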
## 4 Experimental and Ablation Results ### Implementation Details We set the size of the template and search region images, i.e., \(Z_{in}\) and \(X_{in}\) from Section 3, to \(128\times 128\) and \(256\times 256\), respectively, during training and inference. We deploy two CNN-based IR [29] blocks and two ViT blocks in our backbone. The sequential ordering of the IR and ViT blocks is the same as in the MobileViTv2 [23] pipeline. During feature extraction, our backbone performs four downsampling operations, each by a factor of 2; therefore, the spatial dimension of the features generated by our backbone is \(8\times 8\) and \(16\times 16\) for the template and search regions, respectively. For the proposed head module, we set the number of channels in the fused encoding, i.e., \(C_{h}\) in Figure 3, to 128. We set the number of transformer layers \(L_{cls}\) and \(L_{reg}\) in the classification and regression branches to 2 and 4, respectively. We set \(L_{reg}\) to twice the value of \(L_{cls}\) because the regression branch predicts two quantities, i.e., the local offset and the bounding-box size, whereas the classification branch predicts only the target center. ### Training and Inference Details We use the GOT10k [16], LaSOT [12], TrackingNet [25], and COCO [20] datasets to train our tracker. GOT10k, LaSOT, and TrackingNet have non-overlapping train-test splits of 9335-to-180, 1120-to-280, and 30132-to-511 videos, respectively. Also, GOT10k provides 180 additional videos as the validation split. We apply data augmentation (horizontal flip and scale jittering) to generate training image pairs for the still images in the COCO train dataset. Figure 3: Proposed Separable Self-Attention Transformer-based Predictor Head. It utilizes two branches for target classification and bounding-box regression. We use the combined training splits of these four datasets to train our model. We train our model for 300 epochs, and each epoch uses \(6\times 10^{4}\) image pairs uniformly sampled from the training dataset. The initial learning rate (_lr_) is set to 0.0004 and is reduced by a factor of 10 after 240 epochs. The _lr_ for the backbone parameters is set to 0.1 times the _lr_ for the remaining trainable parameters of our model. We use AdamW [21] as the network optimizer and set the weight decay to \(10^{-4}\). The values of the hyperparameters \(\lambda_{\ell_{1}}\) and \(\lambda_{IoU}\) from Eq. 4 are set to 5 and 2, respectively. These hyperparameter values are derived from [37] with no additional finetuning. We initialize our backbone weights using a pretrained MobileViTv2 model provided by the authors [23]. We use PyTorch [27] for developing the tracker code. Our model is trained on a single NVIDIA Tesla V100 GPU (32GB memory) with a batch size of 128. We monitor the possibility of overfitting by periodically evaluating the values of the loss functions \(\mathcal{L}_{focal}\), \(\mathcal{L}_{\ell_{1}}\), and \(\mathcal{L}_{IoU}\) from Eq. 4 on the GOT10k validation videos. During inference, we use the annotation from the first frame in the video as the target template and do not perform model updates. To define the search region at frame \(t\), we crop a region, four times the target size, around the tracker output at frame \(t-1\). This image is resized to \(256\times 256\) and utilized as the search image at frame \(t\). As a post-processing step, we apply a Hanning window on the classification score map \(\mathcal{R}\) to penalize large target displacement predictions. 
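As an illustration of the windowing step just described, the following NumPy sketch penalizes large displacements by blending the classification score map with a centered Hanning window; the blending weight and the additive combination are our assumptions, as the paper does not specify them.

```
import numpy as np

def hanning_penalty(score_map, weight=0.49):
    """Blend the score map R with a centered 2-D Hanning window so that
    predictions far from the previous target location are penalized.
    `weight` is an assumed hyperparameter, not taken from the paper."""
    h, w = score_map.shape
    window = np.outer(np.hanning(h), np.hanning(w))
    return (1.0 - weight) * score_map + weight * window
```

The target center at frame \(t\) can then be read off as the argmax of the penalized map.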
### Comparison to the Related Work To assess the performance of the proposed _SMAT_, we evaluate our tracker on the test splits of the GOT10k [16], LaSOT [12], TrackingNet [25], NfS30 [18], UAV123 [24], and AVisT [26] datasets. GOT10k-test has 180 test videos with object classes that do not overlap with those of the training videos, mainly to promote the generalization of tracking algorithms towards unseen object categories. LaSOT-test has 280 videos with 14 different attributes and balanced class categories. With an average length of 2500 frames per video, LaSOT-test is effective in assessing long-term tracking capabilities. TrackingNet-test contains 511 challenging videos curated from the large-scale YouTube-BB [28] dataset with 15 attribute annotations. NfS30 has 100 test videos, predominantly containing fast-moving objects with significant motion blur. UAV123 is a low-altitude UAV tracking benchmark and has 123 videos with 12 attribute annotations. The AVisT dataset has 120 challenging videos with a wide range of adverse atmospheric scenarios, such as rain, fog, fire, low light, snow, tornado, and smoke, impacting the target appearance in the test videos. The datasets NfS30, UAV123, and AVisT have no training split videos. We use the metrics recommended by the corresponding dataset authors to quantify the tracker performance during our evaluation. GOT10k uses the Average Overlap (\(AO\)), based on the Intersection-over-Union (_IoU_) value between the ground-truth and predicted bounding boxes, averaged over all the test videos. It also uses the Success Rate (\(SR\)), computing the fraction of frames having an \(IoU\) value greater than a threshold \(\tau\), with values of \(\tau\) of 0.5 and 0.75. TrackingNet uses Area-Under-the-Curve (\(AUC\)), Precision (\(P\)), and Normalized Precision (\(P_{norm}\)) as the tracker evaluation metrics. \(AUC\) is equivalent to \(AO\) [40], and \(P\) is computed based on the distance between the ground-truth and predicted bounding-box centers, measured in pixels. The metric \(P_{norm}\) is similar to \(P\); however, \(P_{norm}\) uses normalized bounding boxes while measuring the distance between their centers. LaSOT uses the same evaluation metrics as TrackingNet, whereas NfS30 and UAV123 use \(AUC\) and \(P\) for tracker evaluation. Along with the \(AUC\), AVisT uses OP50 and OP75 as its evaluation metrics, which are equivalent to \(SR\) at thresholds 0.5 and 0.75, respectively, from GOT10k. To ensure fair tracker evaluation and avoid finetuning of parameters on the test data, GOT10k and TrackingNet sequester the ground-truth annotations for their test videos. Therefore, we generate the metrics for these datasets by submitting the raw tracker results to the remote evaluation server. The ground-truth annotations are available for the LaSOT-test, NfS30, UAV123, and AVisT datasets. We compare the results of the proposed _SMAT_ against the related lightweight trackers LightTrack [35], Stark-Lightning [34], FEAR-XS [2], HCAT [4], E.T.Track [1], and MixFormerV2-S [9], evaluated using the pretrained models provided by their authors. From Table 1, we can see that the proposed _SMAT_ outperforms the related trackers on all six test datasets: GOT10k-test, TrackingNet-test, LaSOT-test, NfS30, UAV123, and AVisT. No related tracker performs consistently second best across the six datasets. HCAT exhibits the second-best results in 10 out of 16 metrics across all datasets, while MixFormerV2-S scores second-best in 6 out of 16 cases. 
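For reference, the \(IoU\)-based \(AO\) and \(SR\) metrics described above reduce to a few lines of NumPy; this sketch assumes boxes in `[x, y, w, h]` format and is only meant to illustrate the definitions.

```
import numpy as np

def iou(a, b):
    """Intersection-over-Union of two boxes given as [x, y, w, h]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def ao_and_sr(pred_boxes, gt_boxes, tau=0.5):
    """Average Overlap and Success Rate at threshold tau for one video."""
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return ious.mean(), (ious > tau).mean()
```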
Regarding _fps_ on a CPU, our tracker is faster than MixFormerV2-S by 19% and slower than HCAT by 17%. Considering the GOT10k-test dataset (server-based evaluation), our tracker is better than MixFormerV2-S and HCAT on average by 3.5% in \(AO\), 4% in \(SR_{0.50}\), and 5.8% in \(SR_{0.75}\). Since the **GOT10k-test** dataset has videos with target object categories unseen during training, it is well-suited for evaluating tracker generalization. Our _SMAT_ results on the GOT10k-test demonstrate its superior generalization capability compared to the related lightweight trackers. Since E.T.Track has a transformer-based predictor head and MixFormerV2-S has a fully transformer-based backbone, these trackers are closely related to our work. By concurrently deploying a transformer-based backbone and head module, our _SMAT_ outperforms both E.T.Track and MixFormerV2 on average by a significant margin of 6.8% in \(AO\), 8.8% in \(SR_{0.50}\), and 12.4% in \(SR_{0.75}\). On the **TrackingNet-test** benchmark, the proposed _SMAT_ performs better than all the related trackers by at least 1.9% in \(AUC\), 1.8% in \(P_{norm}\), and 3% in \(P\). No single tracker consistently exhibits the second-best performance on the TrackingNet dataset. The results of our _SMAT_ on the **LaSOT-test** videos show that our tracker has better long-term tracking performance than the other lightweight trackers. The related E.T.Track employs a post-processing step to refine the predicted bounding boxes, which reduces the chances of target loss or drift during long-term tracking. The other related tracker, MixFormerV2, utilizes an online template update scheme to improve long-term tracking performance. Despite the absence of such a bounding-box refinement step and template update scheme, our _SMAT_ performs better than E.T.Track by 2.8% and MixFormerV2 by 0.7% in \(AUC\). The results on the **NfS30** dataset show that our _SMAT_ is resilient to motion blur and better suited for tracking fast-moving objects than the related trackers. Similarly, the proposed _SMAT_ has a better \(AUC\) by at least 2.8% and 0.9% on the **AVisT** and **UAV123** datasets, respectively, compared to the other lightweight trackers. This shows that our tracker performs better than the related trackers on videos affected by adverse visibility conditions and is robust to the challenges of airborne scenarios. The last column of Table 1 lists the _fps_ of the proposed _SMAT_ and the related trackers, evaluated on a 12th Gen Intel(R) Core-i9 CPU. The proposed _SMAT_ tracker achieves a real-time speed of 37 _fps_ by leveraging the computational efficiency of the separable self and mixed attention blocks used in its model architecture. Compared to the standard transformer-based MixFormerV2-S, our tracker is faster by 19% since the proposed _SMAT_ uses separable attention-based transformers in its backbone for efficient computation of attention. However, in comparison to the related lightweight trackers with a two-stream pipeline [1, 2, 34, 35, 4], our _SMAT_ has a 17% lower _fps_ on average. This is mainly due to the coupling of features in our tracker backbone, which requires evaluating the template and search region features at every frame during inference. On the other hand, trackers with a two-stream pipeline compute the template features only once since they do not perform feature fusion in their backbone. Our tracker achieves a speed of 158 _fps_ upon evaluation on an NVIDIA RTX 3090 GPU. With nearly 3.8 Million parameters, our model has a size of 15.3MB on disk. 
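The parameter and model-size figures quoted above can be reproduced for any PyTorch model with a small utility such as the following sketch; the float32 (4 bytes per weight) assumption is ours.

```
import torch

def model_footprint(model):
    """Number of trainable parameters and approximate size in MB,
    assuming 4-byte (float32) weights."""
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return n_params, n_params * 4 / 1e6
```

For instance, \(3.8\times 10^{6}\) float32 weights correspond to roughly 15 MB, consistent with the reported model size.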
### Ablation Study In this section, we present ablation study results quantifying the role of different components of our tracker. For this study, we use GOT10k-test [16] and LaSOT-test [12] due to their suitability for gauging tracker generalization and long-term tracking performance, respectively. **Comparing feature fusion techniques:** In Section 3.1, we mentioned that the proposed mixed attention efficiently approximates the explicit modeling of the interactions _within_ and _between_ the target template and search regions. In this section, we experimentally verify the efficacy and efficiency of the proposed approach by comparing its results with other feature fusion methods that deploy explicit computation of self or cross-attention (or both). For variant-_A_, shown in Figure 4, we use a shared separable self-attention transformer block for the template and search regions. This approach facilitates independent computation of the template and search features for high tracking speed; however, it restricts the information exchange between the two regions (i.e., no feature fusion). For variant-_B_, we deploy shared separable cross-attention transformers for inter-region feature fusion but no intra-region feature modeling (i.e., no self-attention). A similar cross-feature blending approach for relation modeling has shown excellent tracking results [6]. \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c|c c|c c c||c} \hline \multirow{2}{*}{Tracker} & \multicolumn{3}{c|}{GOT10k-test [16]} & \multicolumn{3}{c|}{TrackingNet-test [25]} & \multicolumn{3}{c|}{LaSOT-test [12]} & \multicolumn{2}{c|}{NfS30 [18]} & \multicolumn{2}{c|}{UAV123 [24]} & \multicolumn{3}{c||}{AVisT [26]} & \multicolumn{1}{c}{_fps_} \\ & \(AO\) & \(SR_{0.50}\) & \(SR_{0.75}\) & \(AUC\) & \(P_{norm}\) & \(P\) & \(AUC\) & \(P_{norm}\) & \(P\) & \(AUC\) & \(P\) & \(AUC\) & \(P\) & \(AUC\) & OP50 & OP75 & (CPU) \\ \hline LightTrack [35] (CVPR’21) & 0.582 & 0.680 & 0.442 & 0.729 & 0.793 & 0.699 & 0.522 & 0.583 & 0.517 & 0.565 & 0.692 & 0.617 & 0.799 & 0.404 & 0.437 & 0.242 & 42 \\ Stark-Lightning [34] (ICCV’21) & 0.596 & 0.696 & 0.479 & 0.727 & 0.779 & 0.641 & 0.578 & 0.660 & 0.574 & 0.596 & 0.710 & 0.620 & 0.820 & 0.394 & 0.431 & 0.223 & 50 \\ FEAR-XS [2] (ECCV’22) & 0.573 & 0.681 & 0.455 & 0.715 & 0.805 & 0.699 & 0.501 & 0.594 & 0.523 & 0.486 & 0.563 & 0.610 & 0.816 & 0.370 & 0.421 & 0.220 & 42 \\ HCAT [4] (ECCV’22) & 0.634 & 0.734 & 0.558 & 0.763 & 0.824 & 0.726 & 0.590 & 0.683 & 0.605 & 0.619 & 0.741 & 0.620 & 0.805 & 0.418 & 0.263 & 45 \\ E.T.Track [1] (WACV’23) & 0.566 & 0.646 & 0.425 & 0.740 & 0.798 & 0.698 & 0.589 & 0.670 & 0.603 & 0.570 & 0.694 & 0.626 & 0.808 & 0.390 & 0.412 & 0.227 & 44 \\ MixFormerV2-S [9] (arXiv’23) & 0.587 & 0.672 & 0.482 & 0.767 & 0.812 & 0.714 & 0.610 & 0.694 & 0.614 & 0.610 & 0.722 & 0.634 & 0.837 & 0.396 & 0.425 & 0.227 & 30 \\ SMAT (ours) & **0.645** & **0.747** & **0.578** & **0.786** & **0.842** & **0.756** & **0.617** & **0.711** & **0.646** & **0.620** & **0.746** & **0.643** & **0.899** & **0.447** & **0.597** & **0.313** & 37 \\ \hline \end{tabular} \end{table} Table 1: Comparison of the proposed _SMAT_ with the related lightweight SN trackers on the GOT10k-test (server), TrackingNet-test (server), LaSOT-test, NfS30, UAV123, and AVisT datasets. The best and second-best results are highlighted in red and blue, respectively. Figure 4: Comparing different feature fusion techniques (_A_, _B_, and _C_) with the proposed mixed attention-based method shown in _D_. 
Lastly, we implement cascaded self and cross-attention transformers for the third variant-_C_ to explicitly model inter and intra-region feature fusion. The proposed mixed attention block, shown as variant-_D_ in Figure 4, approximates explicit self and cross-attention computation by applying a separable transformer block on the concatenated features from the template and search regions. Table 2 summarizes the performance of these variants on GOT10k and LaSOT test datasets. From Table 2, we can see that variant-_A_ has a \(1.05\times\) higher _fps_ than the proposed method; however, the lack of information flow between the template and search region impacts its performance. Hence, compared to the proposed tracker, it has a lower \(AUC\) score by 1.4% and 1.3% on GOT10k and LaSOT, respectively. Performance of the pure cross-attention-based variant-_B_ is lower than our _SMAT_ by 0.8% in \(AUC\) on the LaSOT dataset, and both approaches have comparable performance on the GOT10k dataset. However, variant-_B_ has a relatively lower _fps_ by 18.9% than our _SMAT_, which indicates that separately computing cross-attention for the template and search regions is slower than our mixed-attention computation on concatenated features. Variant-_C_ achieves the best results on both datasets; however, explicit computation of self and cross-attention significantly impacts its tracking speed. Therefore, it has the lowest _fps_ value compared to the other variants and is relatively slower than our approach by 35% on a CPU. Compared to the variant-_C_, our method approximates the inter and intra-region feature fusion by a single attention operation. Therefore, as seen from Table 2, our mixed attention-based approach provides the best trade-off between performance and speed compared to other feature fusion variants shown in Figure 4. **Convolutional vs Transformer-based head:** To quantify the significance of our separable self-attention transformer-based predictor head from Section 3.2, we replace the proposed module with a fully-convolutional predictor head for classification and bounding-box regression. As seen from Table 3, this replacement decreases the performance of the proposed _SMAT_ tracker by 3.5% in \(AO\) for the GOT10k-test dataset and 2.3% in \(AUC\) for the LaSOT-test dataset, highlighting the impact of global contextual modeling of encoded features by the proposed predictor head. **Standard vs Separable Attention mechanism:** To evaluate the efficiency of the separable attention mechanism deployed in the proposed _SMAT_, we retrain our model by replacing the separable transformer blocks in the tracker backbone with the standard transformer-based MobileViT block [22]. From Table 4, we observe that the proposed method has a higher \(AUC\) of 1.7% on the LaSOT dataset compared to the standard attention-based tracker. On the other hand, our approach has a lower \(AUC\) of 1.4% on the GOT10k dataset. However, the efficient attention evaluation enhances the speed of the proposed tracking approach by 15.6% compared to the standard transformer-based tracking, as seen from Table 4. ### Attribute-based analysis In this section, we compare the per-attribute performance of the proposed _SMAT_ tracker with related lightweight trackers on the LaSOT dataset. 
Table 5 summarizes the tracker evaluation results on various challenging factors (or attributes) of the LaSOT dataset, namely Aspect Ratio Change (_ARC_), Background Clutter (_BC_), Camera Motion (_CM_), Deformation (_DEF_), Fast Motion (_FM_), Full Occlusion (_FOC_), Illumination Variation (_IV_), Low Resolution (_LR_), Motion Blur (_MB_), Out-of-View (_OV_), Partial Occlusion (_POC_), Rotation (_ROT_), Scale Variation (_SV_), and Viewpoint Change (_VC_). From Table 5, we can see that our tracker has the best performance on 8 out of 14 attributes, and it has the second-best performance in 5 cases. For the attributes _IV_ and _DEF_, the proposed _SMAT_ performs significantly better than the second-best tracker MixFormerV2-S [9], with a higher \(AUC\) of 4.3% and 3.7%, respectively. In comparison to the related trackers that do not perform feature fusion in their backbone, i.e., [1, 2, 34, 35, 4], our tracker has a higher \(AUC\) of 4.4% for _DEF_ and 2.3% for _ARC_ than the second-best trackers, [1, 4] and [4], respectively. It indicates the effectiveness of transformer-based feature fusion in our tracker backbone, producing accurate bounding boxes under drastic target appearance variations. \begin{table} \begin{tabular}{c|c c c|c c c||c} \hline \multirow{2}{*}{\begin{tabular}{c} Mixed Attention \\ Mechanism \\ \end{tabular}} & \multicolumn{3}{c|}{GOT10k-test [16]} & \multicolumn{3}{c||}{LaSOT-test [12]} & \multicolumn{1}{c}{_fps_} \\ & \(AO\) & \(SR_{0.50}\) & \(SR_{0.75}\) & \(AUC\) & \(P_{norm}\) & \(P\) & (CPU) \\ \hline Standard & **0.659** & **0.760** & **0.605** & 0.600 & 0.687 & 0.630 & 32 \\ Separable (ours) & 0.645 & 0.747 & 0.578 & **0.617** & **0.711** & **0.646** & **37** \\ \hline \end{tabular} \end{table} Table 4: Comparison of tracker performance using the standard _vs_ separable mixed attention mechanism in the backbone. The best results are highlighted in red. \begin{table} \begin{tabular}{c|c c c|c c c|c} \hline \multirow{2}{*}{\begin{tabular}{c} Attention \\ Mechanism \\ \end{tabular}} & \multicolumn{3}{c|}{GOT10k-test [16]} & \multicolumn{3}{c|}{LaSOT-test [12]} & \multicolumn{1}{c}{_fps_} \\ & \(AO\) & \(SR_{0.50}\) & \(SR_{0.75}\) & \(AUC\) & \(P_{norm}\) & \(P\) & (CPU) \\ \hline \(A\) & 0.631 & 0.726 & 0.578 & 0.604 & 0.689 & 0.629 & **39** \\ \(B\) & 0.645 & 0.743 & 0.590 & 0.609 & 0.696 & 0.631 & 30 \\ \(C\) & **0.654** & **0.761** & **0.598** & **0.621** & **0.717** & **0.656** & 24 \\ \(D\) (ours) & 0.645 & 0.747 & 0.578 & 0.617 & 0.711 & 0.646 & 37 \\ \hline \end{tabular} \end{table} Table 2: Summary of the feature fusion-based ablation study results for the proposed _SMAT_ tracker. The best and second-best results are highlighted in red and blue, respectively. \begin{table} \begin{tabular}{c|c c c|c c c} \hline \multirow{2}{*}{Prediction Head} & \multicolumn{3}{c|}{GOT10k-test [16]} & \multicolumn{3}{c}{LaSOT-test [12]} \\ & \(AO\) & \(SR_{0.50}\) & \(SR_{0.75}\) & \(AUC\) & \(P_{norm}\) & \(P\) \\ \hline Fully-Convolutional & 0.610 & 0.709 & 0.540 & 0.594 & 0.082 & 0.612 \\ Transformer-based (ours) & **0.645** & **0.747** & **0.578** & **0.617** & **0.711** & **0.646** \\ \hline \end{tabular} \end{table} Table 3: Ablation study results for the proposed transformer-based prediction head. The best results are highlighted in red. 
Also, in comparison to these trackers, our _SMAT_ is resilient to tracking failures under _POC_ and _BC_, with a higher \(AUC\) of 3.2% and 2.7%, respectively, than the second-best results. On the other hand, our _SMAT_ has an inferior performance compared to the related trackers [9] and [34] for the attribute _FM_; we are working to improve our _SMAT_ performance under _FM_. ### Visualizing the attention maps To showcase the interpretability of the proposed _SMAT_ tracker, we visualize the tracker output and the corresponding attention maps in Figure 5 for four sequences chosen from the LaSOT test dataset. The images on the left contain the target template (top-left corner), the search region at frame \(\#t\), and the tracker output. The images in the center and on the right indicate the attention maps corresponding to the transformer blocks of our tracker backbone at the spatial resolutions of \(32\times 32\) and \(16\times 16\), respectively. For the examples shown in Figure 5, the target object is impacted by a challenging factor described in Section 4.5, i.e., _ARC_ for _bicycle-18_, _IV_ for _drone-2_, _DEF_ for _drone-2_, and _POC_ for _microphone-16_. Despite the influence of these attributes, our _SMAT_ successfully locates the target object. For the example shown in the last row of Figure 5, the target is partially occluded by an external object. In this case, the tracker focuses on the visual cues around the target object, as seen from the attention maps in the center. This information is processed by the subsequent transformer blocks of our tracker backbone to produce stronger attention values at the target center and generate an accurate bounding box. ## 5 Conclusion This paper proposed a separable self and mixed attention transformer-based architecture for lightweight tracking. The proposed backbone utilized the separable mixed attention transformer layer to facilitate the exchange of information between the target template and the search region and to generate improved encodings compared to the two-stream tracking pipeline. The proposed separable self-attention transformer-based predictor head efficiently modeled long-range dependencies within the fused encoding to generate superior target classification and bounding-box prediction results. Our ablation study analyzed the accuracy-speed tradeoffs of different feature fusion methods, showcased the effectiveness of the proposed head module for accurate tracking, and demonstrated the efficiency of the separable mixed attention compared to standard attention-based tracking. Our _SMAT_ performed better than related lightweight trackers on six challenging benchmarks. The computational efficiency of the proposed architecture enabled our tracker, with 3.8M parameters, to exceed real-time speed on a CPU, while running at 158 _fps_ on a GPU. Figure 5: Visualization of the bounding-box output (left) and the corresponding attention maps (center and right) for the proposed _SMAT_ tracker. Larger values in the attention map are denoted in red, while smaller values are shown in blue. 
\begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c||c} \hline \hline Tracker & _ARC_ & _BC_ & _CM_ & _DEF_ & _FM_ & _FOC_ & _IV_ & _LR_ & _MB_ & _OV_ & _POC_ & _ROT_ & _SV_ & _VC_ & Overall \\ \hline LightTrack [35] & 0.503 & 0.434 & 0.539 & 0.577 & 0.334 & 0.386 & 0.550 & 0.407 & 0.457 & 0.441 & 0.497 & 0.519 & 0.523 & 0.502 & 0.522 \\ Stark-Lightning [34] & 0.572 & 0.491 & 0.613 & 0.594 & 0.471 & 0.505 & 0.610 & 0.516 & 0.568 & 0.557 & 0.554 & 0.577 & 0.582 & 0.581 & 0.578 \\ FEAR-XS [2] & 0.488 & 0.437 & 0.528 & 0.505 & 0.389 & 0.403 & 0.506 & 0.421 & 0.473 & 0.425 & 0.478 & 0.489 & 0.506 & 0.487 & 0.501 \\ HCAT [4] & 0.587 & 0.524 & 0.639 & 0.619 & 0.460 & 0.507 & 0.606 & 0.520 & 0.579 & 0.538 & 0.568 & 0.592 & 0.600 & 0.567 & 0.590 \\ E.T.Track [1] & 0.573 & 0.526 & 0.590 & 0.619 & 0.404 & 0.480 & 0.612 & 0.484 & 0.545 & 0.519 & 0.562 & 0.588 & 0.594 & 0.576 & 0.589 \\ MixFormerV2-S [9] & 0.603 & 0.519 & 0.642 & 0.626 & 0.507 & 0.539 & 0.619 & 0.556 & 0.604 & 0.574 & 0.586 & 0.603 & 0.617 & 0.630 & 0.610 \\ SMAT (ours) & 0.610 & 0.553 & 0.656 & 0.663 & 0.665 & 0.517 & 0.662 & 0.523 & 0.592 & 0.570 & 0.600 & 0.621 & 0.624 & 0.597 & 0.617 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparing the \(AUC\) values of the proposed _SMAT_ with the related lightweight trackers for 14 attributes of the LaSOT dataset. The best and second-best results are highlighted in red and blue, respectively. The last column indicates the mean \(AUC\) across all videos.
2303.17957
On a Probabilistic Approach for Inverse Data-Driven Optimal Control
We consider the problem of estimating the possibly non-convex cost of an agent by observing its interactions with a nonlinear, non-stationary and stochastic environment. For this inverse problem, we give a result that allows us to estimate the cost by solving a convex optimization problem. To obtain this result we also tackle a forward problem. This leads to the formulation of a finite-horizon optimal control problem for which we show convexity and find the optimal solution. Our approach leverages certain probabilistic descriptions that can be obtained both from data and/or from first-principles. The effectiveness of our results, which are turned into an algorithm, is illustrated via simulations on the problem of estimating the cost of an agent that is stabilizing the unstable equilibrium of a pendulum.
Émiland Garrabé, Hozefa Jesawada, Carmen Del Vecchio, Giovanni Russo
2023-03-31T10:37:45Z
http://arxiv.org/abs/2303.17957v3
# Inverse Data-Driven Optimal Control for Nonlinear Stochastic Systems ###### Abstract We consider the problem of estimating the possibly non-convex cost of an agent by observing its interactions with a nonlinear, non-stationary and stochastic environment. For this inverse problem, we give a result that allows us to estimate the cost by solving a convex optimization problem. To obtain this result we also tackle a forward problem. This leads to the formulation of a finite-horizon optimal control problem for which we show convexity and find the optimal solution. Our approach leverages certain probabilistic descriptions that can be obtained both from data and/or from first-principles. The effectiveness of our results, which are turned into an algorithm, is illustrated via simulations on the problem of estimating the cost of an agent that is stabilizing the unstable equilibrium of a pendulum. ## I Introduction Inferring the intents of an agent by observing its interactions with the environment is crucial to many scientific domains, with applications spanning across e.g., engineering, psychology, economics, management and computer science. Inverse optimal control/reinforcement learning (IOC/IRL) refers to both the problem and the class of methods to infer the cost/reward driving the actions of an agent by observing its inputs/outputs [1]. Tackling this problem is relevant to sequential decision-making [2] and can be useful to design data-driven control systems with humans-in-the-loop as well as incentive schemes in sharing economy settings [3]. In this context, a key challenge in IOC/IRL lies in the fact that the underlying optimization can become ill-posed even when the environment dynamics is linear and deterministic and the cost is convex. Motivated by this, we propose an approach to estimate possibly non-convex costs when the underlying dynamics is nonlinear, non-stationary and stochastic. The approach leverages probabilistic descriptions that can be obtained directly from data and/or from first-principles. Also, the results allow cost estimates to be obtained by solving an optimization problem that we show to be convex. _Related works:_ we briefly survey a number of works related to the results and methodological framework of this paper and we refer to [1] for a detailed review of inverse problems across learning and control. As remarked in [4], IRL has its roots in IOC and these methods were originally developed to find control _histories_ that produce observed output _histories_. It was however quickly noticed that even for simple output histories, the resulting control was often infeasible [4]. More recently, driven by the advances in computational power to process datasets, IRL methods have gained considerable attention. In [5] a maximum entropy-based approach is proposed for stationary Markov Decision Processes (MDPs), which is based on a backward/forward pass scheme (see also [6] for linear multi-agent games). In [7] a local approximation of the reward is used and in [8] Gaussian processes are exploited, leading to a method that requires matrix inversion operations and yields optimization problems that are not convex in general. Instead, in [9] manipulation tasks are considered and path integrals are used to learn the cost, while in [10] learning is achieved via deep networks for stationary MDPs. A model-based IRL approach for deterministic systems is presented in [11] for online cost estimation and [12] tackles the IRL problem in the context of deterministic multiplayer non-cooperative games. 
The framework of linearly solvable MDPs is instead leveraged in [13] and, while it has the advantages of avoiding the solution of forward MDPs in each iteration of the optimization and of yielding a convex optimization problem, it also assumes that the agent can directly specify the state transitions. We also recall [14], where a risk-sensitive IRL method is proposed for stationary MDPs under the assumption that the expert policy belongs to the exponential family of distributions. Also, in the context of IOC, [15] considers stochastic dynamics and proposes an approach to learn the parameter of a control regularizer. The IOC problem for known nonlinear deterministic systems with a cost function that is quadratic in the input is also considered in [16]. Finally, as we shall see, in order to obtain our results on the inverse problem we also solve a forward problem that involves optimizing, over probability functions, costs that contain a Kullback-Leibler divergence term. We refer to e.g., [2] for a survey on this class of problems in the context of sequential decision-making across learning and control. _Contributions:_ we introduce a number of results to estimate the possibly non-convex and non-stationary cost of an agent by observing its interactions with the environment, which can be nonlinear, non-stationary and stochastic, and for which just a probabilistic description is known. This probabilistic description can be obtained directly from data. Specifically, by leveraging a probabilistic framework, we give a result that enables the cost to be estimated by solving an optimization problem that is convex even when the dynamics is nonlinear, non-stationary and stochastic. In order to obtain our result on the inverse problem, which leverages maximum likelihood arguments, we also tackle a forward problem. This leads to the formulation of a finite-horizon optimal control problem with randomized policies as decision variables. For this problem, we find the optimal solution and show that this is a probability mass function with an exponentially twisted kernel. This is a class of policies that is often assumed in works on IRL. Also, we turn our result on cost estimation into an algorithm and its effectiveness is illustrated via simulations on the problem of estimating the cost of an agent that is stabilizing the unstable equilibrium of a pendulum. While our results are inspired by works on IRL/IOC, this paper offers a number of key technical novelties. First, we do not require that the agent can specify its state transitions and we do not assume that the expert policy is stationary. Despite this, our approach leads to an optimization problem to estimate the cost that we prove to be convex. Moreover, our approach does not require running and solving forward problems in each iteration of the optimization and it does not require the underlying dynamics to be deterministic. ## II Mathematical Preliminaries and Problem Formulation Sets are in _calligraphic_ and vectors in **bold**. A random variable is denoted by \(\mathbf{V}\) and its realization is \(\mathbf{v}\). We denote the _probability mass function_, or simply _pmf_, of \(\mathbf{V}\) by \(p(\mathbf{v})\) and we let \(\mathcal{D}\) be the convex subset of pmfs. Whenever we take sums involving pmfs we always assume that the sums exist. The expectation of a function \(\mathbf{h}(\cdot)\) of \(\mathbf{V}\) is \(\mathbb{E}_{p}[\mathbf{h}(\mathbf{V})]:=\sum_{\mathbf{v}}\mathbf{h}(\mathbf{v})p(\mathbf{v})\), where the sum is over the support of \(p(\mathbf{v})\); whenever it is clear from the context, we omit the subscript in the sum. The joint pmf of \(\mathbf{V}_{1}\) and \(\mathbf{V}_{2}\) is denoted by \(p(\mathbf{v}_{1},\mathbf{v}_{2})\) and the conditional pmf of \(\mathbf{V}_{1}\) with respect to \(\mathbf{V}_{2}\) is \(p\left(\mathbf{v}_{1}\mid\mathbf{v}_{2}\right)\). Countable sets are denoted by \(\{w_{k}\}_{k_{1}:k_{n}}\), where \(w_{k}\) is the generic set element, \(k_{1}\) (\(k_{n}\)) is the index of the first (last) element and \(k_{1}:k_{n}\) is the set of consecutive integers between (and including) \(k_{1}\) and \(k_{n}\). A pmf of the form \(p(\mathbf{v}_{0},\ldots,\mathbf{v}_{N})\) is compactly written as \(p_{0:N}\) (by definition \(p_{k:k}:=p_{k}(\mathbf{v}_{k})\)). We use the shorthand notation \(p_{k|k-1}\) to denote \(p_{k}(\mathbf{v}_{k}\mid\mathbf{v}_{k-1})\). Also, functionals are denoted by capital calligraphic characters with arguments within curly brackets. We make use of the Kullback-Leibler (KL [17]) divergence, a measure of the proximity of a pair of pmfs \(p(\mathbf{v})\) and \(q(\mathbf{v})\), defined as \(\mathcal{D}_{\text{KL}}\left(p\mid\mid q\right):=\sum_{\mathbf{v}}p(\mathbf{v})\ln\left(p(\mathbf{v})/q(\mathbf{v})\right)\). We also recall here the chain rule for the KL-divergence: **Lemma 1**.: _Let \(\mathbf{V}\) and \(\mathbf{Z}\) be two (possibly, vector) random variables and let \(f(\mathbf{v},\mathbf{z})\) and \(g(\mathbf{v},\mathbf{z})\) be two joint pmfs. Then, the following identity holds:_ \[\begin{split}\mathcal{D}_{\text{KL}}\left(f(\mathbf{v},\mathbf{z})\mid\mid g(\mathbf{v},\mathbf{z})\right)=\mathcal{D}_{\text{KL}}\left(f(\mathbf{v})\mid\mid g(\mathbf{v})\right)+\\ \mathbb{E}_{f(\mathbf{v})}\left[\mathcal{D}_{\text{KL}}\left(f(\mathbf{z}\mid\mathbf{v})\mid\mid g(\mathbf{z}\mid\mathbf{v})\right)\right].\end{split} \tag{1}\] ### _Set-up of the Control Problem_ We let \(\mathbf{X}_{k}\in\mathcal{X}\subseteq\mathbb{Z}^{n}\) be the system state at time step \(k\) and \(\mathbf{U}_{k}\in\mathcal{U}\subseteq\mathbb{Z}^{p}\) be the control input at time step \(k\). Throughout the paper, the time indexing is chosen so that the control input \(\mathbf{u}_{k}\) is determined based on information available up to \(k-1\) and, when the input \(\mathbf{u}_{k}\) is applied, the system transitions from state \(\mathbf{x}_{k-1}\) to state \(\mathbf{x}_{k}\). We let: (i) \(\mathbf{\Delta}_{k}:=(\mathbf{x}_{k-1},\mathbf{u}_{k})\) be the input-state data pair collected from the system when this is in state \(\mathbf{x}_{k-1}\) and \(\mathbf{u}_{k}\) is applied; (ii) \(\mathbf{\Delta}_{0:N}:=(\{\mathbf{\Delta}_{k}\}_{1:N},\mathbf{x}_{N})\) be the dataset over the time horizon \(\mathcal{T}:=0:N\). We also denote by \(p_{0:N}:=p(\mathbf{\Delta}_{0:N})\) the joint pmf of the dataset. We use the wording _dataset_ to denote a sequence of input-state data. Sometimes, in applications one has available a collection of datasets, which we term a _database_ in what follows. **Remark 1**.: _As noted in [18], \(p_{0:N}\) is a black-box type of model that can be obtained directly from the data and does not require assumptions on the underlying dynamics._ We now make the standard assumption that the Markov property holds (the chain rule (1) is illustrated numerically in the sketch below). 
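The following short NumPy sketch, with variable names and example dimensions of our choosing, numerically verifies the chain rule (1) on a randomly generated pair of joint pmfs.

```
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    """KL-divergence between two pmfs stored as arrays of the same shape."""
    return float(np.sum(p * np.log(p / q)))

# random joint pmfs f(v, z) and g(v, z) with full support
f = rng.random((4, 5)); f /= f.sum()
g = rng.random((4, 5)); g /= g.sum()

# marginals over v and conditionals of z given v
f_v, g_v = f.sum(axis=1), g.sum(axis=1)
f_zv, g_zv = f / f_v[:, None], g / g_v[:, None]

# right-hand side of (1): D_KL(f(v)||g(v)) + E_{f(v)}[D_KL(f(z|v)||g(z|v))]
rhs = kl(f_v, g_v) + sum(f_v[i] * kl(f_zv[i], g_zv[i]) for i in range(len(f_v)))
assert np.isclose(kl(f, g), rhs)  # the chain rule holds
```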
The expectation of a function \(\mathbf{h}(\cdot)\) of \(\mathbf{V}\) is \(\mathbb{E}_{p}[\mathbf{h}(\mathbf{V})]:=\sum_{\mathbf{v}}\mathbf{h}(\mathbf{v })p(\mathbf{v})\), where the sum is over the support of \(p(\mathbf{v})\); whenever it is clear from the context, we omit the subscript in the sum. The joint pmf of \(\mathbf{V}_{1}\) and \(\mathbf{V}_{2}\) is denoted by \(p(\mathbf{v}_{1},\mathbf{v}_{2})\) and the conditional pmf of \(\mathbf{V}_{1}\) with respect to \(\mathbf{V}_{2}\) is \(p\left(\mathbf{v}_{1}\mid\mathbf{v}_{2}\right)\). Countable sets are denoted by \(\{w_{k}\}_{k_{1}:k_{n}}\), where \(w_{k}\) is the generic set element, \(k_{1}\) (\(k_{n}\)) is the index of the first (last) element and \(k_{1}:k_{n}\) is the set of consecutive integers between (including) \(k_{1}\) and \(k_{n}\). A pmf of the form \(p(\mathbf{v}_{0},\ldots,\mathbf{v}_{N})\) is compactly written as \(p_{0:N}\) (by definition \(p_{k:k}:=p_{k}(\mathbf{v}_{k})\)). We use the shorthand notation \(p_{k|k-1}\) to denote \(p_{k}(\mathbf{v}_{k}\mid\mathbf{v}_{k-1})\). Also, functionals are denoted by capital calligraphic characters with arguments within curly brackets. We make use of the Kullback-Leibler (KL [17]) divergence, a measure of the proximity of the pair of pmfs \(p(\mathbf{v})\) and \(q(\mathbf{v})\), defined as \(\mathcal{D}_{\text{KL}}\left(p\mid\mid q\right):=\sum_{\mathbf{v}}p(\mathbf{v })\ln\left(p(\mathbf{v})/q(\mathbf{v})\right)\). We also recall here the chain rule for the KL-divergence: **Lemma 1**.: _Let \(\mathbf{V}\) and \(\mathbf{Z}\) bet two (possibly, vector) random variables and let \(f(\mathbf{v},\mathbf{z})\) and \(g(\mathbf{v},\mathbf{z})\) be two joint pmfs. Then, the following identity holds:_ \[\begin{split}\mathcal{D}_{\text{KL}}\left(f(\mathbf{v},\mathbf{z })\mid\mid g(\mathbf{v},\mathbf{z})\right)=\mathcal{D}_{\text{KL}}\left(f( \mathbf{v})\mid\mid g(\mathbf{v})\right)+\\ \mathbb{E}_{f(\mathbf{v})}\left[\mathcal{D}_{\text{KL}}\left(f( \mathbf{z}\mid\mathbf{y})\mid\mid g(\mathbf{z}\mid\mathbf{y})\right)\right]. \end{split} \tag{1}\] ### _Set-up of the Control Problem_ We let \(\mathbf{X}_{k}\in\mathcal{X}\subseteq\mathbb{Z}^{n}\) be the system state at time step \(k\) and \(\mathbf{U}_{k}\in\mathcal{U}\subseteq\mathbb{Z}^{p}\) be the control input at time step \(k\). Throughout the paper, the time indexing is chosen so that the control input \(\mathbf{u}_{k}\) is determined based on information available up to \(k-1\) and when input \(\mathbf{u}_{k}\) is applied, the system transitions from state \(\mathbf{x}_{k-1}\) to state \(\mathbf{x}_{k}\). We let: (i) \(\mathbf{\Delta}_{k}:=(\mathbf{x}_{k-1},\mathbf{u}_{k})\) be the input-state data pair collected from the system when this is in state \(\mathbf{x}_{k-1}\) and \(\mathbf{u}_{k}\) is applied; (ii) \(\mathbf{\Delta}_{0:N}:=(\{\mathbf{\Delta}_{k}\}_{1:N},\mathbf{x}_{N})\) be the dataset over the time horizon \(\mathcal{T}:=0:N\). We also denote by \(p_{0:N}:=p(\mathbf{\Delta}_{0:N})\) the joint pmf of the dataset. We use the wording _dataset_ to denote a sequence of input-state data. Sometimes, in applications one has available a collection of datasets, which we term as _database_ in what follows. **Remark 1**.: _As noted in [18], \(p_{0:N}\) is a black box type model that can be obtained directly from the data and does not require assumptions on the underlying dynamics._ We now make the standard assumption that the Markov property holds. 
Then, \(p_{0:N}\) can be conveniently partitioned: \[p_{0:N}=p_{0}\left(\mathbf{x}_{0}\right)\prod_{k=1}^{N}p_{k|k-1}=p_{0}\left( \mathbf{x}_{0}\right)\prod_{k=1}^{N}p_{k|k-1}^{(x)}p_{k|k-1}^{(u)}, \tag{2}\] where we used the shorthand notation \(p_{k|k-1}^{(x)}:=p_{k}^{(x)}\left(\mathbf{x}_{k}\mid\mathbf{u}_{k},\mathbf{x} _{k-1}\right)\) and \(p_{k|k-1}^{(x)}:=p_{k|k-1}^{(x)}p_{k|k-1}^{(u)}=p\left(\mathbf{x}_{k},\mathbf{ u}_{k}\mid\mathbf{x}_{k-1}\right)\). Also, it is useful to define the joint pmf \(\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{ \mathbb{\mathbb{\mathbb{\mathbbmathbbmathbbmathbbmathbbmathbbmathbbmathbbmathbb{\mathbbmathbbmathbbmathbb{ \mathbb{ \leftleft( \) \left(\mathbf{\mathbf{\mathbb{\mathbb{\mathbb{\mathbb{ 0 1 1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\)\)\)\}\}\}\}\}\}\}}\}\}}}\}\}\}\}\}\}\}\}\}\}\}\}\\\)\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\{{{{ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\}\\\\}\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\}\\\\\\\\\\\\}\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\}\\\\\\\ As we shall see, the solution of Problem 1 is a sequence of randomized policies. At each \(k\), the control input applied to the system, i.e. \(\mathbf{u}_{k}^{*}\), is sampled from \(p_{k|k-1}^{(u)}\stackrel{{*}}{{\rightarrow}}1\). In the cost functional of Problem 1, minimizing the second term minimizes the expected agent cost, while minimizing the first term amounts at minimizing the discrepancy between \(p_{0:N}\) and \(q_{0:N}\). Hence, the first term in the cost functional can be thought of as a regularizer, biasing the behavior of the closed loop system towards the pmf \(q_{0:N}\). Typically, \(q_{0:N}\) takes the role of a passive dynamics, see e.g. [19, 20], or it is used to express some desired behavior for the closed loop system extracted from demonstration databases as in [21]. See also [2] for a survey on sequential decision-making problems that involve minimizing this class of cost functionals. ### _The Inverse Control Problem_ The inverse control problem we consider consists in estimating both the cost-to-go for the agent, say \(\bar{c}_{k}(\cdot)\), and the agent cost \(c_{k}(\cdot)\) given a set of observed states/inputs sampled from \(p_{k|k-1}^{(x)}\) and from the agent policy. In what follows, we denote by \(\tilde{\mathbf{x}}_{k}\) and \(\dot{\mathbf{u}}_{k}\) the observed state and control input at time-step \(k\). 
We also make the following: **Assumption 2**.: _There exist some \(\mathbf{w}_{k}:=[w_{k,1},\ldots,w_{k,f}]^{T}\) such that \(\bar{c}_{k}\left(\mathbf{x}_{k}\right)=-\mathbf{w}_{k}{}^{T}\mathbf{h}\left(\mathbf{x}_{k}\right)\), where \(\mathbf{h}(\mathbf{x}_{k}):=[h_{1}(\mathbf{x}_{k}),\ldots,h_{f}(\mathbf{x}_{k})]^{T}\) and \(h_{i}:\mathcal{X}\rightarrow\mathbb{R}\) are known functions, \(i=1,\ldots,f\)._ In what follows, we say that \(\mathbf{h}(\mathbf{x}_{k})\) is the features vector. The assumption, which is rather common in the literature, see e.g., [5, 6, 9, 11, 13], formalizes the fact that the cost-to-go can be expressed as a linear combination of given, possibly nonlinear, _features_ [22]. With our results in Section III-B we propose a maximum likelihood estimator for the cost (see e.g., [23] for a maximum likelihood framework for linear systems in the context of data-driven control). ## III Main Results ### _Computing the Optimal Policy for Problem 1_ With the next result we give the solution to Problem 1. **Theorem 1**.: _Consider Problem 1 and let Assumption 1 hold. Then:_ (i) _the problem has the unique solution \(\{p_{k|k-1}^{(u)}{}^{*}\}_{1:N}\), with_ \[p_{k|k-1}^{(u)}{}^{*}=\frac{\bar{p}_{k|k-1}^{(u)}\exp\left(-\mathbb{E}_{p_{k|k-1}^{(x)}}[\bar{c}_{k}(\mathbf{X}_{k})]\right)}{\sum_{\mathbf{u}_{k}}\bar{p}_{k|k-1}^{(u)}\exp\left(-\mathbb{E}_{p_{k|k-1}^{(x)}}[\bar{c}_{k}(\mathbf{X}_{k})]\right)}, \tag{5}\] _where_ \[\bar{p}_{k|k-1}^{(u)}:=q_{k|k-1}^{(u)}\exp\left(-\mathcal{D}_{\text{KL}}\left(p_{k|k-1}^{(x)}\mid\mid q_{k|k-1}^{(x)}\right)\right),\] _and where \(\bar{c}_{k}:\mathcal{X}\rightarrow\mathbb{R}\) is obtained via the backward recursion_ \[\begin{split}\bar{c}_{k}(\mathbf{x}_{k})&=c_{k}(\mathbf{x}_{k})-\hat{c}_{k}(\mathbf{x}_{k}),\\ \hat{c}_{k}(\mathbf{x}_{k})&=\ln\left(\mathbb{E}_{q_{k+1|k}^{(u)}}\left[\exp\left(-\mathcal{D}_{\text{KL}}\left(p_{k+1|k}^{(x)}\mid\mid q_{k+1|k}^{(x)}\right)-\mathbb{E}_{p_{k+1|k}^{(x)}}[\bar{c}_{k+1}(\mathbf{X}_{k+1})]\right)\right]\right),\end{split} \tag{6}\] _initialized with_ \[\mathcal{D}_{\text{KL}}\left(p_{N+1|N}^{(x)}\mid\mid q_{N+1|N}^{(x)}\right)+\mathbb{E}_{p_{N+1|N}^{(x)}}\left[\bar{c}_{N+1}(\mathbf{X}_{N+1})\right]=0;\] (ii) _the corresponding minimum is given by:_ \[-\sum_{k=1}^{N}\mathbb{E}_{\bar{p}_{k-1}}\left[\hat{c}_{k-1}(\mathbf{X}_{k-1})\right], \tag{7}\] _where \(\bar{p}_{k-1}:=p_{k-1}(\mathbf{x}_{k-1})\)._ _Sketch of the proof._ The full proof is omitted here for brevity and will be presented elsewhere. We give here a sketch of the proof, which is by induction. **Step \(1\).** Consider the cost functional in (4). By means of Lemma 1, Problem 1 can be recast as the sum of the following two sub-problems: \[\begin{split}\underset{\{p_{k|k-1}^{(u)}\}_{1:N-1}}{\text{min}}&\bigg{\{}\mathcal{D}_{\text{KL}}\left(p_{0:N-1}\mid\mid q_{0:N-1}\right)+\sum_{k=1}^{N-1}\mathbb{E}_{\bar{p}_{k-1:k}}\left[\mathbb{E}_{p_{k|k-1}^{(x)}}[c_{k}(\mathbf{X}_{k})]\right]\bigg{\}}\\ & s.t.\ p_{k|k-1}^{(u)}\in\mathcal{D}\ \ \forall k\in 1:N-1,\end{split} \tag{8a}\] and \[\begin{split}\underset{p_{N|N-1}^{(u)}}{\text{min}}&\Big{\{}\mathbb{E}_{\bar{p}_{N-1}}\left[\mathcal{D}_{\text{KL}}\left(p_{N|N-1}\mid\mid q_{N|N-1}\right)+\mathbb{E}_{p_{N|N-1}}\left[c_{N}(\mathbf{X}_{N})\right]\right]\Big{\}}\\ & s.t.\ p_{N|N-1}^{(u)}\in\mathcal{D}.\end{split} \tag{8b}\] Hence, the minimum of (8b) is \(\mathbb{E}_{\bar{p}_{N-1}}\left[\mathcal{C}_{N}\left\{p_{N|N-1}^{(u)}{}^{*}\right\}\right]\), with \(\mathcal{C}_{N}\left\{p_{N|N-1}^{(u)}{}^{*}\right\}\) being the optimal cost obtained by solving \[\begin{split}\underset{p_{N|N-1}^{(u)}}{\text{min}}&\ \mathcal{D}_{\text{KL}}\left(p_{N|N-1}\mid\mid q_{N|N-1}\right)+\mathbb{E}_{p_{N|N-1}}\left[\bar{c}_{N}(\mathbf{X}_{N})\right]\\ & s.t.\ p_{N|N-1}^{(u)}\in\mathcal{D},\end{split} \tag{9}\] where we set \(\bar{c}_{N}(\mathbf{x}_{N}):=c_{N}(\mathbf{x}_{N})-\hat{c}_{N}(\mathbf{x}_{N})\), with \(\hat{c}_{N}(\mathbf{x}_{N})=0\). This corresponds to the recursion in (6) at \(k=N\). **Step \(2\).** The next step is to show that the problem in (9) can be conveniently written as \[\begin{split}\underset{p_{N|N-1}^{(u)}}{\text{min}}&\bigg{\{}\mathbb{E}_{p_{N|N-1}^{(u)}}\left[\mathcal{D}_{\text{KL}}\left(p_{N|N-1}^{(x)}\mid\mid q_{N|N-1}^{(x)}\right)+\mathbb{E}_{p_{N|N-1}^{(x)}}\left[\bar{c}_{N}(\mathbf{X}_{N})\right]\right]+\alpha(\mathbf{x}_{N-1})\bigg{\}}\\ & s.t.\ p_{N|N-1}^{(u)}\in\mathcal{D},\end{split} \tag{10}\] where \(\alpha(\mathbf{x}_{N-1}):=\mathcal{D}_{\text{KL}}\left(p_{N|N-1}^{(u)}\ ||\ q_{N|N-1}^{(u)}\right)\). By studying the second variation of the cost functional, it can be shown that this is strictly convex in the decision variable \(p_{N|N-1}^{(u)}\). Hence, since the subset \(\mathcal{D}\) is convex, the problem in (10) is a convex optimization problem. **Step \(3\).** We find the solution to the problem in (9) by using the equivalent formulation given in (10). Since the problem in (10) is convex with a strictly convex cost functional, the unique optimal solution can be found by imposing the stationarity conditions on the Lagrangian, which is given by: \[\begin{split}\mathcal{L}(p_{N|N-1}^{(u)},\lambda_{N})=&\ \mathbb{E}_{p_{N|N-1}^{(u)}}\Big{[}\mathcal{D}_{\text{KL}}\left(p_{N|N-1}^{(x)}\ ||\ q_{N|N-1}^{(x)}\right)+\mathbb{E}_{p_{N|N-1}^{(x)}}\left[\bar{c}_{N}(\mathbf{X}_{N})\right]\Big{]}\\ &+\mathcal{D}_{\text{KL}}\left(p_{N|N-1}^{(u)}\ ||\ q_{N|N-1}^{(u)}\right)+\lambda_{N}\left(\sum_{\mathbf{u}_{N}}p_{N|N-1}^{(u)}-1\right),\end{split} \tag{11}\] where \(\lambda_{N}\) is the Lagrange multiplier corresponding to the constraint \(p_{N|N-1}^{(u)}\in\mathcal{D}\). 
Now, by imposing the first order stationarity conditions on \(\mathcal{L}(p_{N|N-1}^{(u)},\lambda_{N})\), it can be shown that the unique optimal solution is given by (5) with \(k=N\). The result then follows by iterating the above arguments backward in time, which yields the recursion in (6) and the minimum in (7). ### _Estimating the Cost_ We can now give our result for the inverse control problem. **Theorem 2**.: _Let Assumption 2 hold and let \(\{\hat{\mathbf{x}}_{k}\}_{0:M}\) and \(\{\hat{\mathbf{u}}_{k}\}_{1:M}\) be the observed states and control inputs. Then, the maximum likelihood estimate of the cost-to-go is \(\bar{c}_{k}^{*}(\mathbf{x}_{k})=-\mathbf{w}_{k}^{*T}\mathbf{h}(\mathbf{x}_{k})\), where \(\mathbf{w}^{*}\) is obtained by solving the convex optimization problem_ \[\begin{split}\mathbf{w}^{*}&:=\left[\mathbf{w}_{1}^{*T},\ldots,\mathbf{w}_{M}^{*T}\right]^{T}\in\\ \underset{\mathbf{w}}{\text{arg min}}\Bigg{\{}\sum_{k=1}^{M}&\bigg{(}-\mathbb{E}_{p(\mathbf{x}_{k}\mid\hat{\mathbf{x}}_{k-1},\hat{\mathbf{u}}_{k})}\left[\mathbf{w}_{k}^{T}\mathbf{h}(\mathbf{x}_{k})\right]\\ &+\ln\Big{(}\sum_{\mathbf{u}_{k}}\bar{q}_{k|k-1}^{(u)}(\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})\exp\left(\mathbb{E}_{p(\mathbf{x}_{k}\mid\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})}\left[\mathbf{w}_{k}^{T}\mathbf{h}(\mathbf{x}_{k})\right]\right)\Big{)}\bigg{)}\Bigg{\}},\end{split} \tag{16}\] _and where_ \[\bar{q}_{k|k-1}^{(u)}(\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k}):=q(\mathbf{u}_{k}\mid\hat{\mathbf{x}}_{k-1})\exp\left(-\mathcal{D}_{\text{KL}}\left(p(\mathbf{x}_{k}\mid\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})\mid\mid q(\mathbf{x}_{k}\mid\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})\right)\right). \tag{17}\] _Sketch of the proof._ The result is based on maximum likelihood, leveraging the structure of the policy of Theorem 1. Convexity follows from the fact that the feasibility domain is convex and the cost function is a linear combination of the log-sum-exp function and of a linear function (in the decision variables). The proof, omitted here for brevity, will be presented elsewhere. **Remark 4**.: _The problem in (16) is an unconstrained convex optimization problem with a twice differentiable cost. Constraints on the \(\mathbf{w}_{k}\)'s can be added to capture application-specific requirements, such as dwell-time constraints._ Next, we propose an estimator for the case in which the cost, which we simply denote by \(c(\cdot)\), is stationary. The result (the proof of which is omitted here for brevity) implies that the cost can be estimated from a _greedy_ policy obtained via Theorem 1. Note that, in this case, the decision variable in the resulting optimization is \(\mathbf{w}_{s}\in\mathbf{R}^{f}\) rather than \(\mathbf{w}\in\mathbf{R}^{f\times M}\). **Corollary 1**.: _Let Assumption 2 hold and consider \(p_{k|k-1}^{(u)}{}^{*}\) obtained at each \(k\) from Theorem 1 with \(N=1\). Further, let the cost be stationary. Then, the maximum likelihood estimate for the cost is \(c^{*}(\mathbf{x}_{k})=\mathbf{w}_{s}^{*T}\mathbf{h}(\mathbf{x}_{k})\), where \(\mathbf{w}_{s}^{*}\) is given by:_ \[\begin{split}\mathbf{w}_{s}^{*}\in\underset{\mathbf{w}_{s}}{\text{arg min}}\Bigg{\{}\sum_{k=1}^{M}&\left(-\mathbb{E}_{p(\mathbf{x}_{k}\mid\hat{\mathbf{x}}_{k-1},\hat{\mathbf{u}}_{k})}\left[\mathbf{w}_{s}^{T}\mathbf{h}(\mathbf{x}_{k})\right]\right)\\ &+\sum_{k=1}^{M}\ln\left(\sum_{\mathbf{u}_{k}}\bar{q}_{k|k-1}^{(u)}(\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})\exp\left(\mathbb{E}_{p(\mathbf{x}_{k}\mid\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})}\left[\mathbf{w}_{s}^{T}\mathbf{h}(\mathbf{x}_{k})\right]\right)\right)\Bigg{\}},\end{split} \tag{18}\] _with \(\mathbf{w}_{s}\in\mathbf{R}^{f}\) and \(\bar{q}_{k|k-1}^{(u)}(\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})\) defined in Theorem 2._ Corollary 1 implies that, rather conveniently, the cost can be learned from a greedy policy rather than from the optimal policy. The result can also be turned into an algorithmic procedure, with its main steps given in Algorithm 1. 
``` Inputs: observed data \(\hat{\mathbf{u}}_{1},\ldots,\hat{\mathbf{u}}_{M}\) and \(\hat{\mathbf{x}}_{0},\ldots,\hat{\mathbf{x}}_{M}\), \(f\)-dimensional features vector \(\mathbf{h}(\mathbf{x}_{k})\), \(p_{k|k-1}^{(x)}\), \(q_{k|k-1}^{(x)}\), \(q_{k|k-1}^{(u)}\) Output: \(c^{*}(\mathbf{x}_{k})\) for\(k=1\) to \(M\)do Compute \(\bar{q}_{k|k-1}^{(u)}(\hat{\mathbf{x}}_{k-1},\mathbf{u}_{k})\) using (17) endfor Compute \(\mathbf{w}_{s}^{*}\) by solving the problem in (18) Return \(c^{*}(\mathbf{x}_{k})=\mathbf{w}_{s}^{*T}\mathbf{h}(\mathbf{x}_{k})\) ``` **Algorithm 1** Pseudo-code from Corollary 1 ## IV Application Example We illustrate the effectiveness of our results by considering the problem of stabilizing a pendulum at its unstable equilibrium point. Specifically, given a suitable cost, we first used Theorem 1 to compute the optimal policy and then we leveraged Corollary 1 to estimate the cost used in the policy. The pendulum dynamics (only used to generate data) is: \[\begin{split}\theta_{k}&=\theta_{k-1}+\omega_{k-1}dt+W_{\theta}\\ \omega_{k}&=\omega_{k-1}+\left(\frac{g}{l}\sin(\theta_{k-1})+\frac{u_{k}}{ml^{2}}\right)dt+W_{\omega},\end{split} \tag{19}\] where \(\theta_{k}\) is the angular position, \(\omega_{k}\) is the angular velocity and \(u_{k}\) is the torque applied at the hinged end. The parameter \(l\) is the length of the rod, \(m\) is the mass of the pendulum, \(g\) is the gravitational acceleration and \(dt\) is the discretization step. Also, \(W_{\theta}\) and \(W_{\omega}\) capture Gaussian noise on the state variables. In our experiments we set \(W_{\theta}\sim\mathcal{N}(0,0.05)\) and \(W_{\omega}\sim\mathcal{N}(0,0.1)\). As in [2] we let \(\mathbf{X}_{k}:=[\theta_{k},\omega_{k}]^{T}\), with \(\mathbf{X}_{k}\in\mathcal{X}\) and \(u_{k}\in\mathcal{U}\), where \(\mathcal{X}:=[-\pi,\pi]\times[-5,5]\) and \(\mathcal{U}:=[-2.5,2.5]\). The _target_ pendulum we wanted to control had parameters \(m=1\)kg, \(l=0.6\)m, and \(dt=0.1\)s. We also considered a different (i.e., _source_) pendulum with parameters \(m=0.5\)kg, \(l=0.5\)m, and \(dt=0.1\)s. We obtained the pmfs \(p_{k|k-1}^{(x)}\) and \(q_{k|k-1}^{(x)}\) from a database collected following the process from [2], leveraging the source code that was provided therein. We obtained \(q_{k|k-1}^{(u)}\) by controlling the source pendulum via Model Predictive Control (MPC) with a receding horizon window of width \(H=20\) steps. The action space was \(\mathcal{U}\) and the cost function at each \(k\) was \(\sum_{t\in k:k+H-1}(\theta_{t}^{2}+0.1\omega_{t}^{2})+\theta_{k+H}^{2}+0.5\omega_{k+H}^{2}\). Then, as in [2], we added Gaussian noise to the MPC control inputs so that \(q_{k|k-1}^{(u)}\) was \(\mathcal{N}(\tilde{u}_{k},0.2)\), with \(\tilde{u}_{k}\) being the control input computed via MPC. Given this set-up, we first computed \(p_{k|k-1}^{(u)}{}^{*}\) for the target pendulum using Theorem 1 with \(N=1\) and using as cost: \[c(\mathbf{x}_{k})=(\theta_{k}-\theta_{d})^{2}+0.01(\omega_{k}-\omega_{d})^{2}, \tag{20}\] with \(\theta_{d}=0\) and \(\omega_{d}=0\) (\(\theta_{d}=0\) corresponds to the unstable equilibrium). Then, the control input to the target pendulum was obtained by sampling from \(p_{k|k-1}^{(u)}{}^{*}\). In Figure 1 the behavior of the angular position of the controlled pendulum and the corresponding control input is shown. The figure clearly illustrates that the unstable equilibrium is stabilized. Next, we illustrated the effectiveness of Algorithm 1 in reconstructing the cost given in (20) that was used for policy computation. 
To this aim, we used a dataset of \(300\) datapoints collected from a single simulation where the target pendulum was controlled by the policy computed above. We defined the features as \(\mathbf{h}(\mathbf{x}_{k})=[\mid\theta_{k}-\theta_{d}\mid,\mid\omega_{k}-\omega_{ d}\mid]^{T}\). We then obtained from Algorithm 1 the weights \(\mathbf{w}_{s}^{*}=[4.16,2.03]^{T}\) and hence the estimated cost was \(c^{*}(\mathbf{x}_{k})=4.16\mid\theta_{k}-\theta_{d}\mid+2.03\mid\omega_{k}- \omega_{d}\mid\). Note that the weight was higher for \(\theta_{k}\) than for \(\omega_{k}\), consistently with the cost in (20). Finally, with this estimated cost, we used Theorem 1 with \(N=1\) to obtain a new policy, which we used on the target pendulum. Simulations (in Figure 2) illustrate that this policy with the estimated cost effectively stabilizes the pendulum. ## V Conclusions We considered the problem of estimating the possibly non-convex cost of an agent by observing its interactions with a nonlinear, non-stationary, and stochastic environment. By leveraging certain probabilistic descriptions that can be obtained both directly from the data and/or from first-principles, we formulated a convex optimization problem to estimate the cost. To solve the inverse problem, we also formulated a convex finite-horizon optimal control problem and found its optimal solution. The results were turned into an algorithm and their effectiveness was illustrated via simulations. Future work will involve embedding environment learning and addressing constrained control tasks.
2301.13390
Massive MIMO in 5G: How Beamforming, Codebooks, and Feedback Enable Larger Arrays
Massive multiple-input multiple-output (MIMO) is an important technology in fifth generation (5G) cellular networks and beyond. To help design the beamforming at the base station, 5G has introduced new support in the form of flexible feedback and configurable antenna array geometries. In this article, we present an overview of MIMO throughout the mobile standards, highlight the new beam-based feedback system in 5G NR, and describe how this feedback system enables massive MIMO through beam management. Finally, we conclude with challenges related to massive MIMO in 5G.
Ryan M. Dreifuerst, Robert W. Heath Jr
2023-01-31T03:40:28Z
http://arxiv.org/abs/2301.13390v1
# Massive MIMO in 5G: How Beamforming, Codebooks, and Feedback Enable Larger Arrays

###### Abstract

Massive multiple-input multiple-output (MIMO) is an important technology in fifth generation (5G) cellular networks and beyond. To help design the beamforming at the base station, 5G has introduced new support in the form of flexible feedback and configurable antenna array geometries. In this article, we present an overview of MIMO throughout the mobile standards, highlight the new beam-based feedback system in 5G NR, and describe how this feedback system enables massive MIMO through beam management. Finally, we conclude with challenges related to massive MIMO in 5G.

## Introduction

Multiple-input-multiple-output (MIMO) is a general class of technologies that incorporates a number of transmission and reception techniques using multiple antennas. Spatial multiplexing (SM), among the best-known MIMO methods, involves the transmission of multiple data streams, called layers in 3GPP specifications. In a single user (SU-)MIMO setting, spatial multiplexing involves directing multiple streams to one user, resulting in an increase in the spectral efficiency proportional to the number of streams. Similarly, the concept can be applied in a multiple user (MU-)MIMO setting, where the layers may be split among users. MIMO usage in mobile standards has played a growing role since the inception of SM in 3GPP Release 7. MIMO was a backbone of the evolved high speed packet access (HSPA+) that would enable doubling the achievable data rates with two-layer MIMO. The subsequent Release \(8\) would increase the downlink MIMO capabilities to \(2\times 2\), with two layers that could be directed to two users, although uplink capabilities were still limited to a single user. Release \(8\) would also introduce other forms of multi-antenna techniques in the form of _transmission modes_. These formats include single-antenna, transmit diversity, open-loop SU-MIMO, closed-loop SU-MIMO, closed-loop rank-\(1\) precoding (beamforming), and MU-MIMO. Releases \(9\) and \(10\) would introduce larger array sizes (up to \(8\) downlink antennas) and transmission modes \(8\) and \(9\), which extend closed-loop SM to \(8\) antennas or fall back to transmit diversity if feedback is unavailable. The introduction of larger arrays also enabled multi-layer beamforming (BF), which differs from previous SM by using multiple antennas for each layer. Academically, this is usually referred to as _precoding_, but precoding is used in 3GPP notation to describe the multiplexing of the data streams onto the ports, as shown in Figure 1. Throughout Releases \(8-12\), antenna arrays have been assumed to have the same structure, in which antennas are arranged in columns and each column (as well as each polarization in dual-polarized arrays) is separately controlled for SM in the azimuth direction. Some basic degree of elevation control was further enabled with mechanical and electrical tilting of the arrays, though it was not until later that tilting was dynamically controlled and became standardized. Release \(13\) introduced a study item on full-dimension MIMO (FD-MIMO) that included controllable elements in both the azimuth and elevation directions for 3D beamforming. Additionally, the mapping of ports and physical antennas was separated so that multiple antennas could be controlled from a single channel state information (CSI) port (i.e., there does not need to be a one-to-one mapping).
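As a small illustration of this port-to-antenna separation (our sketch; the element count, spacing, and steering angle are illustrative assumptions, not 3GPP values), a single logical CSI port can drive a multi-antenna subarray through a fixed weight vector, so the UE only ever sees one effective antenna per port:

```python
import numpy as np

# Hypothetical port virtualization: one logical CSI port drives a 4-element
# half-wavelength uniform linear subarray through fixed analog weights.
num_elems = 4
angle = np.deg2rad(15.0)                       # assumed steering/downtilt angle
w = np.exp(-1j * np.pi * np.arange(num_elems) * np.sin(angle)) / np.sqrt(num_elems)

port_symbols = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)   # symbols on the port
antenna_out = np.outer(w, port_symbols)        # shape: (antennas, symbols)
# The UE estimates only the effective channel h^T w per port, so the physical
# array size stays transparent to the standardized feedback process.
```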
This paradigm shift in antenna arrays allowed networks to use larger, more directive arrays without needing to update the standards or feedback, by "grouping" antennas into subarrays that are transparent to the UE. With the interest in larger arrays and the separation of physical antennas and logical elements, Release \(13\) can be seen as a pivotal point in the evolution towards massive MIMO (M-MIMO).

Terminology in mobile broadband includes many acronyms and differences from academic wireless research. To assist readers, Table I outlines important terms necessary for understanding MIMO and feedback in 3GPP standards. An important distinction in mobile networks is an additional set of terms to differentiate the physical antennas from the logical processing chains and further-up data layers, as shown in Figure 1. The data layers are each associated with demodulation reference signals (DMRS) for estimating the effective channel seen by the user equipment (UE). The layers are then precoded over logical antenna ports, which are each mapped to individual resource grids and then sent to the physical antennas. Furthermore, polarization naturally applies to the system by mapping two logical ports onto orthogonally polarized antennas. The logical ports can apply to a non-exclusive subset of the physical antennas, so long as the same subset is used consistently. Each logical port is equipped with its own resource grid and mapper to take advantage of the flexible delineation between ports and physical antennas. Further evolution of this flexibility occurs when the physical antennas are not co-located, which enables multi-panel arrays and cell-free massive MIMO. The coordination of disaggregated antenna panels was introduced in LTE-A Release \(11\) as a centralized radio access network (C-RAN), although the idea has been brought back up in recent years as distributed MIMO or cell-free massive MIMO. We provide a simple visualization of these situations in Figure 2.

One of the cornerstones of 5G is a focus on flexibility. To that end, 5G NR incorporates a flexible numerology, configurable bandwidth parts, and numerous feedback formats. This flexibility is especially important for the two frequency ranges (FR1: 0-6GHz, FR2: \(\geq 6\)GHz), which correspond to different subcarrier spacings, bandwidths, and feedback regularity. We summarize some of the possible configurations in Table II. Of the changes in 5G, the new beam management format based on synchronization signal blocks (SSB) and channel state information reference signals (CSI-RS) is distinctly different from the CSI feedback process in LTE. The primary reasons for the new processes are: 1) Beamforming is enabled at every step, including initial access, to improve the SNR during synchronization and channel estimation. 2) Beam management integrates analog beam training with CSI feedback for new antenna array configurations like hybrid architectures. 3) The new feedback format (type-II) provides high-resolution CSI feedback to improve MU-MIMO performance. In the next section, we will describe the reference signals, which include known pilot sequences and configuration information, used in the downlink for 5G networks. There are additional reference signals for uplink-based beam management, but most of the focus will be on the downlink herein. In the following sections, we will describe the feedback associated with beam management and address how the integration of beam training improves the initial access performance and enables new, massive architectures.
## Reference signals

Reference signals (both SSB and CSI-RS) serve important tasks like synchronization, channel estimation, and handover in 5G networks. Within the reference signals, there are known pilot sequences with mathematical properties that aid tasks like synchronization, as well as configuration information like the base station capabilities and logical geometry. 5G NR uses synchronization signal (SS) blocks as a reference signal for synchronizing user equipment (UE) and obtaining basic feedback during initial access. SSBs are transmitted periodically by the base station, once every \(\{5,10,20,40,80,160\}\) ms. The base station transmits a specific primary and secondary synchronization sequence as well as embedded DMRS. In total, the SSB waveform spans \(240\) subcarriers and \(4\) OFDM symbols for each block. The entire process includes at least one and as many as \(\{4,8,64\}\) SS blocks in an SS burst, depending on the frequency range, each of which is transmitted time-sequentially according to [1, Section 4.1]. Note that both frequency and time offsets can be set to prevent SSB collisions with nearby stations. By using multiple SSBs, the base station can beamform each one differently, so that the UE can listen to the entire burst and provide the BS with the best beamformer index within the measurement report, thereby selecting a potential beam for downlink transmission or an angular direction for requesting additional feedback.

In contrast to SSB reference signals, CSI-RS can be configured with reference symbols transmitted across a wider bandwidth and can occur periodically or aperiodically. CSI-RS also enable feedback for massive hybrid arrays by training the analog beamforming with precise CSI-RS beamformers, without the UE needing knowledge of the physical antenna geometry. The CSI-RS index defines the UE-recommended analog beamformer and is always fed back in every feedback packet. If PMI is configured, the UE must also provide precoding information using knowledge of the logical port geometry provided by the BS. The drawback to CSI-RS, especially for large arrays, is an excess of overhead and potentially out-of-date information. In particular, UEs are not required to update the CSI-RS index for a CSI-RS resource if the process has been updated within the last \(5\) subframes, or if the number of CSI processes exceeds the limitations defined in Table VII.2.1 of [1]. This means that channel information may become stale for large arrays, which leads to misaligned beamforming and suboptimal performance. Furthermore, CSI-RS are expensive to allocate because only one CSI-RS is utilized over a set of resources to prevent interference.

## Feedback

The base station can configure CSI reports, which are packets containing feedback, either periodically or aperiodically, with reference signals multiplexed between the ports. A report generally includes the channel quality indicator (CQI) and a reference signal indicator (SSBRI/CRI). Additionally, a rank indicator for multi-stream communication and a precoding matrix indicator (PMI) can be included in the CSI report. Furthermore, some quantities such as CQI and PMI can include both wideband (average across the bandwidth) and subband components to increase the feedback and precoder responsiveness in frequency-selective channels. The CQI provides a measure of the strength of the channel and is used to determine the modulation order and code rate for the downlink transmission.
The reference signal indicator is used to report the strongest received reference signal index of the beamformed reference signals for beam training. The PMI field has different characteristics depending on the feedback format, but it carries the primary CSI information for the base station. There are two formats for PMI: predefined (type-I) or constructed (type-II). In LTE, the PMI format is always predefined, meaning the UE would select the PMI from an established table of precoder combinations according to a metric, e.g., maximizing the signal-to-interference-plus-noise ratio (SINR). The benefit of type-I PMI is that it requires low overhead and is computationally simpler for the UE to calculate. In contrast, type-II PMI is more flexible and precise, but it comes at the cost of higher computational complexity and larger overhead. Constructed PMI is built up as a sum of \(L_{\text{CSI}}\) multipath components, which are represented by oversampled 2D DFT beamforming vectors, with quantized amplitude and phase components. An iterative process is used in combination with a metric like SINR to determine the beamforming vectors and complex weights, although the oversampled DFT is reduced to an orthogonal basis after the first multipath component is selected, so that the remaining \(L_{\text{CSI}}-1\) beams are chosen only from the orthogonal DFT vectors. While type-II feedback is able to more accurately quantize the CSI, it is still limited by the channel estimation accuracy and the number of multipath components supported in the specification. Release \(16\) is currently restricted to at most \(L_{\text{CSI}}=4\) in type-II feedback and \(L_{\text{CSI}}=6\) in type-II enhanced feedback. This is a strict limitation in rich scattering environments such as macro-cell FR1 deployments, as seen in Figure 3. Channel estimation, however, can be improved through better estimators or by increasing the RSRP through beamformed pilot signals.

Fig. 1: The data processing flow from data layers to antenna outputs in a 5G base station. First the streams (and DMRS) are multiplexed according to the precoding matrix, which assigns the layers to the ports. Then, the result, along with CSI-RS, is beamformed to the logical ports. The logical processes are then mapped onto OFDM resource grids and potentially beamformed again onto the corresponding physical antennas. 5G uses dual polarized arrays to transmit multiple layers on orthogonal electromagnetic wave directions.

Fig. 2: Visualization of MU-MIMO, massive MIMO, and coordinated multi-point or cell-free M-MIMO. In cell-free massive MIMO, there is a centralized compute node, multiple access points, and user equipment (UE). The compute node uses the distributed access points like logical elements and processes the data across the network.

## Beam Management and Massive MIMO

Beam management is designed to unify the reference signals, channel state information, and feedback into one process [3]. At the heart of the process are three steps: initial access, beam reporting, and beam refinement. In some cases, the process is expanded to include beam tracking for mobile UEs or with separate refinement stages for the base station and UE. Ultimately, beam management is built around the flexibility of the feedback system in 5G NR. The initial access period includes the transmission of beamformed SSBs that provide the UEs with a basic synchronization signal and demodulation reference signals.
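As a toy illustration of SSB-based sweeping (entirely our sketch; the array size, DFT codebook, and channel model are illustrative assumptions, not standardized values), the base station can beamform each SSB in the burst differently and the UE reports the index with the largest measured RSRP:

```python
import numpy as np

rng = np.random.default_rng(1)
num_ant, num_ssb = 16, 8                # illustrative BS array / SSB burst size

# One candidate DFT beam per SSB in the burst, spanning the angular sector.
angles = np.arcsin(np.linspace(-0.9, 0.9, num_ssb))
beams = np.exp(-1j * np.pi * np.outer(np.arange(num_ant), np.sin(angles)))
beams /= np.sqrt(num_ant)

# Toy narrowband channel between the BS array and a single-antenna UE.
h = rng.normal(size=num_ant) + 1j * rng.normal(size=num_ant)

rsrp = np.abs(h.conj() @ beams) ** 2    # per-SSB received power proxy at the UE
best_ssb = int(np.argmax(rsrp))         # beam index reported back to the BS
```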
Periodic SSB transmission also allows UEs to save power by going inactive and rejoining the network at a later initial access period. At the physical layer, the UE will receive the SSBs from a single antenna or using one or more spatial filters, such as a multi-panel handset used to overcome hand blockage. The UE will use the received SSB for synchronization and for determining the control information. The beam reporting stage includes one or more possible SSB CSI reports which are transmitted in the random access channel. The report includes information for the strongest serving cell and may include a set of the next strongest cells within the same band to assist with load balancing. The number of reported additional cells depends on the carrier frequency, the previous state of the UE in the network, and the bands being monitored. In a newly-active state, the UE reports the top \(6\)-\(16\) additional cells across each active frequency range [1]. This reporting helps to manage handover and mitigate cell-edge interference. In the final steps, the UE has connected to a serving cell and is ready to start receiving data. Further beam refinement and channel estimation can occur by transmitting reference signals with more precise beams. Although not specified in the standard, a typical CSI-RS would cover smaller portions of the reported SSBs' directions or combine coherently across a multipath channel. Using more directional or precise beams can increase the SNR, thereby improving the channel estimates and beam alignment. Beam refinement can also be used to adjust the beamforming slightly to track highly mobile UEs.

One of the key limitations of the beam management framework is the finite resources available for sweeping, refinement, and tracking. For example, the SSB sweeping process is limited to \(\{4,8,64\}\) SSB beams, which are broadcast to all users, and each beam is restricted to just \(240\) subcarriers and \(4\) symbols. This corresponds to \(5\%\) or less of the resources available for downlink data transmission. In contrast, CSI-RS can cover an entire bandwidth part, typically assigned as user-specific, and up to \(32\) ports can be configured for CSI-RS. The port limitation is an important one because it defines the maximum dimension that the UE will support for channel estimation and PMI selection. In order to support larger arrays, base stations are deployed with a combination of directly-connected antennas and hybrid arrays with phase shifters connecting the \(32\) CSI-RS ports to a set of antennas. The analog portion of the hybrid array can be trained with the SSB and CSI-RS beams, while the PMI can be used to determine the digital precoder. The integrated beam training and CSI acquisition in 5G enables a new level of support for arbitrary, massive arrays.

## FR1 and FR2

The key differentiator between FR1 and FR2 (i.e., sub-6GHz compared to mmWave), for a set of antenna arrays of equal aperture, is the propagation environment, the available bandwidth, and the reliance on beam-based architectures. At sub-6 GHz, the propagation environment tends to be more reflective. This results in multipath propagation that is beneficial for traditional spatial multiplexing. In FR1, users are not expected to beamform, so beam training is only necessary on the BS side. Furthermore, because BS arrays have a limited physical size and therefore a relatively small number of FR1 antennas, the beam-based system is used in a limited format [4].
Fig. 3: A comparison from our work [2] of the effective sum spectral efficiency (SE) achieved in a simulated FR1 deployment using CSI type-II feedback with \(L_{\text{CSI}}\) multi-path feedback quantization. It can be seen that sub-6GHz environments have very rich scattering that cannot be captured effectively with the current 5G feedback limitations of \(L_{\text{CSI}}\leq 6\) for large MIMO arrays.

For example, the UE may only employ a single antenna during SSB reception and uplink transmission, and the BS may reduce the number of active SSB beams because precise beamforming is unnecessary in low-band environments. In FR2, the full beam-based system is employed, with BS beam training, repetition, and UE beam training all occurring. During the refinement period, the BS can sweep over a subset of the CSI-RS codebook, and feedback is gathered via one or more CSI-RS measurement reports. When increasing the carrier frequency to millimeter-wave bands, the paths tend to have a strong line-of-sight component but very weak reflected components that restrict multi-layer communication. In fact, MU-MIMO does not appear to be active in any FR2 developments, and even \(4\times 2\) MU-MIMO in FR1 has only recently been recorded in commercial networks [5]. While FR1 deployments are often able to support more spatial multiplexing layers, the bandwidth of a mmWave channel in FR2 is much larger, allowing for as much as four times larger data rates.

## Challenges

There are a series of challenges associated with massive MIMO compared to traditional architectures. Questions of power consumption, feedback, hardware cost, computational/algorithmic complexity, and robustness are all exacerbated with massive arrays. Furthermore, these challenges are also impacted by the specifications, and by the beam management framework in particular. Here we outline \(3\) challenges specific to M-MIMO arrays in the 5G beam management framework.

First, prior to any deployment, a group of codebooks must be designed. The design of codebooks is critical due to varying RF environments and user mobility patterns. In particular, a BS must have codebooks for: 1) initial access (SSB) coverage, 2) refinement (CSI-RS), and 3) feedback and mobility. For case 1, the codebook is generally small due to the limited number of SS blocks defined by \(N_{\text{SSB}}\) in Table II, which is never more than \(64\). Furthermore, the SSB process has a short repetition period with a default of \(20\)ms. In contrast, the CSI-RS codebook used for refinement is often much larger to maximize the signal quality and the alignment of the beam with a user. For example, the CSI-RS codebook might contain as many as \(4\)-\(16\) times more beams than the number of BS RF chains. The BS would make use of these beams at a slower timescale than the SSB process, typically \(80\)ms, with additional aperiodic usage as needed. While both of these codebooks can be specific to a given BS, the third codebook, used for feedback and mobility, must be known by the entire network. The feedback codebook enables quantizing the multipath information at the UE side and providing the quantized representation in the PMI for the BS. In the case of reciprocity (e.g., in time-division duplexing), the PMI may be reduced or neglected, although the feedback is often still helpful due to the wireless link being asymmetric [6, 4]. Therefore, the UE and the BS must both use the same codebook for feedback to correctly share the PMI information.
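To make the codebook discussion concrete, here is a minimal sketch of ours (the sizes are illustrative, and the greedy matching-pursuit selection simplifies the actual type-II quantization procedure, which restricts later beams to an orthogonal basis): it builds an oversampled DFT codebook and picks \(L_{\text{CSI}}\) beams to represent a channel vector.

```python
import numpy as np

rng = np.random.default_rng(2)
num_ant, oversample, L_csi = 8, 4, 4
grid = num_ant * oversample                  # oversampled 1D DFT grid (assumed)
codebook = np.exp(2j * np.pi * np.outer(np.arange(num_ant), np.arange(grid)) / grid)
codebook /= np.sqrt(num_ant)                 # unit-norm DFT beams as columns

# Toy multipath channel built from a few random grid directions.
paths = rng.choice(grid, size=3, replace=False)
h = codebook[:, paths] @ (rng.normal(size=3) + 1j * rng.normal(size=3))

residual, chosen = h.copy(), []
for _ in range(L_csi):                       # greedy beam-by-beam selection
    gains = codebook.conj().T @ residual
    idx = int(np.argmax(np.abs(gains)))
    chosen.append(idx)
    residual = residual - codebook[:, idx] * gains[idx]   # strip that component

# A PMI report would carry the chosen indices plus quantized amplitudes/phases.
weights = codebook[:, chosen].conj().T @ h
```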
The design of codebooks for SSB and CSI-RS is an active area of research due to the significant gains that can be achieved over generic strategies like DFT codebooks [7]. Additionally, although enhanced type-II feedback codebooks have already been implemented in 3GPP Release \(16\), MU-MIMO performance is still severely limited, and new codebooks that can efficiently support massive MIMO arrays are needed to reduce the performance loss.

Once the codebooks are determined, an intelligent process for beam selection and sweeping is necessary. The process of sweeping and beam selection has generally received the most focus in beam management tasks [8, 9, 10], although it is still largely unclear how system performance is impacted by new sweeping algorithms. In particular, characterizing when the improved alignment outweighs the cost of additional overhead and interference in realistic, large-scale settings is still an active area of research. The greatest reduction of overhead could be achieved by minimizing the CSI-RS usage, because a typical CSI-RS codebook in modern 5G deployments can include hundreds or thousands of beamforming vectors. At the same time, CSI-RS beam selection is heavily dependent on the SSB beam selection. Some of the simplest sweeping algorithms are based on hierarchical or tiered search [11, 9], where all of the CSI-RS beams within an SSB beam are used for each user. In an M-MIMO array the angular resolution is very fine, though, so there are many possible CSI-RS beamformers within an SSB beam, and the difference in SNR between the beams could be significant depending on the environment. Other algorithms have been proposed based on machine learning, compressive sensing, and channel statistics; see [8] and references therein. 3GPP has also introduced a study item on machine learning with an explicit focus on improving beam management and feedback through artificial intelligence. These works present new potential directions, but significant research still remains in evaluating such algorithms in realistic scenarios.

Finally, a critical issue in beam management is the challenge of mobility robustness. Even with sufficient feedback at one time instance, the BS needs to update the information at least as fast as the channel coherence time to accurately direct the beams. The coherence time, though, depends on the carrier frequency and mobility [12, section 3.4.3]. Furthermore, with highly directive beamforming, mobility can result in misaligned beams that reduce performance or even cause radio link failure [10]. Mobility is especially challenging for vehicular UEs in FR2 bands, which combine high mobility with extremely directive beams that must be adjusted frequently. While mobility is challenging for the base station, it is often more difficult for a UE. The UE needs to update the combining weights to match the beamformed channel coherence time, which is on the order of milliseconds. To assist with this, multi-panel arrays [13] have been standardized in Release 16 to reduce link failure and improve robustness. Multi-panel and geometry-aware research is expected to become an active area of interest as a result of the recent standardization. Research efforts have also attempted to improve mobility management with methods such as multi-modal data [10] and machine learning [14].
These methods tend to consider single-beam or single-layer data in mmWave settings, but spatial multiplexing will further challenge mobile communications due to the precise precoding and combining required to achieve coherent processing.

## Conclusion

In this article, we have brought together the ideas of feedback and massive MIMO in light of the recent releases of 5G NR. Our first focus has been the integration of beam management, reference signals, and feedback that is enabling MU-MIMO in network deployments. Still, both FR1 and FR2 deployments have yet to reach the full potential of massive MIMO due to challenges like codebook design and mobility robustness. We expect massive MIMO to remain an active area of research and industrial growth throughout the continued development of 5G and future wireless generations.
2309.06801
Defensive Alliances in Signed Networks
The analysis of (social) networks and multi-agent systems is a central theme in Artificial Intelligence. One line of research deals with finding groups of agents that could work together to achieve a certain goal. To this end, different notions of so-called clusters or communities have been introduced in the literature of graphs and networks. Among these, defensive alliance is a kind of quantitative group structure. However, all studies on alliances so far have ignored one aspect that is central to the formation of alliances on a very intuitive level, assuming that the agents are preconditioned concerning their attitude towards other agents: they prefer to be in some group (alliance) together with the agents they like, so that they are happy to help each other towards their common aim, possibly then working against the agents outside of their group that they dislike. Signed networks were introduced in the psychology literature to model liking and disliking between agents, generalizing graphs in a natural way. Hence, we propose the novel notion of a defensive alliance in the context of signed networks. We then investigate several natural algorithmic questions related to this notion. These, and also combinatorial findings, connect our notion to that of correlation clustering, which is a well-established idea of finding groups of agents within a signed network. Also, we introduce a new structural parameter for signed graphs, signed neighborhood diversity snd, and exhibit a parameterized algorithm that finds a smallest defensive alliance in a signed graph.
Emmanuel Arrighi, Zhidan Feng, Henning Fernau, Kevin Mann, Xingqin Qi, Petra Wolf
2023-09-13T08:49:02Z
http://arxiv.org/abs/2309.06801v2
# Defensive Alliances in Signed Networks

###### Abstract

The analysis of (social) networks and multi-agent systems is a central theme in Artificial Intelligence. One line of research deals with finding groups of agents that could work together to achieve a certain goal. To this end, different notions of so-called clusters or communities have been introduced in the literature of graphs and networks. Among these, _defensive alliance_ is a kind of quantitative group structure. However, all studies on alliances so far have ignored one aspect that is central to the formation of alliances on a very intuitive level, assuming that the agents are preconditioned concerning their attitude towards other agents: they prefer to be in some group (alliance) together with the agents they like, so that they are happy to help each other towards their common aim, possibly then working against the agents outside of their group that they dislike. Signed networks were introduced in the psychology literature to model liking and disliking between agents, generalizing graphs in a natural way. Hence, we propose the novel notion of a _defensive alliance_ in the context of signed networks. We then investigate several natural algorithmic questions related to this notion. These, and also combinatorial findings, connect our notion to that of _correlation clustering_, which is a well-established idea of finding groups of agents within a signed network. Also, we introduce a new structural parameter for signed graphs, _signed neighborhood diversity_ snd, and exhibit a parameterized algorithm that finds a smallest defensive alliance in a signed graph.

## Introduction and Motivation

Graphs or networks are one of the most important data structures, as they can represent complex relations between objects. Mining graph data has become a hot research field in recent years. Some traditional data mining algorithms have been gradually extended to graph data, such as clustering, classification and frequent pattern mining. However, in the context of graph data, when the nodes of a network may represent persons or collections of persons, up to political entities like countries, one should also consider categories that express, e.g., that two persons like or dislike each other. This has led to the concept of _signed graphs_, independently introduced in Heider (1946), Harary (1953, 1954, 1955). This model hence originated from psychology; there, it continued to flourish over decades; see De Soto, Henley, and London (1968), Mohazab and Feger (1985), Crandall et al. (2007), Brashears and Brashears (2016), Zorn, Mata, and Alves (2022). The notion of signed graphs has become one of the main modeling tools in modern network science, with significant applications; e.g., link prediction, as in Ye et al. (2013), Song and Meyer (2015), social system reasoning, as in Tang et al. (2016), and clustering and community detection, see, e.g., Cadena, Vullikanti, and Aggarwal (2016), Li et al. (2021) and the following discussion. Positive edges model trust, friendship, or similarity, while negative edges express distrust, opposition, or antagonism. When edges between vertices in a graph model of agent interaction describe links between these agents, then the simplest form of a 'perfect group' of agents corresponds to a clique. Community detection then amounts to finding (large) cliques, which is a well-researched area, being equivalent to finding large independent sets or small vertex covers.
The corresponding research stretches over decades; see Bron and Kerbosch (1973), Eppstein, Loffler, and Strash (2013). Hao et al. (2014) adapted the notion of a clique (and hence that of a community) to signed graphs by requiring a positive edge between any two community members. By way of contrast, Li et al. (2021) suggested a notion of signed clique that allows a certain fraction of negative connections within a clique. A bit more relaxed, Yang, Cheung, and Liu (2007) describe a community in a signed network as a dense positive subgraph such that, assuming that the graph has been partitioned into such groups, the negative between-group relations are also dense. This idea was also used to find antagonistic communities in Chu et al. (2016), Gao et al. (2016). Taken to the extreme of denseness, we arrive at the notion of weakly balanced signed graphs discussed below. In this paper, we follow the approach of detecting communities defined by structural properties as exemplified above, lifting notions from unsigned to signed networks, as opposed to approaches based on spectral methods or centrality measures like page ranking, or inspired by statistical physics, as delineated in Kunegis, Lommatzsch, and Bauckhage (2009), Palla et al. (2005), Pons and Latapy (2006), Yang, Cheung, and Liu (2007), to give a few pointers.

Clustering on graph data is also called community detection. For unsigned graphs, the main purpose is to partition the graph into many clusters such that edges within each cluster are dense while edges between clusters are sparse. To quantitatively measure the relative density of a cluster, one can require that each node \(v\) in a cluster \(S\) has more neighbors in \(S\) than outside. Such a vertex set \(S\) is also called a "defensive alliance", a concept originally introduced in Fricke et al. (2003), Kim et al. (2005), Kristiansen, Hedetniemi, and Hedetniemi (2004), Szabo and Czaran (2001), Shafique (2004). Up to now, hundreds of papers have been published on alliances in graphs and related notions, as also certified by surveys and book chapters; see Fernau and Rodriguez-Velazquez (2014), Ouazine, Slimani, and Tari (2018), Yero and Rodriguez-Velazquez (2017), Haynes, Hedetniemi, and Henning (2021). An overview of the wide variety of applications for alliances in graphs is given in Ouazine, Slimani, and Tari (2018), including community-detection problems as described in Seba, Lagraa, and Kheddouci (2012). More formally, a non-empty vertex set \(S\) of an unsigned graph \(G\) is a _defensive alliance_ if, for each \(v\in S\), \(|N(v)\cap S|+1\geq|N(v)\setminus S|\). Hence, another interpretation or intuition behind a defensive alliance is that each member of the alliance can be protected by the alliance from any attack from outside. The world is often not that simple, as between different countries, there might be friendly or less friendly bonds for historical or many other reasons. Therefore, it makes sense to qualify the edges modelling vicinity as 'friend or foe' relationships. This is why we propose to adapt the notion of a defensive alliance towards signed graphs, which naturally capture these aspects. Moreover, the cited previous work on antagonistic communities neglected the aspect of defensiveness, which more accurately depicts international relationships (Doreian and Mrvar (2015)), social opinion networks (Kunegis, Lommatzsch, and Bauckhage (2009)), etc.
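To make the unsigned condition concrete, here is a minimal sketch of ours (the dictionary-of-sets graph encoding is an assumption for illustration, not part of the paper):

```python
# Minimal sketch: the classical (unsigned) defensive alliance test
# |N(v) ∩ S| + 1 >= |N(v) \ S| for every v in S.
def is_unsigned_defensive_alliance(adj, S):
    """adj: dict mapping each vertex to its set of neighbors; S: candidate set."""
    S = set(S)
    return bool(S) and all(len(adj[v] & S) + 1 >= len(adj[v] - S) for v in S)

# Example: in the 4-cycle a-b-c-d, the pair {a, b} defends itself,
# while a single vertex does not (1 defender vs. 2 outside neighbors).
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
assert is_unsigned_defensive_alliance(adj, {"a", "b"})
assert not is_unsigned_defensive_alliance(adj, {"a"})
```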
Healy and Stein (1973) looked at several propositions about the multipolar world in the European context of the five powers between 1870 and 1881. The Bismarckian system was built on the hypothesis that (from Bismarck's perspective) Germany could see a peaceful future if it would find at least two allies among the four other main powers of the time. (More historical background information is provided in [https://de.wikipedia.org/wiki/Bundnispolitik_Otto_von_Bismarcks](https://de.wikipedia.org/wiki/Bundnispolitik_Otto_von_Bismarcks). Unfortunately, there is no English version of that Wikipedia page.) Notice that this is very much the basic condition of a defensive alliance, in particular given the fact that Germany had one 'fixed enemy' over that time period (France) and sometimes quite some animosity towards one of its two allies in the three-emperors coalition (Russia). Later, Doreian and Mrvar (2015) empirically analyzed the _Correlates of War_ data for the period from 1946 to 1999, showing that the expected trend of network evolution towards a completely balanced state is not convincing; in a sense, the imbalance even dramatically increases over time. Because of the occurrence of negative relationships among international parties, defensive alliance theory appears to be more reasonable, as it gives a good explanation of the persistent (and even increasing) existence of such negative edges. It could provide a new perspective on how to understand the formation of alliances. Another example: in a large online gaming network, the relationships between players can be "positive" (friend) or "negative" (foe). To stay safe or complete certain missions, a player may want to join or form a group or alliance whose members can protect each other from attacks by outside enemies. Clearly, negative edges cannot be avoided in this alliance. When a player is attacked by enemies outside the group, all of the player's friends in the group will help, while the player's enemies in the group stay neutral. How to join or construct such a 'group of people' in signed networks is very interesting and will also be discussed in the context of defensive alliances.

## Main Contributions

Our main conceptual contribution is the introduction of a notion of _defensive alliance_ for signed graphs, which has so far been defined only for unsigned graphs. Based on our definition, we investigate several algorithmic questions.

* _Alliability_: Given a signed graph, does there exist a defensive alliance in this graph? Possibly one that contains a prescribed vertex \(v\)? While this question trivially has a yes answer on unsigned graphs, we can prove that even this question becomes NP-complete for signed graphs. We complement our finding by exhibiting polynomial-time algorithms for Defensive Alliability on several graph classes.
* _Defensive Alliance Building_: In view of the described hardness, one might wonder if one can turn a given vertex set into a defensive alliance by turning as few enemy relations into friendships as possible. Interestingly, this Defensive Alliance Building problem can be solved in polynomial time.
* _Finding the Smallest Alliances_: As this question is NP-complete, we investigate aspects of parameterized complexity.
We prove that the task of finding smallest alliances remains hard (technically speaking, W[1]-hard) when parameterized by the solution-size parameter (the intended size of the alliance) or by the treewidth of the underlying graph, but it becomes tractable when parameterized by _signed neighborhood diversity_, which is a new structural parameter that we introduce in this paper and that we believe to be useful also for other problems on signed graphs.
* _Small Alliances in Special Signed Graphs_: The hardness results motivate us to study small defensive alliances in several classes of signed graphs, where we can get a good combinatorial understanding. Particularly interesting are _balanced graphs_, as they relate to important network analysis questions like _correlation clustering_.

As is often done, we use the terms _graph_ and _network_ interchangeably, with the same meaning. In the following, we will first present the necessary concepts formally. Then, we exhibit some combinatorial studies of these concepts, before introducing some algorithmic problems formally.

## Formal Concept Definitions

We assume some basic knowledge of classical graph theory on the side of the reader. An _(unsigned) graph_ \(G\) can be specified as a tuple \((V,E)\), where \(V\) is the vertex set and \(E\) the edge set of \(G\). More formally, \(E\subseteq\binom{V}{2}\), i.e., each edge is a two-element subset of \(V\). \(G^{\prime}=(V^{\prime},E^{\prime})\) is a subgraph of \(G=(V,E)\) if and only if \(V^{\prime}\subseteq V\) and \(E^{\prime}\subseteq E\); it is an induced subgraph, also denoted as \(G[V^{\prime}]\), if \(E^{\prime}=E\cap\binom{V^{\prime}}{2}\). Here, we give a list of basic concepts from classical graph theory which are used in our paper. A graph is said to be embeddable in the plane, or _planar_, if it can be drawn in the plane such that its edges, represented as continuous curves in the plane, intersect only at their endpoints. A graph \(G\) is _chordal_ if every cycle of length at least four has a chord. A _clique_ of a graph \(G\) is a complete induced subgraph of \(G\). A _vertex cover_ \(C\) of a graph \(G\) is a subset of vertices of \(G\) such that every edge of \(G\) has at least one endpoint in \(C\). The _vertex cover number_ of \(G\) is the size of a smallest vertex cover of \(G\). An _independent set_ of a graph \(G\) is a subset of vertices such that no two vertices in the subset are adjacent. Clearly, the complement of a vertex cover is an independent set.

**Definition 1**.: _A tree decomposition of a graph \(G\) is a pair \(\mathcal{T}=(T,\{X_{t}\}_{t\in V(T)})\), where \(T\) is a tree whose every node \(t\) is assigned a vertex subset \(X_{t}\subseteq V(G)\), called a bag, satisfying the following:_
1. \(\bigcup_{t\in V(T)}X_{t}=V(G)\)_;_
2. _for each edge_ \(uv\in E(G)\)_, there is some node_ \(t\) _of_ \(T\) _such that_ \(u\in X_{t},v\in X_{t}\)_;_
3. _for each_ \(u\in V(G)\)_, the set_ \(T_{u}=\{t\in V(T)\mid u\in X_{t}\}\) _induces a connected subtree of_ \(T\)_._

The _width_ of a tree decomposition \(\mathcal{T}\) is given by \(\max_{t\in V(T)}\left|X_{t}\right|-1\). The _treewidth_ of a graph \(G\) is the minimum width over all tree decompositions of \(G\). For the sake of designing algorithms, it is convenient to think of nice tree decompositions, into which any given tree decomposition can easily be transformed while preserving the width and using only a linear number of nodes (Cygan et al. (2015)).
**Definition 2**.: _A nice tree decomposition \(\mathcal{T}=(T,\{X_{t}\}_{t\in V(T)})\) is a tree decomposition in which each node \(t\in V(T)\) falls into one of the following categories:_
1. _Leaf node:_ \(t\) _is a leaf node of_ \(T\) _and_ \(X_{t}=\emptyset\)_;_
2. _Introduce node:_ \(t\) _has exactly one child_ \(t^{\prime}\) _such that_ \(X_{t}=X_{t^{\prime}}\cup\{v\}\) _for some_ \(v\notin X_{t^{\prime}}\)_;_
3. _Forget node:_ \(t\) _has exactly one child_ \(t^{\prime}\) _such that_ \(X_{t}=X_{t^{\prime}}\setminus\{v\}\) _for some_ \(v\in X_{t^{\prime}}\)_;_
4. _Join node:_ \(t\) _has exactly two children_ \(t^{\prime}\) _and_ \(t^{\prime\prime}\) _such that_ \(X_{t}=X_{t^{\prime}}=X_{t^{\prime\prime}}\)_._

A _signed network_ is a triple \(G=(V,E^{+},E^{-})\), where \(V\) is a finite set of vertices, \(E^{+}\subseteq\binom{V}{2}\) is the _positive edge_ set, and \(E^{-}\subseteq\binom{V}{2}\), with \(E^{+}\cap E^{-}=\emptyset\), is the _negative edge_ set. We call \((V,E^{+}\cup E^{-})\) the _underlying (unsigned) graph_ of \(G\). For \(v\in V\), \(N^{+}(v)=\{u\in V\mid uv\in E^{+}\}\) are the _positive neighbors_ and \(N^{-}(v)=\{u\in V\mid uv\in E^{-}\}\) are the _negative neighbors_ of \(v\). Accordingly, \(\deg^{+}(v)=\left|N^{+}(v)\right|\) and \(\deg^{-}(v)=\left|N^{-}(v)\right|\) denote the positive and negative degree of vertex \(v\), respectively. We use subscripts to restrict the considered neighborhood to a subset of vertices. For instance, if \(X\subseteq V\), then \(N^{+}_{X}(v)=N^{+}(v)\cap X\) and \(\deg^{+}_{X}(v)=\left|N^{+}_{X}(v)\right|\). Similarly to the unsigned case, we use \(\delta^{-}(G)\), respectively \(\delta^{+}(G)\), to denote the smallest negative, respectively positive, degree in \(G\). For a set of vertices \(X\subseteq V\), we denote by \(\overline{X}=V\setminus X\) the complement of \(X\) with respect to \(V\). We distinguish between friend and enemy relationships and define a _defensive alliance_, or DA for short, of a signed network as follows.

**Definition 3**.: _A non-empty set \(S\) of vertices of a signed network \(G=(V,E^{+},E^{-})\) is called a defensive alliance if and only if:_
1. \(\forall v\in S\colon\,\deg^{+}_{S}(v)+1\geq\deg^{-}_{S}(v)\) _and_
2. \(\forall v\in S\colon\,\deg^{+}_{S}(v)+1\geq\deg^{-}_{\overline{S}}(v)\)_._

The first condition expresses that, within an alliance, there should not be more 'natural enemies' than friends for each member of the alliance. This models a certain stability aspect of the alliance. It also prevents an alliance from being over-stretched by internal conflicts and rather makes sure that solidarity within the alliance is strong enough, so that 'natural enemies' within an alliance at least stay neutral in case of an external attack. The second condition is taken over from the traditional model of a defensive alliance in an unsigned graph. It says that each member of an alliance should have at least as many defenders from within the alliance (including itself) as it has enemies outside the alliance. Notice that 'natural friends' outside the alliance are considered harmless; more explicitly, they are not counted and are expected to stay neutral, just as (if the first condition is met) 'natural enemies' within the alliance are expected to stay neutral. Both conditions together can also be interpreted as a signed analogue of the idea to maximize the minimum degree within a community, as proposed (e.g.) in Sozio and Gionis (2010). We illustrate these concepts in Figure 1.
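The two conditions of Definition 3 translate directly into a short test; the following minimal sketch is ours (the two neighbor-set dictionaries, encoding \(E^{+}\) and \(E^{-}\), are an assumed representation for illustration):

```python
# Minimal sketch: check the two conditions of Definition 3.
def is_defensive_alliance(pos_adj, neg_adj, S):
    """pos_adj/neg_adj: dicts mapping each vertex to its positive/negative
    neighbor sets; S: candidate alliance (a non-empty set of vertices)."""
    S = set(S)
    if not S:
        return False
    for v in S:
        deg_pos_S = len(pos_adj[v] & S)      # positive neighbors inside S
        deg_neg_S = len(neg_adj[v] & S)      # negative neighbors inside S
        deg_neg_out = len(neg_adj[v] - S)    # negative neighbors outside S
        # Condition 1: friends inside (plus v itself) outweigh enemies inside.
        # Condition 2: defenders inside outweigh attackers outside.
        if deg_pos_S + 1 < deg_neg_S or deg_pos_S + 1 < deg_neg_out:
            return False
    return True
```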
Following the notation introduced for unsigned graphs in Kristiansen, Hedetniemi, and Hedetniemi (2004), Shafique (2004), we will denote the size of the smallest defensive alliance in \(G\) by \(a_{sd}(G)\). The index \(sd\) reminds us that we work on signed graphs and consider defensive alliances. We will also look into special classes of signed graphs. Often, these are defined via properties of the underlying unsigned graph, for instance, the classes of signed paths, cycles, subcubic, or complete graphs. But there are also very interesting classes of signed graphs that have no counterpart from the unsigned perspective. Cartwright and Harary (1956) defined a signed graph to be _balanced_ if each of its cycles contains an even number of negative edges. Davis (1967) extended this notion; he called a signed graph a _clustering_ or _weakly balanced_ if there are no cycles in the graph with exactly one negative edge. He presented the following characterization: a signed graph is weakly balanced if and only if its vertices can be split into \(k\geq 1\) groups (clusters) such that edges within groups are positive and edges between groups are negative. We also call such a signed graph _\(k\)-balanced_. Davis also showed that a complete signed graph is weakly balanced if and only if none of its triangles has exactly one negative edge. The question of partitioning a complete signed graph as closely as possible into a clustering partition has been investigated extensively since the publication of the ground-breaking paper on Correlation Clustering by Bansal, Blum, and Chawla (2004), being equivalent to Cluster Editing on unsigned graphs, but clearly extendible towards non-complete signed graphs.

## Combinatorial Prelude

We begin by discussing some observations on the size of a smallest defensive alliance \(a_{sd}\) before we continue with more algorithmic results. By the definitions, using a chain of inequalities, we obtain:

**Lemma 1**.: _If a vertex \(v\) is in a defensive alliance, then \(\deg^{+}(v)+1\geq\left\lceil\frac{\deg^{-}(v)}{2}\right\rceil.\)_

Proof.: For any vertex \(v\in S\),
\[\deg^{+}(v)+1\geq\deg^{+}_{S}(v)+1\geq\max\{\deg^{-}_{S}(v),\deg^{-}_{\overline{S}}(v)\}\geq\left\lceil\frac{\deg^{-}(v)}{2}\right\rceil\,,\]
where the last inequality holds because \(\deg^{-}_{S}(v)+\deg^{-}_{\overline{S}}(v)=\deg^{-}(v)\), so the larger of the two summands is at least half of \(\deg^{-}(v)\).

This simple lemma already has a nice consequence for finding alliances of size at most \(k\).

**Corollary 2**.: _If there is a vertex \(v\) with \(\deg^{-}(v)\geq 2k+1\) in a signed graph \(G\), then vertex \(v\) cannot be in any defensive alliance of size at most \(k\) in \(G\)._

Next, we give characterizations of small defensive alliance numbers for signed graphs. For the unsigned case, Propositions 1 and 3 in Kristiansen, Hedetniemi, and Hedetniemi (2004) describe similar characterizations.

Figure 1: Defensive alliances on unsigned vs signed graphs.

**Theorem 3**.: _Let \(G\) be a signed graph. Then,_
1. \(a_{sd}(G)=1\) _if and only if_ \(\exists v\in V(G):\deg^{-}(v)\leq 1\)_._
2. \(a_{sd}(G)=2\) _if and only if_ \(\delta^{-}(G)\geq 2\) _and there exist two adjacent vertices_ \(v,u\in V(G):\deg^{-}(v)=\deg^{-}(u)=2\)_._

Proof.: 1) Obviously, a vertex \(v\) can defend itself if and only if \(\deg^{-}(v)\leq 1\). 2) Excluding the case \(a_{sd}(G)=1\), we know the minimum negative degree of \(G\) is at least 2. Assume that we have a defensive alliance \(S\) consisting of two vertices \(v,u\in V(G)\).
Case one: \(v,u\) are connected positively; then, for each vertex, there are at most two negative connections outside of \(S\). Case two: \(v,u\) are connected negatively; then, for each vertex, there is at most one negative connection outside of \(S\). Combined with \(\delta^{-}(G)\geq 2\), we conclude that the negative degree of \(v,u\) is exactly 2, regardless of whether they are positively or negatively connected. For the converse direction, if \(\delta^{-}(G)\geq 2\), then from 1) we know \(a_{sd}(G)\geq 2\). We can verify that any two adjacent vertices \(v,u\in V(G)\) with \(\deg^{-}(v)=\deg^{-}(u)=2\) form a defensive alliance, so that \(a_{sd}(G)=2\).

From these combinatorial results, we can conclude the following for signed trees, paths and cycles, keeping in mind that any leaf in a tree forms a minimum defensive alliance.

**Corollary 4**.: _For any signed tree \(T\), in particular for paths, \(a_{sd}(T)=1\)._

**Corollary 5**.: _For any signed cycle \(C\), if there exists a positive edge, then \(a_{sd}(C)=1\); otherwise, \(a_{sd}(C)=2\)._

Corollary 4 clearly also extends to forests, but more interestingly, we can also extend Corollary 5. Recall that a graph is _unicyclic_ if it contains exactly one cycle as a subgraph.

**Corollary 6**.: _Let \(G\) be a unicyclic signed graph. Then, \(a_{sd}(G)\leq 2\). Moreover, \(a_{sd}(G)=2\) if and only if \(G\) is a cycle without a positive edge._

Proof.: If \(G\) is unicyclic and \(\delta(G)=1\), then any vertex of degree one forms a defensive alliance. If \(G\) is unicyclic and \(\delta(G)>1\), then \(G\) must be a cycle, so that the previous corollary applies.

We now consider all signed graphs with maximum degree at most three, which are called signed subcubic graphs.

**Theorem 7**.: _There is a polynomial-time algorithm that decides alliability of any signed subcubic graph \(G\) and, if so, determines \(a_{sd}(G)\)._

We formulate this theorem in an algorithmic fashion, but its proof mainly gives combinatorial insights.

Proof.: Let \(G=(V,E^{+},E^{-})\) be a signed subcubic graph. First, we can check whether any single vertex forms a defensive alliance on its own. This is the case in particular if \(G\) contains a vertex of degree one, but the general condition is stated in Theorem 3. Similarly, we can check whether two adjacent vertices form a defensive alliance, as also specified in Theorem 3 as a combinatorial condition. Now, two cases remain. If \(\delta^{-}(G)=3\), this means that the graph is 3-regular and all its edges are negative, i.e., \(\Delta^{+}(G)=0\). Then, \(G\) does not possess a defensive alliance by Lemma 1. If \(\delta^{-}(G)=2\) and \(a_{sd}(G)>2\), then no two adjacent vertices \(u,v\) satisfy \(\deg^{-}(v)=\deg^{-}(u)=2\), while \(\min\{\deg^{-}(u),\deg^{-}(v)\}\geq 2\) always holds. Assume that \(a_{sd}(G)>2\) and that \(u\in S\) for some hypothetical defensive alliance \(S\). Then, \(\deg^{-}(u)=2\) and (at best) \(\deg^{+}(u)=1\). However, now all neighbors \(v\) of \(u\) must satisfy \(\deg^{-}(v)=3\), i.e., \(v\notin S\), so that we can conclude that such an alliance \(S\) does not exist.
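The case analysis in this proof yields a short decision procedure; the following is a minimal sketch of ours, reusing the neighbor-set-dictionary representation from the sketches above (both dictionaries are assumed to have an entry, possibly an empty set, for every vertex):

```python
def subcubic_alliance_number(pos_adj, neg_adj):
    """a_sd(G) for a signed subcubic graph G, or None if G has no
    defensive alliance (sketch following the proof of Theorem 7)."""
    vertices = set(pos_adj) | set(neg_adj)
    if any(len(neg_adj[v]) <= 1 for v in vertices):
        return 1                                       # Theorem 3, part 1
    for u in vertices:                                 # Theorem 3, part 2:
        for v in pos_adj[u] | neg_adj[u]:              # adjacent pair, both with
            if len(neg_adj[u]) == 2 == len(neg_adj[v]):  # negative degree 2
                return 2
    return None                   # subcubic and a_sd > 2: no alliance exists
```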
**Corollary 8**.: _If a subcubic graph possesses any defensive alliance, then its size is at most 2._

The following structural observation is sometimes helpful:

**Proposition 9**.: _If \(S\) is a minimum-size defensive alliance in \(G=(V,E^{+},E^{-})\), then \(S\) is connected in the underlying unsigned graph \((V,E^{+}\cup E^{-})\)._

Hence, when looking for defensive alliances \(S\) of size at most \(k\), we can assume that the diameter of \((V,E^{+}\cup E^{-})[S]\) is at most \(k-1\). As mentioned above, balancedness is an important notion in signed networks, in particular in connection with complete graphs. Hence, the following theorem is an interesting and important combinatorial result. It also means that we can determine \(a_{sd}(G)\) for any weakly balanced signed complete graph \(G\) in polynomial time.

**Theorem 10**.: _For any signed complete graph \(G=(V,E^{+},E^{-})\), \(n=|V|\), with \(E^{-}\neq\emptyset\), we can determine its defensive alliance number in the following cases._
1. _If_ \(G\) _is balanced with partition_ \((V_{1},V_{2})\)_, where_ \(|V_{1}|\geq|V_{2}|\)_, then_ \(a_{sd}(G)=|V_{2}|\)_. Moreover, any subset_ \(S\subseteq V_{1}\) _with_ \(|S|=|V_{2}|\) _is a minimum defensive alliance._
2. _If_ \(G\) _is_ \(k\)_-balanced with partition_ \((V_{1},V_{2},\ldots,V_{k})\)_, where_ \(|V_{1}|\geq|V_{2}|\geq\ldots\geq|V_{k}|\)_, then we find:_
   1. _If_ \(|V_{1}|\geq\frac{n}{3}\) _and_ \(|V_{2}|\geq\frac{n}{3}\)_, then_ \(a_{sd}(G)=2\left\lceil\frac{n-|V_{2}|}{2}\right\rceil\)_. Any subset_ \(S_{1}\) _of_ \(V_{1}\) _and any subset_ \(S_{2}\) _of_ \(V_{2}\) _with_ \(|S_{1}|=|S_{2}|=\left\lceil\frac{n-|V_{2}|}{2}\right\rceil\) _together form a minimum DA._
   2. _If_ \(|V_{1}|\geq\frac{n}{2}\)_, then_ \(a_{sd}(G)=n-|V_{1}|\)_. Any subset_ \(S\) _of_ \(V_{1}\) _with_ \(|S|=n-|V_{1}|\) _is a minimum DA._
   3. _Otherwise, there is no defensive alliance at all._

Proof.: As 1) is obviously a special case of 2), we only consider \(k\)-balanced complete signed graphs in the following. Slightly abusing notation, we will denote the cluster \(V_{i}\) to which \(x\in V\) belongs as \(V_{x}\). Suppose that the non-empty subset \(S\subseteq V\) is a defensive alliance, and, for all \(x\in S\), let \(V_{x}^{\prime}\) denote the subset of \(S\) which includes the vertices that are from the same partition part as \(x\), i.e., \(V_{x}^{\prime}=S\cap V_{x}\). By the definition of a defensive alliance, we have, for any \(x\in S\), a) \(\deg_{S}^{+}(x)+1=|V_{x}^{\prime}|\geq\deg_{S}^{-}(x)=|S|-|V_{x}^{\prime}|\) and b) \(\deg_{S}^{+}(x)+1=|V_{x}^{\prime}|\geq\deg_{\overline{S}}^{-}(x)=n-|S|-|V_{x}|+|V_{x}^{\prime}|\). So, for all \(x\in S\), \(2|V_{x}|\geq 2|V_{x}^{\prime}|\geq n-|V_{x}|\), i.e., \(|V_{x}|\geq\frac{n}{3}\). Hence, vertices in \(S\) are from at most three different partition parts. We have to discuss the situation \(S=V_{1}^{\prime}\cup V_{2}^{\prime}\cup V_{3}^{\prime}\), where \(V_{1}^{\prime}\subseteq V_{1}\), \(V_{2}^{\prime}\subseteq V_{2}\), and \(V_{3}^{\prime}\subseteq V_{3}\). We further know that \(|V_{1}|\geq|V_{2}|\geq|V_{3}|\geq\frac{n}{3}\). The following discussion (by case distinction) shows the subcases i), ii) and iii) of 2) as in the statement of the theorem. Case one: Should we have three non-empty parts \(V_{1}^{\prime},V_{2}^{\prime},V_{3}^{\prime}\), then \(V_{3}\neq\emptyset\) and hence \(|V_{1}|=|V_{2}|=|V_{3}|=\frac{n}{3}\).
From a) above, \(|V_{1}^{\prime}|\geq|V_{2}^{\prime}|+|V_{3}^{\prime}|\), \(|V_{2}^{\prime}|\geq|V_{1}^{\prime}|+|V_{3}^{\prime}|\), and \(|V_{3}^{\prime}|\geq|V_{1}^{\prime}|+|V_{2}^{\prime}|\). So we have \(|V_{1}^{\prime}|=|V_{2}^{\prime}|=|V_{3}^{\prime}|=0\), that is to say, there is no such defensive alliance. This corresponds to part 2)iii).

Case two: \(S=V_{1}^{\prime}\cup V_{2}^{\prime}\) with \(V_{2}^{\prime}\neq\emptyset\). We know \(|V_{1}|\geq|V_{2}|\geq\frac{n}{3}\). From a) above, we know \(|V_{1}^{\prime}|\geq|S|-|V_{1}^{\prime}|=|V_{2}^{\prime}|\) and \(|V_{2}^{\prime}|\geq|S|-|V_{2}^{\prime}|=|V_{1}^{\prime}|\), i.e., \(|V_{1}^{\prime}|=|V_{2}^{\prime}|\). From b) above, \(|V_{2}^{\prime}|\geq n-|V_{2}|-|V_{1}^{\prime}|\). Hence, \(|S|=|V_{1}^{\prime}|+|V_{2}^{\prime}|\geq n-|V_{2}|\). On the other side, one can verify that any subset \(V_{1}^{\prime}\) of \(V_{1}\) and any subset \(V_{2}^{\prime}\) of \(V_{2}\) with \(|V_{1}^{\prime}|=|V_{2}^{\prime}|=\left\lceil\frac{n-|V_{2}|}{2}\right\rceil\) together form a defensive alliance. This corresponds to part 2)i).

Case three: \(S=V_{1}^{\prime}\subseteq V_{1}\). From b) above, \(|V_{1}^{\prime}|\geq n-|V_{1}|\). Trivially, \(|V_{1}|\geq|V_{1}^{\prime}|\). Hence, \(|V_{1}|\geq\frac{n}{2}\) and \(|S|\geq n-|V_{1}|\). Similarly, we can verify that any subset \(V_{1}^{\prime}\) of \(V_{1}\) with \(|V_{1}^{\prime}|=n-|V_{1}|\) is a defensive alliance. This corresponds to part 2)ii).

This result also shows the similarities between the notion of a defensive alliance and that of a clustering: a DA is often contained in a single cluster, and otherwise it stretches over exactly two of them, since the case analysis above rules out three non-empty parts.

## Formal Problem Definitions

We now define the decision versions of the problems that we consider in this paper. As we will show NP-hardness of the first one, most of the following is devoted to algorithmically finding solutions in special situations, also employing the toolbox of parameterized algorithms. In the spirit of Correlation Clustering, we also discuss edge flips. More formally, let \(G=(V,E^{+},E^{-})\) be a signed graph. For a set \(T\subseteq E=E^{+}\cup E^{-}\), let \(G_{T}\) be the signed graph obtained after flipping the edges of \(G\) in \(T\), i.e., \(G_{T}=(V,E^{+}\bigtriangleup T,E^{-}\bigtriangleup T)\), where \(\bigtriangleup\) denotes the symmetric set difference. Now, Correlation Clustering could be defined as the question to decide, given \(G\) and \(k\), if some edge set \(T\) exists, \(|T|\leq k\), such that \(G_{T}\) is weakly balanced. This question is also known to be NP-complete, see Bansal, Blum, and Chawla (2004). By the mentioned NP-hardness result for Defensive Alliability, this is also true for the question of whether one can flip at most \(k\) edges to make the graph alliable. However, the following question, whose clustering variant is trivial, is an interesting variation in the context of alliances, as we will present a non-trivial algorithm solving it below.

**Defensive Alliance Building**
* Input: A signed network \(G=(V,E^{+},E^{-})\), a vertex set \(D\), and an integer \(k\geq 0\).
* Problem: Is there a \(T\subseteq E=E^{+}\cup E^{-}\) of size at most \(k\) such that \(D\) is a DA in \(G_{T}\)?

This problem can be motivated by thinking of \(D\) as an ideal alliance, which basically has to be built by creating friendly relationships out of previously unfriendly ones.
In other words, we might want to turn a specific group of countries or people into a defensive alliance and we want to know how much we have to pay for it. ## Alliability It is possible that no defensive alliance exists in a signed network. As an extreme example, consider any signed network with only negative edges and minimum degree at least 3. So we first consider the existence of defensive alliance in signed networks, which is called Defensive (pointed) Alliability. As already discussed above, the question of alliability makes no sense for the unsigned counterpart, while, surprinsingly, we obtain hardness results for the signed version. **Theorem 11**.: DefPall _and_ DefAll _are_ NP_-complete._ The reduction is based on the NP-hardness of a well-known variation of the Satisfiability problem, namely NAE-3SAT, see Schaefer (1978). According to Schaeffer, NAE-3SAT can be defined as a coloring problem as follows: \begin{tabular}{|l l|} \hline \multicolumn{2}{|c|}{**Non-Aliance Structure (NAE-3SAT)**} \\ \hline Input: & A base set \(X\), a collection of sets \(C_{1},\ldots,C_{m}\subseteq X\), each having at most three members \\ Problem: Is there a mapping \(A:X\rightarrow\{0,1\}\) such that \(A(C_{i})=\{0,1\}\) for all \(1\leq i\leq m\)? \\ \hline \end{tabular} From the coloring point of view, this means that \(X\) is colored with two colors such that no set \(C_{i}\) is all one color. From the logical perspective, we can think of \(X\) being a set of variables and the coloring being an assignment of the truth values 1 (or true) and 0 (or false). Notice that in Schaeffer's definition, all clauses are monotone, i.e., all literals are positive, but clearly the problem will not become easier if negative literals are allowed. Proof.: For a signed network \(G=(V,E^{+},E^{-})\), obviously, Defensive (Pointed) Alliability is in NP, by checking the two defensive conditions of every vertex in a solution \(S\), which runs in polynomial time \(\mathcal{O}(|S|^{2}(|E^{+}|+|E^{-}|))\). We complete our proof, showing NP-hardness by reducing from NAE-3SAT, formulated in its logic version. Let \(\phi=C_{1}\wedge\cdots\wedge C_{m}\) be a boolean formula with variable set \(X\) (with \(|X|=n\)) in which each clause \(C_{i}\) includes exactly 3 positive literals. The question is that whether there exists a satisfying assignment such that in each clause there is a literal assigned to false. Define \(NC_{v}=(V_{v},\emptyset,E_{v}^{-})\), which is is a negative clique with \(m+2\) many vertices, containing the possibly special vertex \(v\) (in the pointed variation of our problem), as well as \(NC_{x,i}=(V_{x,i},\emptyset,E_{x,i}^{-})\), with \(x\in X\) and \(i\in\mathbb{N}^{+}\), which is a negative clique with four vertices; both negative cliques serve as not-in-the-solution gadgets, differentiated as _big_ and _small_, respectively. More technically, we can put \(V_{x,i}\coloneqq\{x_{i,1},x_{i,2},x_{i,3},x_{i,4}\}\). For each variable \(x\) in \(X\), we define * \(C(x)=\big{\{}C_{j_{1}},\ldots,C_{j_{n_{x}}}\mid x\in C_{j_{i}},i\in\{1,\ldots,n_{x}\},j_{i}\in\{1,\ldots,m\}\big{\}}\), i.e., * \(n_{x}\) is the number of clauses that contain the variable \(x\), and * \(X^{\prime}(x)\coloneqq\big{\{}x_{h}\mid h\in\{1,\ldots,2n_{x}\}\big{\}}\). Then we construct the reduction \(R(\langle\phi\rangle):=\) 1. 
Build a signed graph \(G=(V,E^{+},E^{-})\) as follows: \[V =V_{v}\cup\{c_{j}\mid j\in\{1,\ldots,m\}\}\cup\bigcup_{x\in X} \left(X^{\prime}(x)\cup\bigcup_{l\in\{1,\ldots,4n_{x}\}}V_{x,l}\right)\] \[E^{+} =\{vc_{j}\mid j\in\{1,\ldots,m\}\}\cup\{x_{2p-1}x_{2p}\mid p\in \{1,\ldots,n_{x}\},x\in X\}\] \[E^{-} =E^{-}_{v}\cup\left(\bigcup_{x\in X}\bigcup_{l\in\{1,\ldots,4n_{x }\}}E^{-}_{x,l}\right)\cup\left(\bigcup_{x\in X}\big{\{}c_{j_{p}}x_{2p}\mid p \in\{1,\ldots,n_{x}\},C_{j_{p}}\in C(x)\big{\}}\right)\cup\] \[\big{\{}x_{2p}x_{2p+1},x_{2n_{x}}x_{1}\mid x\in X,p\in\{1,\ldots, n_{x}-1\}\big{\}}\cup\] \[\big{\{}x_{2p-1}x_{4p-3,1},x_{2p-1}x_{4p-2,1},x_{2p}x_{4p-1,1},x_{ 2p}x_{4p,1}\mid x\in X,p\in\{1,\ldots,n_{x}\}\big{\}}\] 2. Return \(\langle G,v\rangle\) or just \(G\) if we talk about DefAll. If \(\phi=C_{1}\wedge\cdots\wedge C_{m}\) is a yes-instance of NAE-3SAT, where \(C_{i}=x_{i1}\lor x_{i2}\lor x_{i3}\), \(x_{ij}\in X\) and \(i\in\{1,\ldots,m\}\), \(j\in\{1,2,3\}\), then there exists a satisfying assignment \(A\) for the variables \(X\) such that, for each \(C_{i}\), there are \(j_{1},j_{2}\in\{1,2,3\}\) such that \(x_{ij_{1}}=1\) and \(x_{ij_{2}}=0\). Let \(S=\{x\in X\mid A(x)=0\}\). We use the assignment \(A\) to show that \(G\) possesses a defensive alliance \(D\) containing the vertex \(v\): \[D\coloneqq\{v\}\cup\big{\{}c_{i}\mid i\in\{1,\ldots,m\}\big{\}}\cup\bigcup_{ x\in S}X^{\prime}(x)\] We have to check the two defensive alliance conditions for each vertex in this set \(D\). For the vertex \(v\) fixed to be in \(D\), 1. \(\deg_{D}^{+}(v)+1=m+1>0=\deg_{D}^{-}(v)\), and 2. \(\deg_{D}^{+}(v)+1=m+1=\deg_{D}^{-}(v)\). Consider any \(c_{i}\) for \(1\leq i\leq m\). Observe that \(\deg_{D}^{-}(c_{i})=\deg_{S_{X}}^{-}(c_{i})\), with \(S_{X}=\bigcup_{x\in S}X^{\prime}(x)\), so that \(\deg_{D}^{-}(c_{i})\in\{1,2\}\), as otherwise the NAE-assignment \(A\) that determines \(S\) would set all variables in \(C_{i}\) either to 0 or to 1, a contradiction. Similarly, \(\deg_{\overline{D}}^{-}(c_{i})=\deg_{S_{X}^{\prime}}^{-}(c_{i})\), with \(S_{X}^{\prime}=\bigcup_{x\in X\setminus S}X^{\prime}(x)\), so that again \(\deg_{\overline{D}}^{-}\in\{1,2\}\). Hence, 1) \(\deg_{D}^{+}(c_{i})+1=2\geq\deg_{D}^{-}(c_{i})\); 2) \(\deg_{D}^{+}(c_{i})+1=2\geq\deg_{D}^{-}(c_{i})\). For each vertex \(x_{h}\in D\) associated to \(x\in X\), \(1)\)\(\deg_{D}^{+}(x_{h})+1=2\geq\deg_{D}^{-}(x_{h})\); \(2)\)\(\deg_{D}^{+}(x_{h})+1=2=\deg_{D}^{-}(x_{h})\). Therefore, \(D\) is a defensive alliance which contains the vertex \(v\). Let \(D\) be a defensive alliance containing vertex \(v\) of the signed graph obtained from \(R(\left\langle\phi\right\rangle)\). First observe that the not-in-the-solution gadgets deserve their name, i.e., \(D\cap\left(V_{v}\cup\bigcup_{x\in X,l\in\{1,\ldots,4n_{x}\}}V_{x,l}\right)=\emptyset\). In order to defend vertex \(v\), all of its positive neighbors must be in \(D\), i.e., \(\left\{c_{j}\mid j\in\{1,\ldots,m\}\right\}\subset D\,.\) For each vertex \(c_{i}\in D\), at least one of its negative neighbors should be in \(D\), and at least one of its negative neighbors should not be in \(D\), because \(c_{i}\) has exactly one positive neighbor, which is \(v\). Then assume that there is some \(x_{2p}\in D\). 
As its two negative neighbors \(x_{4p-1,1},x_{4p,1}\) are in small not-in-the-solution gadgets, \(x_{4p-1,1},x_{4p,1}\notin D\), we know that the other two negative neighbors \(x_{2p-1},x_{2p+1}\) must belong to \(D\), as only (possibly) the positive neighbor \(c_{j_{p}}\) can defend \(x_{2p}\); but it also must defend it, so that \(c_{j_{p}}\in D\) is enforced. The case of \(x_{2p+1}\in D\) is very similar (including the boundary case of \(x_{1}\)). Now, the vertex has degree four and negative degree three. As two of its negative neighbors belong to small not-in-the-solution gadgets, both the negative and the positive remaining neighbors must belong to \(D\), as well, bringing us back to the previous case. Hence, we see that \(x_{h}\in D\) for some \(h\in\{1,\ldots,2n_{x}\}\) if and only if \(x_{h}\in D\) for all \(h\in\{1,\ldots,2n_{x}\}\). In other words, for each \(x\in X\), \(X^{\prime}(x)\subseteq D\) or \(X^{\prime}(x)\cap D=\emptyset\). If \(X^{\prime}(x)\subseteq D\), then for the set of clause vertices \(c(x)\) that correspond to the set of clauses \(C(x)\), \(c(x)\subseteq D\). Then we can obtain a satisfying assignment \(A\) for \(\phi\) by Figure 2: Reduction construction for Theorem 11. assigning the value \(1\), i.e., true, to \(x\in X\) if \(X^{\prime}(x)\subseteq D\) or the value \(0\), i.e., false, to \(x\) if \(X^{\prime}(x)\cap D=\emptyset\). Therefore, \(A\) is a satisfying assignment for \(\phi\), whose values in each clause are not all equal to each other. Furthermore, without the fixed vertex \(v\), if \(c_{i}\in D\), then \(v\in D\), because \(v\) is the only positive neighbor of each \(c_{i}\). If some \(x_{2p}\in D\), according to the above, we know that the corresponding set \(c(x)\) of clause vertices satisfies \(c(x)\subseteq D\), so that \(v\in D\). It is easy to see that the described procedure for building \(R(\langle\phi\rangle)\) from \(\phi\) can be executed in polynomial time. Consequently, both DefPAll and DefAll are NP-complete. By analyzing the previous proof again more carefully, and taking the notation from that proof, we find: **Lemma 12**.: _The constructed signed graph \(G\) has at most \(56m+1\) many vertices._ Proof.: Let us analyze the parts of the vertex set \(V\) of \(G\) one-by-one. * \(|V_{v}|=m+2\) by definition. * \(|\{c_{j}\mid j\in\{1,\ldots,m\}\}|=m\) by definition. * \(\big{|}\bigcup_{x\in X}X^{\prime}(x)\big{|}=\sum_{x\in X}2n_{x}\leq 6m\) by double-counting, as each clause contains at most \(3\) literals and \(n_{x}\) is the number of clauses in which variable \(x\) occurs. * Similarly, \(\big{|}\bigcup_{x\in X}\bigcup_{l\in\{1,\ldots,4n_{x}\}}V_{x,l}\big{|}=\sum_{x \in X}16n_{x}=48m\). Altogether, \(56m+2=(m+2)+m+6m+48m\) as claimed. **Theorem 13**.: _Given an instance \(I=(X,\mathcal{C})\) of 3SAT, with \(n=|X|\) variables and \(m=|\mathcal{C}|\) clauses, one can construct in polynomial time an instance \(I^{\prime}=(X^{\prime},\mathcal{C}^{\prime})\) of NAE-3SAT, with \(n^{\prime}=|X^{\prime}|\) and \(m^{\prime}=|\mathcal{C}^{\prime}|\) such that \(I\) is satisfiable if and only if \(I^{\prime}\) has a solution and such that \(n^{\prime}=\mathcal{O}(n+m)\) and \(m^{\prime}=\mathcal{O}(n+m)\)._ Proof.: The claimed reduction is similar to the one indicated in [https://en.wikipedia.org/wiki/Not-all-equal_3-satisfiability](https://en.wikipedia.org/wiki/Not-all-equal_3-satisfiability). 
For the given 3SAT-instance \(I=(X,\mathcal{C})\), we can assume that all clauses in \(\mathcal{C}\) contain three literals and that no variable occurs twice in any clause. Let \(L(X)=\{x,\neg x\mid x\in X\}\) be the set of literals of \(I\). Further, assume that we have arbitrarily fixed an ordering on the literals in \(L(C)\) for each \(C\in\mathcal{C}\), defining three mappings \(L_{1},L_{2},L_{3}:\mathcal{C}\to L(X)\). We construct \(I^{\prime}=(X^{\prime},\mathcal{C}^{\prime})\) as follows: * \(X^{\prime}\coloneqq X\cup\{\hat{x}\mid x\in X\}\cup\{s\}\cup\{x_{C},\hat{x}_{C }\mid C\in\mathcal{C}\}\). Define the injection \(f:L(x)\to X^{\prime}\) by \(x\mapsto x\) and \(\neg x\mapsto\hat{x}\) for each \(x\in X\). * \(\mathcal{C}^{\prime}\coloneqq\big{\{}\{f(L_{1}(C)),f(L_{2}(C)),x_{C}\},\{\hat{x }_{C},f(L_{3}(C)),s\},\{x_{C},\hat{x}_{C}\}\mid C\in\mathcal{C}\big{\}}\cup \big{\{}\{x,\hat{x}\}\mid x\in X\big{\}}\). First, assume that \(A:X\to\{0,1\}\) is a satisfying assignment for \(I\). Then \(A\) is extended to \(A^{\prime}:X^{\prime}\to\{0,1\}\) by letting \(A^{\prime}(x)=A(x)\) for \(x\in X\), \(A^{\prime}(\hat{x})=\neg A(x)\) (flipping \(0\) to \(1\) and \(1\) to \(0\)), while \(A^{\prime}(x_{C})=1\) and \(A^{\prime}(\hat{x}_{C})=0\) if and only if \(A(L_{1}(C))=A(L_{2}(C))=0\) for all \(C\in\mathcal{C}\) and \(A^{\prime}(s)=1\). Observe that for each \(C^{\prime}\in\mathcal{C}^{\prime}\), one of its variables is set to \(0\) and another one is set to \(1\). Conversely, if \(A^{\prime}:X^{\prime}\to\{0,1\}\) is a not-all-equal assignment of \(I^{\prime}\), then observe that the assignment \(A^{\prime\prime}:X^{\prime}\to\{0,1\}\) that flips all variables in comparison with \(A^{\prime}\), i.e., \(A^{\prime\prime}(x)=\neg(A^{\prime}(x))\), is also a not-all-equal assignment of \(I^{\prime}\). Hence, we can assume, w.l.o.g., that \(A^{\prime}(s)=0\). Let \(A:X\to\{0,1\}\) be simply the restriction of \(A^{\prime}\) to \(X\). Our setting guarantees that, for each \(c\in\mathcal{C}\), by observing the clause \(\{\hat{x}_{C},f(L_{3}(C)),s\}\), either (1) \(A^{\prime}(\hat{x}_{C})=1\) and hence \(A^{\prime}(x_{C})=0\) due to the clause \(\{x_{C},\hat{x}_{C}\}\), or (2) \(A^{\prime}(f(L_{3}(C)))=1\), so that \(C\) is satisfied immediately. In case (1), by observing the clause \(\{f(L_{1}(C)),f(L_{2}(C)),x_{C}\}\), either \(A^{\prime}(f(L_{1}(C)))=1\) or \(A^{\prime}(f(L_{2}(C)))=1\), i.e., the described assignment for \(I\) indeed satisfies each \(C\in\mathcal{C}\). **Corollary 14**.: _Unless ETH breaks, there is no algorithm solving monotone NAE-3SAT instances with \(n\) variables and \(m\) clauses in time \(\mathcal{O}\big{(}2^{o(n+m)}\big{)}\)._ As a further excursion into interesting variants of NAE-SAT, let us make some more observations. Notice that we could also change the construction slightly by replacing all clauses \(\{x,\hat{x}\}\) of size \(2\) by the following gadget consisting of \(6\) clauses (compare Darmann and Docker (2020)): \[\mathbf{NE}(x,\hat{x})=\left\{x,\hat{x},y\right\},\{x,\hat{x},z\},\{v,y,z\}, \{w,y,z\},\{u,v,w\}\right\}.\] In this way, we get an equivalent NAE-3SAT formula, again with \(\mathcal{O}(n+m)\) many variables and clauses. 
Even more, both constructions shown in Darmann and Docker (2020) prove that also for the restriction NAE-3SAT-E4 where each clause is monotone and contains three variables and every variable occurs exactly four times, we finally obtain \(\mathcal{O}(n+m)\) many variables and clauses, so that we can conclude the following stronger ETH result. **Corollary 15**.: _Unless ETH breaks, there is no algorithm solving monotone NAE-3SAT-E4 instances with \(n\) variables and \(m\) clauses in time \(\mathcal{O}\big{(}2^{o(n+m)}\big{)}\)._ Let us now return to our main theme, which is defensive alliances. Our considerations also allow us to deduce some impossibility results based on the famous Exponential-Time Hypothesis (ETH), see Impagliazzo, Paturi, and Zane (2001). **Corollary 16**.: _Assuming ETH, there is no algorithm for solving Def(P)All-instances with \(n\) vertices that runs in time \(\mathcal{O}(2^{o(n)})\)._ We can complement our finding by exhibiting polynomial-time algorithms for Def(P)All on some graph classes. We already looked at subcubic and weakly balanced signed graphs in the previous section and will look into bounded neighborhood diversity and further parameterizations later. ## Building Defensive Alliances We turn to the task of building an alliance by flipping edges. **Theorem 17**.: Defensive Alliance Building _can be solved in polynomial time (details below)._ In our algorithm, we will make use of the problem Degree-Constrained Subgraph (for short: DCS). An instance of DCS is given by an unsigned graph \(G=(V,E)\) and two bounds \(l_{v},u_{v}\in\mathbb{N}\) for each \(v\in V\). The task is to find a subgraph \(H\) with maximum number of edges, such that for each vertex \(v\in V\), \(l_{v}\leq\deg_{H}(v)\leq u_{v}\). This problem was introduced by Gabow (1983) where the author showed that this problem is solvable in \(\mathcal{O}\left(\sqrt{\sum_{v\in V}u_{v}\cdot|E|}\right)\). We now describe our algorithm, working on an instance \((G,D,k)\) of Defensive Alliance Building, i.e., \(G=(V,E^{+},E^{-})\) is a signed graph, \(D\subseteq V\) and \(k\geq 0\). First, we apply the following reduction rule: If there exists a \(v\in D\) with \(z_{v}\coloneqq\deg_{G,D}^{-}(v)-\deg_{G,D}^{+}(v)-\deg_{G,D}^{-}(v)\geq 0\), then set \(S=\{vx\mid x\in N_{D}^{-}(v)\}\) and add \(z_{v}\) edges from \(\{vx\mid x\in N_{D}^{-}(v)\}\) to \(S\), which is part of the solution that we build in the sense of the following correctness assertion: **Lemma 18**.: \(\langle G,D,k\rangle\) _is a yes-instance of Defensive Alliance Building if and only if \(\langle G_{S},D,k^{\prime}\rangle\) is one, with \(k^{\prime}=k-z_{v}-\deg_{G,D}^{-}(v)=k-(\deg_{G,D}^{-}(v)-\deg_{G,D}^{+})\)._ Proof.: This lemma can be shown by induction by using the following claim: **Claim 19**.: _Let \(v\in D\) and \(S\subseteq E\) be a solution of the Defensive Alliance Building instance \((G,D,k)\) such that there exist edges \(vx\in E^{-}\setminus S\) and \(vy\in E\cap S\) with \(x\in D\) and \(y\notin D\). Then \(S^{\prime}\coloneqq(S\cup\{vx\})\setminus\{vy\}\) is also a solution._ Proof.: Let \(u\in D\setminus\{v\}\). So, \(\deg_{G_{S},D}^{+}(u)\leq\deg_{G_{S^{\prime}},D}^{+}(u)\), \(\deg_{G_{S},D}^{-}(u)\geq\deg_{G_{S^{\prime}},D}^{-}(u)\) (with equality if \(u\neq x\)) and \(\deg_{G_{S},\overline{D}}^{-}(u)=\deg_{G_{S^{\prime}},\overline{D}}^{-}(u)\). 
Since \(\deg_{G_{S^{\prime}},D}^{+}(v)=\deg_{G_{S},D}^{+}(v)+1\) and \(\deg_{G_{S^{\prime}},\overline{D}}^{-}(v)=\deg_{G_{S},\overline{D}}^{-}(v)+1\), as well as \(\deg_{G_{S^{\prime}},D}^{-}(v)=\deg_{G_{S},D}^{-}(v)-1\), \(D\) is a defensive alliance on \(G_{S^{\prime}}\), just as it was on \(G_{S}\). Applying this claim \(z_{v}\) times proves the lemma. From now on, each \(v\in D\) fulfills \[\deg^{+}_{G,D}(v)+\deg^{-}_{G,D}(v)>\deg^{-}_{G,\overline{D}}(v)\,.\] Hence, a solution includes only negative edges in the induced signed graph \(G[D]\). For the algorithm, we define \(B\coloneqq\{v\in D\mid\deg^{+}_{G,D}(v)+1<\max\{\deg^{-}_{G,D}(v),\deg^{-}_{G, \overline{D}}(v))\}\), collecting those vertices from \(D\) that violate DA conditions. In order to quantify this violation, set \(b(v)\coloneqq 0\) if \(v\notin B\) and \(b(v)\coloneqq\max\{b_{1}(v),b_{2}(v)\}\) if \(v\in B\), where \[b_{1}(v) \coloneqq\deg^{-}_{G,\overline{D}}(v)-\deg^{+}_{G,D}(v)-1,\] \[b_{2}(v) \coloneqq\left\lceil\frac{\deg^{-}_{G,D}(v)-\deg^{+}_{G,D}(v)-1} {2}\right\rceil\,.\] We run the polynomial-time algorithm for DCS on the unsigned graph \(G^{-}\coloneqq(V,E^{-})\) with \(u_{v}\coloneqq b(v)\) and \(l_{v}=0\). Let \(H^{\prime}=(V,S^{\prime})\) be the subgraph which is returned by this algorithm. Clearly, \(S^{\prime}\) only includes edges between vertices in \(B\) and, for each \(v\in B\) with \(\deg_{H^{\prime}}(v)<b(v)\) and each \(u\in N_{G^{\prime},B}(v)=N^{-}_{G,B}(v)\), \(d_{u}=b(u)\). Otherwise, \(H^{\prime}\) would not be maximal. Now, let \(S\coloneqq S^{\prime}\) and \(H\coloneqq H^{\prime}\); for \(v\in B\) with \(\deg_{H}(v)<b(v)\), add edges \(vx\in E^{-}\setminus S\) with \(x\in D\) to \(H\) and \(S\) until equality holds. This implies \(|S\setminus S^{\prime}|=|S|-|S^{\prime}|=\sum_{v\in B}b(v)-\deg_{\widetilde{H} }(v)\). **Lemma 20**.: \(S\) _has minimum cardinality such that \(D\) is a defensive alliance on \(G_{S}\)._ Proof.: First of all, we show that \(D\) is a defensive alliance on \(G_{S}\). Clearly, \(S\subseteq E^{-}\). Therefore, for each \(v\in D\setminus B\), \[\deg^{+}_{G_{S},D}(v)+1 \geq\deg^{+}_{G,D}(v)+1\geq\max\{\deg^{-}_{G,D}(v),\deg^{-}_{G, \overline{D}}(v)\}\] \[\geq\max\{\deg^{-}_{G_{S},D}(v),\deg^{-}_{G_{S},\overline{D}}(v) \}\,.\] Let \(v\in B\). Since \(S\) includes at least \(b(v)\) edges incident to \(v\) with two endpoints in \(D\), \(\deg^{-}_{G_{S},\overline{D}}(v)=\deg^{-}_{G,\overline{D}}(v)\leq\deg^{+}_{G,D}(v)+b(v)+1\leq\deg^{+}_{G_{S},D}(v)+1\) and \(\deg^{-}_{G_{S},\overline{D}}(v)\leq\deg^{-}_{G,\overline{D}}(v)-b(v)\leq \deg^{+}_{G,D}(v)+b(v)+1\leq\deg^{+}_{G_{S},D}(v)+1\). Thus, \(D\) is a defensive alliance. Let \(T\subseteq E\) be of minimum cardinality such that \(D\) is a defensive alliance on \(G_{T}\). Assume there exists a \(v\in B\) with \(|\{vx\in E\mid\text{$vx\in T$}\}|<b(v)\). We will show that this assumption leads to a contradiction to the fact that \(D\) is a defensive alliance on \(G_{T}\). 
Case 1: If \(b(v)=\deg^{-}_{G,\overline{D}}(v)-\deg^{+}_{G,D}(v)-1\), then \[\deg^{-}_{G_{T},\overline{D}}(v)=\deg^{-}_{G,\overline{D}}(v)=\deg^{+}_{G,D}(v )+1+b(v)>\deg^{+}_{G_{T},D}(v)+1\,.\] Case 2: For \(b(v)=\frac{\deg^{-}_{G,D}(v)-\deg^{+}_{G,D}(v)-1}{2}\), we have \[\deg^{-}_{G_{T},D}(v)>\deg^{-}_{G,D}(v)-b(v)=b(v)+\deg^{+}_{G,D}(v)+1>\deg^{+}_ {G_{T},D}(v)+1\,.\] The case \(b(v)=\frac{\deg^{-}_{G,D}(v)-\deg^{+}_{G,D}(v)-1}{2}\) implies \[\deg^{-}_{G_{T},D}(v)>\deg^{-}_{G,D}(v)-b(v)=b(v)+\deg^{+}_{G,D}(v)\geq\deg^{+ }_{G_{T},D}(v)+1\,.\] As each case contradicts the fact that \(D\) is a defensive alliance on \(G_{T}\), we conclude that \(|\{vx\in E\mid\text{$vx\in T$}\}|\geq b(v)\) for each \(v\in B\). Let \(\widetilde{H}\coloneqq\left(V,\widetilde{S}\right)\) be the output of the DCS algorithm on \(\widetilde{G}=(V,T)\) with the bounds from above. Since \(\widetilde{G}\) is a subgraph of \(G\), \(|\widetilde{S}|\leq|S^{\prime}|\). Since \(\widetilde{S}\) is maximal, adding an edge to \(\widetilde{S}\) would increase \(\deg_{\widetilde{H},V}(v)\) for at most on \(v\in B\) with \(b(v)\geq\deg_{\widetilde{H},V}(v)\). Therefore, we need to add at least \(\sum_{v\in B}\left(b(v)-\deg_{\widetilde{H}}(v)\right)\) many edges to \(\widetilde{S}\) to obtain \(T\). Moreover, \[|T|-|\widetilde{S}|=|T\setminus\widetilde{S}|\geq\sum_{v\in B}\left(b(v)-\deg_ {\widetilde{H}}(v)\right)=\left(\sum_{v\in B}b(v)-\deg_{H^{\prime}}(v)\right)+ \deg_{H^{\prime}}(v)-\sum_{v\in B}\deg_{\widetilde{H}}(v).\] By the sentence before the lemma and the handshaking lemma, we observe \[|T|-|\widetilde{S}|\geq|S|-|S^{\prime}|+2\cdot\left(|S^{\prime}|+|\widetilde{S }|\right)=|S|+\left(|S^{\prime}|-|\widetilde{S}|\right)-|\widetilde{S}|=|S|-| \widetilde{S}|\] Thus, \(|S|\leq|T|\). We still have to analyze the running time. The exhaustive employment of the reduction rule runs in time \(\mathcal{O}\left(|V|^{2}\right)\). The condition \(b(v)\leq\deg_{\widetilde{G}}(v)\leq|V|\) can be calculated in linear time for all \(v\in V\). Furthermore, the DCS-algorithm runs in time \(\mathcal{O}\left(\sqrt{|V|}\cdot|E|\right)\). Adding the vertices in the end runs in linear time with respect to the number of edges. Therefore, overall the algorithm runs in time \(\mathcal{O}\left(|V|^{2}+\sqrt{|V|}\cdot|E|\right)\). ## Minimum Defensive Alliances It is often possible to transfer hardness results known for unsigned graphs to the case of signed graphs. We now describe a formal link between Minimum Defensive Alliance in unsigned and signed graphs; we describe this in the form of a reduction from Minimum Defensive Alliance on unsigned graphs. **Theorem 21**.: _There is a polynomial-time transformation that, given an unsigned graph \(G\), produces a signed graph \(G^{\prime}\) such that a vertex set \(A\) is a defensive alliance in \(G\) if and only if it is a defensive alliance in \(G^{\prime}\)._ Proof.: Let \(G=(V,E)\) be an unsigned graph and \(k\in\mathbb{N}\). For \(v\in V\), define \(d^{\prime}(v)\coloneqq\left\lceil\frac{\deg(v)+1}{2}\right\rceil\), as well as a new set of vertices \(M_{v}\coloneqq\{v_{i,j}\mid i\in\{1,\ldots,d^{\prime}(v)\},j\in\{1,2,3,4\}\,\}\). 
We construct a signed graph \(G^{\prime}=(V^{\prime},E^{\prime+},E^{\prime-})\) with \[V^{\prime} \coloneqq V\cup\bigcup_{v\in V}M_{v},\quad E^{\prime+}\coloneqq E,\text{ and }\] \[E^{\prime-} \coloneqq\big{\{}\{v,v_{i,1}\},\{v_{i,j},v_{i,\ell}\}\mid v\in V,i\in\{1,\ldots,d^{\prime}(v)\},\,j,\ell\in\{1,2,3,4\}\big{\}}.\] In other words, the only positive edges of \(G^{\prime}\) are inherited from \(G\), and the unsigned graph \((V^{\prime},E^{\prime-})\) is subgraph of a split graph, with \(V\subseteq V^{\prime}\) forming an independent set and \(V^{\prime}\setminus V\) forming a collection of cliques. Since \(\deg^{-}(v_{i,j})\geq 3\) and \(\deg^{+}(v_{i,j})=0\) for all \(i\in\{1,\ldots,d^{\prime}(v)\}\) and \(j\in\{1,2,3,4\}\), we know \(A\cap\left(\bigcup_{v\in V}M_{v}\right)=\emptyset\) for each defensive alliance \(A\subseteq V^{\prime}\) in \(G^{\prime}\), i.e., \(A\subseteq V\). Observe \((*)\): In \(G^{\prime}\), each \(v\in V\) is incident with exactly \(d^{\prime}(v)\) negative edges. For clarity, in the following we attach a possibly second subscript \(G\) or \(G^{\prime}\) to \(\deg\) and its variations to express of which graph we are talking about. Let \(A\subseteq V\) be a defensive alliance in the unsigned graph \(G\). By definition, for all \(v\in A\), \[\deg_{G,A}(v)+1\geq\deg_{G,\overline{A}}(v)=\deg_{G}(v)-\deg_{G,A}(v)\,,\] i.e., \(\deg_{G,A}(v)\geq\frac{\deg_{G}(v)-1}{2}\). Since the degree is an integer, this is equivalent to \[\deg^{+}_{G^{\prime},A}(v)=\deg_{G,A}(v)\geq\left\lceil\frac{\deg_{G}(v)-1}{2} \right\rceil=d^{\prime}(v)-1\stackrel{{(*)}}{{=}}\deg^{-}_{G^{ \prime},\overline{A}}(v)-1\,.\] Also, \(\deg_{G^{\prime},A}^{-}(v)=0\). Therefore, \(A\) is a defensive alliance in \(G\) if and only if \(A\) is a defensive alliance in \(G^{\prime}\). As the set of vertices forming an alliance is the same in the original unsigned graph \(G\) and the constructed signed graph \(G^{\prime}\), many complexity (hardness) results known for Minimum Defensive Alliance in unsigned graphs can translate to complexity (hardness) results for Minimum Defensive Alliance in signed graphs. According to Enciso (2009), Jamieson, Hedetniemi, and McRae (2009) who discussed alliance problems on planar and chordal unsigned graphs, the following is a direct consequence from the reduction of Theorem 21. Notice that the obtained results for special graph classes do not follow from Theorem 11. **Theorem 22**.: Minimum Defensive Alliance _is NP-complete on signed graphs, even if the underlying graph is planar or chordal._ Proof.: We still have to show that the presented reduction of Theorem 21 preserves planarity and chordality. For planarity, it suffices to observe that only \(4\)-cliques are attached to each vertex of a planar graph, and this can be done maintaining the planar embedding of the original planar graph. Concerning chordality, we use the concept of a perfect elimination order. For a graph \(G=(V,E)\); this refers to bijections \(\sigma:\{1,\ldots,|V|\}\to V\) such that, for each \(i\in\{1,\ldots,|V|\}\), \(N[\sigma(i)]\cap\{\sigma(1),\ldots,\sigma(i)\}\) is a clique of \(G[\{\sigma(1),\ldots,\sigma(i)\}]\). It is well known that a graph is chordal if and only if the graph has a perfect elimination order. As Minimum Defensive Alliance on unsigned chordal graphs is NP-complete by Jamieson, Hedetniemi, and McRae (2009), we can assume that the unsigned graph \(G=(V,E)\) that we start with in our reduction is chordal and has a perfect elimination order. 
We now consider the graph \(G^{\prime}=(V^{\prime},E^{\prime})\) obtained by our reduction; more precisely, we discuss its underlying unsigned graph. We freely use the notations concerning the vertices of \(G^{\prime}\) from the previous description in the following. Since \(N[v_{i,j}]=\{v_{i,1},v_{i,2},v_{i,3},v_{i,4}\}\) is a clique for each \(v\in V\), \(i\in\{1,\ldots,d^{\prime}(v)\}\) and \(j\in\{2,3,4\}\), these vertices can be mentioned in the end of a perfect elimination order. If we delete these vertices, \(v_{i,1}\) is pendant and these vertices can follow in the order. The remaining vertices and edges are the vertices and edges from \(G\). As \(G\) is assumed to be chordal, we can conclude our definition of the ordering of the vertices of \(G^{\prime}\) by any perfect elimination ordering of the vertices of \(G\). By the reasoning above, this gives indeed a perfect elimination order of \(G^{\prime}\), proving that \(G^{\prime}\) is chordal. We now look into this problem from the viewpoint of parameterized complexity. While Minimum Defensive Alliance on unsigned graphs with solution-size parameterization is in FPT, we will show for signed graphs a \(\mathsf{W}[1]\)-completeness result, where hardness is given by the reduction from Clique and membership is given by the reduction from Short Nondeterministic Turing Machine Computation Cesati (2003). We now define the two problems that we employ more formally: \begin{tabular}{|l l|} \hline \hline \multicolumn{2}{|c|}{Property} \\ Input: & An unsigned graph \(G\) and an integer \(k\) \\ Problem: & Is there a clique in \(G\) with \(k\) vertices? \\ \hline \hline \multicolumn{2}{|c|}{Stotel Noordeterministic Turing Machine Computation} \\ Input: & A single-tape nondeterministic Turing machine \(M\), a word \(x\) on the input alphabet \(\Sigma\) \\ & and an integer \(k\) \\ Problem: & Does \(M\) accept \(x\) within \(k\) steps? \\ \hline \end{tabular} **Theorem 23**.: Minimum Defensive Alliance _on signed graphs with solution-size parameterization is \(\mathsf{W}[1]\)-complete, even on balanced signed graphs._ Proof.: It is well-known that Clique with solution-size parameterization is \(\mathsf{W}[1]\)-complete. This is the basis for our reduction. Let \(\langle G=(V,E),k\rangle\) be any instance of Clique. Define \(C_{v,i}=\{v_{ij}|j\in\{1,2,3,4\}\}\) for all \(i\in\{1,\ldots,\max\{3,k\}\}\), and \(V_{E}=\{\widehat{e}\ |\ e\in E\}\), also called edge-vertices. The reduction \(R\) is defined as: \(R(\langle G,k\rangle):=\) 1. Build a signed graph \(G^{\prime}=(V^{\prime},E^{\prime\,+},E^{\prime\,-})\): \[V^{\prime}=V\cup V_{E}\cup\left(\bigcup_{v\in V}\bigcup_{i=1}^{k}C_{v,i} \right)\cup\left(\bigcup_{\widehat{e}\in V_{E}}\bigcup_{i=1}^{3}C_{\widehat{e}, i}\right)\!,\] \[E^{\prime\,+}=\{u\widehat{e},v\widehat{e}\mid e=uv\in E\},\] \[E^{\prime\,-}=\{vv_{11},\widehat{e}\widehat{e}_{1},v_{ij}v_{ih}, \widehat{e}_{j}\widehat{e}_{lh}\mid v\in V,i\in\{1,\ldots,k\},e\in E,l\in\{1,2,3 \},j,h\in\{1,2,3,4\},j\neq h\}.\] 2. Return \(\left\langle G^{\prime},k+\frac{k(k-1)}{2}\right\rangle\). If \(\left\langle G,k\right\rangle\) is a \(\mathsf{yes}\)-instance of Clique, without loss of generality, \(G\) contains a clique of size at least \(k\), denoted as \(C_{k}=\{v_{1},\ldots,v_{k}\}\). 
Let \(S=C_{k}\cup E_{k}\), where \(E_{k}=\{\widehat{v_{i}v_{j}}\mid i,j\in\{1,\ldots,k\},i\neq j\}\), combined with the reduction \(R\), we know for each \(v\in C_{k}\), \(\deg_{S}^{+}(v)=k-1\), so (1) \(\deg_{S}^{+}(v)+1=k\geq\deg_{S}^{-}(v)=0\); (2) \(\deg_{S}^{+}(v)+1=k\geq\deg_{S}^{-}(v)=k\). For all \(\widehat{e}\in E_{k}\), \(\deg_{S}^{+}(\widehat{e})=2\), so (1) \(\deg_{S}^{+}(\widehat{e})+1=3\geq\deg_{S}^{-}(\widehat{e})=0\); (2) \(\deg_{S}^{+}(\widehat{e})+1=3\geq\deg_{S}^{-}=3\). That is to say, \(S\) forms a defensive alliance of size \(k+\frac{k(k-1)}{2}\). Conversely, if the signed graph \(G^{\prime}\) built by \(R\) contains a defensive alliance \(S\) with \(|S|\leq k+\frac{k(k-1)}{2}\). For all \(v\in C_{x,i}\), \(\deg^{+}(v)+1=0+1<\lceil\frac{\deg^{-}(v)}{2}\rceil=\lceil\frac{3}{2}\rceil=2\). Therefore, such vertices cannot be in any defensive alliance. Moreover, any two vertices in \(V\) are not adjacent, and also \(V_{E}\) forms an independent set. Therefore, \(S=V_{1}\cup V_{2}\), where \(V_{1}\subseteq V,V_{2}\subseteq V_{E}\). By construction, we have \(\deg^{-}(v)=k\), for all \(v\in V\) and \(\deg^{-}(\widehat{e})=3\) for all \(e\in E\). So for \(v_{1}\in V_{1}\), \[|\{\widehat{e}\in V_{2}\mid\exists x\in V:e=v_{1}x\in E\}|\geq k-1\,,\] and for \(\widehat{e}\in V_{2}\), its two incident vertices must be in \(S\). So, we know \(|V_{1}|\geq k\). To defend \(k\) vertices in \(V_{1}\), we need at least \((k-1)+(k-2)+\ldots+1=\frac{k(k-1)}{2}\) many corresponding edge-vertices in \(V_{2}\) without the addition of new vertices. So \(|S|=|V_{1}|+|V_{2}|\geq k+\frac{k(k-1)}{2}\). Then we have \(S=V_{1}\cup V_{2}\), where \(|V_{1}|=k\), \(|V_{2}|=\frac{k(k-1)}{2}\). Obviously, \(V_{1}\) is a clique of \(G\) with size \(k\). For the membership proof, we use the Turing machine characterization of \(\mathsf{W}[1]\) as developed in Cesati (2003). Formally, we have to describe a (in our case, polynomial-time, as well as parameterized) reduction from Minimum Defensive Alliance to Short Nondeterministic Turing Machine (NTM) Computation, consisting in a single-tape nondeterministic Turing machine \(M\), a word \(x\) on the input alphabet of \(M\), and a positive integer \(k\). The task is to decide if \(M\) accepts \(x\) in at most \(k\) steps. Given a signed graph \(G^{\prime}=(V^{\prime},E^{\prime\,+},E^{\prime\,-})\) with \(n\) vertices and \(m=m^{+}+m^{-}\) edges, and a positive integer \(k^{\prime}\), we can construct a single-tape nondeterministic Turing machine \(M\), which nondeterministically guesses \(k^{\prime}\) vertices of \(G^{\prime}\) and writes as \((S\coloneqq)\{v_{1},v_{2},\ldots,v_{k^{\prime}}\}\) onto the tape. Then \(M\) scans each guessed vertex \(v_{i}\) and rejects either (1) \(\deg_{S}^{+}(v_{i})+1<\deg_{S}^{-}(v_{i})\), or (2) \(\deg_{S}^{+}(v_{i})+1<\deg^{-}(v_{i})-\deg_{S}^{-}(v_{i})\). Otherwise, it accepts. It can be verified that the Turing machine \(M\) accepts in \(\mathcal{O}(k^{\prime 2})\) steps if and only if there is a defensive alliance of size \(k^{\prime}\) in \(G^{\prime}\). This explains the correctness of the reduction that we sketched. Notice that this results is not only a negative message, as by the well-known inclusion \(\mathsf{W}[1]\subseteq\mathsf{XP}\), we also get the following algorithmic result, as \(\mathsf{XP}\) is also known as 'poor man's \(\mathsf{FPT}\)'. 
**Corollary 24**.: _With solution-size parameterization, \(\textsc{MinDefAll}\in\mathsf{XP}\)._ According to (Bliem and Woltran (2018), Theorem 1), Minimum Defensive Alliance on unsigned graphs with treewidth as its parameter is also \(\mathsf{W}[1]\)-hard. With Theorem 21, this translates into our setting as follows. **Theorem 25**.: Minimum Defensive Alliance _is \(\mathsf{W}[1]\)-hard on signed graphs, when parameterized by the treewidth of the underlying graph._ Proof.: Let \((G,k)\) is an instance of Minimum Defensive Alliance on unsigned graphs, based on Theorem 21, we can obtain an instance of Minimum Defensive Alliance on signed graphs \((G^{\prime},k)\) in polynomial time. We show that the treewidth of the underlying graph of \(G^{\prime}\) depends only on the treewidth of \(G\), by modifying an optimal tree decomposition \(\mathcal{T}\) of \(G\) as follows: for each \(v\in V(G)\), take an arbitrary node whose bag \(B\) contains \(v\), and then add \(d^{\prime}(v)\) nodes \(C_{1},C_{2},\ldots,C_{d^{\prime}(v)}\) as its children such that the bag of each \(C_{i}\) is \(B\cup\{v_{i,1}\}\); and then add a child node \(N_{i}\) to each node \(C_{i}\) with bag \(\{v_{i,1},v_{i,2},v_{i,3},v_{i,4}\}\). It is easy to verify that the result is a valid tree decomposition of the underlying graph of \(G^{\prime}\) and its width is at most the treewidth of \(G\) plus 1 or 4. Following [Bliem and Woltran (2018), Theorem 1], we know Minimum Defensive Alliance on signed graphs is \(\mathsf{W}[1]\)-hard when parameterized by treewidth. Notice that the preceding result has some (negative) algorithmic consequences for a case that is important in the context of geographic applications, namely, that of planar signed networks. Not only is MinDefAll NP-hard in this case by Theorem 22, but also the standard approach to show that a planar graph problem is in \(\mathsf{FPT}\), when parameterized by solution size, fails here, as it is based on a dynamic programming algorithm along a tree decomposition of small width, as show-cased for Dominating Set in Alber et al. (2002). Another typical restriction often discussed in the literature is an upper bound on the maximum degree of the graph. We have previously seen that the problems that we study are polynomial-time solvable on subcubic signed graphs; see Theorem 7. Currently, we do not know if this is also true if we lift the upper bound on the degree from three to four, but an increase to five changes the picture, as we show next by modifying the construction of Theorem 11. **Theorem 26**.: Defensive Alliability _(and Minimum Defensive Alliance) on signed graphs is NP-complete even if the maximum degree is 5._ Proof.: The membership follows by Theorem 11, also the hardness is obtained by reducing from NAE-3SAT very similarly. We only modify the clause gadget into Figure 3. Let \(\phi=C_{1}\wedge\cdots\wedge C_{m}\) be an input of NAE-3SAT with the variable set \(X\), where \(|X|=n\). In the same way as in the proof of Theorem 11, define \(N_{v,i}=(V_{v,i},\emptyset,E^{-}_{v,i})\), where \(V_{v,i}\coloneqq\{v_{i,1},v_{i,2},v_{i,3},v_{i,4}\}\), \(i\in\mathbb{N}^{+}\), serving as not-in-the-solution gadgets. Notice that now, all these gadgets are small. Also, for each variable \(x\) in \(X\): \(C(x)=\{C_{j_{1}},\ldots,C_{j_{n_{x}}}\mid x\in C_{j_{i}},i\in\{1,\ldots,n_{x} \},j_{i}\in\{1,\ldots,m\}\}\), and \(X^{\prime}(x)\coloneqq\{x_{h}\mid h\in\{1,\ldots,2n_{x}\}\}\). 
We construct the signed graph \(G=(V,E^{+},E^{-})\) with: \[V =\{c_{i},d_{i}\mid i\in\{1,\ldots,m\}\}\cup\{V_{d_{i},l}\mid i\in \{1,\ldots,m\},l\in\{1,2\}\}\cup\bigcup_{x\in X}\left(X^{\prime}(x)\cup\bigcup _{v\in X^{\prime}(x),l\in\{1,2\}}V_{v,l}\right)\] \[E^{+} =\{c_{j}d_{j}\mid j\in\{1,\ldots,m\}\}\cup\{x_{2p-1}x_{2p}\mid p \in\{1,\ldots,n_{x}\},x\in X\}\] \[E^{-} =\{d_{i}d_{i+1},d_{1}d_{m}\mid i\in\{1,\ldots,m-1\}\}\cup\{d_{i}v _{d_{i},l},E^{-}_{d_{i},l}\mid i\in\{1,\ldots,m\},l\in\{1,2\}\}\cup\] \[\quad\left(\bigcup_{x\in X}\{c_{j_{p}}x_{2p-1}\mid p\in\{1,\ldots, n_{x}\},C_{j_{p}}\in C(x)\}\right)\cup\{x_{2p}x_{2p+1},x_{2n_{x}}x_{1}\mid x \in X,p\in\{1,\ldots,n_{x}-1\}\}\cup\] \[\quad\left(\bigcup_{x\in X}\{x_{2h}v_{x_{2h},l}\mid h\in\{1, \ldots,n_{x}\},l\in\{1,2\}\}\right)\cup\left(\bigcup_{x\in X}\ \bigcup_{v\in X^{\prime}(x),l\in\{1,2\}}E^{-}_{v,l}\right)\] Figure 3: Clause gadget: the squares represent the not-in-the-solution gadget. If \(A\) is a solution of \(\phi\), then for each \(C_{i}=x_{i1}\lor x_{i2}\lor x_{i3}\), there are \(j_{1},j_{2}\in\{1,2,3\}\) such that \(x_{ij_{1}}=1\) and \(x_{ij_{2}}=0\). Let \(S=\{x\in X\mid A(x)=0\}\). Define \(D\coloneqq\{c_{i},d_{i}\mid i\in\{1,\dots,m\}\}\cup\bigcup_{x\in S}X^{\prime}(x)\), We show that \(D\) is a defensive alliance. For each \(c_{i}\), \(1)\)\(\deg_{D}^{+}(c_{i})+1=2\geq\deg_{D}^{-}(c_{i})\); \(2)\)\(\deg_{D}^{+}(c_{i})+1=2\geq\deg_{\overline{D}}^{-}(c_{i})\). For each \(d_{i}\), \(\deg_{D}^{+}(d_{i})+1=2=\deg_{D}^{-}(c_{i})=\deg_{\overline{D}}^{-}(c_{i})\). For each vertex \(x_{h}\in D\) associated to \(x\in X\), \(1)\)\(\deg_{D}^{+}(x_{h})+1=2\geq\deg_{D}^{-}(x_{h})\); \(2)\)\(\deg_{D}^{+}(x_{h})+1=2=\deg_{\overline{D}}^{-}(x_{h})\). Therefore, \(D\) is a defensive alliance. Now assume there exists a defensive alliance \(D\subseteq V\) of \(G\). As for all \(u\in V_{v,i}\), \(\deg^{-}(u)\geq 4>1=\deg^{+}(u)+1\), we conclude that \(D\subseteq\{c_{i},d_{i}\mid i\in\{1,\dots,m\}\}\cup\{X^{\prime}(x)\mid x\in X\}\). Now, assume that there exists some \(d_{i}\in D\). Since \(\deg^{+}(d_{i})=1\) and \(\deg^{-}(d_{i})=4\), \(|D\cap N^{-}(d_{i})|=2\) and \(|D\cap N^{+}(d_{i})|=1\). Because \(v_{d_{i},1},v_{d_{i},2}\notin D\)\(c_{i},d_{i-1},d_{i+1}\in D\) (or \(d_{1},d_{m}\in D\) if \(i\in\{1,m\}\)). Hence, \(d_{i}\in D\) if and only if \(C\coloneqq\{c_{i},d_{i}\mid i\in\{1,\dots,m\}\}\subseteq D\). If \(c_{i}\in D\) for some \(i\), then \(\deg^{+}(d_{i})=1\) and \(\deg^{-}(d_{i})=3\) implies \(\deg_{D}^{+}(c_{i})=1\) and \(\deg_{D}^{-}(c_{i}),\deg_{\overline{D}}^{-}(c_{i})\in\{1,2\}\). With the argument from above, \(C\) would be a subset of \(D\). Assume there exists an \(x\in X\) with \(X^{\prime}(x)\cap D\neq\emptyset\). By the proof of Theorem 11, \(X^{\prime}(x)\subseteq D\). Define the assignment \(A:X\to\{0,1\}\) with \[x\mapsto\begin{cases}1,&x\in D\\ 0,&x\notin D\end{cases}\] As described before, if there exists a non-empty defensive alliance, then for each \(i\in\{1,\dots,m\}\), \(c_{i}\in D\) and one or two negative neighbors. Therefore, \(\phi\) is a \(\mathsf{yes}\)-instance of NAE-3SAT. In conclusion, (Min)DefAll is \(\mathsf{para}\)-\(\mathsf{NP}\)-hard when parameterized by maximum degree. **Lemma 27**.: _The constructed signed graph \(G\) has at most \(16m\) many vertices._ Proof.: From the previous reduction construction, \(|\{c_{i},d_{i}\mid i\in\{1,\dots,m\}\}\cup\{V_{d_{i},l}\mid i\in\{1,\dots,m\}, l\in\{1,2\}\}|\leq 2m+8m=10m\). 
Combined with Lemma 12, there are at most \(10m+6m=16m\) many vertices in total. Naturally inherited from our reduction, we have the next ETH-based result: **Corollary 28**.: _Under ETH, there is no algorithm solving DefAll (and MinDefAll) on n-vertex signed graphs of maximum degree \(5\) in time \(\mathcal{O}(2^{o(n)})\)._ When we consider the combination of different parameters, we can design parameterized algorithms with combined parameter solution size and maximum degree, as we have degree and diameter bounded by Proposition 9 and can hence obtain a (huge) kernel; see Fernau (2018). Alternatively, we can build a search tree quite analogously to the case of unsigned graphs, as in Fernau and Raible (2007). **Theorem 29**.: MinDefAll _is in FPT when parameterized both by solution size and by maximum degree._ Proof.: Let \((G,k)\) be an instance of MinDefAll, where the maximum degree of the underlying unsigned graph of \(G\) is \(d\). We show that MinDefAll is fixed-parameter tractable with combined parameters \(k\) and \(d\). From Proposition 9, we are looking for a defensive alliance which is connected in the underlying graph. We guess that a vertex \(v\) is in a defensive alliance \(D\) with \(|D|\leq k\). If \(v\) is already a defensive alliance, we are done; otherwise, we branch on the subset of the neighborhood of \(v\), which leads to \(2^{d}\) branches. Then we have a defensive alliance of size not larger than \(k\) or we keep on branching on the union of the neighborhoods resulting in a branch of size \(2^{k(d-1)}\). As \(D\) is connected, we can branch at most \(k-1\) times and obtain a search-tree of size bounded by \(2^{d}2^{k(k-2)(d-1)}=2^{\mathcal{O}(dk^{2})}\). Moreover, we can check whether we have already obtained a defensive alliance of size at most \(k\) in time \(\mathcal{O}(k^{2})\). We could branch over all vertices of \(G\) in the beginning, so that the problem can be solved in time \(\mathcal{O}(k^{2}2^{\mathcal{O}(dk^{2})}n)\). We can also combine the parameters treewidth and maximum degree. Again, the picture changes and we see parameterized tractability. **Theorem 30**.: (Min)DefAll _is in FPT when parameterized both by treewidth and maximum degree._ Proof.: Let \(G=(V,E^{+},E^{-})\) be an instance of DefAll with bounded treewidth \(l\) and maximum degree \(\Delta\). Actually, we will describe an algorithm based on the principle of dynamic programming that also solves the minimization version in the sense that it first decides if \(G\) admits a defensive alliance at all (which is signalled by having computed \(\infty\) as the size of the smallest alliance) or it will output the size of the smallest defensive alliance of \(G\). Clearly, this would also allow us to solve an instance of MinDefAll if necessary. Assume that a nice tree decomposition \(\mathcal{T}=(T,\{X_{t}\}_{t\in V(T)})\) of \(G\) with width \(l\) is known. Namely, recall that we can compute such a tree decomposition, given \(G\), in \(\mathsf{FPT}\)-time when parameterized by the treewidth of the input (Cygan et al. (2015)). We consider \(T\) as a rooted tree by arbitrarily choosing a root \(\rho\) and \(T_{t}\) as the subtree of \(T\) rooted at \(t\in T\) defined by those vertices whose unique path to \(\rho\) will contain \(t\). Define \(Y_{t}=\bigcup_{t^{\prime}\in V(T_{t})}X_{t^{\prime}}\) for each \(t\in V(T)\). We want to use dynamic programming to solve this problem. 
Therefore, we build a table \(\tau_{t}\) considering all partial solutions on \(Y_{t}\) (for \(t\in V(T)\)) depending on the interaction between the partial solution and the bag \(X_{t}\). For each \(A_{t}=\{v_{1},\ldots,v_{j}\}\subseteq X_{t}\), let \(\tau_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^ {e}]\) with \(|A_{t}|\coloneqq j\leq l+1\), be the minimum size of the current solution \(S_{t}\) on \(Y_{t}\), where \(X_{t}\cap S_{t}=A_{t}\) and \(\Delta_{k}^{i}=\deg_{S_{t}}^{+}(v_{k})-\deg_{S_{t}}^{-}(v_{k})+1\) or \(\Delta_{k}^{e}=\deg_{S_{t}}^{+}(v_{k})-\deg_{S_{t}}^{-}(v_{k})+1=\deg_{S_{t}}( v_{k})-\deg^{-}(v_{k})+1\) records the first (internal) or second (external) defensive alliance condition for each \(k\in\{1,\ldots,j\}\), respectively. For each configuration \([A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^{e}]\) of node \(t\), more precisely, its value \(\tau_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^ {e}]\) is given by: \[\min\{|S_{t}|\mid S_{t}\subseteq Y_{t},S_{t}\cap X_{t}=A_{t},\] \[\forall v_{k}\in A_{t},\Delta_{k}^{i}=\deg_{S_{t}}^{+}(v_{k})- \deg_{S_{t}}^{-}(v_{k})+1,\Delta_{k}^{e}=\deg_{S_{t}}(v_{k})-\deg^{-}(v_{k})+1,\] \[\forall u\in S_{t}\setminus A_{t},\deg_{S_{t}}^{+}(u)-\deg_{S_{t} }^{-}(u)+1\geq 0,\deg_{S_{t}}(u)-\deg^{-}(u)+1\geq 0\}.\] Here, we see already how the idea of a partial solution is implemented by using the fact that bags are separators in the graph. Namely, the vertices in \(S_{t}\) that are not in \(A_{t}\) must satisfy the conditions imposed upon defensive alliance vertices as they will never see any further neighbors when walking towards the root of the tree decomposition. Notice that these conditions are not only locally looking at the part of the graph that has been processed when walking bottom-up along the tree decomposition, but the degree conditions are referring to the whole graph \(G\). This feature is different from most algorithms that use dynamic programming for computing graph parameters along tree decompositions. Conversely, from the perspective of the vertex set \(A_{t}\) presumed to be a subset of a defensive alliance, the exact set of neighbors in \(Y_{t}\setminus X_{t}\) is irrelevant, it only counts how many friends or foes are within or outside the set \(S_{t}\supseteq A_{t}\). This information is fixed in the last \(2\cdot j\) descriptors of a table row. Let \(Z_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^{e}]\) be the underlying set of sets, so that \[\tau_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^ {e}]=\min\{|S_{t}|\mid S_{t}\in Z_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e}, \ldots,\Delta_{j}^{i},\Delta_{j}^{e}]\}\,.\] If \(Z_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^{e} ]=\emptyset\), we set \(\tau_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^ {e}]\coloneqq\infty\). Obviously, the number of entries of each table is bounded by \(2^{l+1}\cdot(2\Delta+1)^{2(l+1)}\). We now specifically present how to build the table of \(t\in V(T)\) by using the tables of its children. The initialization sets all tables of leaf nodes \(t\) of the tree decomposition with one entry \(\tau_{t}[\emptyset]=0\). * Consider the case of an introduce node \(t\) with unique child \(t^{\prime}\), where \(X_{t}=X_{t^{\prime}}\cup\{v\}\). 
For each configuration \([A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^{e}]\) of \(t\), we have: * If \(v\notin A_{t}\), then \(\tau_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^ {e}]=\tau_{t^{\prime}}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^ {i},\Delta_{j}^{e}]\). * If \(v\in A_{t}\), without loss of generality, we regard \(v\) as the \(j\)th vertex of \(A_{t}\), for each \(k\in\{1,\ldots,j-1\}\), we define \[\Delta_{k}^{\prime,i}\coloneqq\begin{cases}\Delta_{k}^{i}-1,&\text{if $v_{k}v_{j} \in E^{+}$}\\ \Delta_{k}^{i}+1,&\text{if $v_{k}v_{j}\in E^{-}$}\\ \Delta_{k}^{i},&\text{otherwise}\end{cases}\] \[\Delta^{\prime,e}_{k}\coloneqq\begin{cases}\Delta^{e}_{k}-1,&\text{if }v_{k}v_{j}\in E ^{+}\cup E^{-}\\ \Delta^{e}_{k},&\text{otherwise}\end{cases}\] as well as \[\widetilde{\Delta}^{i}_{k}\coloneqq\deg^{+}_{A_{t}}(v_{k})-\deg^{-}_{A_{t}}(v_ {k})+1\] \[\widetilde{\Delta}^{e}_{k}\coloneqq\deg_{A_{t}}(v_{k})-\deg^{-}(v_{k})+1\] for \(k\in\{1,\ldots,j\}\). Then, \[\tau_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_ {j}]=\begin{cases}\tau_{\nu}[A_{t}\setminus\{v\},\Delta^{\prime,i}_{1},\Delta^ {\prime,e}_{1},\ldots,\Delta^{\prime,i}_{j-1},\Delta^{\prime,e}_{j-1}]+1,& \text{if }\quad\Delta^{i}_{j}=\widetilde{\Delta}^{i}_{j}\text{ and}\\ &\qquad\Delta^{e}_{j}=\widetilde{\Delta}^{e}_{j}\\ \infty,&\text{otherwise}\end{cases}\] * Now, consider the case of a forget node \(t\) with unique child \(t^{\prime}\), where \(X_{t}=X_{t^{\prime}}\setminus\{v\}\). For each configuration \([A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_{j}]\) of \(t\), we can compute the table entry \(\tau_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_ {j}]\) as \[\min\{\tau_{\nu}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j}, \Delta^{e}_{j}],\tau_{\nu^{\prime}}[A_{t}\cup\{v\},\Delta^{i}_{1},\Delta^{e}_ {1},\ldots,\Delta^{i}_{j},\Delta^{e}_{j},k^{i},k^{e}]\mid k^{i},k^{e}\geq 0\}\] * Finally, consider the case of a join node \(t\) with two children \(t^{\prime}\) and \(t^{\prime\prime}\), where \(X_{t}=X_{t^{\prime}}=X_{t^{\prime\prime}}\). For each configuration \([A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_{j}]\) of \(t\), we set: \[\tau_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_ {j}]=\min\{\tau_{t^{\prime}}[A_{t},x^{i}_{1},x^{e}_{1},\ldots,x^{i}_{j},x^{e}_ {j}]+\tau_{t^{\prime\prime}}[A_{t},x^{i}_{1},x^{e}_{1},\ldots,x^{i}_{j},x^{e}_ {j}]-|A_{t}|\] \[\mid x^{i}_{k}+y^{i}_{k}=\Delta^{i}_{k}+\widetilde{\Delta}^{i}_{k}, x^{e}_{k}+y^{e}_{k}=\Delta^{e}_{k}+\widetilde{\Delta}^{e}_{k}\}\] For the remaining proof, we want to show that the given recursion fits the definition. For this we use induction on the tree decomposition structure. We start with a leaf node \(t\). This implies \(X_{t}=Y_{t}=\emptyset\). Hence, \(A_{t}=\emptyset\) and \(S_{t}=\emptyset\) fulfills all necessary conditions of the underlying set of \(\tau_{t}[\emptyset]\). This implies \(\tau_{t}[\emptyset]=0\). Let now \(t\) be a node and assume we calculated all values for the tables of the children. We will use case distinction for \(t\) depending on three cases, \(t\) being either an introduce, a forget or a join node. We start with the introduce node case. Let \(t^{\prime}\) be the child of \(t\) and \(v\) be the introduced vertex. 
\(A_{t}=\{v_{1},\ldots,v_{j}\}\subseteq X_{t}\) and \(\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_{j}\in\{-\Delta+1,\ldots,\Delta+1\}\). Assume \(v\notin A_{t}\). Hence, \(A_{t}\subseteq X_{t^{\prime}}\) and as \(Y_{t}=Y_{t^{\prime}}\cup\{v\}\), \(S_{t}\subseteq Y_{t^{\prime}}\) for each \(S_{t}\in Z_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j}, \Delta^{e}_{j}]\). Therefore, \(Z_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_{j}]\)\(=Z_{t^{\prime}}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_{j}]\) and \(\tau_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_ {j}]=\tau_{\nu^{\prime}}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_ {j},\Delta^{e}_{j}]=\tau_{\nu^{\prime}}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1}, \ldots,\Delta^{i}_{j},\Delta^{e}_{j}]\). Let \(v\in A_{t}\). Without loss of generality, \(v=v_{j}\). Since \(N(v)\cap Y_{t}\subseteq X_{t}\) by the separator property of bags, \(\Delta^{i}_{j}\neq\deg^{+}_{A_{t}}(v_{j})-\deg^{-}_{A_{t}}(v_{j})+1=\widetilde{ \Delta}^{i}_{j}\) or \(\Delta^{e}_{j}\neq\deg_{A_{t}}(v_{j})-\deg^{-}(v_{j})+1=\widetilde{\Delta}^{e}_{j}\) would imply \(Z_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_{j} ]=\emptyset\) and \(\tau_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j},\Delta^{e}_ {j}]=\infty\). Therefore, assume \(\Delta^{i}_{j}=\widetilde{\Delta}^{i}_{j}\) and \(\Delta^{e}_{j}=\widetilde{\Delta}^{e}_{j}\). Let \(S_{t}\in Z_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j}, \Delta^{e}_{j}]\) and \(S_{t^{\prime}}\coloneqq S_{t}\setminus\{v\}\) with \(|S_{t}|=\tau_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1},\ldots,\Delta^{i}_{j}, \Delta^{e}_{j}]\). For all \(k\in\{1,\ldots,j-1\}\), \(\Delta^{\prime,i}_{k}=\deg^{+}_{S_{t^{\prime}}}(v_{k})-\deg^{-}_{S_{t^{\prime}}}(v _{k})+1\) and \(\Delta^{\prime,e}_{k}=\deg_{S_{t^{\prime}}}(v_{k})-\deg^{-}(v_{k})+1\) holds by definition of \(\Delta^{\prime,i}_{k},\Delta^{\prime,e}_{k}\). For each \(u\in S_{t}\setminus X_{t}=S_{t^{\prime}}\setminus X_{t^{\prime}}\), \(\deg^{+}_{S_{t}}(u)=\deg^{+}_{S_{t^{\prime}}}(u)\) and \(\deg^{-}_{S_{t}}(u)=\deg^{-}_{S_{t^{\prime}}}(u)\). Since further \(S_{t^{\prime}}\subseteq Y_{t^{\prime}}\), \(S_{t^{\prime}}\in Z_{t^{\prime}}[A_{t}\setminus\{v\},\Delta^{\prime,i}_{1}, \Delta^{\prime,e}_{1},\ldots,\Delta^{\prime,i}_{j-1},\Delta^{\prime,e}_{j-1}]\). Assume there exists an \(S^{\prime}_{t^{\prime}}\in Z_{t^{\prime}}[A_{t}\setminus\{v\},\Delta^{\prime,i}_{1}, \Delta^{\prime,e}_{1},\ldots,\Delta^{\prime,i}_{j-1},\Delta^{\prime,e}_{j-1}]\) with \(|S^{\prime}_{t^{\prime}}|<|S_{t^{\prime}}|\). This would contradict \(|S_{t}|=\tau_{t}[A_{t},\Delta^{i}_{1},\Delta^{i}_{1},\Delta^{e}_{1},\ldots, \Delta^{i}_{j},\Delta^{e}_{j}]\), \(S^{\prime}_{t^{\prime}}\cup\{v\}\in Z_{t}[A_{t},\Delta^{i}_{1},\Delta^{e}_{1}, \ldots,\Delta^{i}_{j},\Delta^{e}_{j Finally, assume that \(t\) is a join node with two children \(t^{\prime}\) and \(t^{\prime\prime}\), where \(X_{t}=X_{t^{\prime}}=X_{t^{\prime\prime}}\). We know that \(Y_{t}=Y_{t^{\prime}}\cup Y_{t^{\prime\prime}}\), from the property of tree decomposition, \(Y_{t^{\prime}}\cap Y_{t^{\prime\prime}}=X_{t}=X_{t^{\prime}}=X_{t^{\prime \prime}}\). 
Let \(A_{t}=\{v_{1},\ldots,v_{j}\}\subseteq X_{t}\) and \(\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^{e}\in\{- \Delta+1,\ldots,\Delta+1\}\), for each \(S_{t}\subseteq Y_{t}\) with \(S_{t}\cap X_{t}=A_{t}\), let \(S_{t}=S_{1}\cup S_{2}\cup S_{3}\), where \(S_{1}\subseteq S_{t}\subseteq Y_{t^{\prime}}\setminus Y_{t^{\prime\prime}},S_{ 2}\subseteq S_{t}\subseteq Y_{t^{\prime}}\setminus Y_{t^{\prime}},S_{3}=S_{t} \setminus(S_{1}\cup S_{2})\). Clearly, \(S_{t^{\prime}}:=S_{1}\cup S_{3}\subseteq Y_{t^{\prime}}\) with \(S_{t^{\prime}}\cap X_{t^{\prime}}=S_{3}\), \(S_{t^{\prime\prime}}\coloneqq S_{2}\cup S_{3}\subseteq Y_{t^{\prime\prime}}\) with \(S_{t^{\prime\prime}}\cap X_{t^{\prime\prime}}=S_{3}\), \(S_{3}=S_{t}\cap(Y_{t^{\prime}}\cap Y_{t^{\prime\prime}})=A_{t}\). So for each \(v_{k}\in A_{t}\), \[\Delta_{k}^{i}=\deg_{S_{t}}^{+}(v_{k})-\deg_{S_{t}}^{-}(v_{k})+1=\deg_{S_{1}}^ {+}(v_{k})+\deg_{S_{2}}^{+}(v_{k})+\deg_{S_{3}}^{+}(v_{k})-\deg_{S_{1}}^{-}(v_{ k})-\deg_{S_{2}}^{-}(v_{k})-\deg_{S_{3}}^{-}(v_{k})+1\,,\] i.e., \(\Delta_{k}^{i}=(\deg_{S_{1}}^{+}(v_{k})+\deg_{S_{3}}^{+}(v_{k})-\deg_{S_{1}}^{ -}(v_{k})-\deg_{S_{3}}^{-}(v_{k})+1)+(\deg_{S_{2}}^{+}(v_{k})+\deg_{S_{3}}^{+} (v_{k})-\deg_{S_{3}}^{-}(v_{k})-\deg_{S_{3}}^{-}(v_{k})+1)-(\deg_{S_{3}}^{+}(v _{k})-\deg_{S_{3}}^{-}(v_{k})+1)=\Delta_{k}^{\prime,i}+\Delta_{k}^{\prime,i}- \widetilde{\Delta_{k}^{i}}\), and analogously, \(\Delta_{k}^{e}=\Delta_{k}^{\prime,e}+\Delta_{k}^{\prime\prime,e}-\widetilde{ \Delta_{k}^{e}}\). Therefore, \(Z_{t^{\prime}}[A_{t},\Delta_{1}^{\prime,i},\Delta_{1}^{\prime,e},\ldots, \Delta_{j}^{\prime,i},\Delta_{j}^{\prime,e}]\cup Z_{t^{\prime\prime}}[A_{t}, \Delta_{1}^{\prime,i},\Delta_{1}^{\prime,\prime,e},\ldots,\Delta_{j}^{\prime,i},\Delta_{j}^{\prime,e}]\subseteq Z_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e}, \ldots,\Delta_{j}^{i},\Delta_{j}^{e}]\), as \[|S_{t}|=|S_{1}|+|S_{2}|+|S_{3}|=(|S_{1}|+|S_{3}|)+(|S_{2}|+|S_{3}|)-|S_{3}|\,,\] \(\tau_{t}[A_{t},\Delta_{1}^{i},\Delta_{1}^{e},\ldots,\Delta_{j}^{i},\Delta_{j}^ {e}]\) equals the minimum over all sums \[\tau_{t^{\prime}}[A_{t},\Delta_{1}^{\prime,i},\Delta_{1}^{\prime,e},\ldots, \Delta_{j}^{\prime,i},\Delta_{j}^{\prime,e}]+\tau_{t^{\prime\prime}}[A_{t}, \Delta_{1}^{\prime,i},\Delta_{1}^{\prime,\prime,e},\ldots,\Delta_{j}^{\prime,i},\Delta_{j}^{\prime,e}]-|A_{t}|\] where \(\Delta_{k}^{\prime,i}+\Delta_{k}^{\prime\prime,i}=\Delta_{k}^{i}+\widetilde{ \Delta}_{k}^{i},\Delta_{k}^{\prime,e}+\Delta_{k}^{\prime\prime,e}=\Delta_{k}^{ e}+\widetilde{\Delta}_{k}^{e}\), as we claimed it above. This result raises the question if we can find efficient algorithms for sparse graphs of bounded treewidth that might model at least some realistic network scenarios. We next discuss a new structural parameter for signed graphs, _signed neighborhood diversity_\(\mathsf{snd}\), which makes the problems tractable, parameterized by \(\mathsf{snd}\). Our construction is a non-trivial extension of a similar result for unsigned graphs shown in Gaikwad, Maity, and Tripathi (2021). **Definition 4**.: _(**Signed Neighborhood Diversity \(\mathsf{snd}\)**) Let \(G=(V,E^{+},E^{-})\) be a signed graph. Define the binary relation \(\equiv_{\mathsf{snd}}\) on \(V\) by \(v\equiv u\) if and only if \(N^{+}(v)\setminus\{u\}=N^{+}(u)\setminus\{v\}\) and \(N^{-}(v)\setminus\{u\}=N^{-}(u)\setminus\{v\}\). 
\(\mathsf{snd}(G)\) is the number of equivalence classes of \(\equiv_{\mathsf{snd}}\) on \(G\)._ We will exhibit the parameterized algorithm in the following, which is implemented by an ILP with \(2\cdot\mathsf{snd}(G)\) many variables. By (Cygan et al., 2015, Theorem 6.5), this ILP can be solved in \(\mathsf{FPT}\)-time. Let \(d\coloneqq\mathsf{snd}(G)\), and \(C_{1},\ldots,C_{d}\) denote the equivalence classes of \(\equiv_{\mathsf{snd}}\). Further, define \(N_{i}^{+}\coloneqq\{j\in\{1,\ldots,d\}\mid C_{j}\subseteq N^{+}(C_{i})\}\) and \(N_{i}^{-}\coloneqq\{j\in\{1,\ldots,d\}\mid C_{j}\subseteq N^{-}(C_{i})\}\). For the ILP, we need \(z_{1}^{+},\ldots,z_{d}^{+}\in\{0,1\}\) with \(z_{i}^{+}=1\) if and only if \(i\in N_{i}^{+}\). Similarly, define \(z_{1}^{-},\ldots,z_{d}^{-}\in\{0,1\}\). The ILP has the variables, \(x_{1},\ldots,x_{d},w_{1},\ldots,w_{d}\). Let \(i\in\{1,\ldots,d\}\). \(x_{i}\in\{0,\ldots,|C_{i}|\}\) represents the number of vertices in \(C_{i}\) which are in our solution. By Equation (3) and Equation (4), \(w_{i}\in\{0,1\}\) will only express if \(x_{i}\) is \(0\) or not. The ILP is given by: \[(\sum_{j\in N_{i}^{+}}x_{j})+1-z_{i}^{+} \geq(\sum_{j\in N_{i}^{-}}x_{j})-z_{i}^{-}-2\cdot(1-w_{i})\cdot|V|\] \[\forall i \in\{1,\ldots,d\} \tag{1}\] \[(\sum_{j\in N_{i}^{+}}x_{j})+1-z_{i}^{+} \geq(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j})-2\cdot(1-w_{i})\cdot|V|\] \[\forall i \in\{1,\ldots,d\}\] (2) \[x_{i} \leq|C_{i}|\cdot w_{i}\quad\forall i\in\{1,\ldots,d\}\] (3) \[w_{i} \leq x_{i}\quad\quad\quad\forall i\in\{1,\ldots,d\}. \tag{4}\] **Lemma 31**.: _There exists a defensive alliance \(S\subseteq V\) on \(G\) if and only if the ILP has a solution. If there exists such a solution, then \(|S|=\sum_{i=1}^{d}x_{i}\)._ Proof.: Let \(S\) be a defensive alliance on \(G\). Then define \(x_{i}^{\prime}\coloneqq|S\cap C_{i}|\), for each \(i\in\{1,\ldots,d\}\). Let \(i\in\{1,\ldots,d\}\). Then set \(w_{i}^{\prime}\) to \(1\) if and only if \(x_{i}^{\prime}\neq 0\). Otherwise, set \(w_{i}\) to \(0\). Clearly, \(w_{i}^{\prime}\in\{0,1\}\) and \(0\leq x_{i}^{\prime}\leq|C_{i}|\). Further, \(x_{i}^{\prime}\) and \(w_{i}^{\prime}\) fulfill the inequalities 3 and 4. Now we want to show that \(\deg_{S}^{+}(v)=\left(\sum_{j\in N_{i}^{+}}x_{j}^{\prime}\right)-z_{i}^{+}\) holds for each \(v\in S\cap C_{i}\). Since, for each \(j\in\{1,\ldots,d\}\) and \(v,u\in C_{j}\), \(N^{+}(v)\setminus\{u\}=N^{+}(u)\setminus\{v\}\), \(C_{k}\subseteq N^{+}(C_{i})\) or \(C_{k}\cap N^{+}(C_{i})=\emptyset\). Therefore, \(\deg_{S\setminus C_{i}}^{+}(v)=\sum_{j\in N_{i}^{+}\setminus\{i\}}x_{j}^{\prime}\) for each \(v\in C_{i}\). Let \(v\in S\cap C_{i}\). If \(i\in N_{i}^{+}\), then \(\deg_{S\cap C_{i}}^{+}(v)=|S\cap C_{i}|-1=x_{i}^{\prime}-z_{i}\). This implies \(\deg_{S}^{+}(v)=\left(\sum_{j\in N_{i}^{+}}x_{j}^{\prime}\right)-z_{i}^{+}\) for all \(v\in S\cap C_{i}\). Analogously, \(\deg_{S}^{-}(v)=\left(\sum_{j\in N_{i}^{-}}x_{j}^{\prime}\right)-z_{i}^{-}\) and \(\deg_{\overline{S}}(v)=\left(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j}^{\prime}\right)\) (as \(\deg_{C_{j}\setminus S}^{-}(v)=|C_{j}\setminus S|=|C_{j}|-x_{j}^{\prime}\) for each \(j\in\{1,\ldots,d\}\)) holds for each \(v\in S\cap C_{i}\). 
If \(w_{i}^{\prime}=0\), then we observe \[(\sum_{j\in N_{i}^{+}}x_{j}^{\prime})+1-z_{i}^{+}\geq 0>|V|-2|V|\geq(\sum_{j\in N_{i}^{-}}x_{j}^{\prime})-z_{i}^{-}-2\cdot(1-w_{i}^{\prime})\cdot|V|.\] Analogously, \(\left(\sum_{j\in N_{i}^{+}}x_{j}^{\prime}\right)+1-z_{i}^{+}>\left(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j}^{\prime}\right)-2\cdot(1-w_{i}^{\prime})\cdot|V|\) holds. For \(w_{i}^{\prime}=1\), there exists a \(v\in S\cap C_{i}\). Then \[(\sum_{j\in N_{i}^{+}}x_{j}^{\prime})+1-z_{i}^{+}=\deg_{S}^{+}(v)+1\geq\deg_{S}^{-}(v)=(\sum_{j\in N_{i}^{-}}x_{j}^{\prime})-z_{i}^{-}\geq(\sum_{j\in N_{i}^{-}}x_{j}^{\prime})-z_{i}^{-}-2\cdot(1-w_{i}^{\prime})\cdot|V|.\] Further, \[(\sum_{j\in N_{i}^{+}}x_{j}^{\prime})+1-z_{i}^{+}=\deg_{S}^{+}(v)+1\geq\deg_{\overline{S}}^{-}(v)=(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j}^{\prime})\geq(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j}^{\prime})-2\cdot(1-w_{i}^{\prime})\cdot|V|.\] Hence \(x_{1}^{\prime},\ldots,x_{d}^{\prime},w_{1}^{\prime},\ldots,w_{d}^{\prime}\) is a solution for the ILP. Further, \[|S|=\sum_{j=1}^{d}|S\cap C_{j}|=\sum_{j=1}^{d}x_{j}^{\prime}\,.\] Now assume there exists a solution \(x_{1}^{\prime},\ldots,x_{d}^{\prime},w_{1}^{\prime},\ldots,w_{d}^{\prime}\) of the ILP. Let \(i\in\{1,\ldots,d\}\). If \(x_{i}^{\prime}=0\), then \(0\leq w_{i}^{\prime}\leq x_{i}^{\prime}\leq 0\) (see inequality (4)) implies \(w_{i}^{\prime}=0\). For \(x_{i}^{\prime}\neq 0\), \(0<x_{i}^{\prime}\leq|C_{i}|\cdot w_{i}^{\prime}\) implies \(w_{i}^{\prime}=1\). Choose a set \(S\) such that \(x_{j}^{\prime}=|S\cap C_{j}|\) for \(j\in\{1,\ldots,d\}\). This can be done as \(0\leq x_{j}^{\prime}\leq|C_{j}|\) and \(C_{1},\ldots,C_{d}\) is a partition. As above, \(\deg_{S}^{+}(v)=\left(\sum_{j\in N_{i}^{+}}x_{j}^{\prime}\right)-z_{i}^{+}\), \(\deg_{S}^{-}(v)=\left(\sum_{j\in N_{i}^{-}}x_{j}^{\prime}\right)-z_{i}^{-}\) and \(\deg_{\overline{S}}^{-}(v)=\left(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j}^{\prime}\right)\) hold for all \(i\in\{1,\ldots,d\}\) and \(v\in C_{i}\cap S\). Let \(i\in\{1,\ldots,d\}\) and \(v\in S\cap C_{i}\). This implies \(x_{i}^{\prime}>0\) and \(w_{i}^{\prime}=1\). Therefore, \[\deg_{S}^{+}(v)+1=(\sum_{j\in N_{i}^{+}}x_{j}^{\prime})-z_{i}^{+}+1\geq(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j}^{\prime})-2\cdot(1-w_{i}^{\prime})\cdot|V|=(\sum_{j\in N_{i}^{-}}|C_{j}|-x_{j}^{\prime})=\deg_{\overline{S}}^{-}(v)\] and \[\deg_{S}^{+}(v)+1=(\sum_{j\in N_{i}^{+}}x_{j}^{\prime})-z_{i}^{+}+1\geq(\sum_{j\in N_{i}^{-}}x_{j}^{\prime})-z_{i}^{-}-2\cdot(1-w_{i}^{\prime})\cdot|V|=(\sum_{j\in N_{i}^{-}}x_{j}^{\prime})-z_{i}^{-}=\deg_{S}^{-}(v).\] Therefore, \(S\) is a defensive alliance. **Theorem 32**.: _snd-Minimum Defensive Alliance and snd-Defensive Alliability are in FPT._ As a side note, observe that the ILP formulation above also contains, as a sub-system of equations, a way to express (Min)DefAll as an ILP, which could be interesting for solving such problems in practice. We were not able to find a similar parameterized algorithm when considering the parameterization by the neighborhood diversity of the underlying unsigned graph. This is also due to the fact that we do not know if Minimum Defensive Alliance is still hard or possibly polynomial-time solvable if the underlying graph is a clique. Therefore, it is natural to consider weaker parameterizations of the underlying unsigned graph. This discussion makes the following result interesting.
**Theorem 33**.: DefAll _and_ MinDefAll _can be solved in FPT-time when parameterized by the vertex cover number of the underlying unsigned graph._ Proof.: Let \(\mathsf{vc}\) be the vertex cover number of the underlying graph of \(G\) and let \(C\) be a minimum vertex cover of the underlying graph. Notice that we can compute such a set \(C\) in FPT-time; the currently best algorithm is described in Chen, Kanj, and Xia (2010). Then, \(I\coloneqq V\setminus C\) is an independent set, and for each vertex in \(I\), all its neighbors are in \(C\). Distinguishing the neighbors of a vertex by the sign of the connecting edge, each vertex of \(I\) relates to each vertex of \(C\) in one of three ways (positive neighbor, negative neighbor, or non-neighbor), so there are at most \(3^{\mathsf{vc}}\) different signed neighborhoods for the vertices of \(I\). So the signed neighborhood diversity of \(G\) is bounded by \(3^{\mathsf{vc}}+\mathsf{vc}\). From Theorem 32, the claim follows. ## Discussion In this paper, we have introduced the notion of defensive alliances in signed networks and provided several algorithmic results for different problem settings. We conclude by suggesting further meaningful variants of problems concerning defensive alliances, motivated from a practical perspective. * In our model, we considered negative relationships within an alliance in the same way as negative relationships outside, but with a different intuition and reasoning behind them: negative relations outside the alliance are meant to be "enemies", while negative relations inside should stay neutral, but we do not like to see too many of them, also for psychological reasons; or, if there are some issues on which the alliance should agree, then it is also bad if one partner in the alliance has too many negative relations inside. But one could think of treating outside and inside negative relationships differently, possibly also in the sense of the next item. * In the literature on alliances, a 'parameterized variant' was also discussed, requiring that the difference between (positive) relations inside and (negative) relations outside is at least \(r\) for some fixed \(r\). We only discussed the case \(r=-1\) in this paper. The background is that, according to military history, an attacker should be really stronger than a defender to have a chance of victory. We did not consider this type of parameterization at all so far. * As counting the necessary flips in Defensive Alliance Building means counting costs, it would make sense to introduce costs on the edges: some parties might be harder to persuade to become friends than others. * Likewise, in Minimum Defensive Alliance, it would make sense to put weights on the vertices, modelling different strengths of the involved parties. For instance, not all members of NATO can be considered to be equally powerful. * Finally, one may argue that it would be more justifiable not to treat "internal" and "external" enemies alike, as we do in our proposed definition of a defensive alliance \(S\). For instance, one could introduce a scaling factor \(\alpha\in[0,1]\) and require that \(\deg_{S}^{+}(v)+1\geq\alpha\cdot\deg_{S}^{-}(v)\) for all \(v\in S\), so that \(\alpha=1\) would correspond to the (first) condition discussed in this paper, while \(\alpha=0\) means that "internal enemies" are completely ignored, as they are supposed to stay neutral in case of a conflict, i.e., only the friends within an alliance count; a small sketch of this scaled condition follows after this list. Clearly, setting \(\alpha=0\) removes the hardness of Defensive Alliability.
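To make the scaled condition of the last item concrete, here is a minimal Python sketch; the signed-graph representation (two adjacency maps `pos` and `neg`) and all names are illustrative assumptions of ours, not from the paper.

```python
# Minimal sketch (illustration only): checking the alpha-scaled defensive
# alliance conditions. A signed graph is represented by two adjacency maps,
# pos and neg, mapping each vertex to the set of its positive / negative
# neighbors.

def is_defensive_alliance(pos, neg, S, alpha=1.0):
    """Return True iff S satisfies deg_S^+(v) + 1 >= alpha * deg_S^-(v)
    and deg_S^+(v) + 1 >= deg_{V\\S}^-(v) for every v in S."""
    S = set(S)
    for v in S:
        deg_pos_in = len(pos[v] & S)   # positive friends inside S
        deg_neg_in = len(neg[v] & S)   # negative relations inside S
        deg_neg_out = len(neg[v] - S)  # negative relations outside S
        if deg_pos_in + 1 < alpha * deg_neg_in:
            return False
        if deg_pos_in + 1 < deg_neg_out:
            return False
    return True

# Example: a triangle with one negative edge (1-3).
pos = {1: {2}, 2: {1}, 3: set()}
neg = {1: {3}, 2: set(), 3: {1}}
print(is_defensive_alliance(pos, neg, {1, 2}))        # True
print(is_defensive_alliance(pos, neg, {1, 2}, 0.0))   # True: internal enemies ignored
```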
On a more technical side, a very interesting question is the complexity status of Defensive Alliability or of Minimum Defensive Alliance if the underlying graph is _complete_. Recall that for the well-known question of Correlation Clustering, NP-hardness is preserved under this restriction. This question is related to asking if (Min)DefAll can be solved in FPT-time when parameterized by the neighborhood diversity of the underlying unsigned graph, or even, more restrictively, by the _distance to clique_ (which denotes the vertex cover number of the complement graph). To round up the picture of our paper, it would also be nice to know if Defensive Alliability is in FPT when parameterized by treewidth or if MinDefAll is in FPT when parameterized by the combined parameter treewidth plus solution size. We pose both as open questions here.
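As the remark after Lemma 31 suggests, the ILP of Theorem 32 can also be used to solve MinDefAll in practice. Below is a hedged sketch using PuLP; the class computation and the sets \(N_i^{\pm}\), \(z_i^{\pm}\) are assumed to be precomputed, the non-emptiness constraint is made explicit by us, and all helper names are ours, not the paper's.

```python
# Sketch (assumptions flagged): the ILP of Lemma 31 solved with PuLP and the
# bundled CBC solver. C[i] lists the vertices of equivalence class C_{i+1};
# Npos[i]/Nneg[i] are the index sets N_i^+/N_i^-; zpos[i]/zneg[i] in {0,1}
# say whether class i is a positive/negative neighbor of itself.
import pulp

def min_defensive_alliance(C, Npos, Nneg, zpos, zneg):
    d = len(C)
    V = sum(len(c) for c in C)
    prob = pulp.LpProblem("MinDefAll", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", 0, len(C[i]), cat="Integer") for i in range(d)]
    w = [pulp.LpVariable(f"w{i}", cat="Binary") for i in range(d)]
    prob += pulp.lpSum(x)       # objective: alliance size
    prob += pulp.lpSum(x) >= 1  # non-empty alliance (made explicit here)
    for i in range(d):
        slack = 2 * V * (1 - w[i])  # deactivates (1)-(2) when w_i = 0
        pos_in = pulp.lpSum(x[j] for j in Npos[i]) + 1 - zpos[i]
        prob += pos_in >= pulp.lpSum(x[j] for j in Nneg[i]) - zneg[i] - slack    # (1)
        prob += pos_in >= pulp.lpSum(len(C[j]) - x[j] for j in Nneg[i]) - slack  # (2)
        prob += x[i] <= len(C[i]) * w[i]                                         # (3)
        prob += w[i] <= x[i]                                                     # (4)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if prob.status != pulp.LpStatusOptimal:
        return None
    return [int(v.value()) for v in x]  # how many vertices to take per class
```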
2308.00181
Study of the azimuthal asymmetry in heavy ion collisions combining initial state momentum orientation and final state collective effects
In the present work we investigate the source of azimuthal asymmetry for nuclear collision using a model that contemplates particles produced in the initial hard collisions and the collective effects described by a Blast-Wave like expansion. The latter is described by the relaxation time approximation of the Boltzmann transport equation. The parameters regarding collective flow and asymmetry are fitted by the experimental data from $p_T$ spectrum and $v_2$ for PbPb and XeXe collisions at different centrality classes. As a by-product the ratio of final elliptic flow with the initial anisotropy, $v_2/\epsilon_2$, and the average transverse momentum are predicted.
Lucas Soster Moriggi, Érison dos Santos Rocha, Magno Valério Trindade Machado
2023-07-31T22:33:48Z
http://arxiv.org/abs/2308.00181v2
Study of the azimuthal asymmetry in heavy ion collisions combining initial state momentum orientation and final state collective effects ###### Abstract In the present work we investigate the source of azimuthal asymmetry in nuclear collisions using a model that contemplates particles produced in the initial hard collisions and the collective effects described by a Blast-Wave like expansion. The latter is described by the relaxation time approximation of the Boltzmann transport equation. The parameters regarding collective flow and asymmetry are fitted to the experimental data from the \(p_{T}\) spectrum and \(v_{2}\) for PbPb and XeXe collisions at different centrality classes. As a by-product, the ratio of the final elliptic flow to the initial anisotropy, \(v_{2}/\varepsilon_{2}\), and the average transverse momentum are predicted. ## I Introduction The available relativistic heavy ion collision experiments allow us to investigate the underlying dynamics of quarks and gluons at very high energy density [1]. The thermalized deconfined parton system, i.e. the Quark-Gluon Plasma (QGP) [2; 3], generated in these reactions has important signatures like collective flow, parton energy loss, quarkonium production suppression and many others [4; 5]. In particular, the elliptic flow [6; 7; 8; 9] (\(v_{2}\), the second harmonic coefficient of the azimuthal Fourier decomposition of the momentum distribution) measures the non-uniformity of the flow in all directions as viewed along the beam-line [10; 11; 12; 13]. Namely, it characterizes the azimuthal momentum space anisotropy of particle emission from non-central heavy-ion collisions in the plane transverse to the beam direction. As the anisotropy is largest in the earliest stages of the system evolution, the \(v_{2}\) coefficient is quite sensitive to the early stages of the collision. Thus, elliptic flow encodes the residual asymmetry of the particle density in momentum space with respect to the reaction plane after hadronization. The main motivation of the present work is to address the simultaneous description of the nuclear modification factor, \(R_{AA}\), and the elliptic flow in a consistent way. It is already known that parton energy loss models that have been tuned to describe \(R_{AA}\) in identified hadron production underestimate the elliptic flow \(v_{2}\) at intermediate transverse momentum [14; 15; 16; 17; 18; 19; 20]. This has been named in the literature the \(R_{AA}\otimes v_{2}\) puzzle. Both observables are nicely described by the QGP's hydrodynamic expansion as a strongly coupled fluid at low transverse momentum (\(p_{T}\lesssim 2\) GeV) [11; 21; 22; 23; 24; 25; 26; 27; 28; 29]. At high momentum, typically \(p_{T}\gtrsim 10\) GeV, they can be correctly described in terms of jet quenching due to hard parton energy loss during the propagation across the hot QGP medium [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. On the other hand, it is a challenge to describe \(R_{AA}\) and \(v_{2}\) consistently in the intermediate transverse momentum region, characterized by the confluence of soft and hard physics. In our case, the parton/gluon saturation physics embedded into the theoretical approach allows us to use weak coupling methods in this interface region. Recently, studies using a combination of state-of-the-art transport models, hadronization and hydrodynamic evolution have been carried out. For instance, in Ref. [47] both quark coalescence and a hadronic afterburner are implemented in the COLBT-hydro model.
That investigation combines event-by-event hydrodynamics, jet quenching as well as a hadron cascade and simultaneously describes \(R_{AA}\) and \(v_{2}\) in the full range of transverse momentum. Similar approaches have been proposed, see [48; 49], which will shed light on further investigations of the topic. In the present study the question is to what extent the elliptic flow can be determined by initial state effects and how these effects can be disentangled from the final state ones. Concerning identified particle spectra in transverse momentum (\(p_{T}\)), the hadron production can be described within the \(k_{T}\)-factorization formalism (including the primordial parton transverse momenta) and one considers that the cold nuclear matter effects are generated in the hard interaction of the nuclei at the initial stages of the corresponding collision. On the other hand, afterwards those systems undergo a hydrodynamic evolution to freeze-out which alters the corresponding \(p_{T}\)-spectra. In the context of the relaxation time approximation (RTA) of the Boltzmann transport equation (BTE) [50] the \(p_{T}\) spectrum can be described by performing a temporal separation between hadrons produced in the initial hard collision and those produced in the equilibrium situation [51; 52; 53]. In Refs. [54; 55] we considered this approach to describe the spectra of light hadrons in lead-lead (PbPb) collisions at the Large Hadron Collider (LHC) and at the Relativistic Heavy Ion Collider (RHIC) as well. In particular, the approach has been used to describe particle production in small systems at RHIC [55]. The nuclear modification factors for pAl, pAu, dAu and HeAu have been successfully reproduced as a function of \(p_{T}\) for different centralities. An important result was that the thermal parametrization can be considerably modified by taking into consideration the nuclear effects embedded in the gluon distribution function of the target. In this work we investigate the possible sources of azimuthal asymmetry in relativistic heavy ion collisions using a theoretical model that contemplates particles produced in the initial hard collisions and the collective effects described by a Blast-Wave like expansion. The interface between the hard process described within the QCD \(k_{T}\)-factorization formalism and the final state collective effects is studied in detail. In the hard part of the spectrum a contribution to the elliptic flow is included, associated with the azimuthal orientation of the momentum of the produced particles in relation to the reaction plane. This effect is introduced in the QCD color dipole amplitude from which the unintegrated gluon distribution is obtained. The charged hadron production cross section in proton-proton collisions has been described successfully using this approach in Ref. [56]. In addition, azimuthal anisotropy is also incorporated in the BTE within the RTA approximation. There, the transverse rapidity variable \(\rho\) has been modified in order to include an anisotropy dependence in the flow. In this last context we follow closely Refs. [57; 58; 59; 60]. The nuclear modification factor \(R_{AA}\) and \(v_{2}\) are described simultaneously for PbPb and XeXe collisions at the LHC. This paper is organized as follows. In Sec. II the theoretical model for charged hadron production is presented in the context of the \(k_{T}\)-factorization approach including azimuthal asymmetry and nuclear shadowing.
This initial distribution is then embedded in the formalism of the hydrodynamical blast wave model (BTE-RTA approach). Expressions for both the nuclear modification factor, \(R_{AA}\), and the elliptic flow, \(v_{2}\), are provided. In Sec. III the comparison of the results to the LHC data is done and the interplay between initial and collective azimuthal asymmetries is investigated. Finally, we summarize our conclusions in the last section, Sec. IV. ## II Theoretical framework and main predictions We shall study the anisotropic flow, which is one of the key observables to analyse the transport properties of the QGP. It is determined by the flow harmonic coefficients \(v_{n}\) obtained from the Fourier decomposition of the azimuthal distribution of the produced particles in the following way [6; 7; 8; 9]: \[E\frac{d^{3}N}{d^{3}\vec{p}}=\frac{d^{2}N}{2\pi p_{T}dp_{T}dy}\left(1+2\sum_{n=1}^{\infty}v_{n}\mathrm{cos}(n\phi)\right), \tag{1}\] where \(\phi\) is the azimuthal angle with respect to the reaction plane. Accordingly, \(E\) is the particle energy, with \(p_{T}\) and \(y\) being its transverse momentum value and rapidity. The corresponding anisotropy for an identified hadron \(h\) in an inclusive nucleus-nucleus collision, \(AA\to hX\), is characterized by the set of Fourier flow coefficients \(v_{n}\), defined as: \[v_{n}(p_{T})\equiv\frac{\int_{0}^{2\pi}d\phi\,\mathrm{cos}(n\phi)\frac{d^{3}\sigma(AA\to hX)}{dyd^{2}\vec{p}_{T}}}{\int_{0}^{2\pi}d\phi\frac{d^{3}\sigma(AA\to hX)}{dyd^{2}\vec{p}_{T}}}, \tag{2}\] with \(y\) and \(\vec{p}_{T}\) being the rapidity and transverse momentum vector of the produced hadron, respectively. The integration is done over the azimuthal angle \(\phi\) of the transverse momentum. Here, the RTA approximation of the BTE will be considered, which is an effective model where the collisional term has the form \(\mathrm{C}[f]=-(f-f_{eq})/t_{r}\). The Boltzmann local equilibrium distribution, \(f_{eq}\), for the distribution of particles \(f\) is typified by a freeze-out temperature \(T_{eq}\), and \(t_{r}\) is the relaxation time. The latter corresponds to the time for a non-equilibrium system to reach equilibrium, whereas \(t_{f}\) is the freeze-out time parameter. Given \(\mathrm{C}[f]\), the BTE is then solved with the boundary conditions \(f(t=0)=f_{in}\) and \(f(t=t_{f})=f_{fin}\). Therefore, the final distribution \(f_{fin}\) is evaluated as a function of the ratio \(t_{f}/t_{r}\) [52], \[f_{fin}=f_{eq}+(f_{in}-f_{eq})\,e^{-\frac{t_{f}}{t_{r}}}, \tag{3}\] with \(f_{fin}\to f_{eq}\) if the system is given enough time, i.e. \(t_{f}\gg t_{r}\). Let us now focus on the determination of the initial distribution of particles in the context of the high energy factorization approach. In this case, the inclusive cross section for producing identified particles is given in terms of the convolution of the unintegrated gluon distribution (UGD) of both target and projectile and the gluon-gluon sub-process cross section. The distribution \(f_{in}\) is proportional to the production multiplicity, \(d^{3}N(AA\to hX)/dyd^{2}\vec{p}_{T}\). The UGDs containing initial state nuclear effects can be computed within the QCD color dipole framework. The main advantage is that the input dipole-target scattering amplitude can be constrained from experimental data in the small-\(x\) regime.
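For concreteness, Eq. (3) amounts to a pointwise interpolation between the two distributions; a minimal numerical sketch (ours, for illustration only) reads:

```python
# A one-line reading of Eq. (3) (illustration, not the fit code): the final
# distribution interpolates pointwise between the hard initial spectrum and
# the equilibrium one, with weight set by t_f / t_r.
import numpy as np

def f_fin(f_in, f_eq, tf_over_tr):
    return f_eq + (f_in - f_eq) * np.exp(-tf_over_tr)

# For t_f/t_r = 2, about 86% of the final spectrum is the equilibrium piece.
print(1.0 - np.exp(-2.0))  # ~0.865
```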
The dipole-nuclei \(S\)-matrix in transverse coordinate space, at dipole size \(r\) and for a given impact parameter \(b\), can be computed from the dipole-proton cross section, \(\sigma_{p}(x,r)\), considering the Glauber-Mueller multiple-scattering formalism [61; 62], \[S_{A}(x,r,b)=e^{-\frac{1}{2}T_{A}(b)\sigma_{p}(x,r)}, \tag{4}\] where \(T_{A}(b)\) is the nuclear thickness function and \(b\) is the distance to the nuclei center. The nuclear \(S\)-matrix is a function of the gluon longitudinal momentum fraction, \(x\), the transverse size of the QCD color dipole \(r\) and \(b\). We are considering the proton as a homogeneous target in the impact parameter, with transverse area \(\pi R_{p}^{2}=\sigma_{0}/2\), so that \(\sigma_{p}(x,r)=2\int d^{2}b\,(1-S_{p}(x,r,b))=\sigma_{0}(1-S_{p}(x,r))\). The last quantity is the dipole-proton cross section. The scattering matrix for the proton case, \(S_{p}\), can be obtained from the Fourier transform of the UGD as a function of transverse momentum. In this work we consider the UGD parametrization proposed by Moriggi, Peccini, and Machado, fitted to the \(p_{T}\) spectrum at different energies in proton-proton collisions [56]. The corresponding analytical dipole-proton cross section is given by, \[\sigma_{p}(\tau_{r})=\sigma_{0}\left(1-\frac{2(\frac{\tau_{r}}{2})^{\xi}K_{\xi}(\tau_{r})}{\Gamma(\xi)}\right), \tag{5}\] where \(\tau_{r}=Q_{s}(x)r\) is the scaling variable as a function of \(r\) and \(K_{\xi}(\tau_{r})\) is the Bessel function of the second kind. The saturation scale and the power parameter \(\xi\) are fitted considering the geometric scaling in the momentum spectrum, \(\tau_{Q}=Q^{2}/Q_{s}^{2}(x)\) with \(Q^{2}=p_{T}^{2}\), \[\xi = 1+a\tau_{Q}^{b}, \tag{6}\] \[Q_{s}^{2}(x) = \left(\frac{x_{0}}{x}\right)^{0.33}. \tag{7}\] The parameters \(a\), \(b\), and \(x_{0}\) are given by Moriggi et al. in Ref. [56]. The main nuclear effect in Eq. (4) is to introduce shadowing and the Cronin peak [54]. For \(r\to 0\) the QCD dipole-nucleus cross section behaves as \(\sigma_{A}(x,r)\approx A\sigma_{p}(x,r)\), which implies a scaling with the number of binary collisions \(N_{coll}\) for \(p_{T}\gg Q_{s}(x)\) and consequently a nuclear modification factor \(R_{AA}\to 1\). An azimuthal asymmetry can be generated if there is a dipole orientation, described by the angle \(\phi_{rb}\) between the dipole transverse size \(\vec{r}\) and the impact parameter \(\vec{b}\). Namely, the scattering amplitude will depend upon the angle between \(\vec{r}\) and \(\vec{b}\), which should introduce a dependence on the angle \(\phi_{p}\) between \(\vec{p}_{T}\) and \(\vec{B}\) (the impact parameter of the nucleus-nucleus collision). The evaluation of the amplitude taking into account the dipole orientation is not an easy task (see Refs. [65; 66; 67; 68; 69] for more detailed theoretical analyses). The numerical solution of the Balitsky-Kovchegov (BK) equation including dipole orientation has been studied in Refs. [70; 71]. In what follows we consider a way to introduce such an asymmetry at a more fundamental level, as proposed by Iancu and Rezaeian [65]. It was shown that, by taking into account a non-homogeneous distribution of color charge sources in the target in the context of the Color Glass Condensate (CGC) formalism, one produces \(S_{A}=\exp[-\mathcal{N}_{2g}]\) with \(\mathcal{N}_{2g}(r,b,\phi_{rb})=\mathcal{N}_{0}(r,b)+\mathcal{N}_{\theta}(r,b)\cos(2\phi_{rb})\).
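Stepping back to the isotropic ingredients, Eqs. (4)-(7) are straightforward to evaluate numerically. The sketch below is ours: the fit parameters \(a\), \(b\), \(x_{0}\) and \(\sigma_{0}\) of Ref. [56] are replaced by placeholders, units are left schematic, and evaluating \(\xi\) of Eq. (6) at the position-space scaling variable is our simplifying assumption.

```python
# Minimal sketch (placeholder parameters, not the fit of Ref. [56]):
# dipole-proton cross section of Eq. (5) and Glauber-Mueller S-matrix, Eq. (4).
import numpy as np
from scipy.special import kv, gamma

a, b, x0, sigma0 = 0.1, 0.3, 1e-4, 27.0  # placeholders; units schematic

def Qs2(x):
    return (x0 / x) ** 0.33                  # saturation scale squared, Eq. (7)

def sigma_p(x, r):
    """Dipole-proton cross section, Eq. (5), at dipole size r."""
    tau_r = np.sqrt(Qs2(x)) * r
    xi = 1.0 + a * tau_r ** b                # Eq. (6), evaluated at tau_r (assumption)
    return sigma0 * (1.0 - 2.0 * (tau_r / 2.0) ** xi * kv(xi, tau_r) / gamma(xi))

def S_A(x, r, TA_b):
    """Glauber-Mueller dipole-nucleus S-matrix, Eq. (4); TA_b = T_A(b)."""
    return np.exp(-0.5 * TA_b * sigma_p(x, r))

print(S_A(1e-3, 1.0, 2.0))  # example call with schematic inputs
```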
Expressions are provided in [65] for the auxiliary functions \(\mathcal{N}_{0,\theta}\) for the case of a Gaussian distribution. In this paper, we choose a simpler way to incorporate this effect from a phenomenological perspective, by an angular modulation given by the substitution \[r^{2}\to r^{2}\left[1+a_{r}\cos(2\phi_{rb})\right]. \tag{8}\] The parameter \(a_{r}\) will be fitted to the experimental data of each centrality class, and it measures the amount of asymmetry needed to describe \(v_{2}\). Such a modification should give a non-zero \(v_{2}\) coefficient in the momentum spectrum of produced gluons. The cross section for inclusive gluon production with transverse momentum \(p_{T}\) in the \(k_{T}\)-factorization formalism can be described with the dipole scattering matrix \(S_{A}(x,r,b)\) in position space, and the corresponding initial particle distribution is given by: \[f_{in}(\phi_{p})=\frac{1}{p_{T}^{2}}\frac{2C_{F}}{(2\pi)^{4}\alpha_{s}}\int d^{2}b\,d^{2}r\,e^{i\vec{p}_{T}\cdot\vec{r}}\nabla_{r}^{2}S_{A}(x_{1},r,b)\nabla_{r}^{2}S_{A}(x_{2},r,b^{\prime}), \tag{9}\] such that \(\vec{b}^{\prime}=\vec{b}-\vec{B}\) and \(x_{1,2}=p_{T}e^{\pm y}/\sqrt{s}\) are the momentum fractions carried by the gluons of each nucleus, and \(\nabla_{r}^{2}\) is the Laplacian with respect to the \(r\) coordinate. We can decompose the \(p_{T}\) distribution into its harmonic components using the identity \[e^{ip_{T}r\cos(\phi_{p}-\phi_{r})}=\sum_{n=-\infty}^{n=\infty}i^{n}J_{n}(p_{T}r)e^{in(\phi_{p}-\phi_{r})}, \tag{10}\] where \(J_{n}(x)\) is the Bessel function of first kind of order \(n\) and \(\phi_{r}\) is the angle between \(\vec{r}\) and \(\vec{B}\). The second harmonic of the distribution is given by \[\int_{0}^{2\pi}f_{in}(\phi_{p})\cos(2\phi_{p})d\phi_{p}=\frac{-2\pi}{p_{T}^{2}}\int b\,db\,r\,dr\,d\phi_{b}\,d\phi_{r}\,\mathcal{I}_{2},\qquad\mathcal{I}_{2}=J_{2}(p_{T}r)\cos(2\phi_{r})\nabla_{r}^{2}S_{A}(x_{1},r,b)\nabla_{r}^{2}S_{A}(x_{2},r,b^{\prime}), \tag{11}\] where \(\phi_{b}\) is the angle between \(\vec{b}\) and \(\vec{B}\). In addition, the integral over \(\phi_{p}\) is expressed as, \[\int_{0}^{2\pi}f_{in}(\phi_{p})d\phi_{p}=\frac{2\pi}{p_{T}^{2}}\int b\,db\,r\,dr\,d\phi_{b}\,d\phi_{r}\,\mathcal{I}_{0},\qquad\mathcal{I}_{0}=J_{0}(p_{T}r)\nabla_{r}^{2}S_{A}(x_{1},r,b)\nabla_{r}^{2}S_{A}(x_{2},r,b^{\prime}). \tag{12}\] The decay of a gluon jet with mass \(m_{j}\) models the production of a hadron with transverse momentum \(p_{T_{h}}\), where the hadron carries the average momentum fraction \(\left\langle z\right\rangle\). The initial produced hadron spectrum can be written as \[f_{in}(p_{T_{h}},\phi_{p})=\frac{K}{\left\langle z\right\rangle^{2}}f_{in}\left(p_{T}^{2}=p_{T_{h}}^{2}/\left\langle z\right\rangle^{2}+m_{j}^{2},\phi_{p}\right). \tag{13}\] Here the parameters \(K\), \(\left\langle z\right\rangle\), and \(m_{j}\) were determined for pion production in \(pp\) collisions by Moriggi, Peccini, and Machado for different collision energies [56]. Using the Boltzmann-Gibbs-Blast-Wave (BGBW) model for produced hadrons in thermal equilibrium, we can model the collective flow and the subsequent hydrodynamic expansion [72]. This phenomenological model takes into account the main characteristics of the hydrodynamic evolution. Thus we consider a velocity profile \(\rho_{0}=\tanh^{-1}(\beta_{r})\) (with \(\beta_{r}=\beta_{s}\,\xi^{n}\)), determined by the surface expansion velocity \(\beta_{s}\).
Here, \(\beta_{r}\) is the radial flow with \(\xi=r/R_{0}\) being the ratio between the radial distance in the transverse plane and the fireball radius \(R_{0}\). A linear velocity profile is assumed, i.e. \(n=1\). The azimuthal asymmetry can be parameterized by a modulation in the velocity profile, \(\rho=\rho_{0}+\rho_{a}\cos(2\phi_{s})\), where \(\phi_{s}\) is the azimuthal angle with respect to the reaction plane, as proposed by Huovinen et al [59]. The quantity \(\rho_{a}\) is the anisotropy parameter of the flow. We observe the need to incorporate an extra parameter \(s_{2}\) to describe the small \(p_{T_{h}}\) region, whose purpose is to introduce an azimuthal variation of the source density, as described by the STAR Collaboration at RHIC [73]. Hence, the equilibrium distribution will be given by \[f_{eq}(\phi_{p})\propto m_{T_{h}}\int_{0}^{R_{0}}rdr\int_{0}^{2\pi}d\phi_{s}K_{1}\left(\frac{m_{T_{h}}}{T_{eq}}\cosh(\rho(\phi_{s}))\right)\exp\left(\frac{p_{T_{h}}}{T_{eq}}\sinh(\rho(\phi_{s}))\cos(\phi_{s}-\phi_{p})\right)\left[1+2s_{2}\cos(2\phi_{s})\right], \tag{14}\] where \(m_{T_{h}}=\sqrt{p_{T_{h}}^{2}+m_{h}^{2}}\) is the transverse mass of the produced hadron and \(K_{1}\) is the modified Bessel function of the second kind. The model above assumes that the elliptic flow of collective origin is generated by a blending of an azimuthal velocity modulation and a spatially anisotropic freeze-out hyper-surface. In the expression above a Bjorken correlation in rapidity, \(\eta=y\), is assumed, where \(\eta\) is the space-time rapidity. The equilibrium temperature \(T_{eq}\) (in units of GeV) and the dimensionless parameters \(\beta_{s}\), \(\rho_{a}\), \(s_{2}\) depend on the collision impact parameter, \(\vec{B}\), since the final momentum asymmetry must depend on the initial geometry of the nuclear overlap area. Such quantities are fitted to experimental data for each centrality class regarding \(v_{2}\) and \(\frac{dN_{AA}}{dp_{T_{h}}dy}\). As already mentioned, the final \(p_{T}\) spectrum can be obtained from the RTA-BTE approach [52], \[f_{fin}(\phi_{p})=f_{eq}(\phi_{p})+\left[f_{in}(\phi_{p})-f_{eq}(\phi_{p})\right]e^{-t_{f}/t_{r}}, \tag{15}\] where \(t_{r}\) and \(t_{f}\) are respectively the relaxation and freeze-out times. The initial hard distribution \(f_{in}\), given by Eqs. (9) and (13), evolves until it reaches the equilibrium distribution given by Eq. (14), where the ratio \(t_{f}/t_{r}\) is fitted for each centrality class. The second harmonic of \(f_{fin}(\phi_{p})\) gives the final elliptic flow coefficient \(v_{2}\), which can be written as, \[v_{2}=\frac{\int_{0}^{2\pi}f_{fin}(\phi_{p})\cos(2\phi_{p})d\phi_{p}}{\int_{0}^{2\pi}f_{fin}(\phi_{p})d\phi_{p}}. \tag{16}\] It is worth pointing out that the azimuthal asymmetry of the initial distribution has its origin in the dipole orientation, which depends on the impact parameter and generates a momentum-space asymmetry through the Fourier transform in Eq. (9). Meanwhile, in the equilibrium distribution given by Eq. (14), the asymmetry can arise from the geometry of the initial collision, even with a vanishing second harmonic of the initial distribution. An important point is that in our approach nuclear effects (nuclear shadowing) are already present in the nuclear UGD.
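As an illustration of Eqs. (14) and (16), the equilibrium component and its second harmonic can be evaluated by direct numerical integration. The sketch below is ours, with parameter values of the order of those fitted later in Table 1; overall normalizations drop out of the \(v_{2}\) ratio, and here \(v_{2}\) is computed for the equilibrium piece alone.

```python
# Numerical sketch (illustration, not the fit code): BGBW equilibrium
# distribution of Eq. (14), up to an overall constant, and its v2 projection.
import numpy as np
from scipy.special import k1

def f_eq(pT, phi_p, mh=0.14, Teq=0.1, beta_s=0.9, rho_a=0.17, s2=0.13, n=1):
    mT = np.sqrt(pT**2 + mh**2)
    xi = np.linspace(1e-3, 0.999, 100)                  # r / R_0
    phi_s = np.linspace(0.0, 2*np.pi, 120, endpoint=False)
    X, P = np.meshgrid(xi, phi_s, indexing="ij")
    rho = np.arctanh(beta_s * X**n) + rho_a * np.cos(2*P)
    integrand = (X * k1(mT/Teq*np.cosh(rho))
                 * np.exp(pT/Teq*np.sinh(rho)*np.cos(P - phi_p))
                 * (1 + 2*s2*np.cos(2*P)))
    return mT * integrand.sum()   # uniform-grid sum; constants cancel in v2

def v2_eq(pT, **kw):
    phi_p = np.linspace(0.0, 2*np.pi, 64, endpoint=False)
    f = np.array([f_eq(pT, p, **kw) for p in phi_p])
    return (f*np.cos(2*phi_p)).sum() / f.sum()

print(v2_eq(1.0))  # positive elliptic flow for rho_a, s2 > 0
```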
Figure 1: Multiplicity of \(\pi^{\pm}\) produced in PbPb and XeXe collisions at \(\sqrt{s}=5.02\) TeV and \(\sqrt{s}=5.44\) TeV, respectively, compared with experimental data from the ALICE collaboration [63; 64] for different centrality classes. Each curve has been multiplied by a factor of \(2^{i}\) for better display.
It was shown in Refs. [54; 55] that the nuclear shadowing effect changes significantly the spectrum at low \(p_{T}\) and consequently alters the fitted parameters of the equilibrium distribution. The eccentricity \(\varepsilon_{2}\) of the initial collision can be obtained by the integration of Eq. (13) with respect to the transverse momentum \(p_{T_{h}}\). We shall write this result as \[\varepsilon_{2}=\frac{\left\langle b_{y}^{2}-b_{x}^{2}\right\rangle}{\left\langle b_{y}^{2}+b_{x}^{2}\right\rangle}, \tag{17}\] where \(b_{x}\) is the component of \(\vec{b}\) along the \(\vec{B}\) direction. In the present analysis, we consider only the spectrum of produced pions, due to the fact that its distribution is well described up to \(p_{T_{h}}\lesssim 10\) GeV for LHC energies in \(pp\) collisions [56]. We need a good description of the region \(p_{T_{h}}\gtrsim 5\) GeV to find appropriate values of the yield, Eq. (18), from a suitable parameter fitting. In this region the thermal equilibrium distribution, given by Eq. (14), is very small. The opposite scenario results in an unrealistic increase of the \(T_{eq}\) and \(\beta_{s}\) values to fit the spectrum in the large \(p_{T_{h}}\) region. After these considerations, we can write down the hadron production in nuclear collisions as \[\frac{dN_{AA}}{p_{T_{h}}dp_{T_{h}}dyd\phi_{p}}=e^{-t_{f}/t_{r}}f_{in}(\phi_{p})+(1-e^{-t_{f}/t_{r}})f_{eq}(\phi_{p}), \tag{18}\] and the nuclear modification factor will be given by, \[R_{AA}=\frac{\frac{d^{3}N_{AA}}{dp^{3}}}{\left\langle T_{AA}\right\rangle\frac{d^{3}\sigma_{pp}}{dp^{3}}}, \tag{19}\] where \(\left\langle T_{AA}\right\rangle\) is the average value of the nuclear overlap function. ## III Results and discussion We now proceed to the analysis of the available LHC data for the nuclear modification factor and the elliptic flow by using the theoretical approach presented in the previous section. The \(p_{T_{h}}\) spectrum and the \(v_{2}\) coefficient were fitted in the interval \(0.1<p_{T_{h}}<8\) GeV to data of the ALICE collaboration [63; 74; 75; 64], regarding pion production in PbPb and XeXe collisions at \(\sqrt{s}=5.02\) TeV and \(\sqrt{s}=5.44\) TeV, respectively. Since the initial distribution given by Eq. (9) is not capable of generating enough asymmetry to describe \(v_{2}\) in very central collisions, we excluded the 0-5% centrality class from our analysis. The BGBW distribution parameters and the ratio \(t_{f}/t_{r}\) are shown in Table 1. The Woods-Saxon nuclear density was considered to compute the nuclear thickness function \(T_{A}(b)\), using parameters from De Vries et al [76]. It is worth mentioning that in the present analysis we did not consider the deformation of the Xe nucleus. This set of parameters is in good agreement with the expected behavior, where \(T_{eq}\) decreases with centrality whereas the quantities that define the azimuthal asymmetry, \(\rho_{a}\) and \(s_{2}\), increase with centrality.
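Schematically, Eqs. (18) and (19) combine as follows; this is an illustrative sketch of ours, where the spectrum functions are placeholders for the model ingredients defined above.

```python
# Sketch of Eqs. (18)-(19) (illustration only): the final hadron yield as the
# exp(-t_f/t_r)-weighted combination of hard and equilibrium pieces, and R_AA
# as its ratio to the <T_AA>-scaled pp cross section.
import numpy as np

def dN_AA(pT, f_in, f_eq, tf_over_tr):
    w = np.exp(-tf_over_tr)
    return w * f_in(pT) + (1.0 - w) * f_eq(pT)

def R_AA(pT, f_in, f_eq, tf_over_tr, TAA_avg, dsigma_pp):
    return dN_AA(pT, f_in, f_eq, tf_over_tr) / (TAA_avg * dsigma_pp(pT))
```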
Turning to the freeze-out parameters: for PbPb collisions, the average radial velocity \(\left\langle\beta_{r}\right\rangle=\left(\frac{2}{2+n}\right)\beta_{s}\) (\(\left\langle\beta_{r}\right\rangle=(2/3)\beta_{s}\) for \(n=1\)) decreases with centrality, reaching \(\left\langle\beta_{r}\right\rangle=0.336\pm 0.024\) in \(5-10\%\) central collisions, while the equilibrium temperature increases, going from \(T_{eq}=(0.091\pm 0.015)\) GeV to \(T_{eq}=(0.229\pm 0.084)\) GeV. As considered in a previous analysis [54], the radial velocity \(\beta_{s}\) is anti-correlated with \(T_{eq}\) and therefore diminishes with centrality. Although the values of \(\chi^{2}/dof\) show sizable deviations from unity for the PbPb case, the resulting parameters have physically consistent values. Namely, the transverse momentum spectra data cover larger values of \(p_{T_{h}}\) and help to constrain \(T_{eq}\) and \(\beta_{s}\), generating smaller errors in the evaluation of these parameters. It should be stressed that the value of the exponent of the expansion velocity profile, \(n\), is equal to unity in the present case. The situation is different in the fits of spectra with a blast-wave function done by the ALICE Collaboration in Refs. [63; 77], where \(n\) is about 0.74 in central collisions and increases up to 2.52 in more peripheral collisions. Here, a variation of \(n\) was not necessary to reproduce the large-\(p_{T}\) tail of the spectra, as the spectrum is not thermal over the full range of transverse momentum. In general, the centrality dependence of the thermal parameters \(T_{eq}\) and \(\beta_{s}\) appearing in Tab. 1 has the opposite behavior compared to the BGBW approaches [57; 58; 59]. The main reason is that the shadowing/anti-shadowing effect in the small-\(p_{T}\) region alters the corresponding particle distribution and modifies the parameter associated with the equilibrium temperature. Let us take for instance a Boltzmann distribution \(\sim\exp(-p_{T}/T)\): the temperature quantifies how dispersed the \(p_{T}\) spectrum is. On the other hand, the initial state nuclear effects also produce a broadening of that distribution.
\begin{table} \begin{tabular}{l|l l l l l l l} \hline \hline & centrality(\%) & \(T_{eq}\) (GeV) & \(\beta_{s}\) & \(\rho_{a}\) & \(s_{2}\) & \(t_{f}/t_{r}\) & \(\chi^{2}/dof\) \\ \hline PbPb & 05-10 & 0.229 & 0.504 & 0.0386 & 0.0590 & 2.05 & 0.990 \\ & 10-20 & 0.148 & 0.787 & 0.0678 & 0.0670 & 1.86 & 1.126 \\ & 20-30 & 0.122 & 0.856 & 0.103 & 0.0835 & 1.64 & 1.389 \\ & 30-40 & 0.116 & 0.867 & 0.128 & 0.110 & 1.38 & 1.488 \\ & 40-50 & 0.100 & 0.900 & 0.167 & 0.132 & 1.17 & 1.683 \\ & 50-60 & 0.090 & 0.911 & 0.202 & 0.164 & 0.89 & 1.122 \\ & 60-70 & 0.091 & 0.910 & 0.210 & 0.254 & 0.36 & 0.766 \\ \hline XeXe & 05-10 & 0.190 & 0.671 & 0.0351 & 0.057 & 2.03 & 0.556 \\ & 10-20 & 0.121 & 0.862 & 0.0782 & 0.065 & 1.65 & 0.232 \\ & 20-30 & 0.131 & 0.829 & 0.0941 & 0.106 & 1.29 & 0.167 \\ & 30-40 & 0.116 & 0.868 & 0.129 & 0.129 & 1.02 & 0.105 \\ & 40-50 & 0.125 & 0.843 & 0.113 & 0.187 & 0.68 & 0.250 \\ & 50-60 & 0.151 & 0.767 & 0.0771 & 0.324 & 0.31 & 0.098 \\ \hline \hline \end{tabular} \end{table} Table 1: Kinetic freeze-out parameters in each centrality class for the production of charged pions in PbPb and XeXe collisions at \(\sqrt{s}=5.02\) TeV and \(\sqrt{s}=5.44\) TeV at the LHC, respectively. The parameters \(\beta_{s}\), \(\rho_{a}\), \(s_{2}\) and \(t_{f}/t_{r}\) are dimensionless. Only central values are shown; the errors are typically of 20%.
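As a quick consistency check of the quoted average radial velocity (an illustration of ours, using the Table 1 entry):

```python
# <beta_r> = 2/(2+n) * beta_s with n = 1 and the fitted beta_s of the
# 5-10% PbPb class (Table 1) reproduces the quoted central value 0.336.
n, beta_s = 1, 0.504
print(2 / (2 + n) * beta_s)  # -> 0.336
```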
In more peripheral collisions the hard (initial) distribution starts to become more prominent and its effect is almost sufficient to describe the particle spectrum. In this sense, the temperature can be smaller in this case compared to more central collisions. The dimensionless parameter \(a_{r}\), associated with the azimuthal modulation of the dipole cross section, is more relevant in the large \(p_{T}\) region. In the \(p_{T_{h}}\sim 5\) GeV region the distribution \(f_{eq}\) is very small and the final asymmetry is basically related to \(f_{in}\); therefore the parameter \(a_{r}\) can be determined separately. Further, we defined \(v_{2in}\), associated only with \(f_{in}\), which was fitted within the region \(p_{T_{h}}>5\) GeV. The results are shown in Table 2. We can see that in more central collisions (5-10%) the asymmetry generated by the initial distribution is very small and \(a_{r}\) must be large to fit the data. For larger values of centrality, \(a_{r}\) is almost constant, with \(a_{r}\approx 0.18\). At this stage of the phenomenological study it is not clear how to explain the dependence of \(a_{r}\) on centrality. Based on the work of Ref. [65], the nuclear color dipole amplitude with dipole orientation can be written as \(N_{2g}(r,B,\phi_{rb})=\mathcal{N}_{0}^{\alpha}(r,B)\left[1+\kappa(B)\cos(2\phi_{rb})\right]\), with \(\kappa(B)\sim[T_{A}^{\prime\prime}(B)-T_{A}^{\prime}(B)/B]/T_{A}\). Here, \(T_{A}^{\prime\prime}\) and \(T_{A}^{\prime}\) are the second and first derivatives of the thickness function, respectively. Using, for simplicity, a Gaussian thickness \(T_{A}\propto\exp(-B^{2}/R_{A}^{2})\), with \(R_{A}\) being the nuclear radius, one obtains \(\kappa(B)\sim(2B/R_{A}^{2})^{2}\), meaning that the asymmetry is more intense for peripheral collisions. This feature is also confirmed by the numerical solution of the BK evolution equation when dipole orientation is included [70; 71]. We present in Figure 1 the final spectrum, given by Eq. (18), in comparison with experimental data of the ALICE collaboration for XeXe and PbPb collisions. The spectrum of produced pions in \(pp\) collisions is also shown. It is important to note that we need a good agreement in the \(pp\) comparison case to have a suitable description of the nuclear modification factor \(R_{AA}\). If this is not the case, we can still have a good fit of the multiplicity but not a suitable description of \(R_{AA}\). In Figure 2, the nuclear modification factor resulting from the fit is compared with data from the ALICE collaboration [63] on charged pion production. The dashed lines are the fitting results of the spectrum for XeXe collisions evaluated with Eq. (19). For central collisions, \(R_{AA}\) is basically equal for PbPb and XeXe. In our model, this is mainly due to the fact that in both cases the ratio \(t_{f}/t_{r}\) is essentially the same, implying that both systems evolve in a similar time. In more peripheral collisions, the ratio \(t_{f}/t_{r}\) for XeXe is remarkably smaller, which generates less suppression of the initial spectrum. The differential elliptic flow \(v_{2}(p_{T_{h}})\) evaluated by Eq. (16) is shown in Figure 3. The results are contrasted with data for XeXe and PbPb collisions at several centralities. We can see that our model has better agreement with \(v_{2}\) for the XeXe case, where \(\chi^{2}\) is smaller than for PbPb.
\begin{table} \begin{tabular}{l|l l l l l l l} Centrality & 5-10 & 10-20 & 20-30 & 30-40 & 40-50 & 50-60 & 60-70 \\ \hline PbPb & \(0.668\pm 0.068\) & \(0.261\pm 0.015\) & \(0.151\pm 0.086\) & \(0.130\pm 0.075\) & \(0.113\pm 0.073\) & \(0.114\pm 0.01\) & \(0.114\pm 0.015\) \\ XeXe & \(0.70\pm 0.42\) & \(0.190\pm 0.054\) & \(0.1809\pm 0.033\) & \(0.128\pm 0.031\) & \(0.171\pm 0.038\) & \(0.189\pm 0.057\) & -- \\ \end{tabular} \end{table} Table 2: Results of fitting the dimensionless parameter \(a_{r}\) at different centralities for PbPb and XeXe collisions.
Figure 2: Nuclear modification factors for PbPb collisions compared with data of the ALICE collaboration [63] for different centrality classes. The results for XeXe collisions are also presented.
At a given value of \(p_{T_{h}}\), the more peripheral collisions have the largest value of the elliptic flow, and \(v_{2}\) decreases for more central collisions. For all centralities, in the small \(p_{T_{h}}\) range, the transverse momentum dependence of the flow for charged pions is approximately linear. The interface between the initial and final distributions is illustrated by Figure 4, where the nuclear modification factor \(R_{AA}\) (right plot) and \(v_{2}\) (left plot) as functions of \(p_{T_{h}}\) are shown. In these results, the dashed lines represent the \(f_{in}\) (red curves) and \(f_{eq}\) (black curves) contributions and the solid lines their sum. To be clear, Fig. 4 shows the relative contribution to the \(p_{T}\)-spectrum coming from the initial and equilibrium distributions appearing in Eq. (18), namely the contributions driven by the \(f_{in}(p_{T})\times e^{-t_{f}/t_{r}}\) and \(f_{eq}\times(1-e^{-t_{f}/t_{r}})\) terms. The nuclear modification factor \(R_{AA}\) is given by the sum of these two contributions. Both \(R_{AA}\) and \(v_{2}\) grow up to their highest values at \(p_{T_{h}}\sim 2\) GeV, where the \(f_{eq}\) contribution is maximum. Beyond this region, \(R_{AA}\) and \(v_{2}\) decrease, indicating a predominance of the contribution of particles produced at the initial stages of the collision. In our model, the growth of \(v_{2}\) and the Cronin peak have a common origin in the interface between these two distributions. It also indicates a limit of our approach, since for \(p_{T_{h}}\gtrsim 10\) GeV \(v_{2}\) decreases while \(R_{AA}\) increases. In this region, models that include energy loss or color transparency can be invoked; see Refs. [18; 47; 66; 78] for further discussion on this topic. We can see that the azimuthal momentum asymmetry generated by the initial collision is basically carried by partons with large \(p_{T}\), whereas the asymmetry in equilibrium occurs mainly at small \(p_{T}\).
Figure 3: Elliptic flow coefficient \(v_{2}(p_{T_{h}})\) for XeXe and PbPb collisions compared to data from the ALICE collaboration [74; 75]. The results for PbPb have been multiplied by a factor of 2 for better visualization.
Figure 4: Detailed analysis of the \(f_{in}\) and \(f_{eq}\) contributions to the final asymmetry \(v_{2}\) (left plot) and the nuclear modification factor \(R_{AA}\) (right plot) for PbPb collisions in the (50-60)% centrality class.
In order to illustrate the contributions arising from \(f_{in}\) and \(f_{eq}\) as a function of the angle \(\phi_{p}\) between \(\vec{p}_{T}\) and the impact parameter \(\vec{B}\), the corresponding distributions are shown in Fig. 5.
Accordingly, as in Fig. 4, the distributions \(f_{in}\) (upper dot-dashed curve), \(f_{eq}\) (lower dot-dashed curve) and \(f_{fin}\) (solid curve) are presented as a function of \(\phi_{p}\) for PbPb collisions in the 50-60% centrality class. A transverse momentum \(p_{T}=2\) GeV has been considered, for which \(f_{in}\) and \(f_{eq}\) give contributions of similar order of magnitude to the final distribution. The values of the final distribution, \(f_{fin}\), lie between the equilibrium \(f_{eq}\) and initial \(f_{in}\) cases. In the remainder we investigate the behaviour of \(v_{2}\) as a function of \(p_{T_{h}}\) for different centralities and its dependence on the anisotropies generated in the initial state and from the generalization of the blast wave model. Before going to the conclusions, we discuss additional observables sensitive to the ingredients considered in our approach. As examples, the ratio \(v_{2}/\varepsilon_{2}\) and the average transverse momentum of particles are discussed. In the work of Voloshin and Poskanzer [79], it is proposed that the ratio of the final elliptic flow to the initial anisotropy, \(v_{2}/\varepsilon_{2}\), can indicate the equilibration level of the produced system. In the limit of low density one expects \[\frac{v_{2}}{\varepsilon_{2}}\propto\frac{1}{A_{T}}f_{in}, \tag{20}\] where \(A_{T}\) is the transverse area given by the nuclear overlap region and \(f_{in}\) is the initial distribution integrated in \(p_{T}\). On the other hand, in the hydrodynamic limit, representing the complete thermalization of the final system, we should have \(v_{2}\propto\varepsilon_{2}\). Therefore, deviations from the scaling given by Eq. (20) can indicate different mechanisms of particle production. The centrality dependence of \(v_{2}\) is obtained by integrating \(v_{2}\) in \(p_{T_{h}}\) for each centrality, \[v_{2}(B)=\frac{\int dp_{T_{h}}p_{T_{h}}v_{2}(p_{T_{h}})f_{fin}}{\int dp_{T_{h}}p_{T_{h}}f_{fin}}. \tag{21}\] The elliptic flow replicates the space-momentum correlation developed because of the early stage pressure gradients. On the other hand, \(A_{T}^{-1}f_{in}\) can be associated with a measure of the transverse particle density. Therefore, the plot of \(v_{2}/\varepsilon_{2}\) versus \(A_{T}^{-1}f_{in}\) can be viewed as an analog of a pressure versus energy density plot. Figure 6 shows the variation of the eccentricity-scaled elliptic flow with this transverse density, i.e. the ratio \(v_{2}/\varepsilon_{2}\) as a function of \(f_{in}/A_{T}\), with \(f_{in}\) integrated over \(p_{T_{h}}\), for different centralities for both PbPb (open circles) and XeXe collisions (open squares). The expected linear scaling (solid line in the figure) is a good approximation in more peripheral collisions. For large multiplicities, there is a substantial deviation from the fitted line. In our model, such an effect results from the increasing \(f_{eq}\) contribution to the spectrum in more central collisions. Finally, the initial and final mean transverse momenta are presented in Figure 7 (left plot) for different centralities. The initial \(\langle p_{T_{h}}\rangle\) is represented by a dot-dashed line for PbPb and a dashed line for XeXe, whereas the final \(\langle p_{T_{h}}\rangle\) is represented by open circles for PbPb and open squares for XeXe. For more central collisions, there is a large increase of \(\langle p_{T}\rangle\) due to the temperature increase of the distribution \(f_{eq}\). However, for more peripheral collisions, the final transverse momentum comes closer to the initial transverse momentum.
The multiplicity per wounded nucleon is also shown. While the final distribution scales with \(N_{p}\), \(f_{in}\) grows faster than \(N_{p}\) in more central collisions.
Figure 5: The \(\phi_{p}\) dependence of the initial, final and equilibrium distributions. Contributions for \(p_{T}=2\) GeV and the 50-60% centrality class in PbPb collisions are presented. The distributions \(f_{in}\) and \(f_{eq}\) are respectively labeled by the upper and lower dot-dashed curves, whereas \(f_{fin}\) is given by the solid curve.
Figure 6: Ratio \(v_{2}/\varepsilon_{2}\) as a function of \(f_{in}/A_{T}\) for PbPb and XeXe at different centralities. The solid line represents the linear fit following Eq. (20). The plot at the bottom shows the ratio data/model.
We did not discuss universality aspects related to the elliptic flow in the context of the parton saturation physics used in our theoretical formalism for \(f_{in}\). Interestingly, it was demonstrated in Refs. [80; 81] that the available data on the \(v_{2}\) of charged particles for collisions at RHIC and LHC at different centralities present a scaling law. Namely, the elliptic flow normalized by the product of the inverse of the Knudsen number, \(\text{Kn}^{-1}=Q_{s,A}L\), and the eccentricity \(\epsilon_{1}\) satisfies geometrical scaling. Here, \(Q_{s,A}=Q_{s,A}(x,B;A)\) is the nuclear saturation scale and \(L\) is the length related to the size of the collision area. The normalized \(v_{2}\) is a function only of the variable \(\tau_{A}=p_{T_{h}}^{2}/Q_{s,A}^{2}\), \[\frac{v_{2}(p_{T_{h}})}{\epsilon_{1}/\text{Kn}}=F(\tau_{A})=a\tau_{A}^{b}, \tag{22}\] with the parameters \(a\) and \(b\sim 1/3\) fitted from data [80; 81] and \(0\leq\tau_{A}\leq 1\). It is a relatively hard task to derive such a scaling law from the geometrical scaling phenomenon. It is argued in [82] that the scaling law can be traced back to the parton energy loss in nucleus-nucleus collisions. ## IV Summary and conclusions Let us now organize the main results and draw some conclusions. In this work we presented a phenomenological model that incorporates the main characteristics of the produced particle spectrum in heavy ion collisions. To evaluate the inclusive gluon production, the model considers the initial hard collision, the nuclear shadowing of the parton distributions, and azimuthal asymmetry. The calculation makes use of the dipole scattering matrix in position space, where an initial asymmetry has been added. Collective effects, in turn, are described by the BGBW distribution, leading to an increase of the momentum asymmetry at small \(p_{T_{h}}\). One advantage of the present parameterization is that it allows us to describe an interface between different mechanisms of hadron production, providing a good description of both soft and hard contributions. In particular, we argue that the behaviors of \(v_{2}\) and \(R_{AA}\) have a common origin, i.e., the increasing contribution of particles produced in thermal equilibrium and the subsequent predominance, in peripheral collisions, of the distribution of particles produced in the initial hard scattering. The nuclear modification factor for pion production in PbPb and XeXe collisions and the respective elliptic flow \(v_{2}(p_{T})\) have been simultaneously described. The fitted parameters \(T_{eq}\), \(\beta_{s}\), \(\rho_{a}\) and \(s_{2}\) are associated with the modified BGBW approach, whereas the parameter \(a_{r}\) is connected to the azimuthal asymmetry in the early stages of the collision.
The quality of the fit is good and the role played by \(f_{in}\) and \(f_{eq}\) in the description of \(R_{AA}\) and \(v_{2}\) has been discussed. Finally, the ratio \(v_{2}/\varepsilon_{2}\) as a function of \(f_{in}/A_{T}\) was computed and analyzed in terms of the linear scaling between these two quantities. The average transverse momentum, \(\langle p_{T}\rangle\), has also been investigated. ###### Acknowledgements. This research was funded by the Brazilian National Council for Scientific and Technological Development (CNPq) under contract number 303075/2022-8.
2309.16421
Distilling ODE Solvers of Diffusion Models into Smaller Steps
Abstract Diffusion models have recently gained prominence as a novel category of generative models. Despite their success, these models face a notable drawback in terms of slow sampling speeds, requiring a high number of function evaluations (NFE) in the order of hundreds or thousands. In response, both learning-free and learning-based sampling strategies have been explored to expedite the sampling process. Learning-free sampling employs various ordinary differential equation (ODE) solvers based on the formulation of diffusion ODEs. However, it encounters challenges in faithfully tracking the true sampling trajectory, particularly for small NFE. Conversely, learning-based sampling methods, such as knowledge distillation, demand extensive additional training, limiting their practical applicability. To overcome these limitations, we introduce Distilled-ODE solvers (D-ODE solvers), a straightforward distillation approach grounded in ODE solver formulations. Our method seamlessly integrates the strengths of both learning-free and learning-based sampling. D-ODE solvers are constructed by introducing a single parameter adjustment to existing ODE solvers. Furthermore, we optimize D-ODE solvers with smaller steps using knowledge distillation from ODE solvers with larger steps across a batch of samples. Comprehensive experiments demonstrate the superior performance of D-ODE solvers compared to existing ODE solvers, including DDIM, PNDM, DPM-Solver, DEIS, and EDM, particularly in scenarios with fewer NFE. Notably, our method incurs negligible computational overhead compared to previous distillation techniques, facilitating straightforward and rapid integration with existing samplers. Qualitative analysis reveals that D-ODE solvers not only enhance image quality but also faithfully follow the target ODE trajectory.
Sanghwan Kim, Hao Tang, Fisher Yu
2023-09-28T13:12:18Z
http://arxiv.org/abs/2309.16421v2
# Distilling ODE Solvers of Diffusion Models into Smaller Steps ###### Abstract Distillation techniques have substantially improved the sampling speed of diffusion models, allowing generation within only one step or a few steps. However, these distillation methods require extensive training for each dataset, sampler, and network, which limits their practical applicability. To address this limitation, we propose a straightforward distillation approach, _Distilled-ODE solvers_ (D-ODE solvers), that optimizes the ODE solver rather than training the denoising network. D-ODE solvers are formulated by simply applying a single parameter adjustment to existing ODE solvers. Subsequently, D-ODE solvers with smaller steps are optimized by ODE solvers with larger steps through distillation over a batch of samples. Our comprehensive experiments indicate that D-ODE solvers outperform existing ODE solvers, including DDIM, PNDM, DPM-Solver, DEIS, and EDM, especially when generating samples with fewer steps. Our method incurs negligible computational overhead compared to previous distillation techniques, enabling simple and rapid integration with previous samplers. Qualitative analysis further shows that D-ODE solvers enhance image quality while preserving the sampling trajectory of ODE solvers. ## 1 Introduction Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song & Ermon, 2019) have recently gained attention as a promising framework for generative models, demonstrating state-of-the-art performance across a wide range of applications. These models are designed to progressively remove noise from a sample during training and generate new data samples from a predefined prior distribution during inference. They have achieved notable success in various domains, including image generation (Song et al., 2020; Dhariwal & Nichol, 2021), text generation (Hoogeboom et al., 2021; Austin et al., 2021), audio generation (Mittal et al., 2021; Lu et al., 2021), 3D shape generation (Cai et al., 2020; Luo & Hu, 2021), video synthesis (Harvey et al., 2022; Yang et al., 2022b), and graph generation (Niu et al., 2020; Vignac et al., 2023). Despite their ability to produce high-quality samples and mitigate issues such as mode collapse (Salimans et al., 2016; Zhao et al., 2018), the sampling process of diffusion models typically involves a substantial number of network evaluations, rendering the process slow and computationally intensive (Xiao et al., 2021). Consequently, recent research has concentrated on accelerating or optimizing the sampling process while preserving the quality of generated samples (Song et al., 2020; Karras et al., 2022; Salimans & Ho, 2021). In particular, methods aimed at improving the sampling efficiency of diffusion models can be broadly categorized into two groups: _learning-free sampling_ and _learning-based sampling_ (Yang et al., 2022a). Learning-free sampling can be applied to pre-trained diffusion models without additional training and typically relies on efficient solvers for stochastic differential equations (SDEs) or ordinary differential equations (ODEs) (Song et al., 2020). For instance, DDIM (Song et al., 2020) employs a non-Markovian process to accelerate sampling. PNDM (Liu et al., 2021) introduces a pseudo-numerical method for solving differential equations on given data manifolds. EDM (Karras et al., 2022) utilizes Heun's second-order method and demonstrates improved sampling quality compared to the naive Euler's method (Song et al., 2020).
More recently, methods like DPM-Solver (Lu et al., 2022) and DEIS (Zhang & Chen, 2022) leverage the semi-linear structure of diffusion ODEs and employ numerical methods of exponential integrators. On the other hand, learning-based sampling requires additional training to optimize specific learning objectives, such as knowledge distillation (Salimans and Ho, 2021; Song et al., 2023) and optimized discretization (Nichol and Dhariwal, 2021; Watson et al., 2021). For example, progressive distillation (Salimans and Ho, 2021) iteratively distills pre-trained diffusion models into a student model that requires only half the number of sampling steps. Recently, Song et al. (2023) introduces consistency models, which are trained to predict consistent outputs along the same ODE trajectory. Consistency models can be trained independently or with knowledge distillation. While learning-free and learning-based sampling have been studied independently, their combination remains relatively unexplored. In this paper, we propose a novel distillation method for diffusion models, Distilled-ODE solvers (D-ODE solvers), that leverages the underlying principles of existing ODE solvers. Our approach is grounded in a key observation that the outputs of denoising networks exhibit high correlation within neighboring time steps. D-ODE solvers introduce a single additional parameter to ODE solvers, optimized by minimizing the difference between the output of D-ODE solvers with smaller steps (student) and that of ODE solvers with larger steps (teacher). Once the optimal parameter for D-ODE solvers is established, it can be reused across different batches during sampling, while keeping the denoising network fixed. Our method represents an intersection of learning-free and learning-based sampling, employing a straightforward distillation process to optimize D-ODE solvers while capitalizing on the sampling dynamics inherent in ODE solvers. Our main contributions can be summarized as follows: * We introduce Distilled-ODE solvers (D-ODE solvers), which transfer the knowledge from ODE solvers with larger steps to those with smaller steps through a simple formulation. * D-ODE solvers significantly reduce distillation times by optimizing existing ODE solvers and eliminate the need for extensive parameter updates in pre-trained denoising networks. * In quantitative studies, our new sampler outperforms state-of-the-art ODE solvers in terms of FID scores on several image generation benchmarks. ## 2 Background Forward and reverse diffusion processes:The forward process \(\{\mathbf{x}_{t}\in\mathbb{R}^{D}\}_{t\in[0,T]}\) initiates with \(\mathbf{x}_{0}\) drawn from the data distribution \(p_{data}(\mathbf{x})\) and evolves to \(\mathbf{x}_{T}\) at timestep \(T>0\). Given \(\mathbf{x}_{0}\), the distribution of \(\mathbf{x}_{t}\) can be expressed as follows: \[q_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t}|\alpha_{t}\mathbf{x}_{0}, \sigma_{t}^{2}\mathbf{I}), \tag{1}\] where \(\alpha_{t}\in\mathbb{R}\) and \(\sigma_{t}\in\mathbb{R}\) determine the noise schedule of the diffusion models, with the signal-to-noise ratio (SNR) \(\alpha_{t}^{2}/\sigma_{t}^{2}\) strictly decreasing as \(t\) progresses (Kingma et al., 2021). This ensures that \(q_{T}(\mathbf{x}_{T})\), the distribution of \(\mathbf{x}_{T}\), approximates pure Gaussian noise in practice. The reverse process of diffusion models is approximated using a denoising network to iteratively remove noise. 
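As a concrete reading of Equation 1, the forward marginal can be sampled in closed form. The sketch below is ours; the linear-beta variance-preserving schedule (with \(\alpha_t^2+\sigma_t^2=1\)) is a common choice assumed purely for illustration.

```python
# Minimal sketch of sampling the forward marginal of Equation 1 in one shot.
# The linear-beta variance-preserving schedule is an assumption for
# illustration, not necessarily the schedule used in the paper.
import numpy as np

def sample_xt(x0, t, T=1000):
    betas = np.linspace(1e-4, 0.02, T)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    alpha_t, sigma_t = np.sqrt(alpha_bar), np.sqrt(1.0 - alpha_bar)
    eps = np.random.randn(*x0.shape)  # the noise a network would later predict
    return alpha_t * x0 + sigma_t * eps, eps
```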
Starting from \(\mathbf{x}_{T}\), the reverse process is defined with the following transition (Ho et al., 2020): \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1}|\mu_{\theta}(\mathbf{x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t)), \tag{2}\] where \(\theta\) represents the trainable parameters in the denoising network, and \(\mu_{\theta}(\mathbf{x}_{t},t)\) and \(\Sigma_{\theta}(\mathbf{x}_{t},t)\) are the Gaussian mean and variance estimated by the network \(\theta\). SDE and ODE formulation: Song et al. (2020) formulate the forward diffusion process using a stochastic differential equation (SDE) to achieve the same transition distribution as Equation 1: \[d\mathbf{x}_{t}=f(t)\mathbf{x}_{t}dt+g(t)d\mathbf{w}_{t},\quad\mathbf{x}_{0}\sim p_{data}(\mathbf{x}), \tag{3}\] where \(\mathbf{w}_{t}\in\mathbb{R}^{D}\) is the standard Wiener process, and \(f(t)\) and \(g(t)\) are functions of \(\alpha_{t}\) and \(\sigma_{t}\). Song et al. (2020) also introduce the reverse-time SDE, which evolves from timestep \(T\) to \(0\), based on Anderson (1982): \[d\mathbf{x}_{t}=[f(t)\mathbf{x}_{t}-g^{2}(t)\nabla_{\mathbf{x}}\log q_{t}(\mathbf{x}_{t})]dt+g(t)d\bar{\mathbf{w}}_{t},\quad\mathbf{x}_{T}\sim q_{T}(\mathbf{x}_{T}), \tag{4}\] where \(\bar{\mathbf{w}}_{t}\) is the standard Wiener process in reverse time, and \(\nabla_{\mathbf{x}}\log q_{t}(\mathbf{x}_{t})\) is referred to as the score function (Hyvarinen and Dayan, 2005). The randomness introduced by the Wiener process can be omitted to define the diffusion ordinary differential equation (ODE) in the reverse process, which corresponds to solving the SDE on average: \[d\mathbf{x}_{t}=[f(t)\mathbf{x}_{t}-\frac{1}{2}g^{2}(t)\nabla_{\mathbf{x}}\log q_{t}(\mathbf{x}_{t})]dt,\quad\mathbf{x}_{T}\sim q_{T}(\mathbf{x}_{T}). \tag{5}\] The formulation of the probability flow ODE opens up possibilities for using various ODE solvers to expedite diffusion-based sampling processes (Liu et al., 2021; Lu et al., 2022; Zhang and Chen, 2022; Karras et al., 2022). **Denoising score matching:** To solve Equation 5 during sampling, the score function \(\nabla_{\mathbf{x}}\log q_{t}(\mathbf{x}_{t})\) must be estimated. Ho et al. (2020) propose estimating the score function using a noise prediction network \(\epsilon_{\mathbf{\theta}}\) such that \(\nabla_{\mathbf{x}}\log q_{t}(\mathbf{x}_{t})=-\epsilon_{\mathbf{\theta}}(\mathbf{x}_{t},t)/\sigma_{t}\) with \(\mathbf{x}_{t}=\alpha_{t}\mathbf{x}+\sigma_{t}\mathbf{\epsilon}\). The noise prediction network \(\epsilon_{\mathbf{\theta}}\) is trained using the \(L_{2}\) norm, given samples drawn from \(p_{data}\): \[\mathbb{E}_{\mathbf{x}\sim p_{data}}\mathbb{E}_{\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma_{t}^{2}\mathbf{I})}||\epsilon_{\mathbf{\theta}}(\alpha_{t}\mathbf{x}+\sigma_{t}\mathbf{\epsilon},t)-\mathbf{\epsilon}||^{2}. \tag{6}\] Here, Gaussian noise is added to the data \(\mathbf{x}\) following the noise schedule \((\alpha_{t},\sigma_{t})\), and the noise prediction network predicts the added noise \(\mathbf{\epsilon}\) from the noisy sample. Alternatively, the score function can be represented using a data prediction network \(x_{\mathbf{\theta}}\) instead of \(\epsilon_{\mathbf{\theta}}\) with \(\nabla_{\mathbf{x}}\log q_{t}(\mathbf{x}_{t})=(x_{\mathbf{\theta}}(\mathbf{x}_{t},t)-\mathbf{x}_{t})/\sigma_{t}^{2}\).
The data prediction network \(x_{\mathbf{\theta}}\) is trained with the following \(L_{2}\) norm: \[\mathbb{E}_{\mathbf{x}\sim p_{data}}\mathbb{E}_{\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\sigma_{t}^{2}\mathbf{I})}||x_{\mathbf{\theta}}(\alpha_{t}\mathbf{x}+\sigma_{t}\mathbf{\epsilon},t)-\mathbf{x}||^{2}. \tag{7}\] It is worth noting that estimating the original data \(\mathbf{x}\) is theoretically equivalent to learning to predict the noise \(\mathbf{\epsilon}\) (Ho et al., 2020; Luo, 2022). While some works argue that predicting the noise empirically results in higher quality samples (Ho et al., 2020; Saharia et al., 2022), Karras et al. (2022) recently achieved state-of-the-art performance using the data prediction network. In this work, we conduct comprehensive experiments with both noise and data prediction networks. ## 3 Method As introduced in Section 1, our study aims to bridge the gap between learning-based and learning-free sampling, leveraging the advantages of both approaches. We utilize the sampling dynamics of ODE solvers while enhancing sample quality through a simple and efficient distillation process. This section commences with a fundamental observation of the high correlation among denoising outputs, which motivates the formulation of D-ODE solvers. We then delve into the details of transferring knowledge from ODE solvers to D-ODE solvers. ### Correlation between Denoising Outputs ODE solvers typically improve the sampling process by exploiting the denoising network's output history, allowing for the omission of many intermediate steps. Hence, comprehending the connections between denoising outputs is paramount when developing D-ODE solvers. Our aim is to create novel ODE solvers that harness the benefits of sampling dynamics while keeping optimization degrees of freedom to a minimum. Figure 1 presents heatmaps based on cosine similarity calculations between all denoising outputs from a 1000-step DDIM (Song et al., 2020) run. We observe that predictions from neighboring timesteps exhibit high correlations in both denoising networks, with cosine similarities close to one. This observation suggests that denoising outputs contain redundant and duplicated information, allowing us to skip the evaluation of denoising networks for most timesteps. For example, we can combine the history of denoising outputs to better represent the next output, effectively reducing the number of steps required for accurate sampling. This idea is implemented in most ODE solvers, which are formulated based on the theoretical principles of solving differential equations. These solvers often adopt linear combinations or multi-step approaches, leveraging previous denoising outputs to precisely estimate the current prediction (Watson et al., 2021; Liu et al., 2021; Karras et al., 2022; Lu et al., 2022; Zhang and Chen, 2022). Consequently, ODE solvers can generate high-quality samples with far fewer timesteps compared to the thousand steps of DDPM (Ho et al., 2020).

Figure 1: Heatmaps of cosine similarity among denoising outputs with 1000-step DDIM on CIFAR-10. Noise prediction model (left) and data prediction model (right).

### Formulation of D-ODE Solver We now introduce a D-ODE solver with a straightforward parameterization to distill knowledge from ODE solvers.
We begin by outlining a fundamental method for representing the new denoising prediction \(\tilde{D}_{t}\) at timestep \(t\) as a linear combination of current and previous denoising outputs \(\{D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{k},k)\}_{k=t}^{T}\): \[\tilde{D}_{t}=\sum_{k=t}^{T}\lambda_{k}D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{k},k), \tag{8}\] where \(\hat{\mathbf{x}}_{k}\) represents the estimated sample, and \(\lambda_{k}\in\mathbb{R}\) is a weight parameter at timestep \(k\). We anticipate that this new denoising prediction can approximate the denoising target and lead to improved sample quality. Some ODE solvers adopt similar formulations to Equation (8) with numerically determined \(\{\lambda_{k}\}_{k=t}^{T}\) (Liu et al., 2021; Lu et al., 2022; Zhang and Chen, 2022). One challenge with Equation (8) is that the value of the denoising prediction \(\tilde{D}_{t}\) can be unstable and volatile depending on the weights \(\{\lambda_{k}\}_{k=t}^{T}\). This instability is less likely to occur with carefully computed weights in ODE solvers, but convergence is not guaranteed when the weights are optimized through distillation. To generate high-quality samples, the sampling process must follow the true ODE trajectory on which the diffusion models were trained (Liu et al., 2021; Song et al., 2023). In other words, the denoising network might not produce meaningful predictions for samples outside the target manifold of data. This issue has been investigated from various perspectives in recent papers (Xiao et al., 2021; Ning et al., 2023; Li et al., 2023). In order to avoid these problems, we need to constrain Equation (8) so that it adheres to the previous ODE trajectory. The new prediction \(\tilde{D}_{t}\) can be defined as in Equation (9): \[\tilde{D}_{t} =D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)+\sum_{k=t+1}^{T}\lambda_{k}(D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)-D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{k},k)) \tag{9}\] \[\approx D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)+\lambda_{t}(D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)-D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t+1},t+1)). \tag{10}\] We empirically find that using only the denoising output from the previous timestep is sufficient for distilling knowledge from the teacher sampling; hence, we apply a first-order approximation to obtain Equation (10). The mean of the new denoising prediction approximates that of the original denoising output since the mean does not change significantly between timesteps \(t\) and \(t+1\) (i.e., \(\mathbb{E}[\tilde{D}_{t}]\approx\mathbb{E}[D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)]\)). This is a key feature of D-ODE solvers, as we aim to remain on the same sampling trajectory as ODE solvers. In Section 5, we visually demonstrate that D-ODE solvers can globally follow the trajectory of existing ODE solvers. In conclusion, \(\tilde{D}_{t}\) can replace existing outputs of the denoising network to build D-ODE solvers. If ODE solvers already have their own rules for obtaining new denoising predictions, we can calculate Equation (10) with their new predictions instead of the denoising output (e.g., \(D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)\)). Specific applications of D-ODE solvers can be found in Appendix D.1 and D.2. Additionally, we compare different formulations of D-ODE solvers in Appendix D.4.
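As a concrete reading of Equation (10), a minimal sketch follows; `d_t` and `d_prev` stand for the denoising outputs \(D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)\) and \(D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t+1},t+1)\) (the latter cached from the preceding sampling step), and the function is a sketch rather than the authors' implementation.

```python
def d_ode_prediction(d_t, d_prev, lam_t):
    """Equation (10): nudge the current denoising output along its difference
    with the previous one. Because the two outputs have nearly equal means at
    neighboring timesteps, E[D_tilde_t] ~ E[d_t], so the update stays close
    to the original ODE trajectory."""
    return d_t + lam_t * (d_t - d_prev)
```

The returned prediction simply replaces the denoising output wherever the base solver \(S\) would consume it, which is what makes the construction applicable to DDIM as well as higher-order solvers.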
### Distillation of D-ODE Solver Each denoising step in diffusion models typically comprises two parts: (1) a denoising network \(D_{\mathbf{\theta}}\) and (2) an ODE solver \(S\). Given an estimated noisy sample \(\hat{\mathbf{x}}_{t}\) at timestep \(t\), the denoising network produces a denoising output \(\hat{\mathbf{d}}_{t}=D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t},t)\), and the ODE solver subsequently generates the next sample \(\hat{\mathbf{x}}_{t-1}=S(\hat{\mathbf{d}}_{t},\hat{\mathbf{x}}_{t})\), utilizing the denoising output and the current noisy sample. While some ODE solvers also utilize the history of denoising outputs \(\{\hat{\mathbf{d}}_{k}\}_{k=t}^{T}\), we omit this notation here for simplicity. This procedure is iterated until the diffusion models reach the estimated original sample \(\hat{\mathbf{x}}_{0}\). Now, we are ready to explain how ODE solvers with large steps can be distilled into D-ODE solvers with small steps. In Figure 2, the teacher sampling process begins with the noisy sample at timestep \(Ct\) and undergoes \(C\) denoising steps to generate a sample at timestep \(C(t-1)\). The student sampling process starts with a noisy sample at timestep \(t\) and obtains a sample at timestep \(t-1\) after one denoising step. In order to optimize \(\lambda_{t}\) in the D-ODE solver, the teacher sampling is initially performed for one batch to save intermediate samples \(\{\hat{\mathbf{x}}_{k}^{(t)}\}_{k=C(t-1)}^{Ct}\) as targets and the student sampling is also executed to acquire intermediate samples \(\{\hat{\mathbf{x}}_{k}^{(s)}\}_{k=t-1}^{t}\) as predictions. Then, \(\lambda_{t}^{*}\) is determined by minimizing the difference between the targets and the predictions on batch \(B\) as follows: \[\lambda_{t}^{*} =\operatorname*{arg\,min}_{\lambda_{t}}\mathbb{E}_{\mathbf{x}\in B}\|\hat{\mathbf{x}}_{C(t-1)}^{(t)}-S_{d}(D_{\mathbf{\theta}}(\hat{\mathbf{x}}_{t}^{(s)},t),\hat{\mathbf{x}}_{t}^{(s)};\lambda_{t})\|^{2} \tag{11}\] \[=\operatorname*{arg\,min}_{\lambda_{t}}\mathbb{E}_{\mathbf{x}\in B}\|\hat{\mathbf{x}}_{C(t-1)}^{(t)}-\hat{\mathbf{x}}_{t-1}^{(s)}\|^{2}. \tag{12}\] The above equation is solved for every timestep \(t\) of the student sampling, yielding a set of optimal \(\lambda_{t}\) values (i.e., \(\lambda^{*}=\{\lambda_{1}^{*},\lambda_{2}^{*},...,\lambda_{T}^{*}\}\)). Notably, \(\lambda^{*}\) is estimated using only one batch of samples, a process that typically takes just a few CPU minutes, and can be reused for other batches later. Algorithm 1 outlines the overall sampling procedure of the D-ODE solver. When aiming to generate \(N\) samples, it is customary to divide \(N\) into \(M\) batches and sequentially execute the sampling process for each batch \(B\), which contains \(|B|=N/M\) samples (Line 3). For the first batch, the teacher sampling is performed with denoising network \(D_{\mathbf{\theta}}\) and ODE solver \(S\) for \(CT\) steps to obtain intermediate outputs, which will serve as target samples (Line 5). Subsequently, the student sampling takes place for \(T\) steps with \(\lambda\) (Line 6). At this point, \(\lambda^{*}\) is estimated and saved for each timestep by solving Equation (12) (Line 7). Starting from the second batch onwards, sampling can proceed using the same denoising network \(D_{\mathbf{\theta}}\) and D-ODE solver \(S_{d}\) equipped with \(\lambda^{*}\) (Line 9). It is important to note that the student's samples can be generated in just \(T\) steps, which should exhibit similar quality to the teacher's samples generated over \(CT\) steps. The scale \(C\) and batch size \(|B|\) are integer values determined prior to experiments, with ablation studies on these parameters presented in Appendix F.
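To make Equations (11) and (12) concrete, here is a minimal sketch of the per-timestep fit; `step_fn` stands for the base solver update inside \(S_{d}\), `x_target` for the teacher sample \(\hat{\mathbf{x}}^{(t)}_{C(t-1)}\), and the 1-D grid search (with an arbitrary range) stands in for whatever scalar optimizer is actually used.

```python
import numpy as np

def estimate_lambda(step_fn, denoise, x_t, x_prev, x_target, t,
                    lams=np.linspace(-2.0, 2.0, 81)):
    """Solve Equation (12) for a single scalar lambda_t over one batch,
    where the leading axis of each array is the batch dimension."""
    d_t = denoise(x_t, t)            # D_theta(x_t, t)
    d_prev = denoise(x_prev, t + 1)  # cached from the previous step in practice
    best_lam, best_err = 0.0, np.inf
    for lam in lams:
        d_tilde = d_t + lam * (d_t - d_prev)   # Equation (10)
        x_next = step_fn(d_tilde, x_t)         # student's x_{t-1} via S
        err = np.mean((x_target - x_next) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```

Because the fit involves no gradient through the frozen network \(D_{\mathbf{\theta}}\), running it once per timestep on a single batch is cheap, consistent with the few CPU minutes quoted above.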
Figure 2: The distillation of the ODE solver. \(C\) steps of teacher sampling are distilled into a single step of student sampling. The D-ODE solver \(S_{d}\) is equipped with a parameter \(\lambda_{t}\) to be optimized through distillation, while the denoising network \(D_{\mathbf{\theta}}\) remains frozen for both teacher and student sampling.

```
1: Pre-trained denoising network \(D_{\mathbf{\theta}}\), ODE solver \(S\), D-ODE solver \(S_{d}\)
2: Number of batches \(M\) with size \(|B|\), student sampling steps \(T\), teacher sampling steps \(CT\)
3: for \(m=1,...,M\) do
4:     if \(m=1\) then
5:         \(\{\hat{\mathbf{x}}_{k}^{(t)}\}_{k=0}^{CT}\) = Sampling(\(D_{\theta}\), \(S\), \(CT\))  \(\triangleright\) Obtain teacher samples
6:         \(\{\hat{\mathbf{x}}_{k}^{(s)}\}_{k=0}^{T}\) = Sampling(\(D_{\theta}\), \(S_{d}\), \(T\); \(\lambda\))  \(\triangleright\) Obtain student samples
7:         Estimate \(\lambda^{*}=\{\lambda_{1}^{*},\lambda_{2}^{*},...,\lambda_{T}^{*}\}\) with Equation (12)
8:     end if
9:     \(\{\hat{\mathbf{x}}_{k}^{(s)}\}_{k=0}^{T}\) = Sampling(\(D_{\theta}\), \(S_{d}\), \(T\); \(\lambda^{*}\))
10:    Save sample \(\hat{\mathbf{x}}_{0}^{(s)}\)
11: end for
```
**Algorithm 1** Sampling with D-ODE solver ## 4 Experiments In this section, we compare D-ODE solvers to ODE solvers across diverse image generation benchmarks at various resolutions, including CIFAR-10 (\(32\times 32\)), CelebA (\(64\times 64\)), ImageNet (\(64\times 64\) and \(128\times 128\)), FFHQ (\(64\times 64\)), and LSUN bedroom (\(256\times 256\)). We carry out comprehensive experiments for both noise and data prediction models, each involving a distinct set of ODE solvers. The Fréchet Inception Distance (FID) (Heusel et al., 2017) is measured with 50K generated samples across various numbers of denoising function evaluations (NFE), following Lu et al. (2022). Our reported FID scores are averages from three independent experiment runs with different random seeds. For the distillation of ODE solvers, we opt for a scale parameter of \(C=10\) and a batch size of \(|B|=100\). However, due to GPU memory constraints in the case of LSUN bedroom, we use a batch size of 25. It is noteworthy that, unless explicitly specified, DDIM serves as the primary teacher sampling method for guiding the student sampling. This choice is informed by the consideration that certain ODE solvers employ multi-step approaches during sampling, making it challenging to set their intermediate outputs as targets for distillation. In contrast, DDIM generates a single intermediate output per denoising step, simplifying the establishment of matching pairs between DDIM targets and student predictions. More experimental details can be found in Appendix E.

Figure 3: Image quality measured by FID \(\downarrow\) with NFE \(\in\{2,5,10,25,50,100,250\}\). For DPM-Solver3 and DEIS3, we use 3 NFE instead of 2 NFE as the third-order method requires at least three denoising outputs. Dotted lines denote ODE solvers while straight lines represent the applications of the D-ODE solver to them. More experiments can be found in Appendix H.

### Noise prediction model We apply D-ODE solvers to discrete-time ODE solvers employed in the noise prediction model, which include DDIM (Song et al., 2020), iPNDM (Zhang and Chen, 2022), DPM-Solver (Lu et al., 2022), and DEIS (Zhang and Chen, 2022). For DPM-Solver and DEIS, we selected third-order methods.
While these prior ODE solvers were primarily evaluated with NFE greater than 10, we also conducted experiments with extremely small NFE, such as 2 or 3, to assess the performance of D-ODE solvers during the initial stages of the sampling process. As shown in Figure 3(a) and 3(d), D-DDIM outperforms DDIM when NFE exceeds 5, gradually converging to an FID score similar to that of DDIM as NFE increases. It is important to note that DDIM with small NFE (2 or 5) lacks the capability to produce meaningful images, which is also reflected in the performance of D-DDIM. iPNDM, a high-order method that utilizes previous denoising outputs, consistently exhibits improvements with the D-ODE solver formulation, except at 2 NFE. This improvement is particularly notable for high-order methods like DPM-Solver3 and DEIS3. Specifically, D-DPM-Solver3 effectively alleviates the instability associated with multi-step approaches at extremely small NFE values, surpassing the performance of DPM-Solver3 by a significant margin. While DEIS3 already provides a precise representation of the current denoising prediction through high-order approximation, Figure 3 illustrates that D-DEIS3 can further enhance the approximation through distillation. ### Data prediction model To conduct experiments on data prediction models, we followed the configuration outlined by Karras et al. (2022). We applied the D-ODE solver to DDIM, rebuilt based on this configuration, and EDM (Karras et al., 2022), which employs Heun's second-order method. Notably, while Karras et al. (2022) also re-implemented Euler-based samplers in their paper, we chose not to include them in our experiments, as EDM demonstrates superior FID scores. As depicted in Figure 4, D-ODE solvers outperform ODE solvers, especially for smaller NFE. For instance, D-DDIM with 25 NFE can produce samples comparable to DDIM with 250 NFE in terms of FID, resulting in a speedup of around 10 times. With increasing NFE, FID scores of both ODE and D-ODE solvers asymptotically converge to each other. Given that the performance of student sampling is closely tied to that of teacher sampling, it is natural to observe similar FID scores for student and teacher sampling with larger NFE. Moreover, it is worth noting that around 2 NFE, DDIM occasionally outperforms D-DDIM slightly. This observation suggests that the 2-step DDIM may not possess sufficient capacity to effectively distill knowledge from teacher sampling, particularly when DDIM is already generating noisy images (FID score exceeding 250).

Figure 4: Image quality measured by FID \(\downarrow\) with various NFE values (DDIM: {2, 5, 10, 25, 50, 100, 250} and EDM: {3, 5, 9, 25, 49, 99, 249}). Dotted lines denote ODE solvers and straight lines represent the applications of the D-ODE solver to them. D-ODE solvers outperform ODE solvers, especially for smaller NFE.

### Comparison with previous distillation methods The distillation process for D-ODE solvers typically requires only a few CPU minutes, adding negligible computational overhead to the entire sampling process. In contrast, previous distillation techniques for diffusion models (Salimans and Ho, 2021; Meng et al., 2023; Song et al., 2023) necessitate the optimization of all parameters of the denoising network. As a result, these methods demand a substantial amount of training time for each setting. Table 1 directly compares the computational times required by each distillation method to reach 3 FID on CIFAR-10 given the same pre-trained denoising network.
The total time encompasses the distillation time following their configurations and the time taken for generating 50k samples. For instance, D-EDM first optimizes \(\lambda\) and then proceeds with the sampling process, while consistency distillation (CD) (Song et al., 2023) and progressive distillation (PD) (Salimans and Ho, 2021) need numerous training iterations before executing a few-step sampling. The results clearly demonstrate that optimizing ODE solvers instead of the denoising network can significantly reduce computational time and resource requirements while achieving comparable sample quality. It is important to note that the results may vary depending on the training configuration of CD and PD, as the majority of their time is consumed during the distillation process. In this context, our method aligns well with the recent trend of democratizing diffusion models by minimizing or circumventing extensive training that relies on a large number of GPUs (Hang et al., 2023; Wang et al., 2023; Zheng et al., 2023; Wu et al., 2023). Appendix C provides a detailed explanation of distillation methods in diffusion models and Appendix G contains additional comparisons with other sampling methods.

\begin{table} \begin{tabular}{l c c c} \hline \hline Method & D-EDM & CD & PD \\ \hline Time & \(2.55\) & \(187.25\) & \(106.16\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of computational time to achieve 3 FID. The unit of time corresponds to the time required to generate 50k samples with 10-step DDIM.

## 5 Analysis This section includes visualizations of the sampling process and qualitative results. We begin by conducting a visual analysis following the methodology of Liu et al. (2021) to assess the global and local characteristics of the sampling process. Subsequently, we compare the quality of the generated images produced by ODE solvers and D-ODE solvers. To facilitate the visualization of high-dimensional data, we employ two distinct measures: the change in norm as a global feature and the change in specific pixel values as a local feature, as proposed by Liu et al. (2021). In the top row of Figure 5, we observe that the norm of D-ODE solvers closely follows the trajectory traced by the norm of ODE solvers. This observation suggests that D-ODE solvers remain within the high-density regions of the data, exerting minimal influence on the ODE trajectory. This aligns with our design objective of D-ODE solvers, ensuring that the new denoising prediction should match the mean of the denoising output of ODE solvers, as discussed in Section 3.2. For reference, we also include the norm of DDIM with 1000 steps as it adheres to the target data manifold.

Figure 5: The top row illustrates the change of norm comparing ODE and D-ODE solvers. The bottom row presents the update path of two randomly selected pixels in the images. The result of 1000-step DDIM is drawn as the target trajectory and a 10-step sampler is conducted for ODE and D-ODE solvers. The figures are generated from 1000 samples using a noise prediction model trained on CIFAR-10.

In the bottom row of Figure 5, we randomly select two pixels from the image and depict the change in their values, referencing the 1000-step DDIM as the target. Clearly, the pixel values of D-ODE solvers exhibit closer proximity to the target than those of ODE solvers. In conclusion, D-ODE solvers can achieve high-quality image generation by guiding their pixels toward the desired targets while remaining faithful to the original data manifold.
In Figure 6, we present a comparison of the generated images produced by ODE and D-ODE solvers using data prediction models trained on the ImageNet and FFHQ datasets. In general, our method exhibits an improvement in image quality over ODE solvers, particularly for smaller NFE. DDIM tends to generate blurry images with indistinct boundaries, while D-DDIM produces clearer images with more prominent color contrast. EDM, especially with NFE smaller than 5, generates images characterized by high noise levels and artifacts, leading to FID scores exceeding 250. In contrast, D-EDM manages to generate relatively clear objects even at 5 NFE. More qualitative results can be found in Appendix H. ## 6 Conclusion and Limitations In this work, we have introduced D-ODE solvers, a novel distillation method tailored for diffusion models. D-ODE solvers are simply formulated by adding a single parameter to ODE solvers. They efficiently distill knowledge from teacher sampling with larger steps into student sampling with smaller steps, leveraging the sampling dynamics of existing ODE solvers. Our experiments have demonstrated the effectiveness of D-ODE solvers in improving FID scores of state-of-the-art ODE solvers, particularly for scenarios involving smaller NFE. Furthermore, through visual analysis, we have gained insights into both the global and local features of our method and have observed significant improvements in image quality. However, the magnitude of improvement tends to be marginal or limited for large NFE values, ultimately converging to the FID score of the teacher sampling process. Nevertheless, D-ODE solvers remain an attractive option for enhancing sample quality with negligible additional computational cost. Moreover, for the generation of high-resolution images, D-ODE solvers may not be sufficient on their own, as they are parameterized by a single parameter. To better capture the intricate relationships among denoising outputs, one may explore the use of local-specific parameters, achieved by dividing images into smaller grids or by working within the latent space of samples (Rombach et al., 2022). While these possibilities are intriguing, we leave them for future work. Figure 6: Comparison of generated samples between ODE and D-ODE solvers. Data prediction models are used with increasing NFE (DDIM: {2, 5, 10, 25, 50}, EDM: {3, 5, 9, 25, 49}).
2309.06632
Hidden non-collinear spin-order induced topological surface states
Rare-earth monopnictides are a family of materials simultaneously displaying complex magnetism, strong electronic correlation, and topological band structure. The recently discovered emergent arc-like surface states in these materials have been attributed to the multi-wave-vector antiferromagnetic order, yet the direct experimental evidence has been elusive. Here we report the observation of non-collinear antiferromagnetic order with multiple modulations using spin-polarized scanning tunneling microscopy. Moreover, we discover a hidden spin-rotation transition of single-to-multiple modulations 2 K below the Neel temperature. The hidden transition coincides with the onset of the surface states splitting observed by our angle-resolved photoemission spectroscopy measurements. Single modulation gives rise to a band inversion with induced topological surface states in a local momentum region while the full Brillouin zone carries trivial topological indices, and multiple modulation further splits the surface bands via non-collinear spin tilting, as revealed by our calculations. The direct evidence of the non-collinear spin order in NdSb not only clarifies the mechanism of the emergent topological surface states, but also opens up a new paradigm of control and manipulation of band topology with magnetism.
Zengle Huang, Hemian Yi, Daniel Kaplan, Lujin Min, Hengxin Tan, Ying-ting Chan, Zhiqiang Mao, Binghai Yan, Cui-Zu Chang, Weida Wu
2023-09-12T22:48:50Z
http://arxiv.org/abs/2309.06632v1
# Hidden non-collinear spin-order induced topological surface states ###### Abstract Rare-earth monopnictides are a family of materials simultaneously displaying complex magnetism, strong electronic correlation, and topological band structure. The recently discovered emergent arc-like surface states in these materials have been attributed to the multi-wave-vector antiferromagnetic order, yet the direct experimental evidence has been elusive. Here we report the observation of non-collinear antiferromagnetic order with multiple modulations using spin-polarized scanning tunneling microscopy. Moreover, we discover a hidden spin-rotation transition of single-to-multiple modulations 2 K below the Neel temperature. The hidden transition coincides with the onset of the surface states splitting observed by our angle-resolved photoemission spectroscopy measurements. Single modulation gives rise to a band inversion with induced topological surface states in a local momentum region while the full Brillouin zone carries trivial topological indices, and multiple modulation further splits the surface bands via non-collinear spin tilting, as revealed by our calculations. The direct evidence of the non-collinear spin order in NdSb not only clarifies the mechanism of the emergent topological surface states, but also opens up a new paradigm of control and manipulation of band topology with magnetism. Rare-earth monopnictides R(Bi,Sb) (R=Ce, Nd, Sm, etc.) with a rock salt structure have been extensively studied in the past sixty years for their intricate magnetic phase diagrams [1; 2; 3; 4]. Except for the lanthanum, yttrium, and praseodymium compounds, most rare-earth monopnictides are antiferromagnets at low temperatures [1; 2; 3; 4]. Upon cooling or applying a magnetic field, many of them undergo another or even multiple antiferromagnetic transitions [5; 6; 7], as a consequence of competing magnetic interactions. A notable example is the "Devil's staircase" in CeSb [8], and the magnetic transitions have a significant influence on the electronic band structures [9]. More recently, rare-earth monopnictides have been proposed to have topologically non-trivial band structures. A first-principles calculation predicted a systematic band inversion reduction from Ce to Yb compounds, with a topologically non-trivial to trivial transition occurring around SmSb and DyBi [10]. Later angle-resolved photoemission spectroscopy (ARPES) experiments confirmed the band inversion in rare-earth monobismuthides and identified linear-dispersed Dirac-like states [11; 12; 13; 14]. Magnetotransport experiments revealed a non-trivial Berry phase in SmBi [15] and SmSb [16], and possible Weyl or Dirac fermions in CeSb and NdSb when driving the samples to the ferromagnetic phase with magnetic field or pressure [17; 18; 19]. Therefore, rare-earth monopnictides are an intriguing platform to explore the interplay between magnetism, strong electronic correlation, and topological band structure. More strikingly, recent ARPES studies reveal that a number of rare-earth monopnictides, including NdBi, NdSb, and CeBi, host disconnected Fermi surface arcs emerging from an unconventional magnetic band splitting below the Neel temperature (\(T_{\rm N}\)) [20; 21]. Initial density functional theory (DFT) calculations suggest that the emergent surface states cannot be explained by the ordinary A-type antiferromagnetism observed by neutron scattering experiments [3].
Later analysis suggests that multiple-wave-vector (multi-\(q\)) antiferromagnetic order [22] can reproduce the emergent surface states [23; 24]. So far there is no direct experimental evidence of multi-\(q\) antiferromagnetic order, which is difficult to differentiate from the multi-domain state of simple \(1q\) order in powder neutron diffraction [3]. Real-space magnetic imaging probes, such as spin-polarized scanning tunneling microscopy (SP-STM), are suitable for directly visualizing the non-collinear magnetic order [25; 26]. In this work, we provide real-space evidence of non-collinear antiferromagnetic order in NdSb using SP-STM. We observe a checkerboard magnetic contrast on the (001) surface of NdSb that is consistent with a non-collinear spin order, which is a superposition of two A-type magnetic orders with orthogonal in-plane modulations. Furthermore, we discover a hidden spin-rotation transition (\(1q\)-\(2q\)) 2 K below the Neel ordering temperature \(T_{\rm N}\) that has not been reported. More interestingly, the hidden transition coincides with the onset of band splitting observed by ARPES. Our DFT study reveals a band inversion driven by magnetic order-induced band folding, resulting in the unconventional magnetism-induced topological surface states in the family of rare-earth monopnictides. Fig. 1a shows an STM topographic image taken at 4.5 K with a nonmagnetic tip at sample bias \(V_{\rm bias}=+1.4\) V. The image reveals a square lattice of Nd atoms with a Nd-Nd spacing of approximately 4.3 Å calculated from the Bragg peaks \(q_{\rm lat}=(1,0)\) and \((0,1)\) in the Fourier transform map (Fig. 1c), in good agreement with prior studies [4]. The bias dependence of STM topography indicates that the unoccupied states are dominated by Nd orbitals while the occupied states are predominantly of Sb orbitals, which is confirmed by our DFT calculations (Extended Data Fig. 2). Interestingly, SP-STM topography in Fig. 1b reveals an additional checkerboard-like pattern at +1.4 V bias, resulting in two pairs of superlattice peaks of wave vectors \(q_{\rm AFM}=\left(\pm\frac{1}{2},\pm\frac{1}{2}\right)\) as shown in Fig. 1d. The bias dependence of SP-STM topography shows that the checkerboard modulation is mainly visible in unoccupied states dominated by Nd orbitals, and gradually diminishes in occupied states with significant Sb character. The disappearance of the checkerboard-like modulation is accompanied by a fractional \(\left(\frac{1}{2},\frac{1}{2}\right)\) shift of the atomic lattice, presumably from Nd to Sb atoms (Extended Data Fig. 2, 3 and 4). This further confirms that the observed checkerboard pattern comes from the spin-dependent tunneling between the magnetic tip and the local spin-polarized Nd orbitals. Close inspection of the Fourier transform map (Fig. 1d) reveals that there is an asymmetry of the magnetic superlattice peak intensity along two orthogonal directions: the intensity of \(q_{\rm AFM}^{A}=\left(\frac{1}{2},\frac{1}{2}\right)\) is stronger than that of \(q_{\rm AFM}^{B}=\left(\frac{1}{2},-\frac{1}{2}\right)\). We also performed bias-dependent studies to confirm that the asymmetry persists in the whole bias range in which the magnetic modulation is visible (Extended Data Fig. 3). This magnetic asymmetry is uniform over hundreds of micrometers, indicating a global \(C_{4}\) symmetry breaking. To confirm that this symmetry breaking is intrinsic, we examine the sample surface extensively over hundreds of micrometers and locate a magnetic domain boundary shown in Fig. 2c.
The SP-STM image is taken at \(V_{\rm bias}=+0.3\) V to maximize the magnetic contrast. The magnetic modulation asymmetry rotates \(90^{\circ}\) across the domain boundary, as clearly demonstrated in the Fourier transform maps of two domains and the corresponding line profiles along orthogonal directions (Fig. 2b,e). This magnetic asymmetry suggests a magnetic structure different from those proposed in prior DFT analysis, where the angle between the Nd spins and the cubic axes is either \(0^{\circ}\) (\(1q\)) or \(45^{\circ}\) (simple \(2q\)) [21, 23, 24]. The out-of-plane spin modulation in \(3q\) cannot be detected by SP-STM, so we focus on the in-plane multi-wave-vector modulation and do not differentiate \(2q\) and \(3q\) in the following text. For the \(1q\) magnetic structure, there is no magnetic modulation on the (001) surface, thus no magnetic superlattice peak. In contrast, there is a single modulation direction on the (110) [i.e., cubic (010)] surface, resulting in one pair of magnetic superlattice peaks (Fig. 1e). For the proposed simple \(2q\) modulation [21; 23], the Nd spin moments are \(45^{\circ}\) with respect to the cubic \(a,b\)-axes, so the 4-fold rotational symmetry (plus a fractional lattice translation) is restored. This magnetic modulation can be considered as a superposition of two orthogonal \(1q\) modulations with equal spin components along the modulation directions, giving rise to two pairs of magnetic peaks with equal intensity on the (001) surface. Therefore, the two pairs of magnetic peaks with unequal intensity must stem from a magnetic structure with a spin configuration between \(1q\) and the simple \(2q\). In other words, the magnetic structure of NdSb is a \(2q\) modulation with broken \(C_{4}\) symmetry where the angle between the Nd spin moments and the cubic \(a,b\)-axes is between \(0^{\circ}\) and \(45^{\circ}\). In this scenario, the spin components along \(q^{A}_{\rm AFM}\) and \(q^{B}_{\rm AFM}\) are proportional to their modulation amplitudes, which are proportional to the magnetic peak amplitudes. Using the ratios of superlattice peak intensities, we estimate that the Nd moment rotates \(\theta\approx 18^{\circ}\) from \(q^{A}_{\rm AFM}\). The angle is weakly temperature-dependent below 13 K (see Methods and Extended Data Fig. 6). To understand the origin of the asymmetric magnetic modulation, we perform SP-STM measurements at various temperatures to examine the temperature dependence (Fig. 3b-d and Supplementary Information). At 11 K, the magnetic contrast is much weaker than that at 4.5 K, indicating a dramatic reduction of order moments. At around 14 K, the magnetic contrast is barely visible in SP-STM topography. From the Fourier transform, the magnetic peaks with higher intensity (\(q^{A}_{\rm AFM}\)) remain while the ones with lower intensity (\(q^{B}_{\rm AFM}\)) disappear. Above \(T_{\rm N}(\approx 15\) K), all the magnetic superlattice peaks disappear, indicating the recovery of the paramagnetic phase. Fig. 3e shows the line profile of the Fourier transform intensity along two orthogonal directions crossing \(q^{A}_{\rm AFM}\) and \(q^{B}_{\rm AFM}\). Clearly, \(q^{B}_{\rm AFM}\) vanishes above 13 K while \(q^{A}_{\rm AFM}\) persists to \(T_{\rm N}\), indicating a \(1q\) magnetic structure in this 2 K window. Therefore, there is a hidden \(1q\)-\(2q\) transition at 13 K, which can be considered as an anti-phase rotation of Nd spins in neighboring layers (Fig. 3a and Extended Data Fig. 8). So the \(1q\)-\(2q\) transition is a spin-rotation transition.
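As a back-of-the-envelope check of the \(\theta\approx 18^{\circ}\) estimate above, a short sketch under the stated proportionality between spin components and peak amplitudes; reading the Fourier peak heights as amplitudes (rather than intensities, i.e., amplitudes squared) and the 0.32 ratio itself are illustrative assumptions, not values taken from the data.

```python
import numpy as np

# Spin components along the two modulation directions are proportional to
# the superlattice peak amplitudes, so tan(theta) = A_B / A_A.
amplitude_ratio = 0.32  # hypothetical q^B_AFM / q^A_AFM amplitude ratio
theta = np.degrees(np.arctan(amplitude_ratio))
print(f"theta ~ {theta:.1f} deg")  # ~17.7 deg, close to the quoted ~18 deg
```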
Just below \(T_{\rm N}\), the peak intensity (\(q_{\rm AFM}^{A}\)) is weak and develops slowly. However, it grows sharply below \(T_{\rm R}\), accompanied by the emergence of the orthogonal modulation (\(q_{\rm AFM}^{B}\)), whose intensity also increases drastically. Therefore, our SP-STM results also suggest that the magnetic order parameter is greatly enhanced below the spin-rotation transition \(T_{\rm R}\). The hidden spin-rotation transition provides new insight into the mechanism of the mysterious splitting of emergent surface states observed in prior ARPES studies [20; 21]. In Fig. 4a, we plot the Fermi surface of NdSb measured by ARPES at 5.5 K. Consistent with previous results [21], the surface states are two-fold symmetric. In addition, our ARPES data cover a larger momentum space, and the surface states also emerge at the original \(M\)-point in the Brillouin zone of the paramagnetic phase, demonstrating the band-folding caused by the magnetic modulations. Fig. 4c shows the temperature evolution of band dispersion. Besides the two split surface bands reported before [21], which we denote as \(\alpha\) and \(\beta\), we observe a third surface band \(\gamma\), which was not reported in prior studies [20; 21]. We extract the momentum distribution curves (MDCs) at the Fermi energy (Fig. 4e). Above \(T_{\rm N}\), only a broad shoulder-like feature due to the bulk bands can be observed. Below \(T_{\rm N}\), a peak appears as a signature of the emergent surface states. The intensity of the peak increases gradually upon cooling but it does not split into \(\alpha\) and \(\beta\) bands until below 13 K (Fig. 4e). The \(\gamma\) band is almost temperature-independent. The onset temperature of band splitting agrees with the spin-rotation transition around 13 K observed in our SP-STM measurements. We note that similar behavior of peak splitting has been observed in the sister compound NdBi, where \(T_{\rm N}\) (\(\approx 24\) K) is higher. Interestingly, the emerging surface states do not split until below 20 K [20], suggesting that there is a similar hidden spin-rotation transition at 20 K in NdBi. Therefore, the hidden transition is likely a universal phenomenon for the Nd monopnictides. To understand the fundamental nature of the emergent surface states and their splitting, we perform DFT calculations to understand the bulk and surface state evolution from PM to \(1q\) and from \(1q\) to \(2q\) transitions. The PM phase of NdSb is topologically trivial, unlike NdBi. In contrast, the \(1q\) phase leads to a band anti-crossing near the Fermi energy (\(E_{\rm F}\)) due to folding along \(\Gamma-X\) in the bulk. Here, spins align along the \(x\)-direction in the \(1q\) phase. Consequently, two surface bands emerge from the upper and lower bulk band edges involved in the anti-crossing gap and disperse in the direction of \(\Gamma-X\) (Fig. 4d). The surface band splitting is small because of the small anti-crossing gap. Because NdSb exhibits multiple band crossings at different energies and momenta in the full Brillouin zone as a metal, the global topological invariant is trivial. However, this does not necessarily negate the existence of topological surface states near \(E_{\rm F}\) in the local momentum region around \(\Gamma\), distinct from the case of a true insulator. In the \(1q\)-\(2q\) transition, spins rotate in the \(xy\) plane to form a non-collinear order. Such spin rotation increases the anti-crossing gap and thus splits the surface bands further (Figs. 4d,g, and Extended Data Fig.
9), in accordance with ARPES findings. The surface band splitting in Fig. 4g corresponds to the zero-temperature case. At finite temperature, the splitting is expected to be proportional to the size of the order moments. Consistently, the observed temperature dependence of the surface band splitting shown in Fig. 4f follows that of the spin-polarized contrast shown in Fig. 3f. In addition, we verify that the Fermi surface projected on the (001) surface of the \(2q\) phase with \(\theta=20^{\circ}\) is two-fold symmetric in calculations (Fig. 4b), in good agreement with the ARPES results shown in Fig. 4a. Therefore, the ground state of NdSb is a \(2q\) antiferromagnetic structure with broken \(C_{4}\) symmetry, in good agreement with SP-STM results. Between 13 and 15 K, only one pair of magnetic peaks with weak intensity is observed in Fourier transform maps, corresponding to \(1q\) antiferromagnetic order with much reduced modulation amplitude, and thus a negligible splitting of the topological surface bands, as observed in ARPES measurements. The combined SP-STM, ARPES, and DFT studies of NdSb establish that the band folding due to \(2q\) antiferromagnetic modulation induces topological band inversion at Brillouin zone boundaries, resulting in emergent topological surface states. Furthermore, our studies unveil a hidden spin-rotation transition from \(1q\) to \(2q\) spin orders 2 K below \(T_{\rm N}\), which is responsible for the unconventional magnetic band splitting. The non-collinear spin order and hidden spin-rotation transition might also exist in other rare-earth monopnictides. Besides clarifying the mechanism of topological surface states, our work provides an example of a magnetism-induced topological phase transition, thus enabling an exciting possibility of tuning band topology with magnetism.
2301.04607
The Impact of National Culture on Innovation: A Comparative Analysis between Developed and Developing Nations during the Pre- and Post-Crisis Period 2007-2021
This empirical study investigates the impact of the Hofstede cultural dimensions (HCD) on the Global Innovation Index (GII) scores in four different years (2007, 2009, 2019 and 2021) to compare the impacts during the pre- and post-crisis (financial and COVID-19) period by employing ordinary least square (OLS) and robust least square (Robust) analyses. The purpose of this study is to identify the impact of cultural factors on the innovation development for different income groups during the pre- and post-crisis period. We found that, in general, the same cultural properties were required for countries to enhance innovation inputs and outputs regardless of pre- and post-crisis periods and time variances. The significant cultural factors (driving forces) of the innovation performance do not change over time. However, our empirical results revealed that not the crisis itself but the income group (either developed or developing) is the factor that influences the relationship between cultural properties and innovation. It is also worth noting that cultural properties have lost much of their impact on innovation, particularly in developing countries, during recent periods. It is highly likely that in terms of innovation, no cultural development or change can significantly impact the innovation output of developing countries without the construction of the appropriate systems.
Han-Sol Lee, Sergey U. Chernikov, Szabolcs Nagy, Ekaterina A. Degtereva
2022-12-26T10:20:29Z
http://arxiv.org/abs/2301.04607v1
The Impact of National Culture on Innovation: A Comparative Analysis between Developed and Developing Nations during the Pre- and Post-Crisis Period 2007-2021 ###### Abstract This empirical study investigates the impact of the Hofstede cultural dimensions (HCD) on the Global Innovation Index (GII) scores in four different years (2007, 2009, 2019 and 2021) to compare the impacts during the pre- and post-crisis (financial and COVID-19) period by employing ordinary least square (OLS) and robust least square (Robust) analyses. The purpose of this study is to identify the impact of cultural factors on the innovation development for different income groups during the pre- and post-crisis period. We found that, in general, the same cultural properties were required for countries to enhance innovation inputs and outputs regardless of pre- and post-crisis periods and time variances. The significant cultural factors (driving forces) of the innovation performance do not change over time. However, our empirical results revealed that not the crisis itself but the income group (either developed or developing) is the factor that influences the relationship between cultural properties and innovation. It is also worth noting that cultural properties have lost much of their impact on innovation, particularly in developing countries, during recent periods. It is highly likely that in terms of innovation, no cultural development or change can significantly impact the innovation output of developing countries without the construction of the appropriate systems. ## 1 Introduction Innovation plays an important role in promoting economic progress and competitiveness in both developed and developing countries (Sener and Sandogan, 2011). Many governments place innovation at the heart of their growth strategies (Patanakul and Pinto, 2014). Nowadays innovation encompasses social, business and technical innovations (Dawson and Daniel, 2010). Innovation in emerging economies is crucial to inspire people, which is especially true for the next generation of entrepreneurs and innovators (Reddy, 2011). Culture is a vital basis for innovation (Kaasa and Vadi, 2008) and has a significant positive impact on it (Laznjak, 2011; Rinne et al., 2012; Taylor and Wilson, 2012; Andrijauskiene and Dumciurivene, 2017; Prim et al., 2017; Cox and Khan, 2017; Yun et al., 2017; Handoyo, 2018; Bukowski and Rudnicki, 2019; Tekic and Tekic, 2021; Espig et al., 2021). There are several studies exploring how culture affects innovation (Sun, 2009); however, the evolution of cultural dimensions that influence countries' innovation performance over time and the impact of the crisis on them have not yet been studied. Theoretically, in general, some specific cultural features are continuously manifested as desirable for innovation development. However, during a crisis, society must cooperate tightly under strong leadership to respond efficiently to sudden changes and recover from them quickly. Thus, we identify whether different cultural properties are required for innovation development during pre- and post-crisis periods. In addition, the
2309.14236
MoDem-V2: Visuo-Motor World Models for Real-World Robot Manipulation
Robotic systems that aspire to operate in uninstrumented real-world environments must perceive the world directly via onboard sensing. Vision-based learning systems aim to eliminate the need for environment instrumentation by building an implicit understanding of the world based on raw pixels, but navigating the contact-rich high-dimensional search space from solely sparse visual reward signals significantly exacerbates the challenge of exploration. The applicability of such systems is thus typically restricted to simulated or heavily engineered environments since agent exploration in the real-world without the guidance of explicit state estimation and dense rewards can lead to unsafe behavior and safety faults that are catastrophic. In this study, we isolate the root causes behind these limitations to develop a system, called MoDem-V2, capable of learning contact-rich manipulation directly in the uninstrumented real world. Building on the latest algorithmic advancements in model-based reinforcement learning (MBRL), demo-bootstrapping, and effective exploration, MoDem-V2 can acquire contact-rich dexterous manipulation skills directly in the real world. We identify key ingredients for leveraging demonstrations in model learning while respecting real-world safety considerations -- exploration centering, agency handover, and actor-critic ensembles. We empirically demonstrate the contribution of these ingredients in four complex visuo-motor manipulation problems in both simulation and the real world. To the best of our knowledge, our work presents the first successful system for demonstration-augmented visual MBRL trained directly in the real world. Visit https://sites.google.com/view/modem-v2 for videos and more details.
Patrick Lancaster, Nicklas Hansen, Aravind Rajeswaran, Vikash Kumar
2023-09-25T15:51:29Z
http://arxiv.org/abs/2309.14236v2
# MoDem-V2: Visuo-Motor World Models for Real-World Robot Manipulation ###### Abstract Robotic systems that aspire to operate in _uninstrumented real-world environments_ must perceive the world directly via onboard sensing. Vision-based learning systems aim to eliminate the need for environment instrumentation by building an implicit understanding of the world based on raw pixels, but navigating the contact-rich high-dimensional search space from solely sparse visual reward signals significantly exacerbates the challenge of exploration. The applicability of such systems is thus typically restricted to simulated or heavily engineered environments since agent exploration in the real world without the guidance of explicit state estimation and dense rewards can lead to unsafe behavior and safety faults that are catastrophic. In this study, we isolate the root causes behind these limitations to develop a system, called MoDem-V2, capable of learning contact-rich manipulation directly in the uninstrumented real world. Building on the latest algorithmic advancements in model-based reinforcement learning (MBRL), demo-bootstrapping, and effective exploration, MoDem-V2 can acquire contact-rich dexterous manipulation skills directly in the real world. We identify key ingredients for leveraging demonstrations in model learning while respecting real-world safety considerations - exploration centering, agency handover, and actor-critic ensembles. We empirically demonstrate the contribution of these ingredients in four complex visuo-motor manipulation problems in both simulation and the real world. To the best of our knowledge, our work presents the first successful system for demonstration-augmented visual MBRL trained directly in the real world. Visit sites.google.com/view/modem-v2 for videos and more details. ## I Introduction Robot agents learning manipulation skills directly from raw visual feedback avoid the need for explicit state estimation and extensive environment instrumentation for rewards, but face heightened exploration, and thereby safety, challenges in navigating the contact-rich high-dimensional search space purely based on sparse visual reward signals. These challenges are especially critical for agents operating in the real world, where inefficiency can be expensive and safety faults can be catastrophic. One approach to developing robot manipulation policies that avoids such safety restrictions is simulation-to-reality transfer [1, 2, 3, 4]. However, the creation and calibration of accurate physics simulations (from first principles) for contact-rich tasks is extremely challenging and time-consuming. In this work, we alternatively study the use of visual world model learning [5, 6, 7] for robot manipulation directly from real-world interaction. Model-Based Reinforcement Learning (MBRL) with visual world models involves the learning of dynamics models using real-world data, directly from visual observations. When applied to robot manipulation, visual MBRL can mitigate the need for detailed physics simulations from first principles, as well as the need for specialized sensor instrumentation and state estimation pipelines. However, visual MBRL for real-world robotics still has two major challenges: (a) sample inefficiency; and (b) sparse/weakly shaped rewards.
While a number of recent algorithms such as RRL [8] and MoDem [9] circumvent these challenges by leveraging a small number of expert demonstrations to improve sample-efficiency, they rely on aggressive exploration to compensate for weak reward supervision that can result in unsafe behaviors, restricting their application to simulated or heavily engineered scenarios. We indeed found MoDem to be infeasible for direct application in the real world due to excessively aggressive exploratory behavior. Even with significant engineering investments for safety, we found that the built-in low-level hardware controllers/drivers fault repetitively owing to excessive velocity, acceleration, and torque in the robot's joints that exert dangerous amounts of force on the environment or the robot itself.

Fig. 1: We use MoDem-V2 to train the robot on four contact-rich manipulation tasks. These tasks cover a wide range of manipulation skills, namely non-prehensile pushing, object picking, and in-hand manipulation. In recognition of the difficulty of robust pose tracking and dense reward specification in the real world, the robot performs these tasks using only raw visual feedback, proprioceptive signals, and sparse rewards.

While some level of safety can be imposed through hard-coded limits on velocity, acceleration, and torque (as done in all of our experiments), this is an insufficient solution for preventing unsafe behavior during online interaction (Figure 2 left). Tuning these task-specific limits for interaction in the real world is costly and fraught with operational and safety challenges. Balancing the tightness of these limits presents a further challenge, as aggressive limits can prohibit the robot from exerting the minimum energy needed to solve the task and weak limits will fail to inhibit unsafe actions. Finally, these static limits fail to incorporate the need for dynamic risk sensitivities across relevant time scales (such as at the scale of a single episode, or even at the scale of training epochs). Furthermore, simply penalizing the amount of torque exerted by the agent is an ineffective, retrospective solution that does not prevent unsafe actions at the onset of exploration, as shown in Figure 2 and Figure 8. How can we get around these challenges? Our key insight is that conservative exploration can respect the safety constraints of real-world environments while still allowing the agent to modulate its strategy according to task progress such that it is able to learn quickly and efficiently. We translate this insight into implementation in three steps. First, rather than sampling actions from the entire action space, **warm-starting** exploration with actions sampled from a policy learned via behavioral cloning (BC) [10] prevents the agent from straying far from the provided demonstrations at the onset of online learning. Second, as our world model gains better coverage through online exploration, **agency transfer** gradually shifts the agent from executing BC policy actions to short-horizon planning. Agency transfer provides a mechanism for increased exploration while stymying over-optimistic evaluation of regions of the observation-action space far from the agent's previous experience. Third, we use **actor-critic ensembles** to estimate the epistemic uncertainty of the value estimations of these short-horizon trajectories, allowing the agent to avoid overly optimistic actions. We integrate these three enhancements into the recently proposed MoDem [9] algorithm to develop MoDem-V2.
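As an illustration of the third ingredient, a minimal sketch of how an actor-critic ensemble can discount uncertain value estimates during short-horizon planning; the lower-confidence-bound aggregation (mean minus a multiple of the standard deviation) is one standard choice and is an assumption here, not necessarily the exact rule used in MoDem-V2.

```python
import torch

def pessimistic_value(q_ensemble, z, a, kappa=1.0):
    """Score a latent state-action pair with an ensemble of critics.
    High disagreement across critics signals epistemic uncertainty, so the
    lower confidence bound steers planning away from over-optimistic actions
    in regions far from the agent's previous experience."""
    qs = torch.stack([q(z, a) for q in q_ensemble])   # (n_critics, batch)
    return qs.mean(dim=0) - kappa * qs.std(dim=0)     # lower confidence bound
```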
Despite the simplicity of the individual components, their combination is quite unique in its effectiveness at transforming overly aggressive, fault-prone MoDem agents into MoDem-V2 agents that efficiently and safely learn manipulation behaviors in the real world. Our work is, to the best of our knowledge, the first successful demonstration of demonstration-augmented visual MBRL trained directly in the real world. To evaluate the effectiveness of our approach, we study four robot manipulation tasks from visual feedback (Figure 1), and their simulated counterparts. Our main contributions are: * We **identify unsafe exploration and over-optimism** as the key issues in leveraging visual MBRL algorithms for real-world applications. * From this insight, we develop MoDem-V2 by integrating three ingredients into MoDem, namely **policy centering, agency transfer**, and **actor-critic ensembles**. * We demonstrate that MoDem-V2's conservative exploration significantly enhances its safety profile compared to other baselines, while still retaining the sample-efficient learning capability of MoDem. * Finally, we demonstrate that MoDem-V2 can **quickly learn a variety of contact-rich manipulation skills**, such as pushing, picking, and in-hand manipulation directly in the real world. * We contribute towards lowering the barrier of entry for RL in the real world by **open-sourcing our implementation of MoDem-V2** and discussing practical considerations for training on real hardware. ## II Preliminaries We begin by introducing notation and providing an overview of the MBRL setting. **Notation:** The general setting of an agent interacting with its environment can be formulated as a Markov Decision Process (MDP) described by the tuple \(\mathcal{M}:=(\mathcal{S},\mathcal{A},\mathcal{T},R,\gamma)\). Here, \(\mathcal{S}\) denotes the state space, \(\mathcal{A}\) is the action space, the conditional probability distribution \(\mathbf{s}_{t+1}\sim\mathcal{T}(\cdot|\mathbf{s}_{t},\mathbf{a}_{t})\) defines the dynamics of the MDP, and a scalar reward function is given by \(r_{t}=R(\mathbf{s}_{t},\mathbf{a}_{t})\). Finally, \(\gamma\in[0,1)\) defines the discount factor of the MDP, trading off future rewards against current ones. The goal for an agent is to learn a policy \(\pi:\mathcal{S}\mapsto\mathcal{A}\) that achieves high long-term performance, given by \(\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\right]\). We specifically consider the problem of robotic manipulation from visual feedback on real hardware. We aim to learn a control policy that controls a physical robot from RGB observations provided by cameras placed in the scene, and robot proprioception. Fig. 2: Agent performance on the inclined pushing task before failure due to safety violations. An asterisk indicates that agent training was terminated due to significant safety violations. _Left:_ On a real-world robot, MoDem violates (robot manufacturer specified) torque limits at the onset of online interaction and is unable to learn, whereas MoDem-V2's conservative exploration allows it to perfect the task. _Right:_ Further evaluation in simulation reveals that simply penalizing the amount of torque exerted by the robot does not prevent termination due to significant safety violations. Other baseline agents are either terminated due to unsafe behavior or achieve significantly lower success than MoDem-V2.
We model this setting as a **high-dimensional** MDP with **sparse rewards.** This assumes that while the state space of the MDP (_e.g._ object poses) is not directly observable by the agent, a sufficient representation of the state can be well-approximated through the combination \(\mathbf{s}=(\mathbf{x},\mathbf{q})\), where \(\mathbf{x}\) denotes stacked RGB observations from the robot's camera(s) and \(\mathbf{q}\) denotes proprioception from the robot. Finally, we only assume access to a sparse task-completion reward, which is much easier to obtain via visual inputs compared to a detailed, well-shaped reward function. The final goal is to learn a policy that achieves high performance, using minimal online interactions, while respecting the safety considerations of hardware. **MoDem - Model-Based Reinforcement Learning with Demonstrations:** Our approach is based on _MoDem_ [9], an MBRL algorithm that combines _(i)_ model predictive control (MPC) and the decoder-free world model of TD-MPC [7] with _(ii)_ a small number of demonstrations to efficiently solve continuous control problems with limited online interaction. Concretely, MoDem learns the following five components: \[\begin{array}{ll}\text{State embedding}&\mathbf{z}=h_{\theta}(\mathbf{s})\\ \text{Latent dynamics}&\mathbf{z}^{\prime}=d_{\theta}(\mathbf{z},\mathbf{a})\\ \text{Reward predictor}&\hat{r}=R_{\theta}(\mathbf{z},\mathbf{a})\\ \text{Terminal value}&\hat{q}=Q_{\theta}(\mathbf{z},\mathbf{a})\\ \text{Policy guide}&\hat{\mathbf{a}}=\pi_{\theta}(\mathbf{z})\end{array} \tag{1}\] where \(h_{\theta},d_{\theta},R_{\theta}\) and \(Q_{\theta}\) are learned end-to-end using a combination of joint-embedding predictive learning [11], reward prediction, and Temporal Difference (TD) learning, and \(\pi_{\theta}\) is a deterministic policy that learns to maximize \(Q_{\theta}\) conditioned on a latent state \(\mathbf{z}\) (see Equation 2 of the Appendix). Throughout this work, we will refer to \((h_{\theta},d_{\theta},R_{\theta},Q_{\theta})\) as the _world model_, and \(\pi_{\theta}\) as the _policy_. See [7, 9] for additional details on the world model. While _MoDem_ has been shown to be effective in simulation, its aggressive exploration limits its applicability in domains where safety constraints cannot be overlooked (Figure 2). ## III Safety Agents that seek to learn via continuous operation need to respect hardware and environmental safety constraints. Such constraints are diverse, obscure, and unobservable (they are intrinsic to low-level hardware details or lie beyond the available sensing) and therefore cannot be directly accounted for by the agent during operation. Alternatives such as action penalization and user-defined safety limits are also insufficient (Figure 2), requiring extensive human intervention and monitoring during operation. In this work, we refer to violations of these unobservable constraints as **safety violations**. For real-world operations, they are defined as hardware faults that require human intervention. For simulated runs, we define them as violations (unobserved by the policy) in either the robot's torque limits (as defined by the robot manufacturer) or excessive contact force applied by the robot's end effector.
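For the simulated setting, such a violation check can be sketched as follows (our illustration; the 87/12 N·m values mirror the published Franka Panda joint specifications, but the exact thresholds used in the paper are not stated and the contact-force limit is hypothetical):

```python
import numpy as np

# Illustrative limits only; see the lead-in above for the assumptions made here.
JOINT_TORQUE_LIMITS = np.array([87.0, 87.0, 87.0, 87.0, 12.0, 12.0, 12.0])  # N*m
MAX_CONTACT_FORCE = 50.0  # N, hypothetical end-effector force threshold

def is_safety_violation(joint_torques, ee_contact_force):
    """True if this step exceeds either limit; note that neither quantity
    is observed by the policy, matching the definition above."""
    torque_violation = np.any(np.abs(joint_torques) > JOINT_TORQUE_LIMITS)
    return bool(torque_violation or ee_contact_force > MAX_CONTACT_FORCE)
```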
```
Require: \(\theta\): learned network parameters
         \(\mu,\sigma\): initial parameters for \(\mathcal{N}\)
         \(N\): num sample trajectories
         \(\mathbf{s}_{0},h\): current state, planning horizon
         \(\tau\): trajectory weighting temperature
         \(\alpha\): probability of using model rollouts
         \(w_{1},w_{2}\): ensemble mixing weights
 1: encode state \(\mathbf{z}_{0}\leftarrow h_{\theta}(\mathbf{s}_{0})\)  \(\triangleleft\) State embedding
 2: if \(\mathrm{rand}()>\alpha\) then
 3:   \(\Gamma\coloneqq\{\mathbf{a}_{0}\}^{N}\sim\pi_{\theta}^{BC}(\mathbf{z}_{0})\)  \(\triangleleft\) Center actions around BC
 4:   \(\phi_{\Gamma}=Q_{\theta}^{BC}(\mathbf{z}_{0},\{\mathbf{a}_{0}\}^{N})\)  \(\triangleleft\) Critic evaluation
 5: else
 6:   \(\Gamma\coloneqq\mathbf{a}_{t:t+h}\sim\mathcal{N}(\mu,\sigma)\)  \(\triangleleft\) Prior sampling
 7:   \(\Gamma\coloneqq\mathbf{a}_{t:t+h}\sim\pi_{\theta}^{1:M},d_{\theta}\)  \(\triangleleft\) Policy ensemble sampling
 8:   for all \(N\) trajectories \(\Gamma_{i}=(\mathbf{a}_{t},\mathbf{a}_{t+1},\dots,\mathbf{a}_{t+h})\) do
 9:     for step \(t=0..h-1\) do
10:       \(\phi_{\Gamma}=\phi_{\Gamma}+\gamma^{t}R_{\theta}(\mathbf{z}_{t},\mathbf{a}_{t})\)  \(\triangleleft\) Reward
11:       \(\mathbf{z}_{t+1}\leftarrow d_{\theta}(\mathbf{z}_{t},\mathbf{a}_{t})\)  \(\triangleleft\) Latent transition
12:     # Ensemble of terminal values
13:     \(\phi_{\Gamma}^{1:M}=\phi_{\Gamma}+\gamma^{h}Q_{\theta}^{1:M}(\mathbf{z}_{h},\mathbf{a}_{h})\)
14:     # Epistemic uncertainty estimation
15:     \(\phi_{\Gamma}=w_{1}\,\mathrm{mean}(\phi_{\Gamma}^{1:M})+w_{2}\,\mathrm{std}(\phi_{\Gamma}^{1:M})\)
16: \(\Omega=e^{\tau(\phi_{\Gamma})}\), \(\mu=\frac{\sum_{i=1}^{N}\Omega_{i}\Gamma_{i}}{\sum_{i=1}^{N}\Omega_{i}}\), \(\sigma=\sqrt{\frac{\sum_{i=1}^{N}\Omega_{i}(\Gamma_{i}-\mu)^{2}}{\sum_{i=1}^{N}\Omega_{i}}}\)
17: return \(\mathbf{a}\sim\mathcal{N}(\mu,\mathbf{I}\sigma^{2})\)
```
**Algorithm 1** Planning procedure of MoDem-V2 ## IV Method In this work, we aim to learn manipulation skills through environment interaction on _real_ robots, from _visual_ feedback, and with _minimal_ human supervision and intervention. We first discuss two important metrics, sample-efficiency and safety, for any agent that aims to quickly and safely learn manipulation skills in the real world. We then address MoDem's limitations with three proposed enhancements, developing MoDem-V2. **Strengths and Weaknesses of MoDem:** MoDem accelerates learning through a three-stage framework in which \(h_{\theta},\pi_{\theta}\) are first pretrained on a set of demonstrations using Behavior Cloning (BC), and the resulting policy \(\pi_{\theta}\circ h_{\theta}(\cdot)\) is then used to _seed_ the model, _i.e._ collect a small initial dataset for learning the model (with Gaussian noise injected into \(\pi_{\theta}\) for exploration). After initializing the model on seeding data, the world model is iteratively used to collect new data via online interaction, and is optimized on all data: demonstrations, seeding data, and online interaction data. With this design, MoDem demonstrates accelerated model learning in a collection of simulated tasks. Yet, when exposed to the real world, MoDem faces a variety of challenges that were suppressed during its original simulation study. We find that MoDem exerts excessive forces and torques far beyond those exemplified in the provided demonstrations (see Section VI).
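Before examining these limitations, the trajectory-scoring step of Algorithm 1 deserves a concrete illustration. The sketch below (ours, with illustrative shapes and default weights) shows the ensemble mixing with \(w_{1}>0\) and \(w_{2}<0\) followed by the exponentially weighted refit of the sampling distribution; the max-subtraction is a standard numerical-stability tweak not spelled out in the algorithm:

```python
import numpy as np

def score_and_refit(values_1M, trajectories, w1=1.0, w2=-1.0, tau=0.5):
    """values_1M: (N, M) ensemble value estimates per sampled trajectory.
    trajectories: (N, h, action_dim) candidate action sequences."""
    # Mix ensemble mean and std; w2 < 0 penalizes epistemic uncertainty.
    phi = w1 * values_1M.mean(axis=1) + w2 * values_1M.std(axis=1)
    omega = np.exp(tau * (phi - phi.max()))  # subtract max for stability
    omega /= omega.sum()
    # Exponentially weighted refit of N(mu, sigma^2) over action sequences.
    mu = np.einsum("n,nha->ha", omega, trajectories)
    sigma = np.sqrt(np.einsum("n,nha->ha", omega, (trajectories - mu) ** 2))
    return mu, sigma
```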
Even at the beginning of the online interaction phase, when the agent has only observed data close to the BC policy, MoDem relies on its world model and value function to discriminate between high- and low-reward actions. Thus when MoDem samples actions across the entire action space, the world model and value function cannot (at least initially) provide good estimates. This can result in task failure due to poor action selection, or lead to the robot exerting unsafe forces and torques by choosing consecutive actions that are far apart in action space. Our primary contribution lies in eliminating these limitations while also improving the effectiveness of the original method. ### _MoDem-V2: Real-World Model-Based Reinforcement Learning with Demonstrations_ MoDem generates agent actions by first mapping raw observations into a learned lower-dimensional space, and then performing efficient short-horizon planning in that latent space with its learned dynamics model and value function. We propose the following three adaptations to MoDem's planning procedure in order to improve its safety while maintaining its strengths in autonomy and data-efficiency (Algorithm 1). **Policy-centered actions:** Rather than sampling actions from across the entire action space, we propose to sample actions from our learned policy. This more conservative exploration strategy reduces the likelihood of world model and value function evaluation over unseen regions of the state-action space, enabling them to better discriminate the quality of the generated actions. **Agency transfer from BC actions to MPC:** At the beginning of the online interaction phase, MoDem immediately begins using its learned world model and value function to do MPC. Yet both of these components have only seen limited data near the BC policy, so relying on them to choose actions for multiple consecutive timesteps at the beginning of interaction can quickly lead the agent into an unexplored region of the observation-action space from which it cannot recover. Our remedy is to _gradually_ shift from executing actions sampled from the BC policy to actions computed by short-horizon planning. We implement this in Algorithm 1 with a hyperparameter \(\alpha\) that is initialized to 0 at the beginning of interaction and linearly increases to 1.0 over a fixed number of interaction steps. **Actor-critic ensembles for uncertainty-aware planning:** The use of actor-critic ensembles [12] improves the agent's value estimations in two primary ways. First, note that each actor is trained by optimizing it to maximize its corresponding critic. While this provides a solution for efficiently finding the maximum value of Q over actions, it is subject to significant overestimation bias [13]. We mitigate this by only evaluating a critic with final trajectory actions produced by policies not directly optimized to maximize that particular critic. Actor-critic ensembles also improve value estimation by providing the agent with a pool of independently trained value functions, each of which computes its own value estimate. By estimating the epistemic uncertainty [14] of a trajectory, the agent can make uncertainty-aware decisions. We incorporate this into MoDem-V2 with weights \(w_{1}>0\) and \(w_{2}<0\) in Algorithm 1. ## V Experimental Design We design experiments to evaluate the design choices behind MoDem-V2 in enabling real-world contact-rich manipulation.
Our investigation focuses on the following directions: * How sample-efficient is MoDem-V2 relative to other methods? * Is MoDem-V2 safer (_i.e._ fewer safety violations) than other methods including MoDem? * Does MoDem-V2 actually enable physical robots to learn real-world manipulation tasks? Our first step in answering these questions is to use simulation [15] to compare our method to both the original MoDem and other strong reinforcement learning baselines that also use demonstrations to guide policy learning: Demonstration Augmented Policy Gradients (DAPG) [16] and the Framework for Efficient Robot Manipulation (FERM) [17]. We also provide further analysis of MoDem-V2 by ablating each of its three design decisions. Finally, we deploy MoDem-V2 onto a physical robot and evaluate its capability to learn four different manipulation tasks in the real world. ### _Hardware_ We adopt a set of core hardware components that are common across all of our experimental tasks. Each task uses a Franka Panda arm. The pushing and picking tasks use a Robotiq two-fingered gripper, while the in-hand reorientation task uses a ten-degree-of-freedom D'Manus hand [18] from the _ROBEL_ ecosystem [19]. For perception, three RealSense D435 cameras are mounted to the left of, to the right of, and above the robot. Our hardware setup is depicted in Figure 3. Fig. 3: A view of the in-hand re-orientation task as an example of our hardware setup. ### _Task Suite_ We evaluate MoDem-V2 on four manipulation tasks in both simulation and the real world. These tasks encompass a variety of manipulation skills, namely pushing, picking, and in-hand manipulation as shown in Figure 1. We briefly describe each task below; see Subsection IX-B for more details. **Planar Pushing:** This task requires the robot to push an oblong object towards a fixed goal position on a table top. This task is likely the easiest of all four tasks, and we view it as a base case with which to compare the other tasks. **Inclined Pushing:** This task requires the robot to push an object up an incline to reach a fixed goal position. During execution of the task, the robot must raise its gripper such that it can progress up the incline while also making sufficiently precise contact with the block to prevent it from slipping beneath or around the side of the gripper. **Bin Picking:** To complete this task, the robot must grasp a juice container and then raise it out of the bin. This task requires accurate positioning of the gripper because the (mostly) non-deformable container has a primary width that is approximately 65% of the gripper's maximum aperture. This task also requires the robot to disambiguate spatially similar states; _e.g._ if the gripper is above the bin, the robot must understand whether or not the object is in its grasp so that it can decide to go down towards the bin to pick up the object or stay where it is in order to receive reward. **In-Hand Reorientation:** This task requires the robot to grasp a water bottle lying on its side and then in-hand manipulate it to an upright position. Using the multi-fingered D'Manus hand more than doubles the dimensionality of the action space relative to the previous tasks. ## VI Experiments In this section, we evaluate MoDem-V2 against strong baselines in simulation and measure its performance in real-world environments. Our simulation experiments measure both sample-efficiency and safety (defined in Section III), two important aspects for any learning method that is used in the real world.
We then use MoDem-V2 to train a robot to perform all four manipulation tasks in the real world. All experiments train an initial policy with only ten demonstrations, and each evaluation is aggregated over 30 trials. ### _Simulated Comparison to Baselines_ With respect to sample-efficiency, MoDem-V2 and MoDem both significantly outperform DAPG (State), as shown in Figure 4 (bottom). This is despite DAPG having access to privileged state information (such as the object's pose) and dense rewards (such as rewards based on the distance between the gripper and the object)! Although FERM achieves similar performance to both versions of MoDem on the easier pushing tasks, it is unable to learn the bin picking and in-hand re-orientation tasks. While MoDem achieves similar efficiency to MoDem-V2 on the first three tasks, it suffers a significant performance drop in the early stages of training across all tasks. This coincides with the point at which MoDem begins to aggressively explore the action space, resulting in the robot applying excessive forces/torques beyond those observed in the provided demonstrations. We measured the number of safety violations that occurred throughout training for MoDem and MoDem-V2, as shown in Figure 4 (top). While both methods initially commit few violations, since their BC policies were trained from the same demonstrations, MoDem's safety violations sharply increase as the interaction phase begins. Thanks to our design decisions, the number of violations committed by MoDem-V2 remains comparatively low throughout online learning. Here MoDem-V2 demonstrates that it can achieve similar or better sample-efficiency than MoDem, while committing significantly fewer safety violations and thereby exhibiting safer behavior. Fig. 4: The number of safety violations (top row) and success rate (bottom row) for each of the four manipulation tasks in simulation. Lower is better for safety violations (top row) while higher is better for episode success (bottom row). While both MoDem-V2 and MoDem achieve similar or better sample-efficiency than all of the baselines, MoDem-V2 exhibits significantly safer learning, as evidenced by the drastically lower number of safety violations. ### _Ablation of design choices_ We perform ablations of MoDem-V2 by individually adding each improvement specified in Subsection IV-A to MoDem. As shown in Figure 5, we found that all of the ablations generally maintained or improved over the sample-efficiency of MoDem while significantly improving safety by committing fewer violations. The one exception to this is the bin picking task, for which the _Centering_ and _Ensemble_ ablations commit a greater number of safety violations whereas MoDem-V2 exhibits superior performance. When comparing ablations, it is clear that each individual modification has both benefits and drawbacks. First, note that _Centering_ is a necessary sub-component of _Schedule_. While _Schedule_ is generally safer than _Ensemble_, _Ensemble_ typically has better sample-efficiency. By combining these two ingredients, MoDem-V2 is able to achieve the improved sample-efficiency of _Ensemble_ while maintaining the improved safety profile of _Schedule_. Fig. 5: Ablations of the three MoDem-V2 enhancements for all four tasks. Lower is better for safety violations (top row) while higher is better for episode success (bottom row). MoDem-V2 achieves both the higher sample-efficiency of _Ensemble_ and the increased safety profile of _Schedule_. ### _Real World Results_ As suggested by our simulation experiments, the original MoDem is unsafe for learning real-world manipulation tasks. When we did attempt to run MoDem on our real robot, we found that its aggressive exploration frequently violated safety limits at the beginning of online interaction.
For example, on the inclined pushing task, the robot triggered a safety fault within the first two exploration episodes because it exerted excessive force/torque on the incline. Due to the unsafe behavior MoDem induces and the significant human intervention that would be required, it is not feasible to evaluate MoDem in the real world. In contrast, MoDem-V2 is capable of safely learning manipulation tasks with minimal human intervention. MoDem-V2 enabled the robot to significantly exceed the performance of its initial BC policy with about two hours' worth of online training data or less. Figure 6 shows the success rate of the initial policy cloned from just ten demonstrations and the best MoDem-V2 agent performance achieved throughout online training, as well as example trajectories from the MoDem-V2 agents. Please see sites.google.com/view/modem-v2 for additional videos and results. ## VII Related Work **Visual MBRL:** Improving the sample-efficiency of visual RL by learning a model of the environment has been explored extensively in the literature [20, 5, 6, 21, 22, 23, 7]. Here we focus on MBRL algorithms that leverage planning. Prior work typically learns a latent dynamics model from online interaction, and uses a sampling-based planning technique for action selection with candidate action sequences evaluated by the learned model. For continuous control, planning can be formalized as Model Predictive Control (MPC) [20, 6, 7, 9], whereas Monte-Carlo Tree Search (MCTS) is used for discrete action spaces [22, 23]. Regardless, the majority of work on visual MBRL focuses on sample-efficiency in simulated tasks, where practicality and safety are of limited concern. Our work extends the MBRL algorithm of [7, 9], which has already been shown to be very sample-efficient in simulation, and instead focuses on the challenges that arise when training MBRL in the real world. **RL with Demonstrations:** RL and imitation learning (IL) each have their own drawbacks: RL is notoriously sample-inefficient due to limited supervision, and IL is difficult to scale due to the immense cost of data collection. In recent years, there has been increasing interest in combining the two paradigms. Depending on the data composition, this can be viewed as either finetuning an imitation policy with RL [24], or accelerating RL with a small number of demonstrations [16, 17, 9]. For instance, Baker et al. [24] learn a general behavioral prior with BC on a large-scale dataset of Minecraft videos, and finetune the policy with RL to solve hard-exploration tasks in this domain. Rajeswaran et al. [16] and Zhan et al. [17] demonstrate that augmenting model-free policy gradient and Q-learning algorithms, respectively, with demonstrations effectively alleviates the difficulty of exploration. Hansen et al. [9] show that model-based methods, too, benefit from a small number of demonstrations, and learn more efficiently than model-free alternatives in this setting. We identify shortcomings of [9], propose solutions, and demonstrate the effectiveness of our method on real hardware.
**Real-World Robot Learning:** Researchers have explored a wide variety of approaches for robot learning on real hardware, most of which fall into one of three categories: learning from human demonstrations [25, 26, 27, 28], learning from large uncurated datasets [29, 30, 20], learning from online interaction via RL [31, 32], or any combination of them [17, 33, 34, 35]. Our work is most similar to Zhan et al. [17] in problem setting (RL with demonstrations) and experimental setup (robotic manipulation tasks in the real world). However, we focus on the unique challenges and opportunities of MBRL for real-world robot learning. Wu et al. [32] study real-world MBRL, but consider simpler tasks with limited variation and do not leverage demonstrations. ## VIII Discussion In this work, we tackled the challenge of learning manipulation skills in the real world from only proprioceptive and visual feedback with sparse rewards. We developed MoDem-V2, a real-world-ready adaptation of MoDem, by proposing to initially center rollouts around the BC policy, gradually increase the proportion of actions chosen by the learned world model, and implement uncertainty-aware planning with actor-critic ensembles. We evaluated the sample-efficiency and safety of MoDem-V2 against strong baselines in simulation and found that it maintained the high sample-efficiency of MoDem while exhibiting significantly safer behavior through lower contact force exertion. We found that MoDem-V2 enabled a real, physical robot to learn a variety of manipulation skills, such as pushing, picking, and in-hand manipulation, from an hour or less of interaction data. **Limitations**: One limitation of our work is that it requires a small number of demonstrations, which may not always be available or easy to collect. Also, this work assumes that the environment can be reset to a narrow set of starting states, which may not always be the case in the real world. In future work, we hope to explore the reuse of our learned world model across changes in manipulated object and task goal. Fig. 6: _Left:_ Example rollouts from our MoDem-V2 agents on real-world manipulation tasks. _Right:_ The success rate of our best-performing policy and its initial BC policy. MoDem-V2 is able to outperform its BC policy within hours of training or less.
2309.13559
Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV
In this paper, we propose a novel swashplateless-elevon actuation (SEA) for dual-rotor tail-sitter vertical takeoff and landing (VTOL) unmanned aerial vehicles (UAVs). In contrast to the conventional elevon actuation (CEA), which controls both pitch and yaw using elevons, the SEA adopts swashplateless mechanisms to generate an extra moment through motor speed modulation to control pitch and uses elevons solely for controlling yaw, without requiring additional actuators. This decoupled control strategy mitigates the saturation of elevons' deflection needed for large pitch and yaw control actions, thus improving the UAV's trajectory tracking and disturbance rejection performance in the presence of large external disturbances. Furthermore, the SEA overcomes the actuation degradation issues experienced by the CEA when the UAV is in close proximity to the ground, leading to a smoother and more stable take-off process. We validate and compare the performances of the SEA and the CEA in various real-world flight conditions, including take-off, trajectory tracking, and hover flight and position steps under external disturbance. Experimental results demonstrate that the SEA has better performance than the CEA. Moreover, we verify the SEA's feasibility in the attitude transition process and fixed-wing-mode flight of the VTOL UAV. The results indicate that the SEA can accurately control pitch in the presence of high-speed incoming airflow and maintain a stable attitude during fixed-wing mode flight. Video of all experiments can be found at youtube.com/watch?v=Sx9Rk4Zf7sQ
Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang
2023-09-24T06:16:23Z
http://arxiv.org/abs/2309.13559v1
# Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV ###### Abstract In this paper, we propose a novel swashplateless-elevon actuation (SEA) for dual-rotor tail-sitter vertical takeoff and landing (VTOL) unmanned aerial vehicles (UAVs). In contrast to the conventional elevon actuation (CEA), which controls both pitch and yaw using elevons, the SEA adopts swashplateless mechanisms to generate an extra moment through motor speed modulation to control pitch and uses elevons solely for controlling yaw, without requiring additional actuators. This decoupled control strategy mitigates the saturation of elevons' deflection needed for large pitch and yaw control actions, thus improving the UAV's trajectory tracking and disturbance rejection performance in the presence of large external disturbances. Furthermore, the SEA overcomes the actuation degradation issues experienced by the CEA when the UAV is in close proximity to the ground, leading to a smoother and more stable take-off process. We validate and compare the performances of the SEA and the CEA in various real-world flight conditions, including take-off, trajectory tracking, and hover flight and position steps under external disturbance. Experimental results demonstrate that the SEA has better performance than the CEA. Moreover, we verify the SEA's feasibility in the attitude transition process and fixed-wing-mode flight of the VTOL UAV. The results indicate that the SEA can accurately control pitch in the presence of high-speed incoming airflow and maintain a stable attitude during fixed-wing mode flight. Video of all experiments can be found at youtube.com/watch?v=Sx9Rk4Zf7sQ ## I Introduction Vertical takeoff and landing (VTOL) unmanned aerial vehicles (UAVs) can take off and land vertically like multi-rotor UAVs and achieve efficient long-range and high-speed flight similar to fixed-wing UAVs [1]. VTOL UAVs can be implemented using different configurations, such as tilt-rotor [2, 3], tilt-wing [4, 5], and tail-sitter [6, 7, 8]. Among these, the tail-sitter UAV is advantageous due to its ability to transition its attitude to enter fixed-wing flight mode without additional tilting mechanisms, resulting in a simpler and more compact structure [9]. Tail-sitter UAVs can be controlled using various actuator configurations, such as quad-rotor type [10, 11] and dual-rotor type with additional control surfaces (e.g., elevons) [12, 13, 14]. Compared with the quad-rotor type, the dual-rotor type uses fewer motors and propellers, making it less expensive, lighter, and more portable. However, the limited deflection range and small size of the additional control surfaces may lead to saturation, which constrains the achievable performance or even causes instability [15]. In particular, when the UAV hovers in multi-rotor mode, wind disturbances can significantly affect the UAV's attitude due to its large wing area. These disturbances often act on the pitch and yaw directions of dual-rotor-type VTOL UAVs, which are controlled by the elevons (referred to as conventional elevon actuation (CEA)). One problem of the CEA is that large control efforts in both directions can cause the elevons to deflect sharply, leading to actuation saturation that further degrades control performance or even destabilizes the system. One approach to mitigate this problem is to place the elevons in the high-speed airflow region of the propellers by reducing the distance between rotors and elevons [16].
However, this design also decreases the airflow speed flowing through the main wing, resulting in lift loss. Overall, wind resistance is one of the major challenges in moving the dual-rotor tail-sitter VTOL UAV towards practical use. Another problem of the CEA is encountered in the take-off process. Due to the constraints of the UAV footprint and landing stability, the landing gear cannot be very high, resulting in a short distance between the elevons and the ground. Before take-off, the downwash airflow generated from the propellers can be easily affected by the ground, leading to a reduced or even vanishing control moment from the elevons [17]. As a result, the pitch attitude may become unstable, potentially causing the UAV to crash. Fig. 1: Dual-rotor tail-sitter VTOL UAV, named Hong He. (a) The UAV hovers in the multi-rotor mode. Main components on the front side of the UAV are labeled. (b) The UAV flies in the fixed-wing mode. Main components on the back side of the UAV and the definition of the coordinate frame are shown. Motivated by these problems, in this study, we propose a swashplateless-elevon actuation (SEA) which applies a swashplateless mechanism with an improved structure to dual-rotor tail-sitter VTOL UAVs. The swashplateless mechanism is a passive structure mounted on the motors. It generates a lateral moment by controlling the high-frequency acceleration and deceleration of the motor without requiring additional actuators [18, 19]. For VTOL UAVs, it can produce an extra control moment in the pitch direction, which can dramatically alleviate the elevons' deflection saturation caused by large disturbances applied in both pitch and yaw directions or by large moments required by controllers, leading to better disturbance rejection performance and higher maneuverability. Furthermore, the moment generated by the swashplateless mechanism is independent of the propeller's airflow, which enables a more stable take-off process for this type of UAV. We compare the SEA with the CEA in various experiments, which are described in the following sections. ## II System Design ### _UAV Structure and Avionics_ We designed and manufactured a dual-rotor tail-sitter VTOL UAV named Hong He as shown in Fig. 1. To realize large-scale scanning and mapping, it is equipped with a Livox AVIA LiDAR and an onboard computer Manifold 2-C. The LiDAR is capable of achieving long-distance scanning up to 400 m, making it suitable for outdoor large-scale scenes. To meet the requirements of long-range flight and control performance, we choose a canard wing layout and optimize the aircraft's aerodynamic shape design. For this prototype, our mechanical design concept emphasizes ease of assembly and maintenance, and hence we use formed wings and 3D printed materials (e.g., PLA, Nylon) to manufacture the UAV, making it simple to assemble. The avionics control system is centered around a Pixhawk mini 4 flight controller attached to the fuselage. It is connected to two Electronic Speed Controllers (ESCs) (model T-MOTOR F35A 3-6S) for driving two motors (model T-MOTOR MN5006 KV450), two servos (model KST DS215MG V3.0) to control the deflection of two elevons, and two magnetic encoders (model AS5600) to measure the rotor angles of the two motors. The final designed aircraft has a wingspan of 107 cm, weighs about 2.25 kg, and has a cruising speed of approximately 9 m/s. Detailed parameters can be found in Table I. As shown in Fig. 1 (b), the origin of the UAV's coordinate frame (i.e. body frame) is attached to the UAV's center of mass.
The \(x\) axis is perpendicular to the plane of the main wing, while both the \(y\) and \(z\) axes lie in the plane of the main wing: the \(y\) axis is parallel to the main wing and the \(z\) axis points to the tail of the UAV. The roll, pitch, and yaw are defined in the UAV's multi-rotor mode, and hence their rotations are about the \(x\), \(y\), and \(z\) axes, respectively. All the experimental data in this paper follow this definition. ### _Swashplateless Mechanism_ The swashplateless mechanism, originally proposed in [20], can realize functions similar to the cyclic blade pitch control mechanism of swashplates that has been widely used in helicopters, providing both thrust and moment. The moment of the swashplateless mechanism comes from the unbalanced thrust of the blades, induced by cyclic blade pitch changes. Unlike traditional swashplates driven by additional servos, the swashplateless mechanism is entirely passive. As shown in Fig. 2, two passive hinges connect the side hubs to the central hub asymmetrically. Through periodic acceleration and deceleration of the motor (i.e., motor speed modulation), the asymmetric hinges rotate due to blade inertia, leading to different pitch angle changes of the positive and negative blades. The blade with an increased pitch angle produces more thrust, while the blade with a decreased pitch angle produces less thrust, resulting in a net moment being generated. The working principles are introduced in detail in [20, 21, 22]. The propulsion system shown in Fig. 2 mainly consists of a brushless DC motor, an improved swashplateless mechanism mounted on top of the motor's rotor, a pair of propeller blades, and a magnetic encoder mounted on the bottom of the motor's shaft. Compared with the original design, the improved structure of the swashplateless mechanism includes additional ball bearings and pressure bearings to reduce the friction caused by the high-frequency rotation of the hinges, providing a smoother and more linear moment output and a faster response. Fig. 2: Components of one set of Hong He's propulsion system, mainly including a motor, a swashplateless mechanism, and two blades. ## III Control and Actuation ### _Overview of Control System_ The overview of Hong He's control system with the proposed SEA is shown in Fig. 3. The external inputs of the position & velocity controller can be switched between two sources: remote controller (i.e., manual mode) and onboard computer (i.e., autonomous mode). In the manual mode, the remote controller directly generates the desired velocity. In the autonomous mode, the onboard computer produces the desired position as the input of the position controller and the desired velocity & acceleration as feed-forward terms. The control section (i.e., the orange zone) is the standard realization of PX4, while the actuation section (i.e., the green zone) is specifically designed for actuating the swashplateless mechanism and hence is our focus. The operating frequency of the SEA mixer is dependent on the output frequency of the angular velocity controller (i.e., the measurement frequency of the inertial measurement unit (IMU)). Therefore, we set the measurement frequency of the IMU to 1 kHz, allowing the mixer to run at the same frequency for processing the 910-Hz measurements from the two magnetic encoders without dropping any measurement.
The outputs of the mixer are motor commands and servo commands, which are sent to the ESCs using the DShot600 communication protocol and to the servos by standard 50-Hz PWM signals. The DShot600 protocol has a very short communication delay of 26.7 μs, making it suitable for high-frequency speed modulation of the motor. ### _Actuation Principles of SEA and CEA_ The SEA and CEA exhibit identical actuation principles with respect to thrust generation and the control of roll and yaw moments. The two propellers, which are driven by motors, contribute to the total UAV thrust. Differential thrust produced by the propellers induces the roll control moment, while the airflow deflection by the elevons creates the yaw control moment. However, there exists a difference between the SEA and CEA in terms of pitch control moment generation. The elevons contribute to the pitch control moment in the CEA, whereas the SEA employs swashplateless mechanisms for this purpose. This actuation principle facilitates decoupling of the attitude control of pitch and yaw. In the CEA, the mixer blends the pitch and yaw moments to regulate the elevons' deflection through servo commands. Conversely, in the realization of the SEA (i.e., the green zone in Fig. 3), the pitch and yaw controls are directed towards separate actuators (i.e., the motors and the elevons). The SEA can mitigate elevon saturation by decoupling the generation of the pitch control moment, thereby enhancing control and disturbance rejection performance. This feature is of particular significance, given that disturbances are frequently applied in the pitch and yaw directions due to the UAV's large main wing surface area. ### _Design of SEA Mixer_ #### III-C1 Cyclic speed control of motor Before introducing the mixer, the cyclic speed control of the motor needs to be described, since it defines part of the mixer's output variables. To prevent undesired vibration caused by motor speed modulation, a sinusoidal signal is employed for the two motors. The total motor throttle \(U_{i},i=1,2\) is designed as \[\begin{bmatrix}U_{1}\\ U_{2}\end{bmatrix}=\begin{bmatrix}C_{1}\\ C_{2}\end{bmatrix}+\begin{bmatrix}A_{1}&0\\ 0&A_{2}\end{bmatrix}\begin{bmatrix}\cos(\theta_{1}-\phi_{1}+\gamma_{0})\\ \cos(\theta_{2}-\phi_{2}-\gamma_{0})\end{bmatrix}, \tag{1}\] where \(C_{i}\), \(A_{i}\), \(\theta_{i}\), and \(\phi_{i}\) are the nominal throttle, sinusoidal amplitude, motor's rotor angle measured by the magnetic encoder, and the moment direction in the UAV coordinate frame of motor \(i\), \(i=1,2\), respectively, and \(\gamma_{0}\) is a positive constant compensating for the angle delay caused by blade inertia, which can be calibrated in advance through experimental data from a test stand of the swashplateless mechanism. Fig. 3: Control system overview. All modules in the control part are inherited from the standard PX4, and the mixer module of the actuation part is additionally implemented into PX4. Fig. 4: Coordinate frame definition for the swashplateless mechanism of the two motors shown in the top view of the UAV. The UAV coordinate frame is also labeled for reference. Despite the fact that the motors rotate in opposite directions, we can define the same coordinate frame for both motors, as shown in Fig. 4. The absolute angle of the rotor, \(\theta_{i}\), is defined as the angle between the \(x\)-axis and the positive blade, regardless of the rotation direction. In the motor coordinate frame, the moment direction is represented as \(\phi_{i}+\pi/2\). However, due to the 90\({}^{\circ}\) rotation between the UAV frame and the motor frame, \(\phi_{i}\) can represent the actual moment direction in the UAV frame. The sign of \(\gamma_{0}\) is determined by the rotation direction since the delay direction is opposite to the rotation direction. As only the pitch angle is controlled by the moment of the swashplateless mechanism, \(\phi_{i}\) can be simply set as \(0\) or \(\pi\), depending on the direction of the desired pitch moment.
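As a minimal sketch of (1) (ours; variable names are illustrative), the per-motor throttle command could be computed as:

```python
import math

def motor_throttle(C, A, theta, phi, gamma_0, motor_index):
    """Total throttle U_i = C_i + A_i*cos(theta_i - phi_i +/- gamma_0), per (1).

    C: nominal throttle; A: sinusoidal amplitude;
    theta: rotor angle measured by the magnetic encoder (rad);
    phi: moment direction in the UAV frame (0 or pi for pitch-only control);
    gamma_0: phase compensation for blade inertia, whose sign is set by the
    motor's rotation direction (+ for motor 1, - for motor 2).
    """
    delay = gamma_0 if motor_index == 1 else -gamma_0
    return C + A * math.cos(theta - phi + delay)
```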
#### III-C2 Mixer mapping The mixer is designed to calculate all actuator inputs (e.g., motor throttles, servo angles) according to the desired thrust \(f_{T,d}^{\mathcal{B}}\) and moment \(\mathbf{\tau}_{d}^{\mathcal{B}}\). Given the geometry of the mechanical structure of the UAV, the mapping from actuator outputs to the body thrust and moment is \[\begin{bmatrix}f_{T}^{\mathcal{B}}\\ \tau_{x}^{\mathcal{B}}\\ \tau_{y}^{\mathcal{B}}\\ \tau_{z}^{\mathcal{B}}\end{bmatrix}=\begin{bmatrix}1&1&0&0&0&0\\ -L&L&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&0&1&1\end{bmatrix}\begin{bmatrix}T_{1}\\ T_{2}\\ \tau_{s,1}\\ \tau_{s,2}\\ \tau_{e,1}\\ \tau_{e,2}\end{bmatrix}, \tag{2}\] where \(T_{i}\) and \(\tau_{s,i}\) are the thrusts and moments generated by motor \(i\), \(i=1,2\), respectively, and \(\tau_{e,j}\) are the moments generated by elevon \(j\), \(j=1,2\). \(L\) is the body-\(y\)-axis distance between the UAV's center of mass and the motor. Assuming that the actuator outputs are approximately linear in the actuator inputs leads to \[\left[T_{1},T_{2},\tau_{s,1},\tau_{s,2},\tau_{e,1},\tau_{e,2}\right]^{T}=\text{diag}(K_{t},K_{t},K_{a},K_{a},K_{e},K_{e})\left[C_{1},C_{2},A_{1},A_{2},\delta_{1},\delta_{2}\right]^{T}, \tag{3}\] where \(\delta_{j}\) are the angle commands of servo \(j\), \(j=1,2\), and \(K_{t}\), \(K_{a}\), and \(K_{e}\) are proportional coefficients, representing the conversions from throttle to thrust, from sinusoidal amplitude to swashplateless mechanism moment, and from servo angle to elevon moment, respectively. Here, \(C_{1}\), \(C_{2}\), \(A_{1}\), and \(A_{2}\) are required by the cyclic speed control of the motor. Given the desired thrust \(f_{T,d}^{\mathcal{B}}\) and the moment vector \(\mathbf{\tau}_{d}^{\mathcal{B}}=\begin{bmatrix}\tau_{x,d}^{\mathcal{B}}&\tau_{y,d}^{\mathcal{B}}&\tau_{z,d}^{\mathcal{B}}\end{bmatrix}^{T}\), combining (2) and (3) and assuming the equal distribution of moments, we can finally determine the desired actuator inputs as \[\begin{bmatrix}C_{1,d}\\ C_{2,d}\\ A_{1,d}\\ A_{2,d}\\ \delta_{1,d}\\ \delta_{2,d}\end{bmatrix}=\begin{bmatrix}\frac{1}{2K_{t}}&\frac{-1}{2LK_{t}}&0&0\\ \frac{1}{2K_{t}}&\frac{1}{2LK_{t}}&0&0\\ 0&0&\frac{1}{2K_{a}}&0\\ 0&0&\frac{1}{2K_{a}}&0\\ 0&0&0&\frac{1}{2K_{e}}\\ 0&0&0&\frac{1}{2K_{e}}\end{bmatrix}\begin{bmatrix}f_{T,d}^{\mathcal{B}}\\ \tau_{x,d}^{\mathcal{B}}\\ \tau_{y,d}^{\mathcal{B}}\\ \tau_{z,d}^{\mathcal{B}}\end{bmatrix}, \tag{4}\] where the \(\delta_{j,d}\) are directly sent to servo \(j\) via PWM signals. The \(C_{i,d}\) and \(A_{i,d}\) are used to generate the total motor throttle \(U_{i,d}\) sent to the ESCs. In (4), the desired moments of pitch and yaw are decoupled into different actuators (i.e., the motors' swashplateless mechanisms and the elevons).
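For concreteness, the mapping in (4) amounts to the following short routine (our sketch; the calibrated constants \(K_{t}\), \(K_{a}\), \(K_{e}\), and \(L\) are those defined in the text):

```python
def sea_mixer(f_T, tau_x, tau_y, tau_z, K_t, K_a, K_e, L):
    """Map desired body thrust/moments to actuator inputs, following (4)."""
    C1 = f_T / (2 * K_t) - tau_x / (2 * L * K_t)  # nominal throttles: thrust + roll
    C2 = f_T / (2 * K_t) + tau_x / (2 * L * K_t)
    A1 = A2 = tau_y / (2 * K_a)  # sinusoidal amplitudes: pitch via swashplateless
    d1 = d2 = tau_z / (2 * K_e)  # servo angles: yaw via elevons
    return C1, C2, A1, A2, d1, d2
```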
## IV Experiments This section presents experimental results comparing the performance of the SEA and the CEA in several aspects, including take-off, trajectory tracking, and disturbance rejection. Additionally, tests of attitude transition and fixed-wing-mode flight are also performed using the SEA. To ensure a fair comparison, the controllers' parameters for angular rate, attitude, and position are carefully tuned in a best effort to optimize performance for both the SEA and the CEA. The tuning process is conducted by analyzing logged flight data to ensure fast responses while keeping overshoots small. All experiments can be found in an accompanying video uploaded at youtube.com/watch?v=Sx9Rk4Zf7sQ ### _Comparison of Take-off Performance_ First, three take-off experiments are conducted on the UAV with attitude control (i.e., with the position controller temporarily turned off): CEA with ground take-off, CEA with take-off on a lifted pedestal, and SEA with ground take-off. The results are shown in Fig. 5 (a)-(c), respectively. In Fig. 5 (a), the pitch angle unexpectedly increases to a maximum value of 8.6\({}^{\circ}\), although the desired attitude is still zero during take-off. The large pitch error during ground take-off is caused by the small distance between the elevons and the ground. Constrained by both the footprint and landing stability, the landing gear cannot be higher, making the elevons easily affected by the near-ground airflow and degrading their control effectiveness. In contrast, Fig. 5 (b) shows the same take-off process on a lifted pedestal, and the pitch error significantly decreases to a maximum value of 2.1\({}^{\circ}\). The lifted pedestal alleviates the effect caused by the near-ground airflow, enabling the elevons to maintain their control effect during the whole take-off process. In the case of the SEA shown in Fig. 5 (c), because the pitch is fully controlled by the swashplateless mechanism, which is not affected by the near-ground airflow, the pitch error during take-off is minimal (1.4\({}^{\circ}\)). By turning the position controller off, one can manually control the throttle and allow the UAV to take off quickly. In such a case, the pitch error can be less than 10\({}^{\circ}\) and does not cause significant disadvantages. However, when the UAV takes off with position control, the throttle is decided by the position controller. If a smooth take-off is expected, the take-off throttle may increase gradually, resulting in significant pitch angle and position errors with the CEA. Figure 6 shows the UAV's position and attitude during take-off in position control mode. A significant position error (0.79 m) is caused by the large pitch error (14.7\({}^{\circ}\)) when using the CEA. For the SEA, the position error is only 0.04 m, merely 5.1% of that of the CEA. Therefore, the take-off performance of the SEA is much better than that of the CEA, leading to a smooth take-off even when the UAV is close to the ground, regardless of whether the position controller is turned on or off. ### _Comparison of Trajectory Tracking Performance_ The tracking performance of both the SEA and the CEA is compared by conducting a figure-of-eight trajectory tracking experiment, as shown in Fig. 7. The length and width of the figure-of-eight trajectory are 2 m and 1 m, respectively, and the trajectory is executed for four cycles. The trajectory period is 5 s, but in the first and last cycles, the trajectory period is extended to 7.5 s to achieve smooth acceleration and deceleration.
During tracking, the yaw angle is commanded to keep the body \(x\)-axis aligned with the horizontal component of the velocity direction, which demands more attitude control effort and thus better exposes the differences between the SEA and the CEA. The results in Fig. 8 show that the tracked trajectory of the SEA is more consistent than that of the CEA. The absolute errors of position norm and attitude yaw during tracking are shown in Fig. 9. For the absolute error of position norm, the median values of the SEA and the CEA are 14.17 cm and 15.35 cm, respectively, and the maximum values for both are 38.91 cm and 40.64 cm, respectively, indicating that the SEA has slightly better performance than the CEA in position control. For the absolute error of attitude yaw, the median values of the SEA and the CEA are 19.5\({}^{\circ}\) and 27.1\({}^{\circ}\), respectively, and the maximum values for both are 87.9\({}^{\circ}\) and 119.3\({}^{\circ}\), respectively, indicating that the SEA has a clear advantage over the CEA in yaw control during fast trajectory tracking. Fig. 5: Comparison of pitch angle error during take-off in attitude control mode under different conditions. Fig. 6: Comparison of \(x\)-position error and pitch angle error during take-off in position control mode with different actuation approaches. Fig. 7: The overlaid snapshots of Hong He when it tracks a 3D figure-of-eight trajectory with continuous yaw rotation. Fig. 8: The reference trajectory and tracked trajectory in the trajectory tracking experiment. ### _Comparison of Disturbance Rejection_ Three experiments are performed to evaluate the disturbance rejection performance of the SEA and the CEA: (i) hovering under balanced wind disturbance, (ii) forward position steps under unbalanced wind disturbance, and (iii) lateral position steps under unbalanced wind disturbance. The position information is provided by a motion capture system (Vicon). #### IV-C1 Hover under balanced wind disturbance As shown in Fig. 10 (a), two fans are placed in parallel and located 1 m from the UAV's hover position. The fans are off initially and are then turned on to produce a wind gust of about 4.5 m/s at this distance. After the UAV reaches a stable pose, the fans are turned off to cancel the wind disturbance. The results are shown in Fig. 10 (b). Since the wind disturbance is applied on the body \(x\)-axis, we focus only on the errors in \(x\)-axis position and pitch angle. For the \(x\)-axis position, both the median and maximum values of the SEA are smaller than those of the CEA: 1.77 cm versus 3.83 cm and 6.60 cm versus 7.92 cm, respectively. The results show the SEA can maintain the UAV's position better. For the pitch angle errors, although the CEA has a slightly smaller median value than the SEA (i.e., 1.09\({}^{\circ}\) versus 1.32\({}^{\circ}\)), the maximum value of the CEA is relatively larger (i.e., 5.78\({}^{\circ}\) versus 4.08\({}^{\circ}\)). This shows that both actuation approaches have good pitch angle control performance under wind disturbance, but the SEA can realize a smaller error band. #### IV-C2 Forward position steps under unbalanced wind disturbance Maintaining a good response in the presence of external disturbances is more difficult than in their absence, as the actuators need to actively suppress the disturbance, leading to reduced actuator margins for tracking controller commands.
To verify actual performance under disturbances, a position step response in the \(x\) direction is performed with a fan wind disturbance applied to a single side of the main wing. In the experiment, the UAV first hovers 1 m from a fan that has been turned on. Then, the UAV steps 1 m in the same direction as the wind airflow and finally steps back to the original hover position. Since the wind area of the UAV is large, we pay more attention to the attitude response during the wind disturbance. The attitude response results are shown in Fig. 11. When the UAV steps away from the fan, the wind disturbance decreases. In this condition, the attitude responses of the two actuation approaches are similar. However, when the UAV steps back to the original hover position, the wind disturbance increases. Because the elevons of the CEA need to deflect to generate control moments in both the pitch and yaw directions, they cannot provide sufficient moment to resist the wind disturbance in these two directions. Consequently, when using the CEA, obvious oscillation occurred in the convergence of the pitch angle, along with significant shaking in the yaw angle (maximum error of 31.2\({}^{\circ}\)). Since large attitude errors occurred in pitch and yaw, the wind disturbance affected roll more easily due to the change of the windward side, leading to a maximum error of 8.2\({}^{\circ}\). In contrast, no oscillation or obvious error occurs in the attitude response of the SEA during the entire process, indicating that it can achieve more actuator margin and maintain good attitude control performance even with external disturbance. Fig. 9: The errors of position norm and yaw angle during tracking of the 3D figure-of-eight trajectory. The central mark indicates the median, and the bottom and top edges of the box represent the 25th and 75th percentiles, respectively. The whiskers extend to the maximum and minimum values. Fig. 10: Hover under balanced wind disturbance. (a) Experimental setup for the disturbance rejection test of fan wind. (b) Errors of \(x\)-axis position and pitch angle of the SEA and the CEA during a balanced wind disturbance. Fig. 11: Attitude responses (left panel) and their corresponding errors (right panel) when a step command is applied in the body \(x\) direction under an unbalanced wind disturbance on the main wing. Fig. 12: Attitude responses (left panel) and their corresponding errors (right panel) when a step command is applied in the body \(y\) direction under an unbalanced wind disturbance on the main wing. #### IV-C3 Lateral position steps under unbalanced wind disturbance In addition to the forward and backward position step experiment in the body \(x\) direction, we also conducted a lateral position step experiment in the body \(y\) direction. The experiment setup is the same as before. The UAV initially hovered 1 m away from a fan that had been turned on, then stepped 1 m away from the fan in the direction orthogonal to the wind airflow, and finally stepped back to the original hover position. The attitude response results of this experiment are shown in Fig. 12. When the UAV stepped away from the fan, the wind disturbance decreased. In this condition, the attitude control of the two actuation approaches is still similar. However, when the UAV stepped back to the original hover position, the wind disturbance increased.
Similar to the forward step experiment, the CEA showed noticeably larger yaw angle oscillation (maximum error of 22.1\({}^{\circ}\)) than the SEA (maximum error of 8.4\({}^{\circ}\)), further verifying that the SEA has better disturbance rejection performance than the CEA. ### _Transition and Fixed-wing Mode Flight_ We also conducted experiments to verify the performance of the proposed SEA in fixed-wing mode flight, as shown in Fig. 1 (b). The results are shown in Fig. 13. The definition of the UAV coordinate frame in the fixed-wing mode is the same as that of the multi-rotor mode. The pitch angle of the fixed-wing mode is set at -65\({}^{\circ}\) (i.e., the angle between the main wing and the horizontal plane is 35\({}^{\circ}\)). During the transition, the attitude controller tracked the desired attitude accurately without any overshoot, showing that the swashplateless mechanism has a fast response and can generate sufficient moment. In the fixed-wing mode flight, the UAV accelerated continuously and finally reached a speed of 9.6 m/s. The pitch angle had no vibration and always remained at the desired -65\({}^{\circ}\), indicating that the swashplateless mechanism can work well in the fixed-wing mode. However, the roll and yaw, which are controlled by the motors and elevons, respectively, had slight vibrations around the desired values. The maximum errors in roll and yaw appeared when the UAV had just completed the transition process, which may have been caused by airflow disturbance. Nonetheless, these errors were not obvious, only 5.3\({}^{\circ}\) in roll and 3.8\({}^{\circ}\) in yaw. It should be noted that CEA-based fixed-wing mode flight of a VTOL UAV has been verified previously. Hence, we did not present a quantitative comparison experiment here and only verified the feasibility of SEA-based fixed-wing mode flight. ## V Conclusion In this paper, an actuation approach called SEA is proposed for dual-rotor VTOL UAVs to decouple pitch and yaw control and improve take-off and disturbance rejection performance. The proposed SEA-based UAV showed reduced pitch and position errors during take-off compared to the CEA-based UAV, which had noticeable errors due to ground-distorted airflow. The control performance of both the SEA and the CEA is evaluated by tracking a 3D figure-of-eight trajectory with continuous yaw angle rotation, showing that the SEA has less error in both position and yaw angle. Disturbance rejection performance was evaluated by the experiment of hovering in a balanced wind gust produced by two fans. The SEA exhibited better performance in position and pitch angle, indicating that the SEA is more robust in an environment with wind gusts. Step response experiments under unbalanced wind disturbance showed that the SEA outperformed the CEA with noticeably lower attitude errors. These experiments validate that the SEA can mitigate actuator saturation by decoupling the actuation of pitch and yaw, and improve the performance of both tracking control and disturbance rejection. Finally, we validate the SEA in the transition process and fixed-wing mode flight in an outdoor environment, demonstrating its capability to maintain a stable attitude for a VTOL UAV in the presence of high-speed incoming airflow. ## Acknowledgment This work is supported by the Grants Committee Early Career Scheme of The University of Hong Kong under Project 27202219, General Research Fund of Hong Kong under project 17206920, and a DJI research donation.
2309.04025
A Framework for Computational Design and Adaptation of Extended Reality User Interfaces
To facilitate high quality interaction during the regular use of computing systems, it is essential that the user interface (UI) deliver content and components in an appropriate manner. Although extended reality (XR) is emerging as a new computing platform, we still have a limited understanding of how best to design and present interactive content to users in such immersive environments. Adaptive UIs offer a promising approach for optimal presentation in XR as the user's environment, tasks, capabilities, and preferences vary under changing context. In this position paper, we present a design framework for adapting various characteristics of content presented in XR. We frame these as five considerations that need to be taken into account for adaptive XR UIs: What?, How Much?, Where?, How?, and When?. With this framework, we review literature on UI design and adaptation to reflect on approaches that have been adopted or developed in the past towards identifying current gaps and challenges, and opportunities for applying such approaches in XR. Using our framework, future work could identify and develop novel computational approaches for achieving successful adaptive user interfaces in such immersive environments.
Kashyap Todi, Tanya R. Jonker
2023-09-07T21:37:52Z
http://arxiv.org/abs/2309.04025v1
# A Framework for Computational Design and Adaptation of Extended Reality User Interfaces

###### Abstract.

To facilitate high quality interaction during the regular use of computing systems, it is essential that the user interface (UI) deliver content and components in an appropriate manner. Although extended reality (XR) is emerging as a new computing platform, we still have a limited understanding of how best to design and present interactive content to users in such immersive environments. Adaptive UIs offer a promising approach for optimal presentation in XR as the user's environment, tasks, capabilities, and preferences vary under changing context. In this position paper, we present a design framework for adapting various characteristics of content presented in XR. We frame these as five considerations that need to be taken into account for adaptive XR UIs: _What?_, _How Much?_, _Where?_, _How?_, and _When?_. With this framework, we review literature on UI design and adaptation to reflect on approaches that have been adopted or developed in the past towards identifying current gaps and challenges, and opportunities for applying such approaches in XR. Using our framework, future work could identify and develop novel computational approaches for achieving successful adaptive user interfaces in such immersive environments.

+ Footnote †: 2023: Copyright held by the owner/author(s).
Computational approaches, such as constraint solving (Steintein et al., 2017), optimization (Steintein et al., 2018), and machine learning (Krishnan et al., 2019), have been developed and applied for achieving promising design solutions. While existing literature can provide valuable insights on approaches that can be adopted towards achieving certain adaptive behavior for future XR systems, current understanding of what aspects should be addressed when adapting such UIs is limited and fragmented. This position paper is motivated by the need to holistically understand the adaptive behavior of UIs that will be required for achieving highly usable and performant interactions in future XR environments. To this end, we introduce a novel framework that characterizes UI design and adaptation along five dimensions (or questions): _what?_, _how much?_, _how?_, _where?_, and _when?_. By answering these questions, adaptive XR systems could dynamically select appropriate content, presentation, and placement of UIs. We review prior literature on computational design and adaptation approaches across a range of application domains to understand what properties of the UI they address and the features that drive adaptations in the UI. We also summarize some of the technical approaches that have been developed and applied towards achieving optimal design and adaptation. Our review highlights the need for further studying various UI adaptations directly in the context of extended reality, with varying tasks, environments, and capabilities. Our findings can inform future research directions in the area of adaptive XR UIs, driven by computational methods, to improve usability and user experience.

### Overview: Adaptive XR UI Framework

Context, including the user's task, situational capabilities, and the environment, plays a key role in determining the right UI in XR environments (Steintein et al., 2018; Steintein et al., 2018; Steintein et al., 2018). We develop a framework for designing and adapting XR UIs based on varying context. In our framework, we consider different aspects that influence the final UI available to the user (Figure 1). This includes: (1) _content selection_ (section 2), which addresses _what_ content should be made available and _how much_ of it; (2) _presentation_ (section 3), which refers to _how_ and _when_ content and components are presented; and (3) _placement_ (section 4), which determines _where_ UIs are positioned in the 3D environment.

## 2. Content Selection: _What?_ and _How Much?_

Selecting the _right content_ ("what?") to present to the user is necessary so as to ensure that users have important task-relevant elements readily available, while minimizing task-irrelevant distractions. Further, the _right amount_, or level of detail, of content ("how much?") should be modulated to ensure that users can interact at the appropriate granularity.
**What**: Prior works have explored several factors that can influence _what?_ should be presented to users in adaptive UIs, including: interaction history, user preferences, user capabilities, task, environment, aesthetics, and device capabilities. Interaction history, containing prior usage of a system, has been used in traditional desktop-based environments to identify and select important items for adaptation based on features such as frequency, recency, and user interests (Brock et al., 2018; Steintein et al., 2018; Steintein et al., 2018; Steintein et al., 2018). Other works have investigated the role of user preferences (Steintein et al., 2018; Stein et al., 2018) and capabilities (Steintein et al., 2018) to determine and adapt content selection to individual users. Further, task (Stein et al., 2018) and environment (Stein et al., 2018) are also deemed to be key factors that can inform what content should be presented to the user: for example, some applications may be used more consistently for certain tasks, or some components may have higher priority in certain environments. Aesthetics can also influence content selection (Brock et al., 2018). In traditional 2D interfaces, aesthetics relates exclusively to how a selection of UI components can compose a harmonious and aesthetically pleasing UI; for XR, however, the harmonization of the real world and virtual elements can also influence content selection. Finally, device capabilities can inform suitable content; this has been used, for example, to select content for presentation (Stein et al., 2018; Stein et al., 2018) or appropriate distribution across devices (Stein et al., 2018; Stein et al., 2018).

**How Much**: In addition to identifying what content should be selected, prior works have also investigated how much content is appropriate. 'How much' content can be defined by properties such as the number of items, level of detail, and information density. For example, cognitive load (Stein et al., 2018) is a key factor that influences usability and is closely related to how much content is presented to the user. As such, prior work has used this as a measure to determine the appropriate granularity (Stein et al., 2018). Task and environment are also key factors that have been used to determine how much content or UI should be presented (Stein et al., 2018; Stein et al., 2018; Stein et al., 2018; Stein et al., 2018; Stein et al., 2018). As the context changes, the utility of various content changes too, which can be used to determine the amount of information presented. Similarly, some prior works have considered user abilities and capabilities to drive adaptations or design variations (Stein et al., 2018; Stein et al., 2018; Stein et al., 2018), and others have looked at adapting information density, level of detail, or granularity based on varying device capabilities (Stein et al., 2018; Stein et al., 2018; Stein et al., 2018; Stein et al., 2018).

In general, content selection is key to ensuring that desired content is readily available to users, while distracting and irrelevant content is minimized. Inappropriate selection can lead to increased cognitive load (e.g. due to excessive content or inappropriate granularity), additional interaction steps to retrieve unavailable content, and decreased performance when interacting with the UI. As a first step of the adaptation process, we believe that future XR systems should first apply computational methods to optimally select content at the right level of detail.
## 3. Presentation: _How?_ and _When?_

Content and UIs can often be presented to users via multiple representations. For example, information can be presented textually or graphically, incoming notifications can be pushed using audio or haptics, or a continuous value selection component can be presented using a slider or a dial. Selecting the appropriate representation is crucial to ensure usability. Further, suitable timing for presenting content or adapting the presented content is important to ensure that users are not surprised or confused by changes and that continuity is maintained.

**How**: Selecting the appropriate modality and representation for content or components can improve how users perceive and interact with the UI. As such, this aspect has been studied by prior works in the context of 2D and 3D UIs. One approach for selecting a representation is to consider other UIs - from different applications with similar features or different devices with similar applications - that are also being used: ensuring consistency or compatibility across UIs can make it easier for users to learn and understand the system (Krishnan et al., 2017; Krizhevsky et al., 2017). To improve aesthetics, color harmony across components has been studied in the context of UIs (Krishnan et al., 2017; Krizhevsky et al., 2017). For XR systems, in addition to harmony across virtual components, the additional consideration of harmonizing with the surrounding environment would be beneficial, as the level of integration between virtual elements and the real world influences decision making and reaction time (Krishnan et al., 2017). Task and environment have also been used by prior works for determining the appropriate representation of UIs (Krizhevsky et al., 2017; Krizhevsky et al., 2017). Finally, varying device and user capabilities have also influenced how content should be represented (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017).

**When**: The timing of content presentation or adaptation is another important factor that can determine how distracting, confusing, and usable the system might be; an inappropriately timed adaptation can at best surprise the user and at worst prevent them from completing their tasks. Unlike other factors, to date, there is only a limited understanding of when changes should be triggered in adaptive systems. Some prior works have used heuristics such as changes in the environment, task, or perceived cognitive load to determine appropriate timing (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). A more promising approach is to model and predict the performance improvements that would be achieved if the system triggered an adaptation. This has been studied in the context of 2D menu-based UIs (Krishnan et al., 2017) and 3D XR UIs (Krishnan et al., 2017; Krizhevsky et al., 2017) as a principled approach to adapt the system. We hypothesize that the representation and timing aspects when presenting UIs to users in XR environments will largely influence how acceptable and usable these systems will be during all-day usage under varying contexts.

## 4. Placement: _Where?_

The last question or consideration for designing and adapting UIs in XR environments is their placement: where should content and UI components be placed such that users can interact with them with minimal effort? As such, placement affects key usability aspects such as discoverability, reachability, exertion, and performance.
Determining placement is an especially hard problem, as special attention needs to be paid to the placement of components relative to other components, in addition to each element's absolute position. This includes aspects such as sequential or logical ordering, reading order, and semantic relationships between components. A wide range of factors have been used by prior research to determine where UIs and content should be placed, in the context of 2D, 3D, and cross-device UIs. Constraint-based layouts, where relationships between components are defined using constraints, have been one common approach to determine the final placement of elements (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). Similarly, abstract UI specifications have been used to generate final UI placement for varying context (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). This enables systems to position components while maintaining consistency when features such as screen size and aspect ratio vary. Grid- and geometry-based approaches have also been widely used (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017): here, structural properties are captured using mathematical formulations to ensure properties such as alignment and rectangularity. Occlusion avoidance and ensuring viewability also influence placement decisions and have been used to generate or adapt UIs (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). Some prior works have made placement decisions based on relative placements in other UIs to ensure consistency and similarity (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). Appropriate placement in UIs also correlates highly with task performance and perceived aesthetics. As such, interactive systems have studied UI placement to optimize these qualities (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). Finally, user task and environment have also been utilized as factors that determine where components are placed (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017).

In XR, content and components do not occupy a dedicated screen or canvas; instead, they are overlaid on the real world and can span the entire 360° environment around the user. Further, virtual components can not only have semantic relationships with each other, but with real-world objects too. As such, typical approaches such as grid- and constraint-based layouts cannot be directly applied. Recently, some research has explored how aspects such as task (Krizhevsky et al., 2017) and semantic relationships (Krizhevsky et al., 2017) can be used to adaptively determine where elements should be placed in XR environments. We suggest that future XR systems will need to carefully address placement issues by considering various aspects pertaining to both the user and the environment, such as cognitive load, ergonomics, task performance, aesthetics, semantic consistency, user task and capabilities, and device constraints.

## 5. Discussion

In this position paper, we have studied the design and adaptation of UIs in the context of extended reality (XR) systems.
We introduce a framework consisting of five dimensions - _what?_, _how much?_, _how?_, _where?_, and _when?_ - to characterize various factors that can influence the final quality of a UI. By addressing each of these, future XR systems could systematically select appropriate content, manipulate its presentation, and place it to optimally support interactions. We discuss existing literature on computational approaches for UI design and adaptation that touch upon these questions to understand the state of the art and identify opportunities for future research. By discussing prior work, we can identify gaps, challenges, and opportunities for novel contributions towards making adaptive-first XR a reality. First, it is evident that we need to carefully consider the blending of the real environment and virtual UIs when making decisions related to content selection and presentation; this includes aspects related to both performance and aesthetics. Second, as situational capabilities vary with changing contexts, this is another key aspect that should be studied when determining adaptive system behavior. Finally, inherently adaptive systems will need to pay more attention to the timing of changes and to content representations that are suitable to the user's context.

As the computing paradigm shifts from a strict separation of the real world from virtual interactive interfaces to a blending of these environments in extended reality, we face new and amplified challenges related to the design and development of UIs. While there is extensive prior research on computational design and adaptation of UIs that can be applied to XR scenarios, our understanding so far is fragmented and targeted towards specific challenges. We hope that our design framework can provide new insights that lead to a structured approach for adaptive XR systems of the future that tackle key considerations holistically.
2310.00446
Reconstructing supply networks
Network reconstruction is a well-developed sub-field of network science, but it has only recently been applied to production networks, where nodes are firms and edges represent customer-supplier relationships. We review the literature that has flourished to infer the topology of these networks by partial, aggregate, or indirect observation of the data. We discuss why this is an important endeavour, what needs to be reconstructed, what makes it different from other network reconstruction problems, and how different researchers have approached the problem. We conclude with a research agenda.
Luca Mungo, Alexandra Brintrup, Diego Garlaschelli, François Lafond
2023-09-30T17:45:03Z
http://arxiv.org/abs/2310.00446v1
# Reconstructing supply networks

###### Abstract

Network reconstruction is a well-developed sub-field of network science, but it has only recently been applied to production networks, where nodes are firms and edges represent customer-supplier relationships. We review the literature that has flourished to infer the topology of these networks by partial, aggregate, or indirect observation of the data. We discuss why this is an important endeavour, what needs to be reconstructed, what makes it different from other network reconstruction problems, and how different researchers have approached the problem. We conclude with a research agenda.

**Keywords**: link prediction; supply networks

## I Introduction

Following the 2008 financial crisis, financial networks have been extensively studied by the complex systems community. For example, studying liabilities in banking networks has been key to developing the notion of systemic risk [1; 2], and explaining how certain types of interconnections may amplify the impact of isolated shocks. A key component of this research was the development of methods to reconstruct the network of interdependencies between financial institutions, which are not easily observable. More recently, systemic failures of the _supply network_ have captured the attention of the complex systems community, as researchers observed the impact of several significant failures, such as disruptions following the Great East Japan Earthquake in 2011, protective equipment shortages during the COVID-19 pandemic, supply shocks after the Suez Canal obstruction by the Ever Given, and the energy supply chain reorganization due to the war in Ukraine. Production networks, also known as "supply chains" or "supply networks", consist of millions of firms producing and exchanging goods and services. From a mathematical perspective, they can be represented as weighted, directed graphs, where nodes symbolize firms (or establishments), and links may denote a supply-buy relationship with weights denoting transaction volume, such as the monetary value of the goods or services supplied over a given period. Supply networks share many properties with other economic networks, but also exhibit unique features. Some of their empirical properties include [3]: small-world properties (short average path lengths and high clustering), heavy-tailed degree distributions, heavy-tailed (link and/or node) weight distributions, strong correlations between node strength and degree, and similarly between in- and out-degrees. It is also relatively well documented that, like biological and technological networks but unlike social networks derived from co-affiliation [4], supply networks feature negative degree assortativity. However, supply networks are in many ways very different from other natural and economic networks. Their properties are deeply influenced by their function. First, the likelihood of a link between any two firms is driven by what the two firms are producing: for instance, steel manufacturers buy more iron than sugar. In general, each link in a supply network may represent one or more types of products; the diversity of products involved may depend on how the data are collected and may crucially affect network properties such as the reciprocity of connections. Product quality also plays a role, with "high quality" firms usually connecting with other "high quality" firms [5].
Second, supply networks are strongly embedded in geographic space, so that the likelihood of connections and their intensity decrease with distance [6]. Third, in contrast to financial networks, supply networks are less constrained by strict external regulations, and emerge as the result of a decentralized multi-criteria optimization process whereby millions of organizations simultaneously attempt to outsource in a way that minimizes their costs while maintaining acceptable levels of resilience to disruptions, for instance by multi-sourcing. These characteristics make production networks incredibly complex: in modern economies, a sophisticated product such as an aircraft might involve contracting thousands of firms and sourcing millions of parts that cross national borders multiple times. Organizations in the network choose their dyadic relations and make local decisions, but hardly have visibility over their wider network. No single entity controls, designs, and keeps track of the large-scale emergent network. Visibility over the network is, however, increasingly important for several reasons: monitoring of environmental pledges to ensure firms quantify their greenhouse gas emissions, including those from their suppliers and customers; food and pharmaceutical traceability; analysing and improving supply chain resilience; and supply chain due diligence to ensure that actors that violate human rights or engage in environmentally damaging actions are not present in the chain. In the past decade, researchers in economics and complex systems have worked extensively to better understand supply chains. A key barrier to these studies has been a lack of data, as supply chains compete with one another [7], making information on them highly commercially sensitive. As a result, most studies to date have used firm-centred (e.g. starting with [8]) or sector-specific (e.g. global automotive [9] and aerospace [10], computer and electronics [11]) supply chains. While firm-centric and industry-specific studies have been important to gather insights into how network features shape the operation of supply chains, it remains hard to generalize these findings, due to the sector-specific and incomplete nature of these datasets. Due to the above challenges, several recent studies have suggested the development of methods to reconstruct or predict the existence of hidden links in supply chain networks, offering a variety of approaches. These range from the use of natural language processing to extract and infer data from the World Wide Web to probabilistic maximum-entropy methods, each with varying success rates. In this paper, we synthesize recent research on reconstructing supply networks. We start by describing the key problems: what data is available, what data is missing, and how to evaluate reconstruction performance (Section II). We then summarise recent approaches to inferring the network topology (Section III) and to inferring the values of transactions when the topology is known (Section IV). We conclude with a discussion (Section V) and two research agendas (Section VI) focusing on macroeconomic and supply chain management applications.

## II The supply network reconstruction problem

Production networks can be modelled at different levels of detail, both for nodes and edges. Naturally, the properties of the network depend on the level of aggregation. At the most granular level, nodes would represent individual production plants where goods undergo processing and transformation.
A more aggregate model would equate nodes with the companies operating these plants. One could further aggregate by either consolidating firms under a common parent company or grouping them by industry sector1.

Footnote 1: One could think that the industry level is more aggregated than the firm. While this is mostly true, it is sometimes important to recognize that large firms span many industries. Indeed, industry-level input-output networks produced by National Accounts arise from Supply and Use Tables, which attempt to reallocate the output and inputs of multi-product firms into their appropriate categories.

Firms exchange various goods and services. In a very detailed approach, each product type could be identified with a specific type of edge, rendering the production network as an edge-labelled multigraph. A simpler model would connect two nodes if they are involved in any type of trade, irrespective of the products' nature. Link weights can also have different definitions, measuring either the flow of goods (in terms, e.g., of the number of items traded) or the monetary value of such flow. In the context of this paper, we define a _supply network_ \(G\) as a graph where nodes represent firms, while directed, weighted links represent the value of the flow of goods and services in a supplier-buyer relation. This definition proves practical when reconstructing real-world supply networks from empirical data, which frequently adopt this format.

### What data is available?

Almost all countries officially release Input-Output (I-O) tables, which provide the flow of money between industries, typically at the level of 50-500 industries. While we focus on firms here, this data is sometimes useful in the methods below. Besides, I-O tables provide a meso-scale ground truth that could be a good target for reconstruction methods. Bacilieri _et al._ provide a taxonomy of existing datasets documenting different representations of supply networks. These are mainly: commercial datasets, confidential datasets held by governments, payment data, and industry-specific datasets. We briefly describe these types of data below. Purchasing data from providers such as FactSet, Capital IQ, or Bloomberg is relatively straightforward, but commercial datasets can be very expensive, cover only a fraction of firms and a very small fraction of links, and do not systematically include the value of the transactions. As commercial data providers typically assemble their data from publicly available information, researchers may also decide to collect this information themselves. An example is the extraction of data from the World Wide Web, after which machine learning algorithms are trained to predict supply-buy relationships [12]. Such an approach enables researchers to successfully gather rudimentary maps of supply chains, although it is limited to publicly available data, hence necessitating reconstruction efforts to identify missing relationships. The option of using government-held data requires datasets to be shared by national authorities, which may not always be feasible. However, where data has been collected by a national authority, it tends to be of very high quality. For example, VAT reporting may contain the value of transactions and timestamped data between virtually all firms within a country.
Bacilieri _et al._ show that VAT datasets with no reporting thresholds exhibit strikingly similar properties, while incomplete datasets (either because of a reporting threshold or because they are assembled from publicly available information) usually have fewer links, so that many key statistics are likely to be highly biased. A third option is payment data, which is usually (but not always) limited to individual banks collecting payment flows between their client firms (see, e.g., [13]). Although it is not guaranteed that every transaction corresponds to a business link within a supply network, it can be viewed as a plausible indicator. These datasets are extremely detailed for any subset of firms affiliated with the same bank. However, they do not cover firms served by different banks or accounts held by their clients in different institutions. Finally, datasets focusing on specific industry verticals are also sometimes gathered by private companies (e.g., MarkLines' automotive dataset used in Brintrup _et al._ [14]) and public regulatory bodies (e.g., the U.S. Drug Enforcement Administration's dataset of controlled substances flow). However, they are usually limited to specific geographies and production sectors. There are no large-scale publicly available datasets on firm-level production networks, making it impossible at the moment to portray the global supply network. Summing up the number of nodes in the datasets reported in Bacilieri _et al._ [3] gives less than 3m, so less than 1% of the 300m nodes reported earlier. Merging all the available datasets would give only an even smaller portion of the links and weights. This limitation forces researchers to use alternative options to proxy supply networks from smaller-scale, more specific datasets. These methodologies, developed to reconstruct or infer missing information about supply networks, are the main focus of this paper.

### A taxonomy of supply network reconstruction approaches

Clearly, what we actually mean by 'reconstructing' a supply network necessarily depends on the data already available to the researchers and on the ultimate use of the (inferred) network, i.e. the goal of the analysis. We discuss these points in what follows and classify the studies we review along four primary axes. We do not see these classifications as having rigid boundaries, but rather as providing continuous dimensions along which models can be placed.

_Predicting network topology and/or weights on transactions._ Consider a matrix \(\Omega\) where \(\Omega_{ij}\) shows the amount paid by \(j\) to \(i\). We distinguish between methods that focus only on finding the network's _topology_, i.e., the presence or absence of a commercial connection between two firms encoded in the (binary) adjacency matrix \(A_{ij}=1\leftrightarrow\Omega_{ij}>0\), and those that assume that the adjacency matrix is known and try to infer the monetary value of the existing connections, i.e. the _link weights_ \(\Omega_{ij}|A_{ij}=1\) (see below). Note that some methods try to simultaneously reconstruct both the topology and the weights of the network. Most of the methods we review focus on network topology.

_Predicting individual links or the full network._ Some methods focus on identifying the presence of specific links independently, while others try to reconstruct the entire network at once. The difference is subtle, yet important. Typically, links in real-world production networks are not independent.
This happens, for instance, when firms avoid "multi-sourcing": a firm that is already connected to supplier \(j\) for a key input is less likely to be connected to other suppliers for that input. In reconstruction methods, links are sometimes assumed to be mutually dependent, and sometimes assumed to be independent. Generally (but not necessarily), the assumption made is related to the ultimate goal of the reconstruction method. The task of trying to identify the presence of specific links is usually known as _link prediction_ [15], while that of inferring the full network architecture is referred to (at least in this paper) as _network inference_. In general, network inference computes the full distribution \(P\left(G\right)\) over the set \(\mathcal{G}=\left\{G\right\}\) of all possible networks. Link prediction, instead, computes the marginal probability \(p_{ij}\) of an edge between nodes \(i\) and \(j\)2. Again, there is no hard boundary between the two methods, which are occasionally equivalent: if one considers link independence as (the result of) a modelling assumption, computing the values \(\left\{p_{ij}\right\}\) for all pairs of nodes and reconstructing the whole network become two equivalent operations, as the probability \(P\left(G\right)\) factorizes as

\[P(G)=\prod_{(i,j)\in E(G)}p_{ij}\prod_{(i,j)\notin E(G)}\left(1-p_{ij}\right), \tag{1}\]

where \(E(G)\) denotes the set of edges realized in graph \(G\). In this case, link prediction and network inference coincide. On the other hand, whenever the full probability \(P\left(G\right)\) in a network inference method is available (and irrespective of whether edges are assumed to be independent or not), it is always possible to compute the _marginal_ connection probability \(p_{ij}\) as \(p_{ij}=P\left(A_{ij}=1\right)=\sum_{G\in\mathcal{G}}P\left(G\right)A_{ij}\) and use it in a link prediction exercise.

Footnote 2: More generally, link prediction methods produce a _score_ \(s_{ij}\), such that \(s_{ij}>s_{kl}\implies p_{ij}>p_{kl}\). However, such scores are not necessarily smaller than one, and the ratio between two scores is not necessarily equal to the ratio between link probabilities.

It is fair to say that the factorization in Eq. (1) is, at most, only approximately true in reality. However, some methods with independent edges can still capture meso- and macro-scale features of the network (see, e.g., [13]) and, by framing the reconstruction problem as a binary classification task, link prediction facilitates easy comparison of methods through standard performance metrics.

_Using topological information or not._ Of course, all reconstruction methods need, at the end of the procedure, the whole empirical network as the 'ground truth' to _test_ their predictions. However, while some methods need the full adjacency matrix also in their training, other methods can learn from node-level or pair-level features only. This is important because the methods that do not rely on the adjacency matrix for training can be used in contexts where the detailed network is not observed, as long as certain node-level (and possibly pair-level) features are available.

_Probabilistic or deterministic._ Some models produce _deterministic_ outputs, usually finding a network configuration by minimizing or maximizing a given loss function. Consequently, their output is a single network realisation that is, on one hand, optimal according to some score but, on the other hand, very unlikely to represent the true network.
Other methods provide _probabilities_ over possible network realisations. The goal of these methods can then be viewed as finding a 'good' probability distribution, peaked 'around' or 'close' to the true one. Equipped with this probability distribution, researchers can find the typical and most likely realisations of the network and compute, for instance, expected values and confidence intervals for properties of the network.

### Evaluating the reconstructed networks

In their review paper on network reconstruction, Squartini _et al._ provide a useful taxonomy of performance metrics: _statistical_, _topological_, and _dynamical_ indicators.

_Statistical_ indicators evaluate the quality of the reconstructed network on a link-by-link (or weight-by-weight) basis. Different statistical indicators apply to deterministic and probabilistic outcomes. In the realm of deterministic outcomes, perhaps the most commonly employed indicator is _accuracy_, the proportion of correct predictions. In supply networks, however, there is a strong class imbalance: the number of pairs not linked is much higher than the number of pairs linked. Thus, it is generally easy to make "correct" predictions, since predicting that a link does not exist is very likely to be correct. For this reason, a commonly used metric is the _F1-score_, defined as the harmonic mean of precision (how many predicted links actually exist) and recall (how many existing links are predicted as existing), which offers a more balanced performance metric in unbalanced datasets. For probabilistic reconstructions, the evaluation is often based on the _area under the receiver operating characteristic curve_ (AUROC) and the _area under the precision-recall curve_. AUROC, derived from the Receiver Operating Characteristic (ROC) curve, essentially quantifies the ability of a model to discern between classes at varying threshold levels. The ROC curve plots the true positive rate (recall) against the false positive rate for different decision thresholds (i.e., by considering "true" all the predictions with probability larger than a certain threshold \(\tau\), for different values of \(\tau\)), giving insights into the trade-off between sensitivity (true positive rate) and specificity (true negative rate). The AUROC, being the area under this curve, ranges from 0.5 to 1, with 1 implying an ideal classifier and 0.5 corresponding to no better than random guessing.

Because statistical indicators focus on individual links, they may not adequately evaluate whether the reconstructed network replicates complex network structures. _Topological_ indicators measure how well the network's macro-level and meso-level features are reproduced, gauging how effectively the reconstruction captures the network's 'coarse-grained' features. For instance, Ialongo _et al._ validate their reconstruction methodology by assessing how accurately it replicates the network degree distribution. Topological indicators can tell us whether the reconstructed and true networks are "similar". However, ultimately the key question is whether a reconstructed network is good enough to give good answers to substantive economic questions.

_Dynamical_ (or more generally model-based) indicators assess the similarity in the evolution of a process on the real and reconstructed networks. As an example, Diem _et al._ introduced the _Economic Systemic Risk Index_ (ESRI) to quantify each firm's importance within an economy.
The metric measures the percentage drop in the economy's overall production caused by the removal of a firm from the network. Its computation requires running a dynamical process, wherein the sudden disappearance of a firm first impacts its suppliers and customers and, iteratively, spreads to firms that are further away in the network, until the system reaches an equilibrium. Conceivably, accurately estimating firm-level ESRI may only necessitate identifying a subset of key links, so a good prediction of the other links is not necessarily important for the final economic result. Armed with these evaluation indicators, we now examine in detail the models employed for reconstructing production networks, starting from methods focusing only on the network topology, and then discussing methods for reconstructing network weights.

## III Reconstructing the network topology

We start by reviewing studies that reconstruct the network using link prediction, and then those that do so using network inference methods. Table 1 provides an overall summary of the methods and their differences.

### Link prediction

#### Setting up the problem

An early stream of research employs machine learning for link prediction in production networks. The key idea is to construct a dataset in the form of Fig. 1A, where for each pair \((i,j)\), we collect some features \(f_{(i,j)}\) that can be features of each node (e.g., the product it makes, its total sales, etc.) or of the pair (e.g. geographical distance, whether they have a common supplier or client, etc.), and the response \(A_{ij}\), which is equal to \(0\) or \(1\). With such a dataset, one can then train a machine-learning classifier on a set of examples \(\left\{f_{(i,j)},A_{ij}\right\}\). Different papers have then made different choices for the predictors \(f_{(i,j)}\) and the predictive algorithm, as we will discuss in detail. But before, let us note another critical element, which is the construction of the dataset. Production networks are very sparse [3], so non-existing links (\(A_{ij}=0\)) vastly outnumber existing ones (\(A_{ij}=1\)). Therefore, training a model on the entire set of available examples might simply be computationally intractable (there are \(\sim n^{2}\) pairs). Moreover, sampling a random subset would usually lead to poor predictions, because the scarce number of positive examples hinders the model's ability to effectively discriminate between the two classes. This phenomenon, known as the _class imbalance_ problem, can potentially lead to models that are biased toward predicting the majority class, thus failing to accurately identify the existing links. This problem is commonly addressed by applying _undersampling_ (Fig. 1B), a technique that aims to rebalance the class distribution. In the context of production networks, undersampling involves carefully curating the training set to ensure a predetermined ratio between positive (\(A_{ij}=1\)) and negative (\(A_{ij}=0\)) examples. This controlled selection helps foster a more balanced, discriminative model and was employed in all the machine learning approaches that we are now set to survey.

Figure 1: (a) Datasets for link prediction are usually built by filling rows with two nodes' features (\(f_{u}\), \(f_{v}\), \(f_{u,v}\)) and by indicating if there is a link between the two nodes (\(A_{u,v}\)). (b) These datasets are usually undersampled: in the original dataset, a small minority of the rows will be such that \(A_{u,v}=1\) (blue), while most of the rows will be such that \(A_{u,v}=0\) (red); undersampling discards a portion of them to generate a more balanced dataset.
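To make the construction in Fig. 1 concrete, the following minimal Python sketch assembles \((f_{(i,j)},A_{ij})\) rows from an adjacency matrix and undersamples the negatives. The function and its feature inputs are illustrative assumptions, not code from any of the papers reviewed here.

```python
import numpy as np

def build_link_dataset(A, node_features, pair_features, neg_per_pos=1.0, seed=0):
    """Assemble (features, label) rows as in Fig. 1A, then undersample the
    non-links to a chosen negative:positive ratio (Fig. 1B)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    pos = [p for p in pairs if A[p]]
    neg = [p for p in pairs if not A[p]]
    # keep all existing links; subsample the (far more numerous) non-links
    k = min(len(neg), int(neg_per_pos * len(pos)))
    keep = rng.choice(len(neg), size=k, replace=False)
    neg = [neg[i] for i in keep]
    X = np.array([np.concatenate([node_features[i], node_features[j],
                                  pair_features(i, j)])
                  for (i, j) in pos + neg])
    y = np.array([1] * len(pos) + [0] * len(neg))
    return X, y

# toy usage: 50 firms, random node features, one scalar pair feature
A = np.random.default_rng(1).random((50, 50)) < 0.05
F = np.random.default_rng(2).normal(size=(50, 3))
X, y = build_link_dataset(A, F, lambda i, j: np.array([abs(i - j)]))
```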
| Study | Coverage | Dataset | Inputs | Probabilistic |
| --- | --- | --- | --- | --- |
| Mori _et al._ [18] | Regional | Tokyo area manufacturing firms, source unspecified | Several features regarding firms' activities, balance sheets, management | |
| Zuo _et al._ [19] | National | Tokyo Shoko Research | Firms' sales, profits, industrial sector, location, number of employees, network centrality | |
| Sasaki and Sakata [20] | Regional | Tohoku region, Teikoku Databank | Firms' sales, capital, size, industrial sector, network centrality | |
| Lee and Kim [21] | National | Korean Enterprise Data | Description of firms' activities, firms' industrial sector and location, aggregate transaction volumes between industrial sectors | |
| Brintrup _et al._ [14] | Automotive | Marklines Automotive Information Platform | Firms' known connections, products, intermediate inputs | X |
| Kossaih and Brintrup [22] | Automotive | Marklines Automotive Platform | Firms' known connections | X |
| Minakawa _et al._ [23] | Global | Asian bank's transaction data | Firms' known connections, description of firms' activities | |
| Mungo _et al._ [24] | Global, National | Compustat, FactSet, Ecuador | Firms' sales, industrial sector, location | |
| Zhang _et al._ [25] | Global | Specialized press (Reuters) | Media coverage | X |
| Wichmann _et al._ [12] | Global | Specialized press | Media coverage | |
| Schaffer P. [26] | Global | Specialized press | Media coverage | |
| Reisch _et al._ [27] | National | Phone calls, survey, Hungary | Firms' phone calls, national IOTs | X |
| Hooijmaaijers and Buiten [28] | National, 4 commodity groups | IOTs, Business Register, Structural Business Statistics | Firms' known connections, sales, geographic location, industrial sector | |
| Hillman _et al._ [29] | National | IOTs, Business Register, Structural Business Statistics | Firms' known connections, sales, geographic location, industrial sector | |
| Ialongo _et al._ [13] | National | Dutch banks' transaction data | Firms' sales, intermediate expenses by sector, network density (for calibration) | |
| Mungo and Moran [30] | Global | FactSet | Firms' sales (time series), industrial sector, network sector structure | |

Table 1: Overview of the papers that reconstruct the supply network topology.

However, this procedure has implications for model evaluation. Typically, an algorithm is trained on a subsample (the training set) and evaluated on the remaining data (the testing set). If subsampling is done before the split into training and testing sets, the testing set will contain many more positives than a "real-life" testing set, so metrics such as accuracy will be severely biased. [24] found that metrics such as the AUC were not substantially affected by the undersampling ratio, so we will tend to report AUCs, which are more comparable across studies. Many studies, however, report the F-score, which is highly dependent on class imbalance [24], so when reporting F-scores we will also report undersampling ratios.

#### Predicting new business partners

Interestingly, link prediction in production networks was not originally pursued to reconstruct existing networks, but rather to build recommender systems that could suggest new partnerships to companies trying to expand their supplier or customer bases.
In this framework, the ability of a model to identify existing (or past) supply-chain links is a target insofar as it is a proxy for its ability to make sensible recommendations, i.e., to identify _candidate_ links that firms could turn into existing ones. Despite aiming for different goals, these studies share several similarities with those on network reconstruction, both in the problem's layout, framed as a link prediction task, and in the tools used, often relying on statistical models and network science. Mori _et al._ focus on \(\sim\) 30k manufacturing firms in Japan. They build a business partner recommendation system by feeding a Support Vector Machine (SVM) with several company features, such as size, industrial sector, and geographic location. On a dataset comprising \(\sim\) 34k links and an equal number of negative instances, they achieve an F-score of 0.85. The approach is refined in [19], who still use an SVM but add topological properties to the list of company features, such as their degree, betweenness centrality, and closeness centrality. For a network of 180k firms and half a million links assembled through the Tokyo Shoko Research dataset, and again an undersampling ratio of 1:1, they achieve an F-score of 0.81. Sasaki and Sakata explicitly incorporate the network of second-tier suppliers and their respective industries, providing a more contextual analysis. The authors' intuition is that two firms within the same industry but with different suppliers will have different probabilities of selling to a specific customer. In other words, establishing a relationship between firms \(A\) (supplier) and \(B\) (customer) does not depend solely on the identity of \(A\) and \(B\), but also on who \(A\)'s suppliers are. Thus, the authors first extract from their network all the triads of firms connected in sequence (i.e., all the motifs \(A\to B\to C\)). Then, they replace each firm with its industrial sector (e.g., if we call \(S_{i}\) the industrial sector of firm \(i\), the triplet \(A\to B\to C\) becomes \(S_{A}\to S_{B}\to S_{C}\)), and use a Bayesian model called _n-gram_ to compute the link probability between \(B\) and \(C\) given \(B\) and \(C\)'s industrial sectors and the industrial sectors of \(B\)'s suppliers. Finally, the authors use these probabilities as features in a random forest classifier, together with a few firm attributes (total revenues, number of employees, etc.) and network centralities. The authors focus on \(\sim\) 50k links in a network of 130k Japanese firms3, achieving an F-score of 0.80 with an undersampling ratio of 1:1.

Footnote 3: The authors test their method on "new" links, missing from their 2010 snapshot of the network and present in the 2011 snapshot. The data is provided by Teikoku Databank Ltd., a business intelligence company.

More recently, Lee and Kim integrated information on firms' geographical position and industrial sector with aggregate trade volumes between sectors and textual information on companies' activities and products. The authors encode this information and use it to train a deep neural network. On a sample of \(\sim\) 90k connections between South Korean firms, where 20% of the examples are used as a test set, the authors achieve an AUROC of 0.92.

Footnote 4: The authors do not specify the undersampling ratio of their exercise.
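The studies above share a common pipeline: build an undersampled pair-level dataset, fit a classifier, and report F-scores or AUROC on held-out pairs. A minimal sklearn sketch of that pipeline follows; the synthetic features and the choice of a random forest are placeholder assumptions (the papers variously used SVMs, random forests, and deep networks).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-in for an undersampled link dataset (roughly 1:1 ratio,
# as in the studies above): each row holds features of a candidate pair
X = rng.normal(size=(2000, 6))          # e.g. sizes, sector codes, distance
y = (X[:, 0] + X[:, 5] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

p = clf.predict_proba(X_te)[:, 1]       # marginal link probabilities
print("F1:",    f1_score(y_te, clf.predict(X_te)))
print("AUROC:", roc_auc_score(y_te, p))
```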
This trajectory of studies reflects a consistent evolution in methodology, with each iteration contributing incremental enhancements in feature integration and model sophistication, partially akin to what we will see now for papers which address supply network reconstruction specifically.

#### Can a firm better understand its supply network dependencies?

From a supply chain management perspective, a focal firm is interested in understanding hidden dependencies within its supply network - for instance, two suppliers may rely on a hidden "second tier" supplier, creating a vulnerability for the focal firm that is invisible at first sight. In such a context, the focal firm would typically see a fair part of the network and could use this topological information to make further inferences. This is the context of the early investigation by Brintrup _et al._ [14], who focus on the supply networks of three major car manufacturers (Jaguar, Saab, and Volvo, using data from the Marklines Automotive Information Platform). Using their domain expertise, the authors create four features for each potential link \((i,j)\): _Outsourcing Association_ (the overlap between the goods produced by company \(i\) and those bought by company \(j\)), _Buyer Association_ (how frequently firms that purchase the same inputs as firm \(i\) also buy the products of firm \(j\)), _Competition Association_ (the overlap between the products of firm \(i\) and those of firm \(j\)), and _Degrees_ (the number of partners of each firm). Training a logistic regression and a Naive Bayes classifier using these features yields an AUROC of around 0.8. In a subsequent paper [22], the authors refine their approach using Graph Neural Networks (GNNs) [31]. The concept underlying GNNs is that the network's topological information should not be distilled by the researchers through the design of specific features (as was the case with the association measures of the previous paper), but should instead be discovered automatically by the neural network. For production networks, the intuition is that the association measures designed in earlier work [14], while informative, might not convey all the information lying in the network's topology. Instead, a neural network provided with a sufficient number of examples would identify patterns hidden to the researchers. Practically, this is accomplished by:

1. for each link \(l=(i,j)\), isolating subnetworks \(G_{i}\), \(G_{j}\) composed of the nodes \(i\) and \(j\), along with the set of their neighbours;
2. embedding each node \(u\) in the subnetwork \(G_{l}=G_{i}\cup G_{j}\) in a vector \(f_{u,l}\)5;
3. feeding the nodes' embeddings \(f_{u,l}\) to a series of \(K\) _graph convolutional layers_, which are nonlinear functions \(f_{u,l}^{k+1}=\phi\left(f_{u,l}^{k},\{k_{u}\}\right)\), where \(k_{u}\) are the degrees of the nodes in \(G_{l}\);
4. averaging the final vectors \(f_{u,l}^{K}\) across all the different nodes \(u\), generating an embedding vector \(f_{l}^{\prime}\) for the subnetwork \(G_{l}\);
5. feeding the embedding through a sequence of fully connected layers to generate a single prediction for the probability \(p_{ij}\).

Footnote 5: The embedding usually consists of computing an average distance \(d\) between node \(k\) and the nodes \(i\) and \(j\), and then embedding \(k\) in a vector \(f_{ij}^{k}=\delta_{dd^{\prime}}\). The dimension of this vector is the maximum possible distance, which must be specified as a parameter of the model.
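The steps above can be condensed into a short pure-PyTorch sketch. It assumes the subgraph \(G_{l}\) and the distance-based node embeddings of footnote 5 have already been computed; the mean-aggregation convolution and the layer sizes are generic stand-ins rather than the exact architecture of [22].

```python
import torch
import torch.nn as nn

class SubgraphLinkGNN(nn.Module):
    """Steps 2-5: take the nodes of the joint neighbourhood subgraph G_l,
    apply K graph-convolutional layers, mean-pool, and score the link."""
    def __init__(self, d_in=8, d_hid=32, K=3):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Linear(d_in if k == 0 else d_hid, d_hid) for k in range(K)])
        self.mlp = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, 1))

    def forward(self, A, f):
        # A: (n, n) adjacency of G_l; f: (n, d_in) distance-based embeddings
        A_hat = A + torch.eye(A.shape[0])            # add self-loops
        P = A_hat / A_hat.sum(dim=1, keepdim=True)   # degree normalisation
        for conv in self.convs:
            f = torch.relu(conv(P @ f))              # f^{k+1} from f^k and degrees
        g = f.mean(dim=0)                            # step 4: subgraph embedding
        return torch.sigmoid(self.mlp(g))            # step 5: p_ij

# toy usage: a 5-node subgraph around a candidate link
A = torch.tensor([[0, 1, 1, 0, 0], [1, 0, 1, 1, 0], [1, 1, 0, 0, 1],
                  [0, 1, 0, 0, 1], [0, 0, 1, 1, 0]], dtype=torch.float)
f = torch.nn.functional.one_hot(torch.tensor([0, 0, 1, 2, 2]), 8).float()
print(SubgraphLinkGNN()(A, f))   # predicted link probability
```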
The authors find a significant improvement compared to the previous approach, with the GNNs scoring an AUROC value of \(\sim 0.95\). While this is an impressive improvement in performance, a downside of this approach is that it becomes very difficult to interpret the predictions made by the neural network and develop novel insights into how firms connect. A similar approach is proposed in [23], where the authors train a graph neural network with topological information and textual information on firms' activities, encoded via the Doc2Vec algorithm [32]. On a network of 170k firms and 1.2M edges provided by a large Asian bank, the authors achieve an AUROC of 0.94-0.95, depending on the respective sizes of the training and the test data. They do not report the undersampling ratio.

#### iii.1.4 Predicting the supply networks of entire countries where no network data exist

Mungo _et al._ use similar methods for a different purpose. They observed that in some countries excellent data is available, while in other countries (including the US) there is no fully reliable information on firm-to-firm transactions, creating a need for methods that predict the supply network using only information available locally (Hooijmaaijers and Buiten [28], reviewed in Section III.2.1, first developed a method based on data held by most statistical offices). Based on this observation, they ask whether a model trained on the production network of a country \(A\) accurately predicts links between firms in another country \(B\). In all countries, there is usually good data available on key features of firms and pairs of firms that could determine link formation. For example, it is well established that large firms have more connections [3], that firms prefer to trade with geographically closer firms [6; 33], and that production recipes put significant constraints on the inputs firms buy. Based on these hypotheses, for each candidate link, the authors build a vector \(f_{(i,j)}\) containing information on firms' sales, industrial sector, and geographical distance. They then train a _gradient-boosting_ model to predict link probability.

The study is run on three different datasets: two commercial, global datasets (_Compustat_ and _FactSet_) and one dataset covering (a subsample of) Ecuador's national production network, assembled by Ecuador's government using VAT data. When tested on the same dataset used to train the model, the approach scores an AUROC similar to that of the previous approaches (from \(\sim 0.91\) to \(\sim 0.95\) depending on the dataset), suggesting that knowing a firm's products, location, and size indeed provides sufficient information to make decent predictions. For making predictions on unobserved countries, they conduct two tests. In the first test, they consider different countries in the same dataset, for instance training their model on FactSet's US and Chinese networks and predicting links in Japan. In this case, the approach still performs relatively well (AUROC \(>\) 0.75). In the second test, they predict the links in Ecuador using FactSet and the other way around. Here, the performance deteriorates substantially, which the authors explain by showing that the distributions of features in FactSet (an incomplete, commercial dataset with large firms in rich countries) and Ecuador (a complete administrative dataset, with all firms from a developing economy) are very different.
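The cross-country exercise can be mimicked with entirely synthetic data. In the sketch below, every feature, coefficient, and the shift between "countries" is an arbitrary assumption of ours, used only to illustrate the mechanics of training a gradient-boosting link classifier in one country and scoring it on another whose feature distribution differs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def make_country(rng, n, dist_scale):
    """Synthetic candidate pairs: [log sales_i, log sales_j, same-sector flag, distance]."""
    X = np.column_stack([
        rng.normal(5, 1.5, n), rng.normal(5, 1.5, n),
        rng.integers(0, 2, n), rng.exponential(dist_scale, n),
    ])
    logits = 0.5 * (X[:, 0] + X[:, 1]) + 1.2 * X[:, 2] - X[:, 3] / dist_scale - 5.5
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

rng = np.random.default_rng(1)
X_a, y_a = make_country(rng, 5000, dist_scale=300)   # "country A": training data
X_b, y_b = make_country(rng, 5000, dist_scale=150)   # "country B": shifted feature distribution
model = GradientBoostingClassifier().fit(X_a, y_a)
print("in-sample, country A  AUROC:", roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]))
print("transfer to country B AUROC:", roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]))
```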
This partial success suggests that there is potential for further studies using multiple administrative datasets. For instance, while it is not possible to predict the Ecuadorian administrative data using the commercial data from FactSet, it might still be possible using similar administrative datasets, given the results from [3] showing that administrative datasets exhibit strikingly similar topological properties. This suggests a straightforward approach to reconstructing the global firm-level production network: use training data from a few countries, together with large-scale firm-level datasets such as ORBIS.

#### iii.1.5 Leveraging alternative data: news and phone calls

The idea in [25] and [12] is that significant commercial deals might be announced in press releases or covered by the specialized press. [25] build a system to automate the analysis of articles and investor comments coming from Reuters and to identify collaborative and competitive relationships between companies (see footnote 6). The authors web-scrape a corpus of \(\sim 125k\) documents and manually annotate a sample of \(4.5k\), overall identifying 505 relationships. Then, they use a Latent Dirichlet Allocation (LDA) algorithm (a widely used algorithm in text analysis) to examine these examples, finding that the algorithm identifies collaborative relationships with an AUROC of 0.87.

Footnote 6: Note that, for the authors, a "collaborative relationship" has a broader meaning than a supply relationship.

Similarly, [12] automate the analysis of textual data (coming from the Reuters Corpora TRC2 and RCV1, NewsIR16, and specific web searches) to find mentions of commercial deals between firms. First, the authors collect a text corpus describing the relationships between firms. Then, they classify these relationships as either a commercial relationship (e.g., firm \(i\) supplies firm \(j\)), an ownership relationship (firm \(i\) owns firm \(j\)), or none of the previous. The annotated examples are embedded into numerical vectors using the word embeddings in the GloVe dataset and finally used to train a Natural Language Processing (NLP) classifier with a BiLSTM architecture. 30% of the sentences were left out of the data and used to assess the performance of the model, which scores an F1-score of 0.72 with a class imbalance of 1:7. Unfortunately, the choice of evaluating the model on a binary metric (the F1-score) does not allow a straightforward comparison with the previous approaches. However, the authors report that a random classifier would get an F1-score of 0.38. In a follow-up paper [26], the authors improve their results by running the same study with a BERT model, reaching an F1-score of 0.81.

In [27], instead, the authors use phone calls between companies and survey data to track down supplier-customer relationships in an undisclosed European country. The survey asked companies to list their ten most important suppliers and customers. On this subsample of the network, the authors find that if the average daily communication time between two firms \(i\) and \(j\), denoted \(\tau_{ij}\), is greater than \(30\) seconds, the probability that these two firms are connected is \(p_{ij}\approx 0.9\).
Equipped with this observation, the authors reconstruct the network by first assuming the presence of a link between \(i\) and \(j\) if \(\tau_{ij}>30s\), and then assigning a direction to the link stochastically with probability \[p\left(i\to j\right)=\frac{\omega_{a_{i}b_{j}}}{\omega_{a_{i}b_{j}}+\omega_{b_{j}a_{i}}},\] where \(a_{i}\) and \(b_{j}\) are \(i\) and \(j\)'s respective industrial sectors, and \(\omega_{ab}\) is the total amount of trade (in monetary value) from firms in sector \(a\) to firms in sector \(b\), as reported in the country's Input-Output tables (see footnote 7). The authors do not provide any 'standard' evaluation metric for their reconstruction. However, they mention that choosing a threshold \(\tau_{ij}=30\,s\)/day minimizes the Kullback-Leibler divergence between the degree distribution of the reconstructed network and that of a well-studied network, the Hungarian production network. The authors' ultimate goal was to compute firms' Economic Systemic Risk Index (ESRI, see Section II.3) in the reconstructed network, and they do find a good qualitative agreement between the ESRI sequence of firms in the reconstructed and the Hungarian network.

Footnote 7: A consequence of the algorithm choosing edge directions in this way is that the reconstructed network has null reciprocity, while we know that real networks exhibit reciprocity of around a few percent [3].

### Network Inference

A second stream of research tries to reconstruct the production network as a whole rather than link-by-link. We distinguish three sets of approaches: matching algorithms, maximum entropy methods, and methods based on time series correlations.

#### iii.2.1 Matching algorithms

A couple of papers have used matching algorithms to create supply networks. We classify these under "Network Inference" because, while they reconstruct the network link-by-link, they typically try to match aggregate constraints, taken from I-O tables and/or from meso-level statistics published independently. An early study is the one by Hooijmaaijers and Buiten ([28], see [34] for details), who devise an algorithm that matches firms based on commonly observable firm characteristics (industry, size, location) and I-O tables. Roughly speaking, their method works as follows (a sketch follows this paragraph). First, using a relationship between sales and degrees, \(s_{i}\propto k_{i}^{1.3}\) [35], they estimate out-degrees from total sales. For expenses, using the I-O tables they estimate the expenses of each firm by industry, and, assuming that the in-degree by industry is a (specific) increasing function of expenses by industry, they estimate the number of industry-specific suppliers for each firm. Knowing the degrees of all firms, the next task is to match them. To do this, they create pairwise scores based on assumptions about what determines the likelihood of a match. The final score is a linear combination of three scores: one that increases with firm size, one that decreases with distance, and one that acts as a bonus or penalty depending on whether the firms are in industries that trade in the I-O tables. The matching algorithm then starts with the buyer that has the highest purchasing volume and proceeds in descending order. The number of suppliers connected to each buyer is determined by the buyer's in-degree. Among the potential suppliers, those with the highest scores are considered the most likely to trade with the buyer. If any of these top-rated suppliers have no remaining outgoing links, the next most likely supplier in line is considered instead.
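A heavily simplified rendition of this greedy, score-based matching is sketched below. All numbers (firm sizes, the distance penalty, the I-O "bonus" matrix) are arbitrary stand-ins, and, unlike the original method, the in-degrees here are not industry-specific.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
sales = rng.lognormal(3, 1, n)
expenses = rng.lognormal(3, 1, n)
xy = rng.uniform(0, 100, (n, 2))                  # firm locations
sector = rng.integers(0, 3, n)
io_bonus = rng.uniform(-1, 1, (3, 3))             # stand-in for I-O table compatibility

# s_i ∝ k_i^1.3  =>  k_i ∝ s_i^(1/1.3); the prefactor 1/2 is arbitrary.
out_deg = np.maximum(1, (sales ** (1 / 1.3) / 2).astype(int))
in_deg = np.maximum(1, (expenses ** (1 / 1.3) / 2).astype(int))

def score(i, j):
    """Linear combination: size bonus, distance penalty, I-O compatibility."""
    dist = np.linalg.norm(xy[i] - xy[j])
    return 0.5 * np.log(sales[i]) - 0.02 * dist + io_bonus[sector[i], sector[j]]

links, remaining_out = [], out_deg.copy()
for j in np.argsort(-expenses):                   # buyers in descending purchase volume
    cands = sorted((i for i in range(n) if i != j), key=lambda i: -score(i, j))
    for i in cands:
        if sum(1 for (_, b) in links if b == j) >= in_deg[j]:
            break                                 # buyer j has all its suppliers
        if remaining_out[i] > 0:                  # skip suppliers with no out-links left
            links.append((i, j)); remaining_out[i] -= 1
print(len(links), "links placed")
```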
Hillman _et al._ introduced another algorithm, driven by their need to create a synthetic firm-level network for their agent-based model of the impact of the Covid-19 pandemic. Again, their method makes use of I-O tables and data on sales, although it does not use location information. Their algorithm is less clearly documented, but essentially works by first using I-O tables to determine which industries a firm should sell to, then allocating chunks of its sales to randomly selected firms in the buying industry. They show that their algorithm is able to reproduce a positive strength-degree relationship.

#### iii.2.2 Maximum-entropy for network inference

In a sense, matching algorithms try to distribute connections "randomly" while matching some aggregate properties of the network. However, to do so they introduce "plausible" assumptions, such as specific functional forms to create scores. Instead of introducing assumptions, maximum entropy assigns a probability to each possible network in a "maximally non-committal" way. This raises the question of whether introducing assumptions about what is not fully known is better than just maximizing entropy conditional only on what is fully known. This is the question of Rachkov _et al._, who showed that the networks obtained from the matching method proposed in Ref. [28] have different properties than those obtained using a simple maximum-entropy model, suggesting possible biases in heuristics-based reconstructions. That being said, simple maximum entropy methods are not well-suited for complete supply networks (i.e., networks that are not commodity-specific), because they do not use information on firms' products, which we know is a critical determinant of their probability to link.

Ialongo _et al._ introduced a method that tackles this issue and simultaneously reconstructs the whole network topology and the link weights (see Sec. IV for the weights). Following a long-standing tradition in network reconstruction [16], they compute a probability distribution \(P(G)\) over the set of all possible graphs \(\mathcal{G}\) that maximizes the Shannon entropy \(\mathcal{S}\), \[\mathcal{S}=-\sum_{G\in\mathcal{G}}P\left(G\right)\ln P\left(G\right).\] The maximization is subject to a normalization constraint, \(\sum_{G\in\mathcal{G}}P(G)=1\), and a collection of constraints \(\tilde{\mathbf{c}}\) representing the macroscopic properties enforced on the system. These constraints are usually enforced in a soft way, that is, by constraining the expected values of the constraints over the set of all possible networks \(\mathcal{G}\), \[\sum_{G\in\mathcal{G}}P(G)c_{i}\left(G\right)=\tilde{c}_{i}.\] The authors expand on a pre-existing model [36], constraining the network's density \(\rho\), each firm's total sales \(\omega_{i}^{out}\), and the money spent by each firm \(i\) on inputs from each industrial sector \(a\), \(\{\omega_{a\to i}\}\). However, as we have already emphasized, a crucial feature of supply networks is that firms connect to others specifically for the products they make. A method that does not take into account the product or industry of a firm is, in the context of supply networks, doomed to fail. As a result, the authors design a new model able to handle sector-specific constraints.
For instance, in a hypothetical economy with two sectors, \(a\) and \(b\), the model enforces three constraints on each firm \(i\): one for total sales, \(\sum_{G\in\mathcal{G}}P\left(G\right)\omega_{i}^{out}=\tilde{\omega}_{i}^{out}\), and one for the spending on each of the sectors: the money spent on inputs from sector \(a\), \(\sum_{G\in\mathcal{G}}P\left(G\right)\omega_{a\to i}=\tilde{\omega}_{a\to i}\), and the spending on inputs from sector \(b\), \(\sum_{G\in\mathcal{G}}P\left(G\right)\omega_{b\to i}=\tilde{\omega}_{b\to i}\) (we use tildes to denote observed quantities). The model admits an analytical solution for the marginals \(p_{ij}\), \[p_{ij}=\frac{z\tilde{\omega}_{i}^{out}\tilde{\omega}_{a_{i}\to j}}{1+z\tilde{\omega}_{i}^{out}\tilde{\omega}_{a_{i}\to j}}, \tag{2}\] where \(a_{i}\) is the industrial sector of firm \(i\), and \(z\) is chosen such that \(\sum_{i}\sum_{j\neq i}p_{ij}=\tilde{\rho}\). The authors show that their method significantly improves upon the model of [36], where each firm is subject to a single constraint on its overall intermediate expenses. In a maximum-entropy framework, imposing only one constraint on the intermediate expenses would distribute a firm's suppliers equally across all industrial sectors. This is at odds with the reality of supply chains, where firms require only a select range of goods from the basket of products available in an economy. The authors do not report any standard reconstruction metric, but they show that the in-degree and out-degree distributions of the reconstructed network are, in expectation, in good agreement with the empirical degree distributions. Moreover, the relationship between the degrees and strengths of firms is generally well replicated.

A limitation of all the studies discussed so far is that they consider only firm-to-firm links. For macroeconomic applications, it would be useful to reconstruct complete synthetic populations (see Sec. VI), including links between firms (including banks) and consumers/workers. Hazan uses a maximum-entropy approach (more precisely, the fitness-induced configuration model [38]) for firm-to-firm networks and firm-to-consumer networks, taking average degrees from the literature to estimate \(z\) separately in each network.
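Eq. (2) lends itself to a compact numerical implementation. The sketch below generates synthetic firm sizes and sector-level spending (all values are illustrative assumptions), calibrates \(z\) so that the expected number of links matches a target density, and samples one network from the resulting probabilities.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
n, n_sectors = 200, 4
sector = rng.integers(0, n_sectors, n)             # sector a_i of each firm
w_out = rng.lognormal(1, 1, n)                     # observed total sales per firm
w_in_by_sec = rng.lognormal(0, 1, (n, n_sectors))  # observed spending of each firm, per sector
rho_target = 1500.0                                # target expected number of links

def expected_links(z):
    # x_{ij} = z * w_i^out * w_{a_i -> j} (spending of j on the sector of i)
    x = z * w_out[:, None] * w_in_by_sec[:, sector].T
    np.fill_diagonal(x, 0.0)
    return (x / (1 + x)).sum()                     # sum of Eq. (2) over all pairs

# Calibrate z on a log scale; expected_links is monotone increasing in z.
z = np.exp(brentq(lambda lz: expected_links(np.exp(lz)) - rho_target, -30, 10))
x = z * w_out[:, None] * w_in_by_sec[:, sector].T
np.fill_diagonal(x, 0.0)
p = x / (1 + x)                                    # Eq. (2): link probabilities
A = (rng.random((n, n)) < p).astype(int)           # one sampled network
print(A.sum(), "links sampled; expected:", round(p.sum()))
```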
#### iii.2.3 Leveraging the correlation matrix using graph learning

An established literature tackles the problem of reconstructing a network starting from \(N\) node-level time series encoded in vectors \(x\left(t\right)\in\mathbb{R}^{N}\) [39; 40]. The general philosophy is that the structure of the network \(\mathcal{G}\) determines the joint probability distribution of the observations. If one assumes that each observation \(x\left(t\right)\) is drawn from a probability distribution \(p\left(x|\Theta\right)\) with a parameter matrix \(\Theta\in\mathbb{R}^{N\times N}\), the problem of reconstructing the graph, or _graph learning_, becomes that of finding the correct value of \(\Theta\). Production networks serve as a contagion channel for economic shocks. They spread negative or positive shocks from one firm to its customers and suppliers, generating correlations between firms' fundamentals, such as market valuation and sales [41; 42; 43]. Starting from this observation and leveraging the graph learning literature, Mungo and Moran introduce a method to reconstruct the production network from the time series of firm sales, \(s_{i}\left(t\right)\).

First, the authors show empirically that the correlation between the log-growth rates of firms connected in the production network exceeds the average correlation of randomly sampled firm pairs, and that this excess correlation decreases as firms get further apart in the supply chain. Then, the authors harness this observation to design a network reconstruction approach framed within Gaussian Markov Random Fields [39]. Adapting a modern graph learning strategy [44], the authors assume that the growth time series can be modelled as a sequence of draws from a multivariate Gaussian distribution. This distribution's precision matrix (the inverse of the covariance matrix) is, in turn, identified with the network Laplacian \(L=D-A\), where \(D_{ij}=k_{i}\delta_{ij}\). To estimate the precision matrix, the authors employ a maximum likelihood approach, constraining the possible Laplacians \(L\) to preserve the expected density of connections within and across economic sectors. In addition, a penalization term is included to enforce network sparsity. Assessed against smaller network fragments, their methodology reports an F1-score within the range of \(0.2-0.3\), and it does not consistently surpass all the benchmarks under consideration. While it is true that, on average, firms that are more closely connected are more correlated, there is a lot of overlap between the distributions of correlations at various distances. In other words, knowing that two firms are highly correlated is not very informative about their distance, making the task of network inference based on time series data very challenging.

## IV Inferring the value of transactions

While methods for reconstructing weights have been used extensively on financial and global trade networks [e.g. 16; 45; 46] and aggregate I-O tables [e.g. 47], their application to firm-level networks is relatively novel. A first set of methods uses meso-level information from I-O tables, while another set of papers relies on the maximum entropy principle.

### Matching I-O tables

Inoue and Todo incorporate aggregate I-O information into their estimates of the weights in the supply network of Japan. They assign to each link between a supplier \(i\) and a customer \(j\) a weight proportional to the product of firm sales, \(\omega_{ij}\propto\tilde{\omega}_{i}^{\text{out}}\frac{\tilde{\omega}_{j}^{\text{out}}}{\sum_{j\in\mathcal{N}_{i}}\tilde{\omega}_{j}^{\text{out}}}\), where \(\sum_{j\in\mathcal{N}_{i}}\) means that the sum runs only over \(i\)'s customers. The weights are then rescaled to align with the aggregate transaction amounts between industry sectors \(\tilde{\omega}_{ab}\), \[\omega_{ij}=\tilde{\omega}_{i}^{\text{out}}\frac{\tilde{\omega}_{j}^{\text{out}}}{\sum_{j\in\mathcal{N}_{i}}\tilde{\omega}_{j}^{\text{out}}}\frac{\tilde{\omega}_{a_{i}b_{j}}}{\sum_{k\in a_{i},l\in b_{j}}\tilde{\omega}_{k}^{\text{out}}\tilde{\omega}_{l}^{\text{out}}},\] where \(a_{i}\) and \(b_{j}\) denote the respective industrial sectors of \(i\) and \(j\). A similar approach has been used in [29] where, starting from data on firms' sales and inputs, the authors construct individual-firm networks that, when aggregated, align with the sectoral I-O table. The authors rescale firms' inputs and outputs to match the I-O tables (see footnote 8), and then allocate links in the network with an iterative algorithm that matches buyers to suppliers, while also imposing that larger firms have more customers. The weight of each connection is then set to the smallest value between the supplier's maximum capacity and the customer's demand.

Footnote 8: More precisely, they match intermediate inputs (roughly, inputs that are neither labour nor investment goods) and gross output (roughly, total sales).
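A sketch of this kind of I-O-consistent weight allocation is given below. It follows the spirit of the formulas above but simplifies the final step by directly rescaling each sector-to-sector block of weights to match a (here randomly generated, hence purely illustrative) I-O table.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_sectors = 100, 3
sector = rng.integers(0, n_sectors, n)
s_out = rng.lognormal(2, 1, n)                     # firm sales
A = (rng.random((n, n)) < 0.05).astype(float)      # a given binary network (supplier -> customer)
np.fill_diagonal(A, 0.0)

# First pass: w_ij = supplier sales times customer j's sales share among i's customers.
cust_sales = A * s_out[None, :]
row_tot = cust_sales.sum(1, keepdims=True)
share = np.zeros_like(cust_sales)
np.divide(cust_sales, row_tot, out=share, where=row_tot > 0)
W = s_out[:, None] * share

# Second pass: rescale each sector block so flows aggregate to the (hypothetical) I-O table.
iot = rng.lognormal(5, 0.5, (n_sectors, n_sectors))
for a in range(n_sectors):
    for b in range(n_sectors):
        mask = np.outer(sector == a, sector == b) & (A > 0)
        tot = W[mask].sum()
        if tot > 0:
            W[mask] *= iot[a, b] / tot
print("sector (0,1) flow matches IOT:", np.isclose(W[np.outer(sector == 0, sector == 1)].sum(), iot[0, 1]))
```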
Instead of reconstructing the weights, Carvalho _et al._ estimate the _input shares_ \(\alpha_{ij}\) of each link, \[\alpha_{ij}=\frac{\omega_{ij}}{\sum_{i}\omega_{ij}}.\] For any given customer-supplier pair of firms \((i,j)\) in the data, they assign \(\alpha_{ij}\) proportionally to the input-output table entry corresponding to the industries that \(i\) and \(j\) belong to, i.e., \(\alpha_{ij}\propto\tilde{\omega}_{a_{i}b_{j}}\), and renormalize to ensure \(\sum_{i}\alpha_{ij}=1\).

Real-world scenarios often present situations where it is unfeasible to find weights that align with aggregate observations. In [49], the authors design an inference strategy that aims to minimize the discrepancy between the reconstructed and observed aggregate properties of the network. More specifically, the authors observe that, given a binary network \(G\), it is not always possible to assign weights \(\omega_{ij}\) that satisfy the constraints \(\sum_{j}\omega_{ij}=\tilde{\omega}_{i}^{\text{out}}\) and \(\sum_{j}\omega_{ji}=\tilde{\omega}_{i}^{\text{in}}\). Take as an example a firm \(i\) that supplies only a single firm \(j\), and assume that \(i\) is the only supplier of \(j\). The aggregate constraints will only be satisfied if \(i\)'s sales exactly match \(j\)'s expenses, \(\tilde{\omega}_{i}^{\text{out}}=\tilde{\omega}_{j}^{\text{in}}\), a condition not always respected in the data. The authors solve this issue by introducing a 'residual node' \(r\) to capture the portion of the economy that is not covered by the network \(G\). This node accounts for all the firms that are not present in the data. They propose to find the set of weights \(\omega_{ij}\) that minimize the loss \(\mathcal{L}=\sum_{i}\omega_{i,r}+\sum_{i}\omega_{r,i}\), where the \(\omega_{ij}\) are subject to the usual constraints.

Finally, Hazan reconstructs the weights for a complete stock-flow consistent economy, with households, firms, banks, and flows of money in the form of consumption, firm-to-firm payments, wages, and interest payments. After reconstructing the network using maximum entropy methods (Sec. III.2.2), stock-flow consistency makes it possible to write a linear system for the weights, which can be solved using Non-Negative Least Squares to avoid negative values. The performance of the methods reviewed in this subsection is unfortunately unknown, as information on the real weights was not available to the authors, who could not compare their reconstructions to the respective ground truths. However, in the future, researchers using these methods could partially validate their results by comparing them to the empirical regularities observed in [3] for weight distributions and for the relationships between in- and out-degrees and strengths.

### Maximum entropy for weights inference

Another way of predicting weights given some aggregate trade information is to use the maximum entropy principle. The intuition behind this principle is to compute a distribution that is _maximally non-committal_ with respect to unknown information [51] or, in simpler words, to build a distribution that minimizes unjustified assumptions about the network. In Sec. III.2.2, we saw how maximum entropy can be used to compute probabilities for possible binary networks.
We are now going to see how it can be used to predict weights. If we consider the weights \(\omega_{ij}\), subject to the ("hard") constraints \(\sum_{j}\omega_{ij}=\tilde{\omega}_{i}^{out}\) and \(\sum_{j}\omega_{ji}=\tilde{\omega}_{i}^{in}\), where \(\tilde{\omega}_{i}^{out}\) and \(\tilde{\omega}_{i}^{in}\) represent the observed total outflow (intermediate sales) and inflow (intermediate expenses) of firm \(i\), we find that the set of weights that maximize the Shannon entropy \[\mathcal{S}=-\sum_{i}^{N}\sum_{j}^{N}\omega_{ij}\ln\omega_{ij}\] is \[\omega_{ij}=\frac{\tilde{\omega}_{i}^{out}\tilde{\omega}_{j}^{in}}{\tilde{\Omega}}, \tag{3}\] where \(\tilde{\Omega}=\sum_{i}\tilde{\omega}_{i}^{out}=\sum_{i}\tilde{\omega}_{i}^{in}\). This approach was also used in [27] for an undisclosed European country (see footnote 9).

Footnote 9: Bartolucci _et al._ show that "upstreamness", a classic metric in I-O economics, can be recovered very well from networks reconstructed with maximum entropy, as long as the networks are not too sparse. This is because, under very general conditions on the original network, the first-order approximation of a node's upstreamness is its upstreamness in the maximum-entropy-reconstructed network [53].

A different application of the maximum-entropy principle, where the constraints are imposed softly (see Sec. III.1), results in the solution used in [50] to reconstruct Ecuador's national production network and in [13] to reconstruct the transaction network between customers of two Dutch banks. Building on [36], these papers first reconstruct the network's topology (see footnote 10), then sample the (positive) weights \(\omega_{ij}\) of the existing links from an exponential distribution, \[P\left(\omega_{ij}=x\right)=\beta_{ij}\exp\left(-\beta_{ij}x\right),\] where \(\beta_{ij}\) is selected so that the expected value of \(\omega_{ij}\), conditional on the existence of a link, is \[\mathbb{E}_{ij}\left[\omega_{ij}|A_{ij}=1\right]=\frac{\tilde{\omega}_{i}^{out}\tilde{\omega}_{j}^{in}}{p_{ij}\sum_{i}\tilde{\omega}_{i}^{out}}.\] In [13], \(p_{ij}\) is defined by Eq. (2). In contrast, [50] omits the sector-specific constraints on intermediate inputs (see footnote 11) and defines \(p_{ij}\) as \[p_{ij}=\frac{z\tilde{\omega}_{i}^{out}\tilde{\omega}_{j}^{in}}{1+z\tilde{\omega}_{i}^{out}\tilde{\omega}_{j}^{in}}.\]

Footnote 10: In the case of [3], the topology is assumed to be known.

Footnote 11: [13] simply assume that the meso-level constraints are observable, since they have this information in their firm-level data. [29; 43; 48] cannot read this information from the data, so they take meso-level information from the I-O tables. [50] argue that the differences in accounting standards between firm- and industry-level networks are large, so that the meso-level structure of a firm network should not be constrained to be like the I-O tables. [3] shows that there are indeed some important differences, especially in industries that follow different accounting conventions, such as retail and wholesale trade.

| | _Coverage_ | _Dataset_ | _Inputs_ | _Probabilistic_ | _MaxEnt_ |
|---|---|---|---|---|---|
| Inoue and Todo [48] | National, Japan | Tokyo Shoko Research | Firm sales, national IOTs | | |
| Carvalho _et al._ [43] | National, Japan | Tokyo Shoko Research | Firm sales, national IOTs | | |
| Welburn _et al._ [49] | National, US | S&P Capital IQ, EDGAR | Firm sales and inputs (COGS) | | |
| Hazan [37] | National, Czech Republic | Full IOTs | Full IOTs | | |
| Bacilieri and Astudillo-Estevez [50] | National, International | FactSet, Ecuador VAT | Firm sales, intermediate expenses, network density | X | X |
| Ialongo _et al._ [13] | National | Dutch banks' transaction data | Firm sales, intermediate expenses by sector, network density (for calibration) | X | X |

Table 2: Overview of the papers that infer supply network weights.
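Both the hard-constraint solution of Eq. (3) and the soft-constraint sampling scheme are easy to write down explicitly. In the sketch below the margins and the link probabilities are synthetic placeholders; the point is only to verify that the "gravity" weights reproduce the row and column totals exactly, and that the exponential sampling reproduces them in expectation.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
w_out = rng.lognormal(1, 1, n)
w_in = rng.lognormal(1, 1, n)
w_in *= w_out.sum() / w_in.sum()                   # enforce sum(out) == sum(in) == Omega
Omega = w_out.sum()

# Hard-constraint maximum-entropy solution, Eq. (3): the "gravity" weights.
W = np.outer(w_out, w_in) / Omega
print("margins matched:", np.allclose(W.sum(1), w_out), np.allclose(W.sum(0), w_in))

# Soft-constraint variant: on a sampled topology, draw exponential weights whose
# conditional mean is E[w_ij | A_ij = 1] = W_ij / p_ij.
p = np.full((n, n), 0.1); np.fill_diagonal(p, 0)   # stand-in link probabilities
A = rng.random((n, n)) < p
beta = p / np.maximum(W, 1e-12)                    # exponential rate: 1 / conditional mean
w_sampled = np.where(A, rng.exponential(1.0, (n, n)) / np.maximum(beta, 1e-12), 0.0)
print("expected vs sampled total flow:", Omega, round(w_sampled.sum(), 1))
```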
Ref. [50] reports a cosine similarity of \(0.928\) between the inferred and actual weights. They also compute a few "higher-order" properties of the nodes that describe the propagation of shocks in production networks in an established macroeconomic model [54], which the reconstructed network fails to capture adequately (the cosine similarity for the most relevant property, the _influence vector_, is \(\sim 0.5\)). In [13], visual inspection of the results shows a substantial enhancement in weight reconstruction when applying sector-specific constraints to firms' inputs, further underscoring the crucial role the economy's sectoral structure plays in the accurate reconstruction of production networks.

## V Discussion

In this section, we take stock of what we can learn from existing studies, and provide suggestions on how the field could be further advanced.

### What have we learned?

A first, clear message from this review is that, in the context of supply networks, knowing the kind of product a firm makes is extremely important and substantially improves the reconstruction. This is evident both in the link prediction studies on industry data [14] and on commercial or country-level data [24], and in the maximum entropy reconstruction on payment data [13]. Unsurprisingly, ongoing research tries to predict firms' products at a granular level, for instance from their websites [55].

Second, the importance of products leads us to ask: to what extent can we, or should we, rely on existing (national or inter-country) input-output matrices? While some studies reconstruct weights (conditional on links) using I-O tables [29; 43; 48], others refrain from doing so [50], for fear that differences in accounting conventions [3] may create inconsistencies. Here the answer may depend on the goal of the reconstruction (see the next section). A useful avenue for further research, however, would be to develop methods that make it easy to switch between business and national accounting conventions. Such methods would necessarily use techniques and assumptions to allocate flows of money based on partially observed data, so the methods reviewed here may be helpful.

Third, we have seen that more sophisticated machine learning methods do provide substantial boosts in performance. This is clear from the improvement in link prediction performance between the logistic regression and the graph neural networks on the automotive dataset [14; 22], and between simpler methods and gradient boosting in Mungo _et al._ [24] (see footnote 12).

Footnote 12: However, in both studies, the predictions made by the more sophisticated models are harder to interpret.

Fourth, there appears to be substantial scope for improving performance using "alternative" data. Zhang _et al._ [25] and Wichmann _et al._ [12] have provided a proof of concept that mining news and websites for supplier-buyer relations can be automated, and we have already mentioned that websites can be an important source of key metadata for link prediction (especially product-related information).
While phone data is likely to be difficult to access, it is worth remembering the impressive result in [27] that firms with an average daily communication of more than \(30s\)/day have a 90% probability of being connected. A related question for further research will be to establish the potential of "dynamical" data. Mungo and Moran [30] showed that while there is information about the network in the correlation matrix of sales growth rates, predicting the network remains difficult, as the distributions of pairwise correlations for connected and unconnected pairs overlap greatly, even though their averages are statistically significantly different. Nevertheless, there are interesting developments in this area for networks generally, with only one application to supply networks so far. One limitation has been that very few supply network datasets have a reasonable time-series dimension, but as these become more common it will perhaps become possible to find other firm-level dynamical features that contain fingerprints of the network. Finally, many studies have shown that baking sensible economic intuition into the models usually improves predictions.

To sum up, we have learned (or confirmed from the existing literature) that link formation is likely driven by the kind of products firms make, their geographical distance, and their size. We have seen that firms that communicate a lot are likely to be in a supplier-buyer relationship, and that firms that are in a relationship are likely to show substantial co-movement in sales. While prediction is in some cases the ultimate goal, making methods that prioritize performance over interpretability appropriate [22], the quest for better reconstruction models has also prompted a deeper investigation into the behavioural and economic principles influencing how firms make and unmake their connections [14; 24]. Currently, no fully realistic supply network formation model has been developed (however, see [56] for an early example); we anticipate that reconstruction methods and the development of null models will, at least partly, go hand in hand.

### How can we learn more?

What method works best for which task? We are not yet able to properly answer this question, because the literature uses different datasets, takes different features of the data to make predictions, and uses different evaluation metrics. While this is warranted by the diversity of goals and applications, we think it would be valuable to organize "horse races", as has been done for financial networks [45], and to provide standard datasets, as is common in the machine learning community.

Let us first discuss the lack of comparability between studies. The methods proposed are very diverse and usually require distinct data to operate. The diversity of datasets and features used is understandable and valuable. For example, Kosasih and Brintrup [22] use topological features because one of their realistic use cases is to augment an existing "observed" network dataset, while Mungo _et al._ [24] avoid using topological information because their envisioned use case is to port a trained model to a context where no such features are available. As another example, while phone data is very hard to access, the study using this data made it possible to evaluate the systemic risk of each firm in an entire European country. A slightly less justified "diversity of approaches" is the lack of standardized assessment metrics, as it is in principle relatively easy to report several metrics. Traditional statistical indicators (accuracy, AUROC, PR-AUC) provide an easy, well-known benchmark, and have already been functional in, e.g., propelling the development of computer-vision models [57]. Yet the question remains whether they are sufficient to evaluate the reconstruction of a network, and what additional metrics should be adopted to supplement them. Some metrics, initially conceived for balanced datasets, may not hold up as reliably when applied to sparse networks, where non-existing links greatly outnumber existing ones, further complicating the comparison between methods. Overall, the area under the Receiver Operating Characteristic curve (AUROC) is robust to class imbalance: if one makes the imbalance more and more severe, its value does not change substantially (see the Supplementary Material of [24]). Consequently, AUROC is a sensible metric to compare results. The area under the Precision-Recall curve (PR-AUC), which is more sensitive to the performance of the model on the minority class, is also very sensitive to the level of imbalance in the data; PR-AUC and imbalance should therefore always be reported jointly.
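This difference in behaviour is easy to demonstrate numerically: with fixed score distributions for positives and negatives (so the classifier's intrinsic quality never changes), AUROC is essentially unchanged as the imbalance grows, while PR-AUC collapses. The Gaussian score distributions below are arbitrary choices, used only for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(6)

def evaluate(n_pos, n_neg):
    """Fixed score distributions; only the class imbalance changes."""
    scores = np.concatenate([rng.normal(1.0, 1.0, n_pos), rng.normal(0.0, 1.0, n_neg)])
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    return roc_auc_score(y, scores), average_precision_score(y, scores)

for ratio in (1, 10, 100, 1000):
    auroc, pr_auc = evaluate(1000, 1000 * ratio)
    print(f"imbalance 1:{ratio:<5d}  AUROC={auroc:.3f}  PR-AUC={pr_auc:.3f}")
```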
Reporting basic topology metrics of the reconstructed network is also a sensible approach, as there is substantial evidence [3] that some topological properties are universally shared by all production networks. For instance, Bacilieri _et al._ [3] showed that the tail exponents of the in- and out-degree distributions are remarkably similar across national, VAT-assembled datasets. Ultimately, as we plug reconstructed networks into economic models, the optimal metric will be the one that best correlates with accurate economic predictions. Identifying these proper "dynamical" indicators needs to go hand-in-hand with the development of economic models that are carefully validated on real-world data and can become legitimate standards for evaluating reconstruction performance.

While agreeing on a set of metrics and features appears relatively easy, the key challenge ahead is data availability. To follow our previous analogy, in computer vision researchers can access standard, large-scale datasets [58] of annotated images to train and evaluate their models. Similar datasets for production network reconstruction are not currently available and, due to the confidential or proprietary nature of such data, their assembly seems unlikely in the near future. The research community should unite to devise strategies to circumvent this issue, possibly by considering the use of synthetic data [59] as an alternative to real data. While synthetic data generation is currently an active and exciting area of research, it is less well-developed for networks than for tabular data, and it still suffers from either a lack of privacy guarantees (for traditional methods) or a lack of interpretability of the privacy guarantees (for differential privacy).

## VI Two research agendas

For many practical applications, it is necessary to know much more than the value of transactions between firms. We lay out two research programs - one that aims to reconstruct supply networks to allow for real-time monitoring of disruptions and logistics optimization, and one that aims to reconstruct a granular version of global macroeconomic datasets.
### Towards supply chain visibility for risk management

Past decades have been characterized by supply chain cost optimization objectives, which have led to just-in-time initiatives that stripped buffer inventories from supply lines that had already become geographically longer through offshoring practices. While high-impact, rare events such as COVID-19 highlighted the vulnerability of these global, highly complex modes of operation, organisations often struggle with increased volatility in their day-to-day procurement. Supply chain researchers are increasingly seeking methods to build resilience in what is now frequently termed a "shortage economy" [60]. However, these efforts are often hindered by a lack of visibility into supply chain dependencies, as companies do not disclose commercially sensitive information such as whom they buy goods and services from. As the link prediction and reconstruction methods presented in this paper do not rely on companies' willingness to share data, they have the potential to become a primary toolset in supply chain risk management. Our review shows that buyer-supplier link prediction is possible with various methodologies and success rates. Recently proposed methods for reconstructing knowledge graphs go beyond who-supplies-whom, and also enable the prediction of other types of relevant information, such as where firms are located and what they produce, paving the way for a new era of "digital supply chain surveillance" [61].

Much further work is needed in this context - for instance, use cases that evaluate how the identification of risky supplier locations and production dependencies might inform effective mitigation strategies such as multi-sourcing, supply chain reconfiguration, insurance, or inventory buffers. Beyond addressing supply disruption risk, an understanding of supply chain structure could be informative for the detection of supply chain fraud and counterfeit products. Improved visibility may help improve regulatory compliance with Environmental, Social and Governance (ESG) practices. Methods that help detect transaction volumes could improve supply chain financing, in which lenders often struggle to identify their financial risk exposure. To achieve this, different ontologies need to be built and integrated into existing knowledge graph completion methods, and new methods for inferring compliance, fraud, and other areas of interest from knowledge graphs need to be developed. Lastly, any resulting graph will be limited by underlying assumptions and incomplete data, which, in turn, may be shaped by the observable data at hand. Hence, data imputation and uncertainty quantification will need to inform the resulting graphs.

### Towards granular global economic accounts

For macroeconomic applications, our interest extends beyond the mere flow of money between firms. Macroeconomics concerns quantities such as GDP, which represents at the same time the total income, total expenditure, and total "value added" of an economy. Firm-to-firm transactions are not sufficient to truly understand how economic agents create value, redistribute income, and spend on goods and services.
As a result, to support the development of large-scale realistic agent-based models, we need an ambitious agenda to develop semi-synthetic populations, which would include all the available micro information and supplement it with synthetic micro data, in a way that results in meso- and macro-level aggregates compatible with the quantities typically observable from national accounts. We elaborate briefly on three strands of research within this agenda.

First, it will be important to ensure compatibility between micro- and meso-level data, which are usually compiled using different accounting rules. National accounting principles provide a solid conceptual framework, so developing reconstructed datasets that respect these principles would have many advantages, including making it easier to use the data in macro models, to improve the data using macro-level information, and to match the data with other relevant datasets. However, firm-level data is usually compiled using business accounting rules, so that simply "summing up" firm-level data does not necessarily lead to the supposedly equivalent concept in national accounts. As we have highlighted, there is potential to, for instance, use IOTs as additional information to reconstruct firm-level networks.

Second, modern work in economics shows that employer-employee matched data and datasets on the heterogeneity of consumer baskets are crucial to understanding inequality, long-run growth, or carbon emissions. As a result, a straightforward extension of the "reconstruction of economic networks" program would be to predict employer-employee relations and consumer-firm relations (see [37] for a first attempt). Existing efforts to develop data-driven agent-based models rely on such synthetic populations. While there exists a lot of work on recommender systems for suggesting products to consumers, and more recently some work on career suggestions, these efforts have not been leveraged to create reliable synthetic populations.

Third, many of the studies presented here worked with money flows, omitting the distinction between prices and quantities. This is driven by the fact that firm-level supply networks with both price and quantity information are very rare, but it is a serious issue for economic modelling, where prices obviously play a key role. To model inflation and to understand growth and business cycles, we need measures of the quantities produced (or inflation-adjusted values). New methods for inferring prices, perhaps based on companies' websites and other features, would be extremely helpful in this context.

## VII Conclusion

The reconstruction of supply networks through mathematical methods is a young field. This paper offers a review of the methodologies that researchers have proposed to grapple with this challenge. Good proof-of-concept studies exist, but much remains to be done. A striking feature of the literature is the diversity of methods, datasets, and evaluation metrics. While this is justified by the different backgrounds and motivations of the researchers, we think that progress in this area would benefit from the availability of open datasets and the definition of standard metrics, so that horse races could be organised. We were able to propose some guidelines to standardize performance metrics, but the path to open datasets is more complicated and will require international cooperation that either facilitates researchers' access to data or fosters the creation of high-fidelity synthetic datasets.
Despite this difficulty, we think that reconstructing supply networks is an excellent playground for the complex systems community, as it requires a deep understanding of networks, statistics, and dynamical systems, together with an appreciation that these networks emerge from the decentralized interactions of millions of highly heterogeneous, boundedly rational agents operating with different objectives at different time scales.
2309.15896
Holographic Weak Measurement
In this paper, we study a holographic description of weak measurements in conformal field theories (CFTs). Weak measurements can be viewed as a soft projection that interpolates between an identity operator and a projection operator, and can induce an effective central charge distinct from the unmeasured CFT. We model the weak measurement by an interface brane, separating different geometries dual to the post-measurement state and the unmeasured CFT, respectively. In an infinite system, the weak measurement is related to ICFT via a spacetime rotation. We find that the holographic entanglement entropy with twist operators located on the defect is consistent in both calculations for ICFT and weak measurements. We additionally calculate the boundary entropy via holographic entanglement as well as the partition function. In a finite system, the weak measurement can lead to a rich phase diagram: for marginal measurements the emergent brane separates two AdS geometries, while for irrelevant measurements the post-measurement geometry features an AdS spacetime and a black hole spacetime that are separated by the brane. Although the measurement is irrelevant in the latter phase, the post-measurement geometry can realize a Python's lunch.
Xinyu Sun, Shao-Kai Jian
2023-09-27T18:00:00Z
http://arxiv.org/abs/2309.15896v3
# Holographic Weak Measurement

###### Abstract

In this paper, we study a holographic description of weak measurements in conformal field theories (CFTs). Weak measurements can be viewed as a soft projection that interpolates between an identity operator and a projection operator, and can induce an effective central charge distinct from the unmeasured CFT. We model the weak measurement by an interface brane, separating different geometries dual to the post-measurement state and the unmeasured CFT, respectively. In an infinite system, the weak measurement is related to ICFT via a spacetime rotation. We find that the holographic entanglement entropy with twist operators located on the defect is consistent in both calculations for ICFT and weak measurements. We additionally calculate the boundary entropy via holographic entanglement as well as the partition function. In a finite system, the weak measurement can lead to a rich phase diagram: for marginal measurements the emergent brane separates two AdS geometries, while for irrelevant measurements the post-measurement geometry features an AdS spacetime and a black hole spacetime that are separated by the brane. Although the measurement is irrelevant in the latter phase, the post-measurement geometry can realize a Python's lunch.

## 1 Introduction

In the holographic principle, dual geometries emerge from the entanglement structure of boundary quantum systems [1; 2; 3; 4; 5; 6]. It will be interesting to explore the effect of quantum information theoretic operations on the dual spacetime. For instance, local projection measurements are operations that radically change the entanglement of the wavefunction: the region being measured becomes unentangled after the measurements. It is, in general, very difficult to describe the effect of measurements on a many-body wavefunction, because the measurement outcome is stochastic in nature, and one would not expect a universal description for all of them. Nevertheless, in conformal field theory (CFT), there is a big class of local projection measurements that can be described efficiently: when the measurement outcome is conditioned on the conformal invariant boundary states of the CFT [7; 8; 9; 10; 11]. In 2d CFTs, these states are also called Cardy states [12]. After projecting (a subregion of) the wavefunction onto a Cardy state, the remaining parts still respect half of the conformal symmetry and are described by boundary conformal field theory (BCFT) [13; 14; 15; 16].
More concretely, the prescription to describe this class of measurements is to cut a slit, which represents the measurement, in the complex plane that represents the Euclidean path integral, and then map this plane with the slit, using conformal transformations, to a BCFT living in the upper half plane. This, according to AdS/BCFT [17; 18], motivates the description of local projection measurements on the holography side. As the holographic BCFT is dual to an end-of-the-world (ETW) brane ending at the boundary, local projection measurements are also described by an ETW brane [19]. When we consider the remaining state after measurements on a contiguous interval, as shown in figure 1, it is dual to a spacetime with an ETW brane that ends on the slit and cuts off a spacetime region homologous to the slit. We can also consider the bulk dual of the boundary state. Because the boundary state has vanishing spatial entanglement, it is holographically dual to a trivial spacetime [20]. This is consistent with the ETW brane description of the remaining parts, since the brane separates the dual spacetime of the remaining state, which still holds a good amount of entanglement to support the spacetime, from the trivial spacetime dual to the boundary state. The informational aspects of holographic projective measurements have also been considered [21; 22; 23; 24].

While projection measurements onto a boundary state have a holographic description in terms of the ETW brane, to the extent of our understanding, much less is known about general measurements. In this paper, we consider weak measurements and study their holographic description. Weak measurements are distinct from local projection measurements: the subregion being measured still holds finite entanglement. A way to describe a weak measurement is by introducing a parameter for the measurement strength. Such a parameter ranges from zero to infinity, corresponding to no measurement and to a projection measurement, respectively. We are interested in a generic measurement strength other than zero or infinity. Weak measurements have been studied in CFTs recently [25; 26; 27; 28]. In reference [26], we considered weak measurements performed on a Luttinger liquid described by a compactified free boson CFT with central charge \(c=1\). The compactified free boson CFT can be realized in the XXZ model, \(H=-\sum_{i}(X_{i}X_{i+1}+Y_{i}Y_{i+1}+\Delta Z_{i}Z_{i+1})\), where \(X_{i},Y_{i},Z_{i}\) denote Pauli matrices and \(|\Delta|<1\). The weak measurement operator is [26] \[M=e^{-W\sum_{i}(-1)^{i}Z_{i}}, \tag{1}\] where \(W\in[0,\infty)\) is the parameter that captures the measurement strength. The weak measurement is performed on all qubits, and its effect is irrelevant, marginal, or relevant when \(\Delta<0\), \(\Delta=0\), or \(\Delta>0\), respectively. Compared to projection measurements, weak measurements have richer behaviors. In particular, when the measurement is marginal, the subsystem entanglement entropy of the resulting state exhibits a continuous effective central charge, \(c_{\text{eff}}(W)\). More specifically, the entanglement entropy of a subregion with length \(L_{A}\) (with lattice constant \(a\)) is \[S_{A}=\frac{c_{\text{eff}}(W)}{3}\log\frac{L_{A}}{a}. \tag{2}\] The continuous central charge interpolates between the original value (the central charge of the unmeasured system) and zero between the two limits of \(W\), \(W=0\) and \(W=\infty\), which stand for no measurement and local projection measurement, respectively. Namely, \[0\leq c_{\text{eff}}(W)\leq c,\quad c_{\text{eff}}(0)=c,\quad c_{\text{eff}}(\infty)=0. \tag{3}\]
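As a quick illustration of Eqs. (1)-(3), here is a minimal exact-diagonalization sketch for a short open chain at the marginal point \(\Delta=0\). The chain length, the chosen values of \(W\), and the dense-matrix construction are our own illustrative choices (finite-size effects are large at this size); the sketch only demonstrates the qualitative trend, the monotonic decrease of the half-chain entanglement entropy, and hence of \(c_{\text{eff}}\), with \(W\).

```python
import numpy as np

def xx_ground_state(n):
    """Ground state of H = -sum_i (X_i X_{i+1} + Y_i Y_{i+1}) on an open chain, via dense ED."""
    I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]])
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for P in (X, Y):
            ops = [I2] * n; ops[i] = P; ops[i + 1] = P
            term = ops[0]
            for o in ops[1:]:
                term = np.kron(term, o)
            H -= term
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

def weak_measure(psi, n, W):
    """Apply M = exp(-W sum_i (-1)^i Z_i), which is diagonal in the Z basis, then renormalize."""
    bits = (np.arange(2**n)[:, None] >> np.arange(n)[::-1]) & 1   # bit i of each basis state
    stag = ((1 - 2 * bits) * (-1) ** np.arange(n)).sum(axis=1)    # staggered Z eigenvalue
    phi = psi * np.exp(-W * stag)
    return phi / np.linalg.norm(phi)

def half_chain_entropy(psi, n):
    m = psi.reshape(2 ** (n // 2), 2 ** (n - n // 2))
    p = np.linalg.svd(m, compute_uv=False) ** 2                   # Schmidt spectrum
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

n = 10
psi = xx_ground_state(n)
for W in (0.0, 0.2, 0.5, 1.0, 3.0):
    print(f"W = {W:<4}  S_A = {half_chain_entropy(weak_measure(psi, n, W), n):.4f}")
```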
While in the example of the XXZ model we have the anisotropy parameter \(\Delta\) to tune the relevance/irrelevance of the measurement strength, here, in the holographic setup, we focus on the fixed point that is characterized by \(c_{\text{eff}}\) (see footnote 1), namely,
\[c_{\text{eff}}=c\quad\text{irrelevant},\qquad 0<c_{\text{eff}}<c\quad\text{marginal},\qquad c_{\text{eff}}=0\quad\text{relevant}. \tag{4}\]

Footnote 1: We will also denote the central charge of the unmeasured CFT as \(c_{1}\) in the following discussion.

From this example, we can deduce that weak measurements can support a state with nontrivial spatial entanglement, and thus a nontrivial spacetime via the AdS/CFT correspondence. In the case of local projection measurements, the ETW brane separates a nontrivial spacetime of the remaining state and a trivial spacetime of the boundary state.

Figure 1: Local projection onto a Cardy state in a subregion. \(P\) denotes the slit induced by the projection and \(Q\) denotes the ETW brane anchoring on the slit. The gray region is the spacetime dual to the remaining state after measurements.

Now, different from the trivial spacetime dual to the boundary state after projection, because the state after weak measurements is dual to a nontrivial spacetime, the ETW brane is replaced by an interface brane separating two nontrivial spacetimes with different central charges. To be concrete, on the CFT side, considering a weak measurement, given by \(M=e^{-WH_{M}}\), acting on the ground state \(|\Psi\rangle\), we can use the path integral to represent the matrix element \(\langle\Psi|M^{\dagger}M|\Psi\rangle=\lim_{\epsilon\to 0}\langle\Psi|M^{\dagger}_{\epsilon}M_{\epsilon}|\Psi\rangle\),
\[\langle\Psi|M^{\dagger}_{\epsilon}M_{\epsilon}|\Psi\rangle=\int D\phi e^{-S},\quad S=\int d\tau\left[L_{\rm CFT}+f_{\epsilon}(\tau)WH_{M}\right], \tag{5}\]
where \(f_{\epsilon}(\tau)\) is a regularization of the Dirac delta function,
\[f_{\epsilon}(\tau)=\begin{cases}\frac{1}{2\epsilon}&|\tau|<\epsilon\\ 0&|\tau|>\epsilon\end{cases}. \tag{6}\]
Such a Euclidean path integral is shown in figure 2 (a), where \(|\tau|=\epsilon\) separates two regions with different (effective) central charges. On the gravity side, we need to fill in the spacetime that is consistent with the boundary quantum system. Because we have different regions with different central charges at the boundary, the dual spacetime metrics are, in general, different, and separated by interface branes, as shown in figure 2 (b). \(M_{1}\) and \(M_{2}\) denote the bulk regions dual to the unmeasured CFT and to the measurement, and they are separated by interface branes (see footnote 2).

Footnote 2: Here, \(M_{1}\) is dual to the unmeasured CFT for both \(\tau>\epsilon\) and \(\tau<-\epsilon\), owing to the time reflection symmetry.

The interface branes generalize the holographic local projection measurements to weak measurements. This spacetime is similar to AdS/ICFT, where ICFT stands for interface CFT. ICFT and its holographic duality have been extensively studied [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. It describes joined spatial regions of possibly different CFTs.

Figure 2: (a) CFT with weak measurements on an infinite plane. A regularized measurement occurs in \(\tau\in(-\epsilon,\epsilon)\). The dashed line indicates the time reflection invariant line, \(\tau=0\). (b) Illustration of the bulk duality. \(M_{1}\) and \(M_{2}\) denote the bulk regions dual to the boundary, the unmeasured CFT and the measurement. We also illustrate the branes separating \(M_{1}\) and \(M_{2}\).

Nevertheless, an obvious difference
is that ICFT concerns a spatial interface, whereas weak measurements occur on an equal-time slice. In this paper, we will investigate a few key features of the interface brane description of holographic weak measurements. Here is a brief summary of the main contributions of this paper:

* We propose that weak measurements that support an effective central charge \(0<c_{\text{eff}}<c\) on an infinite line are described by interface branes, as shown in figure 2. The dual spacetime is given by two AdS geometries with different radii that originate from the weak measurement and the unmeasured CFT, respectively; see (15).
* We establish the relation between the weak measurement and the spatial interface: we show that the weak measurement of CFTs defined on an infinite complex plane is related to ICFT via a spacetime rotation. In particular, the logarithmic entanglement entropy with a continuous effective central charge created by the spatial interface is the same as that from a weak measurement on an equal-time slice. Figure 7 gives a clear illustration of the spacetime rotation between the weak measurement and the spatial interface.
* We calculate the boundary entropy from the holographic entanglement entropy of an ICFT, and show that it is the same as that obtained from the measurement probability by evaluating the on-shell action.
* We investigate the holographic weak measurement in CFTs defined on a circle, which leads to different phases (see figure 13): for marginal measurements, the dual geometry consists of two AdS spacetimes with different central charges, similar to the case of infinite systems (this is called the no-bubble phase; see footnote 3); for irrelevant measurements, the dual geometry features an AdS spacetime and a black hole that are separated by the brane. More concretely, when measurements are irrelevant, the AdS spacetime dual to the unmeasured CFT forms a bubble, which is separated by an interface brane from the black hole dual to the weak measurement region, in the time reversal invariant slice. If the black hole horizon is (not) contained in the solution, this is the bubble-inside (outside)-horizon phase. Interestingly, the post-measurement geometry of the bubble-inside-horizon phase realizes a Python's lunch [63].
* While we primarily consider the thin-brane description of the holographic weak measurement, we extend our discussion to a thick-brane description for the weak measurement in an infinite system (see Sec. 5). We find a result consistent with the thin-brane construction.

Footnote 3: A similar calculation with different motivations has been studied in reference [51]. We use their conventions for the names of the different phases for convenience.

The paper is organized as follows. In section 2, we consider the holographic dual of weak measurements on a CFT defined on an infinite complex plane. The geometry corresponds to pure AdS spacetimes with different AdS radii separated by interface branes, as shown in figure 2. We will also show that the weak measurement is related to interface CFT via a spacetime rotation. In section 3, we calculate the boundary entropy from the holographic entanglement entropy and from the measurement partition function. We find that they are equal.
In section 4, we consider holographic weak measurements in CFTs defined on a circle. Depending on the AdS radii and the brane tension, there are three phases. We discuss the entanglement properties of the different phases. In section 5, we present additional discussions of our results. We discuss the thick brane description of weak measurements in an infinite system. Furthermore, we point out that the bubble-inside-horizon phase, in the case of weak measurements in finite systems, has the structure of a Python's lunch. Finally, we discuss a few future directions. In appendix A, we consider the entanglement entropy with a point defect, with the locations of the twist operators not symmetric. Besides, we also consider the effect of the ETW brane on the geodesic. In appendix B, we introduce some basic notations for calculating the partition function. In appendix C, we provide a detailed derivation of the geodesic solutions in general and in the bubble-inside-horizon phase. In appendix D, we construct a tensor network to realize a Python's lunch, whose complexity is exponential in the difference between the maximal and outermost minimal areas.

The calculation in Appendix A leads to some interesting results for holographic interface CFTs that we would like to summarize in the following. Readers who are only interested in the results of weak measurements may skip this summary.

Figure 3: Illustration of the RT surface dual to the entanglement entropy of an interval in CFTs with a spatial interface. We plot the limit where the left part of the interval is much longer than the right part, \(\sigma_{l}\gg\sigma_{r}\). See Appendix A.1 and Appendix A.2 for detailed discussions. (a) The boundary has two CFTs with different central charges, denoted by \(c_{1}\) and \(c_{2}\), respectively. They are separated by an interface. The dual geometry is given by two AdS spacetimes with different AdS radii separated by an interface brane colored in blue. The entanglement entropy of an interval \((-\sigma_{l},\sigma_{r})\) is dual to the RT surface colored in orange. (b) The boundary is a CFT with a defect. The central charge of the CFT is denoted by \(c_{1}\). The defect supports a nontrivial AdS spacetime with a radius dual to an effective central charge \(c_{2}\). These AdS spacetimes are separated by interface branes colored in blue. The entanglement entropy of an interval \((-\sigma_{l},\sigma_{r})\) is dual to the RT surface colored in orange.

* We consider two semi-infinite CFTs with different central charges, denoted by \(c_{1}\) and \(c_{2}\). They are separated by a spatial interface. The gravity dual of such an interface CFT is given by two AdS spacetimes with radii determined by \(c_{1}\) and \(c_{2}\), respectively. The spacetimes are separated by an interface brane, as shown in figure 3 (a). The interface is located at \(0\), and we consider the entanglement entropy of an interval \((-\sigma_{l},\sigma_{r})\). The general result for the holographic entanglement entropy of such a subsystem has been considered previously [56]. We obtain a simple result at the leading order in the limit \(\sigma_{l}\gg\sigma_{r}\) as follows 4, Footnote 4: We note that a similar result has also been reported in reference [60]. \[S_{-\sigma_{l},\sigma_{r}}=\left(\frac{c_{1}}{6}+\frac{1}{6}\min(c_{1},c_{2})\right)\log\sigma_{l}, \tag{7}\] where the left and right twist operators contribute \(\frac{c_{1}}{6}\) and \(\frac{1}{6}\min(c_{1},c_{2})\), respectively.
It is interesting to see that while the right twist operator is located in the region of the CFT with central charge \(c_{2}\), the holographic entanglement entropy tries to "minimize" the central charge: \(\frac{1}{6}\min(c_{1},c_{2})\).

* We consider a CFT of central charge \(c_{1}\) with a defect. The defect supports a dual AdS spacetime with an effective central charge \(c_{2}\). 5 The gravity dual of such a defect CFT is given by two AdS spacetimes with radii determined by \(c_{1}\) and \(c_{2}\), respectively. The spacetimes are separated by an interface brane, as shown in figure 3 (b). Again, we consider the entanglement entropy of an interval \((-\sigma_{l},\sigma_{r})\). We obtain a simple result at the leading order in the limit \(\sigma_{l}\gg\sigma_{r}\) as follows, \[S_{-\sigma_{l},\sigma_{r}}=\left(\frac{c_{1}}{6}+\frac{1}{6}\min(c_{1},c_{2})\right)\log\sigma_{l}, \tag{8}\] where the left and right twist operators contribute \(\frac{c_{1}}{6}\) and \(\frac{1}{6}\min(c_{1},c_{2})\), respectively. It is surprising to see that while the right twist operator is located in the region of the CFT with central charge \(c_{1}\), the holographic entanglement entropy carries the information about the effective central charge created by the defect when \(c_{2}<c_{1}\). This can be intuitively understood as follows: in the limit \(\sigma_{l}\gg\sigma_{r}\), the right twist operator is effectively in the proximity of the defect. Footnote 5: There is no essential difference between a defect CFT and an interface CFT.

## 2 Weak measurement in an infinite system

In this section, we consider semiclassical gravity in AdS\({}_{3}\) describing interface CFTs, and use the Ryu-Takayanagi formula to calculate the entanglement entropy of various subregions. We will see that the entanglement structure of a CFT upon weak measurements is related to the interface CFTs via a spacetime rotation.

### Review of AdS/ICFT

The interface conformal field theory has been studied extensively both in condensed matter [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39] and in holography [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. The basic structure of the interface CFTs considered in this paper is given by two CFTs, each living in a half plane, separated by an interface. Here, we consider a thin brane construction. In general, the two CFTs have different central charges. The gravity dual of this interface CFT consists of the corresponding AdS geometries in the bulk. There are two different AdS radii in the two half spaces, accounting for the different central charges. The bulk AdS spacetimes with different radii are separated by a brane, connecting to the interface at the boundary. The Euclidean action of this gravity theory reads \[\begin{split} S_{\text{EH}}=-\frac{1}{16\pi G_{(3)}}&\left[\int_{\mathcal{M}_{1}}\text{d}^{3}x\ \sqrt{g_{1}}\left(R_{1}+\frac{2}{L_{1}^{2}}\right)+\int_{\mathcal{M}_{2}}\text{d}^{3}x\ \sqrt{g_{2}}\left(R_{2}+\frac{2}{L_{2}^{2}}\right)\right.\\ &+\left.2\int_{\mathcal{S}}\text{d}^{2}y\ \sqrt{h}(K_{1}-K_{2})-2T\int_{\mathcal{S}}\text{d}^{2}y\ \sqrt{h}\right]+\text{corner and counter terms},\end{split} \tag{1}\] where \(G_{(3)}\) is the Newton constant in 3d. \(\mathcal{M}_{1,2}\) denote the bulk spacetimes dual to the two CFTs, and \(L_{1,2}\) are the corresponding AdS radii. \(g_{1,2}\) denote the metrics of the two AdS bulks, and \(R_{1,2}\) are the Ricci scalars. \(\mathcal{S}\) denotes the brane separating the two AdS bulk spacetimes.
\(K_{1,2}\) denote the extrinsic curvatures and \(h_{ab}\) is the induced metric on the brane. \(T\) is the tension of the brane. For simplicity, we omit the corner and counter terms. The central charge of the boundary CFT is related to the AdS radius via \[c_{1,2}=\frac{3L_{1,2}}{2G_{(3)}}. \tag{2}\] We work in the semiclassical limit, \(c_{1,2}\gg 1\). There are two junction conditions for the induced metric \(h_{ab}\) and the extrinsic curvatures \(K_{1,2}\), \[h_{ab}=\frac{\partial x_{1}^{\mu}}{\partial y^{a}}\frac{\partial x_{1}^{\nu}}{\partial y^{b}}g_{\mu\nu}^{1}=\frac{\partial x_{2}^{\mu}}{\partial y^{a}}\frac{\partial x_{2}^{\nu}}{\partial y^{b}}g_{\mu\nu}^{2}, \tag{3a}\] \[\Delta K_{ab}-h_{ab}\Delta K=-Th_{ab}, \tag{3b}\] where \(\Delta K_{ab}=K_{1,ab}-K_{2,ab}\). Tracing (3b) (with \(h^{ab}h_{ab}=2\) on the two-dimensional brane) gives \(\Delta K=2T\), and then \(\Delta K_{ab}=Th_{ab}\).

Before we solve the junction conditions, we introduce a few coordinate systems that will be used in this paper. A Euclidean AdS\({}_{3}\) can be considered as a hyperboloid \[-(X^{0})^{2}+(X^{3})^{2}+\sum_{i=1}^{2}(X^{i})^{2}=-L^{2}, \tag{4}\] in four-dimensional Minkowski spacetime with metric \(G_{\mu\nu}=\text{Diag}(-1,1,1,1)\). The first set of coordinates, which is useful for solving the junction conditions, is given by \[\begin{split} X^{0}=&\frac{L^{2}+\tau^{2}+y^{2}}{2y}\cosh\frac{\rho}{L},\qquad X^{1}=L\sinh\frac{\rho}{L},\\ X^{2}=&\frac{L^{2}-\tau^{2}-y^{2}}{2y}\cosh\frac{\rho}{L},\qquad X^{3}=\frac{L\tau}{y}\cosh\frac{\rho}{L},\end{split} \tag{5a}\] \[\mathrm{d}s^{2}=\mathrm{d}\rho^{2}+L^{2}\cosh^{2}\left(\frac{\rho}{L}\right)\left(\frac{\mathrm{d}\tau^{2}+\mathrm{d}y^{2}}{y^{2}}\right), \tag{5b}\] where \(-\infty<\rho<\infty\) and \(y>0\). Under a coordinate transformation, \[z=\frac{y}{\cosh\left(\frac{\rho}{L}\right)},\qquad x=y\tanh\left(\frac{\rho}{L}\right), \tag{6}\] we get the second set of coordinates \((\tau,x,z)\) with the familiar metric \[\mathrm{d}s^{2}=L^{2}\frac{\mathrm{d}\tau^{2}+\mathrm{d}x^{2}+\mathrm{d}z^{2}}{z^{2}}. \tag{7}\] To simplify the coordinate transformation, we introduce an angular variable \(\sin\chi=\tanh\left(\frac{\rho}{L}\right)\), such that \(z=y\cos\chi,\ x=y\sin\chi\). The asymptotic boundary is \(z=0\) or, equivalently, \(\chi=\pm\frac{\pi}{2}\).

To solve the junction condition (3a), we use (5a) and consider a brane located at \(\rho_{i}=\rho_{i}^{*}\), \(i=1,2\), or equivalently, \(\sin\psi_{i}=\tanh\left(\frac{\rho_{i}^{*}}{L_{i}}\right)\). The convention for \(\chi_{i}\) is that \(\chi_{i}\to-\pi/2\) corresponds to the asymptotic boundary of the region \(M_{i}\). An example of the branes is shown in figure 4. Then the junction condition requires \[L_{1}^{2}\cosh^{2}\left(\frac{\rho_{1}^{*}}{L_{1}}\right)\left(\frac{\mathrm{d}\tau_{1}^{2}+\mathrm{d}y_{1}^{2}}{y_{1}^{2}}\right)=L_{2}^{2}\cosh^{2}\left(\frac{\rho_{2}^{*}}{L_{2}}\right)\left(\frac{\mathrm{d}\tau_{2}^{2}+\mathrm{d}y_{2}^{2}}{y_{2}^{2}}\right), \tag{8a}\] \[\frac{1}{L_{1}}\tanh\frac{\rho_{1}^{*}}{L_{1}}+\frac{1}{L_{2}}\tanh\frac{\rho_{2}^{*}}{L_{2}}=T, \tag{8b}\] where the first equation leads to \(y_{1}=y_{2},\ \tau_{1}=\tau_{2}\) and \(L_{1}\cosh\left(\frac{\rho_{1}^{*}}{L_{1}}\right)=L_{2}\cosh\left(\frac{\rho_{2}^{*}}{L_{2}}\right)\). Combining with the second equation, we have \[\tanh\frac{\rho_{1}^{*}}{L_{1}}=\frac{L_{1}}{2T}\left(T^{2}+\frac{1}{L_{1}^{2}}-\frac{1}{L_{2}^{2}}\right),\quad\tanh\frac{\rho_{2}^{*}}{L_{2}}=\frac{L_{2}}{2T}\left(T^{2}+\frac{1}{L_{2}^{2}}-\frac{1}{L_{1}^{2}}\right). \tag{9}\]
Here \(T\in(T_{\mathrm{min}},T_{\mathrm{max}})\), where \(T_{\mathrm{max,min}}=\left|\frac{1}{L_{1}}\pm\frac{1}{L_{2}}\right|\). In terms of the angular variable, we have \[\sin\psi_{1,2}=\tanh\left(\frac{\rho_{1,2}^{*}}{L_{1,2}}\right). \tag{10}\] Therefore, the brane is determined by the AdS radii and the tension.

In 3d gravity, the entanglement entropy of an interval \([\sigma_{1},\sigma_{2}]\) in the boundary theory is given by the geodesic length \(d(\sigma_{1},\sigma_{2})\) via the RT formula, \[S_{\sigma_{1},\sigma_{2}}=\frac{d(\sigma_{1},\sigma_{2})}{4G_{(3)}}. \tag{11}\] In general, we have to solve the geodesic equation subject to the connection conditions to get the RT surface. In the simple metric (7), the relevant geodesic is given by an arc whose center is located on the boundary (or on the extension line of the boundary, which will become clear later), and the connection condition is that the tangents of the two arcs at the junction are the same. With these simple rules, we are able to determine the RT surface and calculate the geodesic length. In the following, we will investigate a special case with three AdS bulk regions separated by two interfaces. The full theory has a reflection symmetry, as indicated in figure 5.

### Weak measurement and effective central charge

Considering a weak measurement on the ground state \(|\Psi\rangle\) defined on an infinite line, we can use the path integral to represent the (unnormalized) measurement amplitude, \(\langle\Psi|M^{\dagger}M|\Psi\rangle=\lim_{\epsilon\to 0}\langle\Psi|M_{\epsilon}^{\dagger}M_{\epsilon}|\Psi\rangle\), \[\langle\Psi|M_{\epsilon}^{\dagger}M_{\epsilon}|\Psi\rangle=\int D\phi e^{-S},\quad S=\int d\tau\left[L_{\text{CFT}}+f_{\epsilon}(\tau)WH_{M}\right], \tag{12}\] where \(f_{\epsilon}(\tau)\) is a regularization function of the Dirac delta function, \[f_{\epsilon}(\tau)=\begin{cases}\frac{1}{2\epsilon}&|\tau|<\epsilon\\ 0&|\tau|>\epsilon\end{cases}. \tag{13}\] Such a Euclidean path integral is shown in figure 2 (a). We assume that the state after the weak measurement, \(M|\Psi\rangle\), has an effective central charge \(c_{\text{eff}}\). Because there are different regions with different central charges at the boundary, the dual spacetime metrics are separated by interface branes, as in figure 2 (b). \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) denote the bulk regions dual to the unmeasured CFT and to the measurement, respectively, and they are separated by interface branes.

The holographic dual of the weak measurements is shown in figure 5. \(AdS_{1}\) and \(AdS_{2}\) are dual to the original CFT and to the weak measurement, respectively. The action is the same as (1), except that the brane ends at \(\tau=\pm\epsilon\), \(z=0\).

Figure 4: (a) An interface brane between two AdS spacetimes. The two branes are identified. (b) RT surface that connects \([-\sigma_{2},\sigma_{1}]\).

Figure 5: The gravity dual of weak measurements. \(\tau=\pm\epsilon\) denote the regularized weak measurement. Note that the geometry is symmetric w.r.t. \(\tau=0\). Two branes connected by the dashed line are identified as the same one.

The metric is \[\mathrm{d}s^{2}=L_{i}^{2}\frac{\mathrm{d}\tau^{2}+\mathrm{d}x^{2}+\mathrm{d}z^{2}}{z^{2}}, \tag{14}\] with the AdS radii given respectively by \[c_{1}=\frac{3L_{1}}{2G_{(3)}},\quad c_{\mathrm{eff}}=\frac{3L_{2}}{2G_{(3)}}. \tag{15}\] Because \(c_{\mathrm{eff}}\leq c_{1}\) for weak measurements, we have \(L_{1}\geq L_{2}\).
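The algebra leading to (9) is easy to verify numerically. The following is a minimal sketch of ours, with arbitrary illustrative values of \(L_{1},L_{2},T\); it checks (8b) and the matching condition \(L_{1}\cosh(\rho_{1}^{*}/L_{1})=L_{2}\cosh(\rho_{2}^{*}/L_{2})\), and outputs the brane angles of (10):

```python
import numpy as np

# Illustrative values only: any L1 >= L2 with T inside (T_min, T_max) works.
L1, L2 = 1.0, 0.6
T_min, T_max = abs(1/L1 - 1/L2), 1/L1 + 1/L2
T = 0.5 * (T_min + T_max)

# Brane positions from eq. (9).
t1 = (L1 / (2*T)) * (T**2 + 1/L1**2 - 1/L2**2)   # tanh(rho1*/L1)
t2 = (L2 / (2*T)) * (T**2 + 1/L2**2 - 1/L1**2)   # tanh(rho2*/L2)
rho1, rho2 = L1 * np.arctanh(t1), L2 * np.arctanh(t2)

# Check eq. (8b) and the matching L1 cosh(rho1*/L1) = L2 cosh(rho2*/L2).
print("(8b):     ", t1/L1 + t2/L2, "=?", T)
print("matching: ", L1*np.cosh(rho1/L1), "=?", L2*np.cosh(rho2/L2))
print("angles:    psi1 =", np.arcsin(t1), ", psi2 =", np.arcsin(t2))  # eq. (10)
```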
The two AdS spacetimes are joined via an interface brane, which is determined by the junction conditions. Because the metric is symmetric under the exchange of \(x\) and \(\tau\), we can directly use the solution from AdS/ICFT in the appendix via an exchange of \(x\) and \(\tau\). Because the weak measurements respect translational symmetry in the spatial direction, the interface brane can be described by \((z,\tau_{i}(z),x)\). Here \(\tau_{1,2}\) denotes the interface brane in \(AdS_{1,2}\). The solutions of the interface branes ending at \(\tau=\epsilon\) 6 are given by, Footnote 6: The solution of the interface branes ending at \(\tau=-\epsilon\) can be obtained by the time reflection transformation, \(\tau\to-\tau\). \[\begin{split}\tau_{1}(z)&=-\tan\psi_{1}z+\epsilon,\quad\sin\psi_{1}=\frac{L_{1}}{2T}\left(T^{2}+\frac{1}{L_{1}^{2}}-\frac{1}{L_{2}^{2}}\right),\\ \tau_{2}(z)&=\tan\psi_{2}z+\epsilon,\quad\sin\psi_{2}=\frac{L_{2}}{2T}\left(T^{2}+\frac{1}{L_{2}^{2}}-\frac{1}{L_{1}^{2}}\right).\end{split} \tag{16}\] For the solution to exist, it is easy to see that the tension should satisfy \(T\in(T_{\mathrm{min}},T_{\mathrm{max}})\), where \(T_{\mathrm{max}}=\frac{1}{L_{1}}+\frac{1}{L_{2}}\), \(T_{\mathrm{min}}=|\frac{1}{L_{2}}-\frac{1}{L_{1}}|\). The minimal tension is non-negative, and \(c_{\mathrm{eff}}\leq c\) implies \(\psi_{2}\geq 0\) with (16). The two branes in \(AdS_{2}\) will not cross even in the limit \(\epsilon\to 0\), as seen from figure 5. 7 Footnote 7: Here we start from the CFT side with the assumption \(c_{\mathrm{eff}}<c\), so we have \(\psi_{2}>\psi_{1}\) and the two branes will not cross. On the other hand, if we start from the gravity side and require the two branes not to cross, then we can actually allow \(L_{2}>L_{1}\), which means \(c_{\mathrm{eff}}>c\). This is because, for \(L_{2}>L_{1}\), the two branes cross for some \(T\) and do not cross for other \(T\). Therefore, the non-crossing condition from the gravity side is not enough to require \(c_{\mathrm{eff}}<c\) on the CFT side. In the following, we will find stricter conditions from the gravity side for the marginal (relevant) phase, which turn out to imply \(c_{\mathrm{eff}}<c\) on the CFT side.

In the following, we consider the entanglement entropy of the post-measurement state \(|\Psi_{M}\rangle=M|\Psi\rangle\). To calculate the holographic entanglement entropy of a subregion \(\{x:x\in(\sigma_{1},\sigma_{2})\}\) of \(|\Psi_{M}\rangle\), we will use the RT formula, \[S_{\sigma_{1},\sigma_{2}}=\frac{d(\sigma_{1},\sigma_{2})}{4G_{(3)}}. \tag{17}\] In 3d, the RT surface is nothing but a geodesic. \(d(\sigma_{1},\sigma_{2})\) denotes the length of the geodesic ending at \(\sigma_{1}\) and \(\sigma_{2}\). Generally, for a geodesic with two end points \((\tau,x,z)\) and \((\tau^{\prime},x^{\prime},z^{\prime})\) in one uniform AdS spacetime, its length reads \[d=L\cosh^{-1}\bigg{(}\frac{(\tau-\tau^{\prime})^{2}+(x-x^{\prime})^{2}+z^{2}+z^{\prime 2}}{2zz^{\prime}}\bigg{)}. \tag{18}\] To this end, we consider two end points on the time reversal invariant plane, located at \((x=0,z=\varepsilon,\tau=0)\) and \((x^{\prime}=l,z^{\prime}=\varepsilon,\tau^{\prime}=0)\) 8. Because of the time reflection symmetry, the geodesic will be located in the \(\tau=0\) slice, which is contained in \(AdS_{2}\). A schematic illustration is shown in figure 7 (a). The orange curve denotes the geodesic at the time reversal invariant slice.
Therefore, the geodesic length is Footnote 8: \(\varepsilon\) is the UV cutoff in the \(z\) direction near the boundary. Do not confuse it with \(\epsilon\), which is introduced to regularize the measurements. \[d=L_{2}\cosh^{-1}\left(\frac{l^{2}}{2\varepsilon^{2}}\right)\approx L_{2}\log\left(\frac{l^{2}}{\varepsilon^{2}}\right), \tag{2.19}\] which leads to \[S_{0,l}=\frac{c_{\rm eff}}{3}\log\frac{l}{\varepsilon}. \tag{2.20}\] The prefactor of the log term is \(\frac{c_{\rm eff}}{3}\) because both end points are located in the region \(M_{2}\). From this result, we find that without measurement the entanglement entropy is \(S=\frac{c}{3}\log\frac{l}{\varepsilon}\), while in the presence of weak measurements the prefactor depends on the AdS radius of \(AdS_{2}\), which can be continuously tuned. This is consistent with the marginal weak measurements in [26]. If we take the limit \(c_{\rm eff}\to 0\), there is no log term, which means we also retain the relevant phase with an area law. This is consistent with local projection measurements, which cannot support nontrivial spatial entanglement. On the gravity side, this corresponds to \(L_{2}=0\), and the interface brane reduces to the ETW brane. We recover the AdS/BCFT description of local projection measurements.

### Spacetime rotation

In this subsection, we show that a spacetime rotation can relate the spatial ICFT and the weak measurement. Consider an ICFT on a disk with a defect at \(x=0\). We are interested in the entanglement entropy of \(\{x:x\in(0,l)\}\), where \(l>0\) is the disk radius. To calculate this entanglement entropy, a twist operator is inserted at the defect, and it creates a branch cut \(\{x:x\in(0,l)\}\). This is shown in the top panel of figure 6. We denote the partition function with this branch cut by \(Z_{n}\). The entanglement entropy reads \[S=\lim_{n\to 1}S_{n},\quad S_{n}=\frac{1}{1-n}\log\frac{Z_{n}}{(Z_{1})^{n}}, \tag{2.21}\] where \(Z_{1}\) is the original partition function without the twist operator, and \(S_{n}\) is the Renyi entanglement entropy. To evaluate \(Z_{n}\), we can make a coordinate transformation \(z=\log w\), as shown in the top panel of figure 6. The defect line is mapped to \(\text{Im}(z)=(k-\frac{1}{2})\pi\), \(k=0,1,...,2n-1\), and in the \(z\) plane it has the periodic boundary condition \(z=z+i2n\pi\). After this map, \(Z_{n}\) is a partition function on a cylinder with defect lines \(\text{Im}(z)=(k-\frac{1}{2})\pi\), \(k=0,1,...,2n-1\). It has been shown that the entanglement entropy across the defect is characterized by a logarithm with a continuous coefficient depending on the strength of the defect, namely, \[S=\frac{c_{\rm eff}}{6}\log\frac{l}{\varepsilon}, \tag{2.22}\] where \(c_{\rm eff}\) denotes the effective central charge. \(\varepsilon\) stands for the lattice constant, serving as a UV cutoff. We have assumed that the other boundary of the system (as it is defined on a disk) has a normal boundary condition such that it does not contribute to the entanglement entropy.

Consider a spacetime rotation of the defect from \((x=0,-\infty<\tau<\infty)\) to \((-\infty<x<\infty,\tau=0)\) 9. This corresponds to the weak measurement given by (12). The entanglement entropy of the subregion \(\{x:x\in(0,l)\}\) is again calculated by inserting a twist operator at \(x=0\), as shown in the bottom panel of figure 6. We can use the same coordinate transformation \(z=\log w\) to evaluate \(\tilde{Z}_{n}\) in this case.
Note that we use \(\tilde{Z}_{n}\) to denote the partition function with the branch cut in the measurement case. In the bottom panel of figure 6, the defect line is mapped to \({\rm Im}(z)=k\pi\), \(k=0,1,...,2n-1\), with \(z=z+i2n\pi\). After this map, \(\tilde{Z}_{n}\) is a partition function on a cylinder with defect lines \({\rm Im}(z)=k\pi\), \(k=0,1,...,2n-1\). However, since \({\rm Im}(z)\) is periodic, we can shift the coordinate by \(z\to z-i\pi/2\). \(\tilde{Z}_{n}\) is then equivalent to \(Z_{n}\) in the spatial ICFT. Footnote 9: On a disk, this should be from \((x=0,-l<\tau<l)\) to \((-l<x<l,\tau=0)\).

Therefore, under the spacetime rotation, we can deduce that after weak measurements the entanglement entropy is the same as that in the case of ICFT when the twist operator is located at the defect, \[\tilde{S}=\frac{c_{\rm eff}}{6}\log\frac{l}{\varepsilon}. \tag{23}\]

Figure 6: Illustration of the spacetime rotation between ICFT and weak measurements. The blue region denotes an interface with regularization. The wavy line denotes the branch cut created by the twist operator at the defect. Top panel: ICFT defined in the \(w\) plane. The defect is located at \((x=0,-\infty<\tau<\infty)\). Bottom panel: Weak measurements in the \(w\) plane. The defect is located at \((-\infty<x<\infty,\tau=0)\). \(z=\log w\) is the coordinate transformation that maps the Riemann sheet with defects to a cylinder. The defect lines are mapped onto \((k-1/2)\pi\) and \(k\pi\) for ICFT and for weak measurements, respectively.

Indeed, in reference [26], the effective central charge from the weak measurement reads \[c_{\rm eff}=-\frac{6}{\pi^{2}}\left([(1+s)\log(1+s)+(1-s)\log(1-s)]\log s+(1+s){\rm Li}_{2}(-s)+(1-s){\rm Li}_{2}(s)\right), \tag{24}\] where \(s\) is a function of the measurement parameter, \(s=1/\cosh(2W)\) in (1). The same effective central charge 10 also appears in the entanglement entropy across a defect of the Ising CFT. Footnote 10: Actually, \(c_{\rm eff}/2\) appears in the entanglement entropy across a defect of the Ising CFT because reference [26] investigated a Luttinger liquid, which can decouple into two Ising CFTs in the noninteracting limit.

On the gravity side, because the metric (14) is symmetric under the spacetime rotation, the solution of the brane can also be obtained via a spacetime rotation. To explore the entanglement structure in both cases, we consider the entanglement entropy of a subregion with length \(l\). For the ICFT, we consider that one of the twist operators is located at the defect. The RT surfaces in the ICFT case and the weak measurement case are illustrated in figure 7. For weak measurements, the entanglement entropy is given by (2.20). For the ICFT, we expect the entanglement entropy to be \[S_{0,l}=\frac{c+c_{\rm eff}}{6}\log\frac{l}{\varepsilon}, \tag{25}\] because the two ends of the RT surface are located in the \(M_{1}\) and \(M_{2}\) regions, respectively. We will calculate the entanglement entropy for the ICFT case in the next subsection, and find that this is indeed the case.

Figure 7: (a) The spacetime due to weak measurements. \(M_{1,2}\) denote the bulk dual to the unmeasured CFT and the weak measurements, respectively. \(\epsilon\) denotes the regularization of the weak measurement. The blue surface indicates the interface brane. (b) The spacetime due to the spatial ICFT, obtained by a spacetime rotation in \((x,\tau)\). The orange curve denotes the RT surface of a subregion.
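As a consistency check (our sketch; it assumes SciPy, whose convention is \(\mathrm{spence}(z)=\mathrm{Li}_{2}(1-z)\)), one can evaluate (24) and confirm that \(c_{\rm eff}\) interpolates monotonically between 1 (the unmeasured value in the units of [26]) at \(W\to 0\) and zero in the projection limit \(W\to\infty\):

```python
import numpy as np
from scipy.special import spence  # SciPy convention: spence(z) = Li2(1 - z)

def Li2(x):
    return spence(1.0 - x)

def c_eff(W):
    """Effective central charge of eq. (24), with s = 1/cosh(2W)."""
    s = 1.0 / np.cosh(2.0 * W)
    bracket = (1 + s)*np.log1p(s) + (1 - s)*np.log1p(-s)
    return -(6.0/np.pi**2) * (bracket*np.log(s) + (1+s)*Li2(-s) + (1-s)*Li2(s))

# W -> 0 gives c_eff -> 1; W -> infinity (projection limit) gives c_eff -> 0.
for W in (1e-6, 0.25, 0.5, 1.0, 2.0, 8.0):
    print(f"W = {W:<6}: c_eff = {c_eff(W):.6f}")
```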
### Entanglement entropy in holographic ICFTs

Upon a spacetime rotation, the interface brane dual to the weak measurement becomes a spatial interface brane of the ICFT. In the spatial ICFT, we arrive at branes \((z,\tau,x_{i}(z))\) with \[\begin{split}& x_{1}(z)=-\tan\psi_{1}z+\epsilon,\quad\sin\psi_{1}=\frac{L_{1}}{2T}\left(T^{2}+\frac{1}{L_{1}^{2}}-\frac{1}{L_{2}^{2}}\right),\\ & x_{2}(z)=\tan\psi_{2}z+\epsilon,\quad\sin\psi_{2}=\frac{L_{2}}{2T}\left(T^{2}+\frac{1}{L_{2}^{2}}-\frac{1}{L_{1}^{2}}\right),\end{split} \tag{26}\] as shown in figure 8 (a). We consider the entanglement entropy of a subregion \(\{x:x\in[0,l]\}\). The RT surface that ends on these two points is illustrated in figure 8. Because we take the limit \(\sigma_{2}\to 0\), we may refer to figure 9. To solve for the geodesic, we just need to compute \(|\text{OA}|\). For \(\triangle\text{OAP}\), we have \(\alpha=\frac{\pi}{2}-\psi_{2}\) and \(\beta=\frac{\pi}{2}+\psi_{1}\). Then the law of sines gives \[\frac{\sigma_{1}-R}{\sin\alpha}=\frac{R}{\sin\beta}=\frac{x}{\sin\left(\pi-\alpha-\beta\right)}, \tag{27}\] where \(|\text{OA}|=x\). The solutions are \(R=\frac{\cos\psi_{1}}{\cos\psi_{1}+\cos\psi_{2}}\sigma_{1}\) and \(x=\frac{\sin\left(\psi_{2}-\psi_{1}\right)}{\cos\psi_{1}}R\).

Now we can calculate the length of the geodesic. For OA we have \(\text{O}=(0,\varepsilon)\) and \(\text{A}=(x\sin\psi_{2},x\cos\psi_{2})\), and \[\begin{split} d_{\text{OA}}=& L_{2}\cosh^{-1}\frac{\varepsilon^{2}+(x\cos\psi_{2})^{2}+(x\sin\psi_{2})^{2}}{2\varepsilon x\cos\psi_{2}}\approx L_{2}\cosh^{-1}\left(\frac{x}{2\varepsilon\cos\psi_{2}}\right)\\ \approx& L_{2}\log\bigg{(}\frac{\sigma_{1}}{\varepsilon}\frac{\sin\left(\psi_{2}-\psi_{1}\right)}{\cos\psi_{2}(\cos\psi_{1}+\cos\psi_{2})}\bigg{)}.\end{split} \tag{28}\] For AB, with origin P and \(\gamma=\frac{\pi}{2}+\psi_{1}-\psi_{2}\), we have \(\text{A}=(-R\sin\gamma,R\cos\gamma)\) and \(\text{B}=(R,\varepsilon)\), so \[\begin{split} d_{\text{AB}}=& L_{1}\cosh^{-1}\left(\frac{\varepsilon^{2}+(R\cos\gamma)^{2}+(R+R\sin\gamma)^{2}}{2\varepsilon R\cos\gamma}\right)\approx L_{1}\cosh^{-1}\left(\frac{R(2+2\sin\gamma)}{2\varepsilon\cos\gamma}\right)\\ \approx& L_{1}\log\bigg{(}\frac{\sigma_{1}}{\varepsilon}\frac{(2+2\cos\left(\psi_{2}-\psi_{1}\right))\cos\psi_{1}}{\sin\left(\psi_{2}-\psi_{1}\right)(\cos\psi_{1}+\cos\psi_{2})}\bigg{)}=L_{1}\log\bigg{(}\frac{\sigma_{1}}{\varepsilon}\frac{2\cos\psi_{1}}{\tan\left(\frac{\psi_{2}-\psi_{1}}{2}\right)(\cos\psi_{1}+\cos\psi_{2})}\bigg{)}.\end{split} \tag{29}\]

Figure 8: (a) Interface branes of the spatial ICFT. They are related to the weak measurements via a spacetime rotation. (b) RT surface ending at \([0,l]\).

The total length of the geodesic with \(\sigma_{1}=l\) is \[d(0,l)=d_{\rm OA}+d_{\rm AB}=(L_{1}+L_{2})\log\frac{l}{\varepsilon}+d_{\rm sub}, \tag{30}\] where \(d_{\rm sub}\) is given by \[d_{\rm sub}= L_{2}\log\bigg{(}\frac{\sin\left(\psi_{2}-\psi_{1}\right)}{\cos\psi_{2}(\cos\psi_{1}+\cos\psi_{2})}\bigg{)}+L_{1}\log\bigg{(}\frac{2\cos\psi_{1}}{\tan\left(\frac{\psi_{2}-\psi_{1}}{2}\right)(\cos\psi_{1}+\cos\psi_{2})}\bigg{)}. \tag{31}\] Therefore, with (17) we have \[S_{0,l}=\frac{c_{1}+c_{\rm eff}}{6}\log\frac{l}{\varepsilon}+S_{\rm sub}, \tag{32}\] where \(S_{\rm sub}=\frac{d_{\rm sub}}{4G_{(3)}}\). The subleading entropy \(S_{\rm sub}\) is UV dependent because the leading term is a logarithm with UV dependence, and there is no way to subtract the leading term.
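The arc construction (27)-(29) can also be checked directly: evaluating the two exact \(\cosh^{-1}\) lengths numerically (a sketch of ours, with illustrative \(L_{1},L_{2},T\)) confirms that the coefficient of \(\log l\) in \(d(0,l)\) is \(L_{1}+L_{2}\), i.e. the \((c_{1}+c_{\rm eff})/6\) prefactor in (32):

```python
import numpy as np

def d_total(L1, L2, T, l, eps):
    """Exact geodesic length d_OA + d_AB of eqs. (28)-(29); one end at the
    interface (sigma_2 -> 0), the other at x = l, with UV cutoff eps."""
    psi1 = np.arcsin((L1/(2*T)) * (T**2 + 1/L1**2 - 1/L2**2))
    psi2 = np.arcsin((L2/(2*T)) * (T**2 + 1/L2**2 - 1/L1**2))
    R = np.cos(psi1) / (np.cos(psi1) + np.cos(psi2)) * l   # law of sines (27)
    x = np.sin(psi2 - psi1) / np.cos(psi1) * R             # |OA|
    gam = np.pi/2 + psi1 - psi2
    d_OA = L2 * np.arccosh((x**2 + eps**2) / (2*eps*x*np.cos(psi2)))
    d_AB = L1 * np.arccosh((eps**2 + (R*np.cos(gam))**2 + (R + R*np.sin(gam))**2)
                           / (2*eps*R*np.cos(gam)))
    return d_OA + d_AB

L1, L2, T, eps = 1.0, 0.6, 1.2, 1e-8   # illustrative values, psi1 < psi2
d1 = d_total(L1, L2, T, 1.0, eps)
d2 = d_total(L1, L2, T, 10.0, eps)
print("coefficient of log l:", (d2 - d1) / np.log(10.0), " expected:", L1 + L2)
```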
Here the prefactor of the log term is \(\frac{c_{1}+c_{\rm eff}}{6}\) because the two end points are located in regions 1 and 2, respectively. 11 Footnote 11: We will see that in the symmetric case discussed later, the subleading term is independent of the UV cutoff because there is a well-defined procedure to subtract the leading term.

Now we discuss the condition for the existence of a nontrivial geodesic solution. Because we need \(\triangle\)OAP, there must be a crossing point of the lines AP and OB. Therefore, \(\alpha+\beta<\pi\), which is equivalent to \(\psi_{1}<\psi_{2}\). With (16), we have \[\sin\psi_{2}-\sin\psi_{1}=\frac{L_{2}}{2T}\left(T^{2}+\frac{1}{L_{2}^{2}}-\frac{1}{L_{1}^{2}}\right)-\frac{L_{1}}{2T}\left(T^{2}+\frac{1}{L_{1}^{2}}-\frac{1}{L_{2}^{2}}\right)=\frac{(\frac{1}{L_{1}}+\frac{1}{L_{2}})^{2}-T^{2}}{2T}(L_{1}-L_{2}). \tag{33}\]

Figure 9: AdS dual of a CFT with an interface defect for an infinite system, and its geodesic.

Because \(T<T_{\rm max}=\frac{1}{L_{1}}+\frac{1}{L_{2}}\), \(\psi_{1}<\psi_{2}\) corresponds to \(L_{1}>L_{2}\), which means \(c_{1}>c_{\rm eff}\). Therefore, the existence of a nontrivial geodesic corresponds to the marginal (relevant) phase, which is consistent with the CFT results. This motivates us to derive \(c_{\rm eff}<c_{1}\) for marginal (relevant) measurements on the CFT side from the gravity perspective. In the two regions shown in figure 4, if one region is dual to the measurement or defect after the spacetime rotation, 12 the boundary of that region has a length scale of the order of the regularization parameter \(\epsilon\). The limit \(\sigma\to 0\) must be well-defined. A geodesic ending at generic \(\sigma_{l}\) and \(\sigma_{r}\) in figure 4 will cross the interface brane. For the case of measurements, we expect that the geodesic continues to be well-defined in the limit \(\epsilon\to 0\). We find that this is only true when \(\psi_{1}<\psi_{2}\), because if \(\psi_{1}>\psi_{2}\), the condition \(\alpha+\beta<\pi\) is not satisfied. Therefore, the existence of the limit \(\epsilon\to 0\) with a well-defined geodesic that crosses the interface brane implies \(L_{2}<L_{1}\). Footnote 12: In general, we have three regions with a reflection symmetry, where two are dual to the unmeasured CFT and one is dual to the measurement. In the discussion here, only two regions are relevant, so we refer to figure 4 for simplicity.

## 3 Boundary entropy of holographic measurement in an infinite system

As mentioned above, the entanglement of a spatial ICFT is related to the weak measurement via a spacetime rotation. In particular, the same effective central charge appears in both cases. It is given by the AdS radius dual to the weak measurement region, \(c_{\rm eff}=\frac{3L_{2}}{2G_{(3)}}\). To determine the interface brane solution, we also need the tension \(T\) of the brane. How is the tension related to the weak measurement? In this section, we will show that the tension \(T\) is related to the boundary entropy of the weak measurements. To this end, we first calculate the entanglement entropy of a region symmetric w.r.t. the defect in the spatial ICFT. The boundary entropy, \(S_{\rm bdy}\), appears as the excess of entanglement entropy induced by the defect.
Then we calculate the amplitude in the weak measurement case, \[Z=\frac{\langle\Psi|M^{\dagger}M|\Psi\rangle}{\langle\Psi|\Psi\rangle}=e^{-(I_{M}-I_{0})}, \tag{3.1}\] where in the second equality we use the saddle point approximation, and \(I_{M}\) and \(I_{0}\) denote the on-shell actions with and without weak measurements, respectively. We find that the same boundary entropy 13 is given by the partition function, \(S_{\rm bdy}=\log Z\). Footnote 13: We call it a boundary entropy because the ICFT can be mapped to a BCFT by the folding trick.

### Entanglement entropy of a symmetric region in the spatial ICFT

We consider a symmetric configuration in the spatial ICFT, as shown in figure 10. Assuming \(\sigma_{l}=\sigma_{r}=\sigma\), we can obtain analytical results for the entanglement entropy and the corresponding boundary entropy in the following. By symmetry, the center of the circle must be on the defect line, and its radius is \(\sigma\). Therefore, for AC with \({\rm A}=(-\sigma,\varepsilon),\ {\rm C}=(\sigma\sin\psi_{1},\sigma\cos\psi_{1})\), the length of the geodesic connecting them is \[d_{\rm AC}=L_{1}\cosh^{-1}\bigg{(}\frac{(-\sigma-\sigma\sin\psi_{1})^{2}+\varepsilon^{2}+(\sigma\cos\psi_{1})^{2}}{2\varepsilon\sigma\cos\psi_{1}}\bigg{)}\approx L_{1}\log\bigg{(}\frac{2\sigma}{\varepsilon}\frac{1+\sin\psi_{1}}{\cos\psi_{1}}\bigg{)}. \tag{3.2a}\] For CD with \(\mathrm{C}=(-\sigma\sin\psi_{2},\ \sigma\cos\psi_{2})\) and \(\mathrm{D}=(\sigma\sin\psi_{2},\ \sigma\cos\psi_{2})\), the length of the geodesic connecting them is \[d_{\mathrm{CD}}=L_{2}\cosh^{-1}\bigg{(}\frac{(-2\sigma\sin\psi_{2})^{2}+2(\sigma\cos\psi_{2})^{2}}{2(\sigma\cos\psi_{2})^{2}}\bigg{)}=L_{2}\cosh^{-1}\bigg{(}\frac{1+\sin^{2}\psi_{2}}{\cos^{2}\psi_{2}}\bigg{)}. \tag{3.2b}\] By symmetry, we also have \(d_{\mathrm{AC}}=d_{\mathrm{DB}}\), so the total entanglement entropy for the symmetric case in figure 10 is \[S_{-\sigma,\sigma}=\frac{d_{\mathrm{AC}}+d_{\mathrm{CD}}+d_{\mathrm{DB}}}{4G_{(3)}}=\frac{c_{1}}{3}\log\bigg{(}\frac{2\sigma}{\varepsilon}\bigg{)}+S_{\mathrm{bdy}}, \tag{3.3}\] where the boundary entropy \(S_{\mathrm{bdy}}\) is \[S_{\mathrm{bdy}}=\frac{c_{1}}{3}\log\bigg{(}\frac{1+\sin\psi_{1}}{\cos\psi_{1}}\bigg{)}+\frac{c_{\mathrm{eff}}}{6}\cosh^{-1}\bigg{(}\frac{1+\sin^{2}\psi_{2}}{\cos^{2}\psi_{2}}\bigg{)}. \tag{3.4}\] Actually, the two terms in the boundary entropy take the same form after some manipulation. For the first term, we have \[\frac{1+\sin\psi_{1}}{\cos\psi_{1}}=\frac{1-\cos\big{(}\psi_{1}+\frac{\pi}{2}\big{)}}{\sin\big{(}\psi_{1}+\frac{\pi}{2}\big{)}}=\frac{2\sin^{2}\frac{\psi_{1}+\frac{\pi}{2}}{2}}{2\sin\frac{\psi_{1}+\frac{\pi}{2}}{2}\cos\frac{\psi_{1}+\frac{\pi}{2}}{2}}=\tan\bigg{(}\frac{\psi_{1}}{2}+\frac{\pi}{4}\bigg{)}. \tag{3.5}\] To simplify the second term, (i) we first prove that \[\cosh^{-1}\bigg{(}\frac{1+\sin^{2}\psi}{\cos^{2}\psi}\bigg{)}=2\tanh^{-1}\big{(}\sin\psi\big{)}. \tag{3.6a}\] Define the l.h.s. \(=2p\); then \(\frac{2}{\cos^{2}\psi}-1=\cosh 2p\), so \(\cos^{2}\psi=\big{[}\frac{1}{2}\left(\cosh 2p+1\right)\big{]}^{-1}=\frac{1}{\cosh^{2}p}\). Therefore, \(\sin^{2}\psi=1-\cos^{2}\psi=1-\frac{1}{\cosh^{2}p}=\tanh^{2}p\). Finally, we get \(p=\tanh^{-1}\big{(}\sin\psi\big{)}\), which finishes the proof. (ii) Now we prove that \[\tanh^{-1}\big{(}\sin\psi\big{)}=\log\bigg{(}\tan\bigg{(}\frac{\psi}{2}+\frac{\pi}{4}\bigg{)}\bigg{)}. \tag{3.6b}\]

Figure 10: AdS dual of a CFT with an interface defect for an infinite system with symmetry, and its geodesic.
Similar to the proof above, we define the \(\mathrm{r.h.s.}=p\); then \(e^{p}=\tan\left(\frac{\psi}{2}+\frac{\pi}{4}\right)\). Therefore, \(\sin\psi=-\cos\left(\psi+\frac{\pi}{2}\right)=\frac{1-2\cos^{2}\frac{\psi+\frac{\pi}{2}}{2}}{\sin^{2}\frac{\psi+\frac{\pi}{2}}{2}+\cos^{2}\frac{\psi+\frac{\pi}{2}}{2}}=\frac{\tan^{2}\frac{\psi+\frac{\pi}{2}}{2}-1}{\tan^{2}\frac{\psi+\frac{\pi}{2}}{2}+1}=\frac{e^{2p}-1}{e^{2p}+1}=\tanh p\). It means \(p=\tanh^{-1}\left(\sin\psi\right)\), and we finish the proof. With (3.6a) and (3.6b), the second term in (3.4) can be expressed as \[\cosh^{-1}\left(\frac{1+\sin^{2}\psi_{2}}{\cos^{2}\psi_{2}}\right)=2\tanh^{-1}\left(\sin\psi_{2}\right)=2\log\bigg{(}\tan\left(\frac{\psi_{2}}{2}+\frac{\pi}{4}\right)\bigg{)}. \tag{3.7}\] Therefore, the boundary entropy (3.4) can be simplified to \[S_{\mathrm{bdy}}=\frac{c_{1}}{3}\log\bigg{(}\tan\left(\frac{\psi_{1}}{2}+\frac{\pi}{4}\right)\bigg{)}+\frac{c_{\mathrm{eff}}}{3}\log\bigg{(}\tan\left(\frac{\psi_{2}}{2}+\frac{\pi}{4}\right)\bigg{)}. \tag{3.8}\]

Here are some remarks about the results: (i) One may wonder whether the boundary entropy extracted from (3.3) is UV dependent, since it shifts under a rescaling of the UV cutoff \(\varepsilon\). However, we can properly regularize this dependence by defining the boundary entropy as the excess of entanglement entropy due to the defect, \[S_{\mathrm{bdy}}=S_{-\sigma,\sigma}-S_{-\sigma,\sigma}^{(0)}, \tag{3.9}\] where the second term \(S_{-\sigma,\sigma}^{(0)}\) denotes the entanglement entropy of the same region but without a defect. (ii) While (3.8) seemingly has two independent terms from the two regions, this is not the case. Recall that the solutions of the branes \(\psi_{1,2}\) in (2.26) depend on both radii \(L_{1,2}\) and the tension \(T\). It is clear that the parameters in the holographic description are uniquely determined by the central charge of the unmeasured CFT \(c_{1}\), the effective central charge after measurement \(c_{\mathrm{eff}}\), and the boundary entropy \(S_{\mathrm{bdy}}\). (iii) In the symmetric case considered here, the prefactor of the logarithm in (3.3) is \(\frac{c_{1}}{3}\). This can be intuitively understood: the two twist operators are located in region 1, and when we take the long-wavelength limit, they move deep into the bulk of the system symmetrically. However, when the interval is not symmetric and the limit is not taken symmetrically, the prefactor of the logarithm can also be related to \(c_{\mathrm{eff}}\), as we will show in appendix A.

### Boundary entropy from weak measurements partition function

In a BCFT, the boundary entropy (3.8) can be related to the g-theorem via the definition of the boundary entropy, \(S_{\mathrm{bdy}}=\log g\), where \(g=\langle 0|B\rangle\) and \(|B\rangle\) is the boundary state of the BCFT. We consider a similar quantity in this subsection: the partition function with the measurement, as given by (3.1). In reference [18], the authors have given the boundary entropy of one AdS region with a single ETW brane; for completeness, we review the calculation in appendix B. For the case of interest shown in figure 10, we apply a similar procedure and obtain the boundary entropy in the following. As mentioned in appendix B, we first apply a special conformal transformation, which leads to the geometry in figure 11. There are three bulk regions and two interface branes. Although it seems that there is an overlap between two connected regions, we should calculate the action separately and consider the different regions to be connected by some magic glue.
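Before turning to the on-shell computation (which must reproduce the same boundary entropy, cf. (3.13) below), we note that the identity chain (3.5)-(3.7) behind the simplification of (3.4) into (3.8) is easy to confirm numerically. A minimal sketch of ours, with angles sampled in \((0,\pi/2)\), where all branches are real:

```python
import numpy as np

rng = np.random.default_rng(1)
for psi in rng.uniform(0.05, 1.45, size=4):
    lhs = np.arccosh((1 + np.sin(psi)**2) / np.cos(psi)**2)   # lhs of (3.6a)
    mid = 2 * np.arctanh(np.sin(psi))                          # rhs of (3.6a)
    rhs = 2 * np.log(np.tan(psi/2 + np.pi/4))                  # via (3.6b)
    print(f"psi = {psi:.3f}:  {lhs:.10f}  {mid:.10f}  {rhs:.10f}")

def S_bdy(c1, c_eff, psi1, psi2):
    """Boundary entropy in the form (3.8); equal to (3.4) by the identities."""
    return (c1/3)*np.log(np.tan(psi1/2 + np.pi/4)) \
         + (c_eff/3)*np.log(np.tan(psi2/2 + np.pi/4))
```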
To this end, we define the corresponding action \[\begin{split} I=&-\frac{1}{16\pi G_{(3)}}\left[\int_{\mathcal{M}_{1}}\mathrm{d}^{3}x\ \sqrt{g_{1}}\left(R_{1}+\frac{2}{L_{1}^{2}}\right)+\int_{\mathcal{M}_{2}}\mathrm{d}^{3}x\ \sqrt{g_{2}}\left(R_{2}+\frac{2}{L_{2}^{2}}\right)+\int_{\mathcal{M}_{3}}\mathrm{d}^{3}x\ \sqrt{g_{3}}\left(R_{3}+\frac{2}{L_{3}^{2}}\right)\right.\\ &+\left.2\int_{\mathcal{S}_{12}}\mathrm{d}^{2}y\ \sqrt{h_{12}}(K_{1}-K_{2})-2T_{12}\int_{\mathcal{S}_{12}}\mathrm{d}^{2}y\ \sqrt{h_{12}}+2\int_{\mathcal{S}_{23}}\mathrm{d}^{2}y\ \sqrt{h_{23}}(K_{2}-K_{3})-2T_{23}\int_{\mathcal{S}_{23}}\mathrm{d}^{2}y\ \sqrt{h_{23}}\right]. \tag{3.10}\] By the symmetry, we have \(L_{1}=L_{3}\) and \(T_{12}=T_{23}=T\). The connection conditions (2.3b) lead to \(K_{1}-K_{2}=K_{2}-K_{3}=2T\).

Figure 11: Special conformal transformation that maps the brane into part of a sphere. (a) Illustration of the map in the xz-plane. (b) Illustration of the map in the xt-plane.

With a method similar to that of appendix B, we have the on-shell action \[\begin{split} I=&-\frac{1}{16\pi G_{(3)}}\left[\int_{\mathcal{M}_{1}}\mathrm{d}^{3}x\ \sqrt{g_{1}}\left(R_{1}+\frac{2}{L_{1}^{2}}\right)+\int_{\mathcal{M}_{2}}\mathrm{d}^{3}x\ \sqrt{g_{2}}\left(R_{2}+\frac{2}{L_{2}^{2}}\right)+\int_{\mathcal{M}_{3}}\mathrm{d}^{3}x\ \sqrt{g_{3}}\left(R_{3}+\frac{2}{L_{3}^{2}}\right)\right.\\ &+\left.\int_{\mathcal{S}_{12}}\mathrm{d}^{2}y\ \sqrt{h_{12}}(K_{1}-K_{2})+\int_{\mathcal{S}_{23}}\mathrm{d}^{2}y\ \sqrt{h_{23}}(K_{2}-K_{3})\right]\\ =&-\frac{1}{16\pi G_{(3)}}\sum_{\alpha=1,2,3}\left[\int_{\mathcal{M}_{\alpha}}\mathrm{d}^{3}x\ \sqrt{g_{\alpha}}\left(R_{\alpha}+\frac{2}{L_{\alpha}^{2}}\right)+\sum_{\beta=\mathrm{L,R}}\left(\int_{\mathcal{S}_{\alpha,\beta}}\mathrm{d}^{2}y\ \sqrt{h_{\alpha,\beta}}K_{\alpha,\beta}\right)\right]\\ =&\sum_{\alpha=1,2,3}I_{\alpha},\end{split} \tag{3.11}\] where we define \(\beta\) as the label of the boundaries of region \(\alpha\), so \(K_{\alpha,\beta}\) corresponds to the left and right branes of region \(\alpha\), and the corresponding signs are absorbed by redefining the directions of \(K_{\alpha,\beta}\) to point outward from their bulk regions. Here, for \(AdS_{1}\) we only have one boundary brane and one boundary term in the action, while for the region \(AdS_{2}\) we have two boundary terms. However, from the discussion before, for the largest region after the SCT (here it is the right \(AdS_{1}\)), we must apply a UV cutoff, which can be an ETW brane on the right of the right \(AdS_{1}\) in figure 11. The ETW brane is perpendicular to the boundary CFT and has no contribution to the boundary entropy. Besides, we can also add another ETW\({}^{\prime}\) brane on the left of the left \(AdS_{1}\) without contribution. Then each region can be calculated with the same method as in appendix B with (B), and the result for each part is \[I_{\alpha}=\frac{L}{4G_{N}}\left[\left(\frac{r_{D}^{2}}{2\varepsilon^{2}}+\log\frac{\varepsilon}{r_{D}}-\frac{\rho^{*}}{L}-\frac{1}{2}+\frac{r_{D}}{\varepsilon}\sinh\frac{\rho^{*}}{L}\right)-\left(\frac{{r_{D}^{\prime}}^{2}}{2\varepsilon^{2}}+\log\frac{\varepsilon}{r_{D}^{\prime}}-\frac{{\rho^{*}}^{\prime}}{L}-\frac{1}{2}+\frac{r_{D}^{\prime}}{\varepsilon}\sinh\frac{{\rho^{*}}^{\prime}}{L}\right)\right], \tag{3.12}\] where \(r_{D},\rho^{*}\) correspond to one brane and \(r_{D}^{\prime},{\rho^{*}}^{\prime}\) correspond to the other brane.
Footnote 14: For example, for \(AdS_{2}\) we can consider its region after the special conformal transformation to be bounded by two branes. We first calculate the contribution without the left brane, which gives the first term. The region bounded by the left brane, which we should subtract, corresponds to the second term. So the direction of \({\rho^{*}}^{\prime}\) in the second term is actually opposite to the direction of the brane of \(AdS_{2}\).

Therefore, the total boundary entropy is \[\begin{split} S_{\mathrm{bdy}}=&\frac{\rho_{1}^{*}-(-\rho_{0}^{*})}{4G_{(3)}}+\frac{\rho_{2}^{*}-(-\rho_{2}^{*})}{4G_{(3)}}+\frac{\rho_{0}^{*}-(-\rho_{1}^{*})}{4G_{(3)}}=\frac{2\rho_{1}^{*}+2\rho_{2}^{*}}{4G_{(3)}}\\ =&\frac{c_{1}}{3}\log\left(\tan\left(\frac{\psi_{1}}{2}+\frac{\pi}{4}\right)\right)+\frac{c_{2}}{3}\log\left(\tan\left(\frac{\psi_{2}}{2}+\frac{\pi}{4}\right)\right)\!,\end{split} \tag{3.13}\] where \(\rho_{0}^{*}=0\). As mentioned before, here we change the sign of the second \(\rho^{*}\) in the numerator because we use the proper definition of reference [56]. Besides, although here we artificially add an ETW brane perpendicular to the CFT surface, we can remove it as discussed in appendix B. Without the ETW brane, we can consider it as the case with an ETW brane located at \(x\rightarrow\infty\). Under the corresponding conformal transformation, we can map the ETW brane to a semi-sphere at infinity, which covers the whole space with \(r_{D}\to\infty\). As shown in (3.13), the final boundary entropy does not depend on \(r_{D}\), so we can safely apply the limit above and get the same result. This result is also consistent with (3.8), obtained without the ETW brane.

## 4 Holographic weak measurements in a finite system: phase transition and entanglement entropy

In the previous section about weak measurements in an infinite system, the holographic description is given by interface branes. The effective central charge \(c_{\text{eff}}\leq c_{1}\) distinguishes the different cases of the weak measurements: \[\begin{split} c_{\text{eff}}=c_{1}&\text{ irrelevant}\\ 0<c_{\text{eff}}<c_{1}&\text{ marginal}\\ c_{\text{eff}}=0&\text{ relevant}\end{split} \tag{4.1}\] The interface brane solution exists for the irrelevant and marginal cases. In the relevant case, the interface brane becomes an ETW brane. In this section, we consider weak measurements in a finite system. We will see that in the finite system, the transition between the irrelevant and marginal weak measurements is dual to a transition between interface branes with different topologies. After discussing the phase diagram, we calculate the subregion entanglement entropy in the different phases, and identify the fate of the weak measurements.

### Phase diagram

The Euclidean path integral of a CFT, defined on a circle \(x=x+2\pi\), with weak measurements is shown by the cylinder in figure 12. \(|\Psi\rangle\) denotes the ground state of the CFT and \(M|\Psi\rangle\) denotes the state after the weak measurements. The measurements are performed in the full system, and preserve translational symmetry. Again, we denote the central charge of the unmeasured CFT as \(c_{1}\) and the effective central charge of the measured state \(M|\Psi\rangle\) as \(c_{\text{eff}}\leq c_{1}\). The gravity dual is obtained by "filling in" the bulk of the cylinder with the boundary condition given by the Euclidean path integral defined on the surface of the cylinder.
In reference [51], the authors have discussed the phase diagram of the dual gravity solution, though they considered a different problem. In the following, we summarize the results of reference [51]. To this end, we start from an action similar to (2.1). This time, \(\mathcal{M}_{1,2}\) denote the regions dual to the original unmeasured CFT on a circle and to the post-measurement state, respectively, and we use a different convention for the metric: \[\text{d}s^{2}=f_{i}(r)\text{d}t^{2}+\frac{\text{d}r^{2}}{f_{i}(r)}+r^{2}\text{d}x^{2}, \tag{4.2}\] where \(f_{i}(r)=(1-\mu_{i})+\lambda_{i}r^{2}\) with \(\lambda_{i}=L_{i}^{-2}\). 15 In this metric, \(t\) denotes the imaginary time, and \(x=x+2\pi\) denotes the spatial coordinate. \(r\) is the radius, with \(r\to\infty\) corresponding to the boundary. \(\mu_{i}\) determines whether the metric is thermal AdS (\(\mu_{i}<1\)) or a BTZ black hole (\(\mu_{i}>1\)). Again, we have the relation \[c_{1}=\frac{3L_{1}}{2G_{(3)}},\quad c_{\text{eff}}=\frac{3L_{2}}{2G_{(3)}}. \tag{4.3}\] The parameter \(\mu_{i}\) and the brane solution are determined by the radius \(L_{i}\) and the tension \(T\). Notice that there is a regularization parameter \(\epsilon\), at which the interface brane is located. To determine the phase diagram of weak measurements, we take \(\epsilon\to 0\). Similar to the previous case, the solution exists when the tension falls in the following range: \[\frac{1}{L_{2}}-\frac{1}{L_{1}}\leq T\leq\frac{1}{L_{2}}+\frac{1}{L_{1}}. \tag{4.4}\] Note that we focus on \(L_{2}\leq L_{1}\). In this range of parameters, there are three different phases, which are called "no-bubble", "bubble-inside-horizon" and "bubble-outside-horizon" in reference [51]. We show the phase diagram in figure 13.

In all three phases, for the region dual to the unmeasured CFT, the metric is given by thermal AdS space with \(\mu_{1}=0\). For simplicity, we define \(\mu_{2}=\mu\) without loss of generality. Depending on whether \(\mu<1\) or \(\mu>1\), the spacetime dual to the weak measurement region is given by AdS or a BTZ black hole, respectively. The no-bubble phase is given by \(\mu=0\). The dual gravity theories of the unmeasured CFT and the weak measurement are given by AdS spacetimes, \(AdS_{1,2}\), with different radii, similar to the weak measurement in the infinite system. The two AdS spacetimes are separated by interface branes, as illustrated in figure 13 (b). In this phase, the time reversal invariant slice \(\tau=0\) is included in \(AdS_{2}\), which is dual to the weak measurement region. Hence, it is referred to as the no-bubble phase. On the other hand, in the bubble-inside-horizon and bubble-outside-horizon phases, \(\mu>1\). The dual gravity theory of the weak measurement region is given by a BTZ black hole. The interface brane separates the AdS spacetime and the black hole spacetime, as illustrated in figure 13 (c). In this phase, the time reversal invariant slice cuts through both the AdS spacetime and the black hole spacetime. The AdS spacetime emerges as a bubble, surrounded by the interface brane, in the black hole spacetime.

Figure 12: Euclidean path integral of a CFT, defined on a circle, with weak measurements regularized at \(\tau=\pm\epsilon\). \(|\Psi\rangle\) (\(\langle\Psi|\)) indicates the ket (bra) of the ground state prepared from the Euclidean path integral.

Furthermore, depending on whether the horizon of the black hole is included or not, the bubble solution splits into two cases.
When the horizon is not included in the solution, we have the bubble-outside-horizon phase. Namely, the interface brane (the boundary of the bubble) is located outside the black hole horizon, while the interior of the bubble is the AdS spacetime, so the solution does not include the horizon. When the horizon is included in the solution, we have the bubble-inside-horizon phase.

Figure 13: (a) The phase diagram of the weak measurement in a finite system. The vertical cross-section of the cylinder in the no-bubble phase (b) and in the bubble phase (c). The light (dark) region is dual to the unmeasured CFT (the weak measurement). (b) The dual geometry consists of two AdS spacetimes with different radii. (c) The dual geometry of the unmeasured CFT (the weak measurement) is given by the AdS (black hole) spacetime.

The phase boundary between the no-bubble phase and the bubble phase is given by \(f(\hat{R},\hat{T})=0\), where \[\begin{split} f(\hat{R},\hat{T})=& 2\pi\Theta\left(\frac{\hat{T}}{\hat{R}}-1\right)-\frac{1}{\sqrt{\hat{R}\hat{T}}}\Pi\left(0,\frac{1}{2}\sqrt{\frac{1-(\hat{R}-\hat{T})^{2}}{\hat{R}\hat{T}}}\right)\\ &+\frac{1}{\sqrt{\hat{R}\hat{T}}}\frac{\hat{R}+\hat{T}}{\hat{R}-\hat{T}}\Pi\left(1-\frac{1}{(\hat{R}-\hat{T})^{2}},\frac{1}{2}\sqrt{\frac{1-(\hat{R}-\hat{T})^{2}}{\hat{R}\hat{T}}}\right),\end{split} \tag{4.5}\] with \(\hat{T}=TL_{2}\), \(\hat{R}=L_{2}/L_{1}\), and the function \(\Pi(\nu,z)\) is defined by \[\Pi(\nu,z)=\int_{0}^{1}\frac{\mathrm{d}t}{(1-\nu t^{2})\sqrt{(1-t^{2})(1-z^{2}t^{2})}}. \tag{4.6}\] The phase boundary between the bubble-outside-horizon phase and the bubble-inside-horizon phase is given by \(TL_{1}=1\). Notice that these phase boundaries are obtained in the limit \(\epsilon\to 0\).

We further review a few useful quantities from reference [51] that will be used later in the discussion of the bubble phases. The black hole metric dual to the weak measurement has a horizon at \[r_{H}=\sqrt{\frac{\mu-1}{\lambda_{2}}}. \tag{4.7}\] The interface brane, on the other hand, is located at \[r_{0}=\frac{\mu}{\sqrt{2\sqrt{\mu^{2}T^{2}\lambda_{1}+\mu T^{2}(\lambda_{2}-\lambda_{1}-T^{2})+T^{4}}+\mu(\lambda_{2}-\lambda_{1}-T^{2})+2T^{2}}}. \tag{4.8}\] The transition point is where these two become degenerate. In the black hole solution, there is a relation between the measurement regularization \(\epsilon\) and \(\mu\), \[2\epsilon=\frac{1}{\sqrt{\mu}}f(\hat{R},\hat{T}), \tag{4.9}\] where \(f(\hat{R},\hat{T})\) is given in (4.5). Evidently, the existence of the measurement solution requires \(\epsilon>0\). The on-shell action difference between the black hole phase and the AdS phase dual to the weak measurement is given by \(S^{BH}-S^{AdS}=-\frac{\mu\epsilon}{\sqrt{\lambda_{2}}}+O(\mu^{-1/2})\), which means that as long as the black hole solution exists, it dominates over the AdS solution. This gives the phase boundary between the no-bubble phase and the bubble phase discussed before. It also indicates that in the bubble phase \(f(\hat{R},\hat{T})>0\); if we take \(\epsilon\ll 1\), then \(\mu\propto\epsilon^{-2}\gg 1\).

### Brief summary of geodesic equations and geodesic length

Later we will discuss the lengths of geodesics in the different phases, which lead to different entanglement behaviors. Here we briefly summarize the geodesic equations and their lengths for the different metrics. The details of the derivation are given in appendix C.1. Owing to the time reversal symmetry, we consider geodesics at the time reversal invariant slice \(\tau=0\).
They can be expressed as \((\tau=0,x,r(x))\). With the metric (4.2), the geodesics for \(\mu>1\) and \(\mu<1\) read \[r^{-2}+(\frac{1}{L^{2}(\mu-1)}-c_{1})\sinh^{2}\left[\sqrt{\mu-1}(x+c_{2})\right]=c_{1}\qquad\mu>1, \tag{4.10a}\] \[r^{-2}+(\frac{1}{L^{2}(1-\mu)}+c_{1})\sin^{2}\left[\sqrt{1-\mu}(x+c_{2})\right]=c_{1}\qquad\mu<1, \tag{4.10b}\] where \(c_{1},c_{2}\) are two parameters to be determined. The geodesics above are the usual geodesics for the black hole and AdS metrics. For the black hole metric, \(\mu>1\), we have another geodesic solution, \[r^{-2}+(\frac{1}{L^{2}(\mu-1)}-c_{1})\sinh^{2}\left[\sqrt{\mu-1}(x+c_{2})\right]=\frac{1}{L^{2}(\mu-1)}. \tag{4.11}\] This is an unusual geodesic because it is always longer than (4.10a) in a pure black hole spacetime. The comparison of the lengths of the two geodesics is given in appendix C.1.

For the length of a geodesic, we start from \[d=\int\mathrm{d}l\ \sqrt{g}=\int\mathrm{d}s=\int\sqrt{\frac{r^{\prime}(x)^{2}}{f}+r^{2}}\ \mathrm{d}x, \tag{4.12}\] where \(f\) is the function defined in the metric (4.2), and \(r^{\prime}(x)\) is the derivative. Then the integral can be evaluated to give \[d=L\tanh^{-1}\left[\frac{\tanh\left(\sqrt{\mu-1}(x+c_{2})\right)}{L\sqrt{c_{1}(\mu-1)}}\right]\qquad\mu>1, \tag{4.13a}\] \[d=L\tanh^{-1}\left[\frac{\tan\left(\sqrt{1-\mu}(x+c_{2})\right)}{L\sqrt{c_{1}(1-\mu)}}\right]\qquad\mu<1. \tag{4.13b}\] Actually, from (4.10) and (4.13) we notice that the solutions of the two cases are related: for example, starting from \(\mu>1\) in (4.13a), it can be continued to \(\mu<1\); then \(\sqrt{\mu-1}=\mathrm{i}\sqrt{1-\mu}\) leads to (4.13b).

Let us discuss the asymptotic behavior of the geodesic length. Assuming two end points located at \((r=\frac{1}{\varepsilon},x=\pm\frac{1}{2}x_{0})\), in the AdS spacetime for \(\mu<1\) we can obtain \(c_{1,2}\), and thus the total length of the geodesic, \[\Delta d=2L\tanh^{-1}\Bigg{[}\frac{\tan\left(\sqrt{1-\mu}\frac{x_{0}}{2}\right)}{L\sqrt{c_{1}(1-\mu)}}\Bigg{]}\approx 2L\log\bigg{(}\frac{2\sin\left(\sqrt{1-\mu}\frac{x_{0}}{2}\right)}{L\sqrt{1-\mu}\varepsilon}\bigg{)}. \tag{4.14}\] Therefore, for \(x_{0}\ll\frac{1}{\sqrt{1-\mu}}\), we obtain a logarithm \(\Delta d=2L\log\frac{x_{0}/L}{\varepsilon}\), as expected. On the other hand, in the black hole spacetime for \(\mu>1\), the total length of the geodesic is \[\Delta d=2L\tanh^{-1}\Bigg{[}\frac{\tanh\left(\sqrt{\mu-1}\frac{x_{0}}{2}\right)}{L\sqrt{c_{1}(\mu-1)}}\Bigg{]}\approx 2L\log\bigg{(}\frac{2\sinh\left(\sqrt{\mu-1}\frac{x_{0}}{2}\right)}{L\sqrt{\mu-1}\varepsilon}\bigg{)}. \tag{4.15}\] Therefore, for \(x_{0}\ll\frac{1}{\sqrt{\mu-1}}\), we obtain \(\Delta d=2L\log\frac{x_{0}/L}{\varepsilon}\). But for \(x_{0}\gg\frac{1}{\sqrt{\mu-1}}\), we obtain \(\Delta d=2L\log\left(\frac{1}{\varepsilon L\sqrt{\mu-1}}\right)+L\sqrt{\mu-1}x_{0}\), which corresponds to volume-law entanglement.

For the unusual geodesic (4.11), it has a fixed point \((x=-c_{2},r^{-1}=\sqrt{\frac{1}{L^{2}(\mu-1)}})\) located at the black hole horizon. The length of the unusual geodesic (4.11) is \[d=L\tanh^{-1}\Big{[}\sqrt{2-c_{1}L^{2}(\mu-1)}\tanh{(\sqrt{\mu-1}(x+c_{2}))}\Big{]}. \tag{4.16}\]
Again, if we assume two end points \((r_{0}=\varepsilon^{-1},\pm\frac{x_{0}}{2})\) on the geodesic, then \(c_{2}=0\) and \[c_{1}=\frac{r^{-2}+\frac{1}{L^{2}(\mu-1)}\left(\sinh^{2}{(\sqrt{\mu-1}\frac{x_{0}}{2})}-1\right)}{\sinh^{2}{(\sqrt{\mu-1}\frac{x_{0}}{2})}}, \tag{4.17}\] which leads to the total length \[\Delta d=2L\tanh^{-1}\Bigg{[}\Bigg{(}1-\frac{L^{2}(\mu-1)\varepsilon^{2}}{\cosh^{2}{[\sqrt{\mu-1}\frac{x_{0}}{2}]}}\Bigg{)}^{\frac{1}{2}}\Bigg{]}\approx 2L\log\bigg{(}\frac{2\cosh{(\sqrt{\mu-1}\frac{x_{0}}{2})}}{L\sqrt{\mu-1}\varepsilon}\bigg{)}. \tag{4.18}\] However, different from (4.15), for \(x_{0}\ll\frac{1}{\sqrt{\mu-1}}\) it does not lead to a logarithmic function. For \(x_{0}\gg\frac{1}{\sqrt{\mu-1}}\), we have \(\Delta d=2L\log{(\frac{1}{\varepsilon L\sqrt{\mu-1}})}+L\sqrt{\mu-1}x_{0}\), which is the same as for the usual geodesic.

### No-bubble phase: marginal weak measurements

In the no-bubble phase, the time reversal invariant slice is included in the \(AdS_{2}\) spacetime with radius \(L_{2}\). Due to the time reversal symmetry, the RT surface of a subregion is located within the time reversal invariant slice. Therefore, the entanglement entropy is given accordingly by \[S=\frac{c_{\text{eff}}}{3}\log\frac{l}{\varepsilon}, \tag{4.19}\] where \(l\) is the length of the subregion, and we have taken the limit \(\varepsilon/l\ll 1\) and \(l/(2\pi)\ll 1\). Notice that \(c_{\text{eff}}=\frac{3L_{2}}{2G_{(3)}}\). This corresponds to the marginal phase with a continuous effective central charge. If we take the limit \(L_{2}\to 0\), the entanglement obeys an area law. This corresponds to the relevant case. In this case, the interface brane becomes the ETW brane. The region dual to the weak measurement is a trivial spacetime, and it is similar to the gravity dual of a boundary state resulting from projection measurements.

### Geodesic in the bubble-outside-horizon phase

We first consider the bubble-outside-horizon phase. The geometry of the bubble-outside-horizon phase is shown in figure 14. We expect the entanglement entropy to satisfy a log-law, because the horizon is not included in the black hole solution and the RT surface can cross the interface brane to enter the AdS spacetime. We can focus on the entanglement entropy of a subregion in the AdS space, as shown in figure 15. Naively, we would expect the geodesic in region II to have the form (4.10a), and the geodesic in region I to have the form (4.10b). Let us try to solve for the geodesic that crosses the brane with the connection conditions: the geodesic and its derivative are continuous at the crossing point. We assume that the subregion on the boundary of the time reversal invariant slice is located at \((-\delta,\delta)\). In the limit we consider, the RT surface will cross the interface brane. We assume the crossing point to be \((r,x)=(r_{0},-x_{0})\); then for the geodesic in region I, we have \[r_{0}^{-2}+(L_{1}^{-2}+c_{1})\sin^{2}x_{0}=c_{1}, \tag{4.20a}\] and for the geodesic in region II, we have \[r_{0}^{-2}+\left(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1}\right)\sinh^{2}\sqrt{\mu-1}(x_{0}-b_{2})=b_{1}. \tag{4.20b}\]

Figure 14: The light (dark) shaded region denotes the AdS (black hole) spacetime. (a) The time reversal invariant slice in the bubble-outside-horizon phase. The blue circle indicates the interface brane at \(r_{0}\) and the dashed circle indicates the horizon \(r_{H}\).
(b) The vertical cross-section of the cylinder indicates the AdS spacetime, and the circle indicates the black hole spacetime. Figure 15: (a) Illustration of the geodesic in the bubble-outside-horizon phase. It consists of two parts, one in the AdS spacetime (region I), and the other in the black hole spacetime (region II). The AdS geodesic solution (green) is given in (4.10), while the black hole geodesic solution (orange) is the unusual one (4.11). (b) Zooming the portion of geodesics in (a). The geodesic solution is symmetric in the coordinate \(x\). Because the geodesic in region II has the end point \((r,x)=(\varepsilon^{-1},-\delta)\), we also have \[\varepsilon^{2}+\left(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1}\right)\sinh^{2}\sqrt{\mu- 1}(\delta-b_{2})=b_{1}. \tag{113c}\] Besides, the connection condition requires \[\left.\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\right|_{r=r_{0},\mathrm{ I}}=-\frac{(L_{1}^{-2}+c_{1})\sin x_{0}\cos x_{0}}{\sqrt{c_{1}-(L_{1}^{-2}+c_{1}) \sin^{2}x_{0}}}\] \[=\left.\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\right|_{r=r_{0}, \mathrm{II}}=-\sqrt{\mu-1}\frac{(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1})\sinh\left[ \sqrt{\mu-1}(x_{0}-b_{2})\right]\cosh\left[\sqrt{\mu-1}(x_{0}-b_{2})\right]}{ \sqrt{b_{1}-(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1})\sinh^{2}\left[\sqrt{\mu-1}(x_{ 0}-b_{2})\right]}}. \tag{113d}\] These four equations are enough to determine the four unknown parameters \(x_{0},c_{1},b_{1}\) and \(b_{2}\). However, it is difficult to obtain a general analytical solution. In the following, we focus on the limit \[\varepsilon\ll r_{H}^{-1}\ll x_{0}\ll 1, \tag{114}\] that can give analytical results. This limit is justified as follows: * \(\varepsilon\ll r_{H}^{-1}\) is due to the requirement that the UV-cutoff should be the minimal length scale. * \(r_{H}\propto\sqrt{\mu}\) in the large \(\mu\) limit. It means \(r_{H}^{-1}\propto\epsilon\), the regularization parameter of the weak measurement. 16 This justifies the limit \(r_{H}^{-1}\ll x_{0}\). Footnote 16: Note that we use \(\epsilon\) to denote the regularization of the weak measurement, and \(\varepsilon\) to denote the UV cutoff near the boundary. * \(x_{0}\ll 1\) is because for original CFT we set it on a circle with radius 1, and we want our subsystem is much smaller than the total system 17. Footnote 17: While \(x_{0}\) is not the length of the subsystem, we will see that at this limit, \(\delta\sim x_{0}\). Therefore, the limit (114) is equivalent to \(\varepsilon\ll r_{H}^{-1}\ll\delta\ll 1\). Besides, we assume \(L_{i}\) for \(i=1,2\) are order-one numbers \(\mathcal{O}(1)\). In such a limit, we first estimate the order of \(r_{0}\). From (109) and (110), up to \(\mathcal{O}(\mu^{-1})\) we have \[\left(\frac{r_{H}}{r_{0}}\right)^{2}\approx\lambda_{2}^{-1}(-T+\sqrt{\lambda_ {1}}+\sqrt{\lambda_{2}})(T-\sqrt{\lambda_{1}}+\sqrt{\lambda_{2}}), \tag{115}\] which leads to \(r_{H}/r_{0}\to\eta\) with \[\eta=\sqrt{1-\left(\frac{T-\sqrt{\lambda_{1}}}{\sqrt{\lambda_{2}}}\right)^{2}}. \tag{116}\] In the bubble phase, \(\eta<1\) and is an order-one number. There is a subtlety: we take the limit \(\mu\to\infty\) to determine the order of \(r_{0}\), but when we solve equations (113), we do not directly take this limit but leave it at the end of the calculation. However, it is unimportant because the results won't change in the order we consider in the discussion below. Let's try to solve the equations (4.1). From (4.1) and (4.1), the denominators of the two sides in (4.1) are both \(r_{0}^{-1}\). So we only need to look at the numerator. 
From (4.1) we know \(c_{1}=[r_{0}^{-2}+L_{1}^{-2}\sin^{2}x_{0}]/\cos^{2}x_{0}\sim\mathcal{O}(x_{0}^{2})\), then the numerator on the left-hand side of (4.1) is in the order of \(\mathcal{O}(x_{0})\). However, with the help of (4.1), the numerator on the right-hand side of (4.1) can be simplified as \[\sqrt{(\mu-1)(b_{1}-r_{0}^{-2})}\left[\frac{1}{L_{2}^{2}(\mu-1)}-r_{0}^{-2} \right]<\sqrt{(\mu-1)(r_{H}^{-2}-r_{0}^{-2})^{2}}=L_{2}^{-1}(1-\eta^{2})r_{H}^{ -1}\sim\mathcal{O}(r_{H}^{-1}) \tag{4.24}\] where we have used \(b_{1}<\frac{1}{L_{2}^{2}(\mu-1)}=r_{H}^{-2}\). Therefore, in (4.1) we have \(\mathcal{O}(x_{0})\) for the numerator of the left-hand side and \(\mathcal{O}(r_{H}^{-1})\) for the numerator of the right-hand side. With the assumption \(r_{H}^{-1}\ll x_{0}\) in (4.1), there is no solution for (4.1). The naive geodesic we use does not have a solution at the limit. The key is the derivative of geodesic in region II is too small to satisfy the connection condition. This problem can be solved by considering a different geodesic in the region II. We consider unusual geodesic in the region II. The four new equations are listed as follows \[r_{0}^{-2}+(L_{1}^{-2}+c_{1})\sin^{2}x_{0}=c_{1}, \tag{4.25a}\] \[r_{0}^{-2}+\left(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1}\right)\sinh^{2}\left[\sqrt{ \mu-1}(x_{0}-b_{2})\right]=\frac{1}{L_{2}^{2}(\mu-1)},\] (4.25b) \[\varepsilon^{2}+\left(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1}\right)\sinh^{2}\left[ \sqrt{\mu-1}(\delta-b_{2})\right]=\frac{1}{L_{2}^{2}(\mu-1)},\] (4.25c) \[\left.\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\right|_{r=r_{0},\mathrm{ I}}=-\frac{(L_{1}^{-2}+c_{1})\sin x_{0}\cos x_{0}}{\sqrt{c_{1}-(L_{1}^{-2}+c_{1}) \sin^{2}x_{0}}}\] \[=\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\right|_{r=r_{0},\mathrm{ II}}=-\sqrt{\mu-1}\frac{(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1})\sinh\left[\sqrt{ \mu-1}(x_{0}-b_{2})\right]\cosh\left[\sqrt{\mu-1}(x_{0}-b_{2})\right]}{\sqrt{ \frac{1}{L_{2}^{2}(\mu-1)}-(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1})\sinh^{2}\left[ \sqrt{\mu-1}(x_{0}-b_{2})\right]}}. \tag{4.25d}\] In the following we will again consider the limit (4.1) that gives an analytical solution. We first check if (4.25) has a solution that satisfies the connection condition (4.25d). Again, the denominators of the two sides in (4.25d) are both \(r_{0}\) owing to (4.25a) and (4.25b). The numerator for the left-hand side is the same as before, and is again in the same order of \(\mathcal{O}(x_{0})\). But for the right-hand side, with (4.25b) we have \[\begin{split}&\sqrt{(\mu-1)\left[\frac{1}{L_{2}^{2}(\mu-1)}-r_{0}^{ -2}\right]\left[\frac{2}{L_{2}^{2}(\mu-1)}-r_{0}^{-2}-b_{1}\right]}\\ &=L_{2}^{-1}\sqrt{(1-\eta^{2})(2r_{H}^{-2}-r_{0}^{-2}-b_{1})}. \end{split} \tag{4.26}\] So we require \(b_{1}\sim-x_{0}^{2}\) to satisfy the connection condition. In the limit (4.21), we expand variables in the order of \(x_{0}\) and \(r_{H}\). From (4.25a), (4.25b), and (4.25d), we can express \(c_{1}\), \(b_{1}\), and \(b_{2}\) as a function of \(x_{0}\), i.e., \[\begin{split} c_{1}=& L_{1}^{-2}x_{0}^{2},\\ b_{1}=&-L_{2}^{2}(1-\eta^{2})^{-1}L_{1}^{-4}x_{0}^{ 2},\\ b_{2}=& x_{0}-L_{1}^{2}(1-\eta^{2})\frac{r_{H}^{-2} }{x_{0}}.\end{split} \tag{4.27}\] We then substitute (4.27) into (4.25c) to get an equation that only involves \(x_{0}\). (4.25c) can be brought to \(\sinh^{2}\left[\sqrt{\mu-1}(\delta-b_{2})\right]=\frac{\frac{1}{L_{2}^{2}(\mu- 1)}-\varepsilon^{2}}{\frac{1}{L_{2}^{2}(\mu-1)}-b_{1}}\). 
We can use (4.27) to simplify both the left-hand side and right-hand side to get \[\begin{split}& L_{1}^{2}\sqrt{1-\eta^{2}}r_{H}^{-2}x_{0}^{-1} \left[1+\frac{1}{2}L_{2}^{-2}L_{1}^{4}(1-\eta^{2})^{2}\frac{r_{H}^{-2}}{x_{0} ^{2}}\right]\\ &=\delta-x_{0}+L_{1}^{2}(1-\eta^{2})\frac{r_{H}^{-2}}{x_{0}} \left(1+\frac{1}{2}L_{2}^{-2}L_{1}^{4}(1-\eta^{2})^{2}\frac{r_{H}^{-2}}{x_{0} ^{2}}\right).\end{split} \tag{4.28}\] Therefore, at the leading order, we obtain a quadratic equation for \(x_{0}\) \[L_{1}^{2}\sqrt{1-\eta^{2}}r_{H}^{-2}x_{0}^{-1}=\delta-x_{0}+L_{1}^{2}(1-\eta^{ 2})\frac{r_{H}^{-2}}{x_{0}}. \tag{4.29}\] There are two solutions. Because \(x_{0}\approx\delta\), the relevant solution reads \[\begin{split} x_{0}=&\frac{1}{2}[\delta+\delta(1-4 L_{1}^{2}\sqrt{1-\eta^{2}}(1-\sqrt{1-\eta^{2}})r_{H}^{-2}\delta^{-2})^{\frac{1}{2} }]\\ \approx&\delta-L_{1}^{2}\sqrt{1-\eta^{2}}(1-\sqrt{1- \eta^{2}})r_{H}^{-2}\delta^{-1},\end{split} \tag{4.30}\] where the second term has the order \(\mathcal{O}(\frac{r_{H}^{-2}}{x_{0}^{2}}\cdot x_{0})\ll\mathcal{O}(x_{0})\). Having obtained \(x_{0}\), we then substitute it back into (4.27) to get the other parameters. To verify our analytical geodesic solution, we also numerically solve (4.25). With parameters \(L_{1}=1,L_{2}=1.1,\mu=1000,\varepsilon=0.0001,T=0.6,\delta=0.1\), and the solutions are \[x_{0}=0.0979312,b_{1}=-0.0593183,b_{2}=0.0962979,c_{1}=0.0103255. \tag{4.31}\] From our analytical solution, we can get the following results \[x_{0}=0.0979612,b_{1}=-0.0598777,b_{2}=0.0963236,c_{1}=0.0095964. \tag{4.32}\] Thus, our analytical results are closed to numerical results in the limit (4.21). With the solution above, we can calculate the geodesic length. The geodesic can be split into two halves, each of which ranges from the boundary point \(x=\pm\delta\) to the symmetric point at \(x=0\). We focus on the length of one half, and the total length is simply \(\Delta d=2(\Delta d_{1}+\Delta d_{2})\), where \(\Delta d_{1,2}\) denotes the length in region I and II, respectively. For the geodesic in region II, from (4.16) we have \[\Delta d_{2} =(\Delta d_{2})_{1}-(\Delta d_{2})_{2}, \tag{4.33}\] \[(\Delta d_{2})_{1} =L_{2}\tanh^{-1}\bigg{[}\sqrt{2-b_{1}L_{2}^{2}(\mu-1)}\text{tanh} \left(\sqrt{\mu-1}(\delta-b_{2})\right)\biggr{]},\] \[(\Delta d_{2})_{2} =L_{2}\tanh^{-1}\bigg{[}\sqrt{2-b_{1}L_{2}^{2}(\mu-1)}\text{tanh} \left(\sqrt{\mu-1}(x_{0}-b_{2})\right)\biggr{]}.\] With the solution, we can simplify these two distances separately. For \((\Delta d_{2})_{1}\), we arrive at \[(\Delta d_{2})_{1}\approx L_{2}\left\{\log\left[\frac{2}{L_{2}\sqrt{\mu-1}\varepsilon} \right]+\frac{1}{2}(\mu-1)(\delta-b_{2})^{2}\right\}, \tag{4.34}\] where \(\delta-b_{2}\approx L_{1}^{2}\sqrt{1-\eta^{2}}r_{H}^{-2}\delta^{-1}\). And for \((\Delta d_{2})_{2}\), we arrive at \[(\Delta d_{2})_{2}\approx L_{2}\left[\tanh^{-1}\left(\sqrt{1-\eta^{2}}\right)+\frac{1}{2}(1- \eta^{2})^{\frac{3}{2}}L_{1}^{4}L_{2}^{-2}r_{H}^{-2}\delta^{-2}\right]. \tag{4.35}\] Therefore, the geodesic length in region II reads \[\Delta d_{2}= L_{2}\left\{\log\left(\frac{2r_{H}^{-1}}{\varepsilon}\right)+ \frac{1}{2}L_{2}^{-2}L_{1}^{4}(1-\eta^{2})(1-\sqrt{1-\eta^{2}})r_{H}^{-2} \delta^{-2}-\tanh^{-1}\left(\sqrt{1-\eta^{2}}\right)\right\}. 
\tag{4.36}\] Besides, for the geodesic in region I, it is \[\Delta d_{1}= L_{1}\tanh^{-1}\bigg{[}\frac{\tan x_{0}}{L_{1}\sqrt{c_{1}}} \bigg{]}=L_{1}\tanh^{-1}\Bigg{[}\bigg{(}\frac{\sin^{2}x_{0}}{L_{1}^{2}r_{0}^{ -2}+\sin^{2}x_{0}}\bigg{)}^{\frac{1}{2}}\Bigg{]}, \tag{4.37}\] With (4.30), we have \[\frac{\sin^{2}x_{0}}{L_{1}^{2}r_{0}^{-2}+\sin^{2}x_{0}}\approx 1-L_{1}^{2}\eta^{2}r_{H}^{-2}\delta^{-2}. \tag{4.38}\] Therefore, the length of geodesic in region I is \[\Delta d_{1}=L_{1}\tanh^{-1}\bigg{(}1-\frac{1}{2}L_{1}^{2}\eta^{2}r_{H}^{-2} \delta^{-2}\bigg{)}\approx L_{1}\cosh^{-1}\bigg{(}\frac{L_{1}^{-1}\eta^{-1} \delta}{r_{H}^{-1}}\bigg{)}\approx L_{1}\log\bigg{(}\frac{2L_{1}^{-1}\eta^{-1 }\delta}{r_{H}^{-1}}\bigg{)}. \tag{4.39}\] Finally, we have the total length of geodesic \[\Delta d= 2L_{2}\left\{\log\left(\frac{2r_{H}^{-1}}{\varepsilon}\right) -\tanh^{-1}\left(\sqrt{1-\eta^{2}}\right)\right\}+2L_{1}\log\bigg{(}\frac{2L _{1}^{-1}\eta^{-1}\delta}{r_{H}^{-1}}\bigg{)}, \tag{4.40}\] where we have taken the limit \(\mu\rightarrow\infty\) and \(r_{H}^{-1}\to 0\). Since the subsystem size is \(2\delta\), the corresponding entanglement entropy satisfies a log-law. Namely, the entanglement entropy of the subsystem is in turn given by \(S=\frac{c_{1}}{3}\log(2\delta)\), where we only keep the dependence of \(\delta\). The prefactor of the logarithm is the same as the central charge of the unmeasured CFT, which means the weak measurement is irrelevant. We numerically compute the geodesic length for different sizes of the subsystem, ranging from \(0.025\) to \(0.5\). In figure 16 we plot the numerical results and analytical results of (4.40), it shows that they are consistent with each other. ### Geodesic in the bubble-inside-horizon phase Now we consider the entanglement entropy of a subregion in the bubble-inside-horizon phase. Again, we assume that the subregion in the boundary of time reversal invariant slice is located at \((-\delta,\delta)\). In this phase, the bubble is outside the horizon, in other words, the black hole horizon exists in the bulk solution, as shown in the left panel of figure 21. Naively, we would expect that the geodesic starting from the boundary is located outside the horizon, and is contained in the back hole dual to the region II. If this was true, then the entanglement entropy would satisfy a volume law. However, in the following, we will show that there exists a new kind of geodesic that can cross the horizon. The new geodesic has a shorter length and leads to a log-law, and, therefore, is preferred. The new geodesic has two sections located in the region I, II, respectively. In region II, the usual geodesic in black hole geometry will never reach the horizon, so the unusual geodesic (4.11) that has a fix point \((x,r)=(-c_{2},r_{H})\) is needed to cross the horizon and connect to the section in the region II. Besides, this unusual geodesic (4.11) is symmetric at \(x=-c_{2}\), so the derivative vanishes at the horizon \(\left.\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\right|_{x=-c_{2}}=0\). The possible geodesic is shown in figure 17. We emphasize that this new geodesic (including two sections in region I and II) is shorter because of the existence of region I: it has a section in region I that shortens the length. In figure 17, the geodesic has three parts, denoted as \(l_{1,2,3}\). For parts \(l_{1}\) and \(l_{2}\), the corresponding geodesics are given by (4.11), with the same \(c_{2}\) but different \(c_{1}\). 
The part \(l_{3}\) is the usual geodesic in the AdS metric.18 Assuming that two end points are \((r^{-1},x)=(\varepsilon,\pm\delta)\), then for any \(\beta\) (which is defined as the symmetric point of the geodesic of part \(l_{1,2}\) as shown in the figure) we have geodesic solutions as shown in figure 17. In the following, we will first solve the general geodesic equations for any given \(\beta\) and then determine the \(\beta\) such that the total length of the geodesic is minimal. To obtain an analytical result, we again Figure 16: Geodesic length as a function of the subsystem size in the bubble-outside-horizon phase. The parameters are chosen to be \(L_{1}=1,L_{2}=1.1,\mu=1000,\varepsilon=0.0001,T=0.6\). consider the same limit, \[\varepsilon\ll r_{H}^{-1}\ll\delta\ll 1. \tag{108}\] For part \(l_{1}\), we have the following equation for any given \(\beta\) \[\varepsilon^{2}+\left(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1}^{\prime}\right)\sinh^{2} \left[\sqrt{\mu-1}(\delta-\beta)\right]=\frac{1}{L_{2}^{2}(\mu-1)}. \tag{109}\] For the parts \(l_{2}\) and \(l_{3}\), we have the following equations, where the last one is from the connection condition \[r_{0}^{-2}+(L_{1}^{-2}+c_{1})\sin^{2}x_{0}=c_{1}, \tag{110a}\] \[r_{0}^{-2}+\left(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1}\right)\sinh^{2} \left[\sqrt{\mu-1}(x_{0}-\beta)\right]=\frac{1}{L_{2}^{2}(\mu-1)},\] (110b) \[\left.\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\right|_{r=r_{0}, \mathrm{I}}=-\frac{(L_{1}^{-2}+c_{1})\sin x_{0}\cos x_{0}}{\sqrt{c_{1}-(L_{ 1}^{-2}+c_{1})\sin^{2}x_{0}}}\] \[= -\left.\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\right|_{r=r_{0}, \mathrm{II}}=\sqrt{\mu-1}\frac{(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1})\sinh\left[ \sqrt{\mu-1}(x_{0}-\beta)\right]\cosh\left[\sqrt{\mu-1}(x_{0}-\beta)\right]}{ \sqrt{\frac{1}{L_{2}^{2}(\mu-1)}-(\frac{1}{L_{2}^{2}(\mu-1)}-b_{1})\sinh^{2} \left[\sqrt{\mu-1}(x_{0}-b_{2})\right]}}. \tag{111c}\] Notice that the minus sign in the last equation is because the definition of the geodesic in region II is opposite in the \(r\) direction. It is clear if we compare the part \(l_{2}\) and the part \(l_{1}\) in figure 17. From (110a) and (110b) we can get \(c_{1}\) and \(b_{1}\) as a function of \(x_{0}\) \[c_{1}=\frac{r_{0}^{-2}+L_{1}^{-2}\sin^{2}x_{0}}{\cos^{2}x_{0}}, \tag{112}\] \[b_{1}=\frac{1}{L_{2}^{2}(\mu-1)}-\frac{\frac{1}{L_{2}^{2}(\mu-1)}-r_{0}^{-2}} {\sinh^{2}\left[\sqrt{\mu-1}(x_{0}-\beta)\right]}. \tag{113}\] Figure 17: Illustration of the RT surface in the bubble-inside-horizon phase. The blue line indicates the interface brane, and the dashed line indicates the horizon. The RT surface has three parts, denoted by \(l_{1,2,3}\). The region II has two parts \(l_{1,2}\) that connect at the horizon. They are given by the unusual geodesic solution. \(\pm\beta\) is the symmetric point of the solution of the parts \(l_{1,2}\). \((-x_{0},r_{0}^{-1})\) is the connecting point between the geodesics of the part \(l_{3}\) and \(l_{2}\). Substituting these two expressions into (4.43c), we arrive at \[\tan x_{0}\tanh\left[\sqrt{\mu-1}(\beta-x_{0})\right]=\frac{\sqrt{\mu-1}(\frac{1}{ L_{2}^{2}(\mu-1)}-r_{0}^{-2})}{(L_{1}^{-2}+r_{0}^{-2})}. \tag{4.46}\] While we leave \(\beta\) an undetermined parameter to get an expression for geodesic length and determine \(\beta\) by demanding the geodesic length to be minimal, we can first estimate the order of \(\beta\). We argue that \(\beta\) must have the same order of \(\delta\) and \(|\delta-\beta|\ll\delta\). 
The intuitive argument is that if \(|\delta-\beta|\sim\mathcal{O}(\delta)\), the geodesic length will lead to a volume law: when \(\beta=\eta^{\prime}\delta\) with \(\eta^{\prime}<1\) and \(\eta^{\prime}\sim\mathcal{O}(1)\), the length of the part \(l_{1}\) will be proportional to \(\mathcal{O}(\delta)\). A precise estimate is given in the appendix C.2. But we can construct a geodesic with a log-law for \(|\delta-\beta|\ll\delta\). The right-hand side of (4.46) is of the order \(\mathcal{O}(r_{H}^{-1})\). Then the left-hand side implies that the solution is \(x_{0}\sim r_{H}^{-1}\) or \(x_{0}\sim\beta\). If \(x_{0}\sim r_{H}^{-1}\), with (4.43b) and (4.18), the length of the part \(l_{2}\) is \[L_{2}\tanh^{-1}\left[\left(1-\frac{L_{2}^{2}(\mu-1)r_{0}^{-2}}{\cosh^{2}\left[ \sqrt{\mu-1}(\beta-x_{0})\right]}\right)^{\frac{1}{2}}\right]\approx r_{H}( \beta-x_{0})-L_{2}\log\eta. \tag{4.47}\] Because \(\beta-x_{0}\approx\delta\), the geodesic length leads to volume-law entanglement. Therefore, we should consider the solution \(x_{0}\sim\beta\), which will later be shown to have log-law entanglement. In this case, (4.46) can be expanded as \[L_{2}^{-1}r_{H}(\beta-x_{0})x_{0}\approx L_{2}^{-1}L_{1}^{2}(1-\eta^{2})r_{H}^ {-1}. \tag{4.48}\] which leads to the solution \[x_{0}=\beta-(1-\eta^{2})L_{1}^{2}r_{H}^{-2}\beta^{-1}. \tag{4.49}\] With this solution, the other parameters, (4.44), (4.45), and thus the geodesic length can be expressed as a function of \(\beta\). We can verify our analytical result by numerically calculation. For instance, using parameters \(L_{1}=1,L_{2}=1.1,\mu=10000,\varepsilon=0.00001,T=1.6,\beta=0.1\), the numerical result of (4.46) is \[x_{0}=0.0996397, \tag{4.50}\] and the analytical solution, (4.49) gives \[x_{0}=0.09964. \tag{4.51}\] Therefore, our analytical result is close to numerical calculation in the limit (4.41). We can calculate the geodesic length with the solution above. For the part \(l_{1}\), with (4.42) and (4.18), we have \[\Delta d_{l_{1}}\approx\bigg{(}\frac{\cosh\left(\sqrt{\mu-1}(\delta-\beta) \right)}{L_{2}\sqrt{\mu-1}\varepsilon}\bigg{)}L_{2}\log\left(\frac{r_{H}^{-1} }{\varepsilon}\right)+L_{2}\log\Big{(}2\cosh\left(\sqrt{\mu-1}(\delta-\beta) \right)\Big{)}. \tag{4.52}\] If \(|\delta-\beta|\lesssim O(r_{H}^{-1})\), the second term in (115) will at most contribute to \(\mathcal{O}(1)\). For the part \(l_{2}\) with (102b) and (103), we have \[\begin{split}\Delta d_{l_{2}}\approx& L_{2}\tanh^{-1}\left[\left(1-\frac{\eta^{2}}{\cosh^{2}\left[ \sqrt{\mu-1}(\beta-x_{0})\right]}\right)^{\frac{1}{2}}\right]\\ \approx& L_{2}\left\{\tanh^{-1}\left(\sqrt{1-\eta^{2} }\right)+\frac{(\mu-1)(\beta-x_{0})^{2}}{2\sqrt{1-\eta^{2}}}\right\},\end{split} \tag{104}\] where we have used \(\sqrt{\mu-1}(\beta-x_{0})\approx L_{1}^{2}L_{2}^{-1}(1-\eta^{2})r_{H}^{-1} \beta^{-1}\sim\mathcal{O}(\frac{r_{H}^{-1}}{\delta})\ll 1\). For the part \(l_{3}\) with (102a) and (103), we have \[\Delta d_{l_{3}}= L_{1}\tanh^{-1}\sqrt{\frac{\sin^{2}x_{0}}{L_{1}^{2}r_{0}^{-2}+ \sin^{2}x_{0}}}\approx L_{1}\log\left(\frac{2L_{1}^{-1}\sin x_{0}}{r_{0}^{-1}} \right)\!, \tag{105}\] where we have used \(x_{0}\approx\beta\sim\mathcal{O}(\delta)\) and \(r_{0}^{-1}\sim\mathcal{O}(r_{H}^{-1})\ll\mathcal{O}(x_{0})\). 
Using the approximation above and (102), we have \[\begin{split}\Delta d_{l_{1}}+\Delta d_{l_{2}}+\Delta d_{l_{3}} \approx& L_{2}\log\left(2\cosh\left(\sqrt{\mu-1}(\delta-\beta) \right)\right)+\frac{1}{2}L_{2}^{-1}(1-\eta^{2})^{\frac{3}{2}}L_{1}^{4}r_{H}^ {-2}\beta^{-2}\\ &+L_{1}\log\left(\beta-(1-\eta^{2})L_{1}^{2}r_{H}^{-2}\beta^{-1} \right)+\text{const},\end{split} \tag{106}\] where const denotes the terms independent of \(\beta\). We minimize the geodesic length (106) w.r.t. \(\beta\). Denoting the right-hand side of (106) as \(f(\beta)\), its derivative is \[\begin{split} f^{\prime}(\beta)=&-r_{H}\tanh \left(L_{2}^{-1}r_{H}(\delta-\beta)\right)-L_{2}^{-1}(1-\eta^{2})^{\frac{3}{2}} L_{1}^{4}r_{H}^{-2}\beta^{-3}+L_{1}\frac{1+(1-\eta^{2})L_{1}^{2}r_{H}^{-2}\beta^{-2}} {\beta-(1-\eta^{2})L_{1}^{2}r_{H}^{-2}\beta^{-1}}\\ \approx&\frac{L_{2}}{\beta}\left\{\frac{L_{1}}{L_{2} }-L_{2}^{-1}\beta r_{H}\tanh\left(L_{2}^{-1}r_{H}(\delta-\beta)\right)\right\},\end{split} \tag{107}\] where in the last equation we only keep the order \(\mathcal{O}(\beta^{-1})\) and ignore higher orders, like \(O(\beta^{-1}\frac{r_{H}^{-2}}{\beta^{2}})\ll O(\beta^{-1})\). It turns out that the only consistent solution of \(f^{\prime}(\beta)=0\) is given by \[\beta\approx\delta-L_{1}L_{2}r_{H}^{-2}\delta^{-1}, \tag{108}\] corresponding to the minimal point of the geodesic length. Finally, substituting (108) into (106), the geodesic length reads \[\begin{split}\Delta d=& 2(\Delta d_{l_{1}}+\Delta d_{l_{2}} +\Delta d_{l_{3}})\\ \approx& 2L_{2}\left[\log\left(\frac{2r_{H}^{-1}}{ \varepsilon}\right)+\tanh^{-1}\left(\sqrt{1-\eta^{2}}\right)\right]+2L_{1}\log \left(\frac{2L_{1}^{-1}\eta^{-1}\delta}{r_{H}^{-1}}\right)\!,\end{split} \tag{109}\] where in the derivation we have ignored higher orders in \(\mathcal{O}(\frac{\delta^{2}}{r_{H}^{-2}})\). Since the subsystem size is \(2\delta\), the corresponding entanglement entropy satisfies a log-law. The prefactor of the logarithmic entanglement entropy will be the same as the central charge of the unmeasured CFT, which means the weak measurement is irrelevant. Here is one additional remark. Similar to (4.40), there is an order-one term in (4.58), \(2L_{2}\tanh^{-1}\left(\sqrt{1-\eta^{2}}\right)\), but in a different sign. It is because for (4.40) we have a part of the geodesic in figure 18 (a), while for (4.58) we have a part of the geodesic in figure 18 (b). A half of the symmetric geodesic has no constant term, so for figure 18 (a) we have \(0-2L_{2}\tanh^{-1}\left(\sqrt{1-\eta^{2}}\right)\), while for figure 18 (b) we have \(0+2L_{2}\tanh^{-1}\left(\sqrt{1-\eta^{2}}\right)\). Finally, we numerically compute the length of the geodesic for different sizes of the subsystem, ranging from \(0.025\) to \(0.5\). In figure 19, we plot the numerical results and analytical results of (4.58), and it shows that they are consistent with each other. ## 5 Discussion ### Thick brane description In Sec. 2, we have considered weak measurements in an infinite system using a bottom-up thin-brane description. Here, we discuss the thick-brane description of the weak measure Figure 19: Geodesic length as a function of the subsystem size in the bubble-outside-horizon phase. The parameters are chosen to be \(L_{1}=1,L_{2}=1.1,\mu=10000,\varepsilon=0.00001,T=1.6\). We do not use the variation to determine \(\beta\) from \(\delta\), instead, we use the approximation (4.57). Figure 18: (a) The geodesic in the region II in the bubble-outside-horizon phase. 
(b) The geodesic in the region II in the bubble-inside-horizon phase. ment and the relation to spatial interface CFT. In the following, we set \(L=1\) for simplicity. To this end, we consider the foliation of a general 3d Euclidean metric \[\mathrm{d}s^{2}=\mathrm{d}\rho^{2}+e^{2A(\rho)}\frac{\mathrm{d}x^{2}+\mathrm{d}y ^{2}}{y^{2}}, \tag{110}\] where \(A(\rho)\) is a warpfactor that controls the size of each AdS\({}_{2}\) slice. Figure 20 (a) illustrates the foliation. The Euclidean Poincare coordinate is related to the foliation by \(z(y,\rho)\) and \(\tau(y,\rho)\). As before, the weak measurement occurs at \(\tau=0\). The post-measurement state has an effective central charge denoted by \(c_{\mathrm{eff}}\). For \(\tau\neq 0\), the asymptotic region \(z\to 0\) is given by the unmeasured CFT with central charge \(c_{1}\). In the foliation coordinate, the measurement region at the boundary is reached by \(\lim_{y\to 0}\tau(y,\rho)=0\), and the unmeasured CFT region at the boundary is reached by \(\rho\to\pm\infty\) as shown in figure 20 (a). It is helpful to look at the pure AdS\({}_{3}\) metric: when \[e^{A(\rho)}=\cosh\rho, \tag{111}\] the metric is reduced to an AdS\({}_{3}\) metric. One can check this via a coordinate transformation similar to (6), \[z=\frac{y}{\cosh\rho},\qquad\tau=y\tanh\rho. \tag{112}\] The only difference is that \(x\) and \(\tau\) are exchanged in (6). From this coordinate transformation, because \(y\geq 0\), if the metric has time reversal symmetry, we can deduce that the metric is invariant under \(\rho\to-\rho\). Of course, this is a trivial example because there is no measurement in the pure AdS\({}_{3}\) metric. Nevertheless, we expect this property is general in the case of weak measurement. Namely, the imaginary time satisfies \(\tau(y,-\rho)=-\tau(y,\rho)\). We make the following assumptions for the warpfactor: 1. \(A(\rho)\) is an even function of \(\rho\), i.e., \(A(\rho)=A(-\rho)\). This is because of the time reversal invariance. 2. For \(\rho\to\pm\infty\), we have \(e^{A(\rho)}\sim\cosh\rho\), i.e., we recover AdS\({}_{3}\) away from the measurement in the unmeasured CFT region. 3. The minimal value of the warpfactor is located at \(\rho=\pm\rho^{*}\) with \(A^{*}=A(\rho^{*})\). We assume \(\rho^{*}\geq 0\) without loss of generality. At this minimal point, the first-order derivative vanishes \(A^{\prime}(\rho^{*})=0\). Notice that \(\rho^{*}=0\) is a special case of this. We consider entanglement entropy of a subsystem \(x\in[0,l]\). The corresponding RT surface is parametrized by \((y(x),\rho(x))\). Then the surface area of the RT surface can be rewritten as an action \(\mathcal{A}=\int\mathrm{d}x\mathcal{L}\) with the Lagrangian, \[\mathcal{L}=\sqrt{e^{2A(\rho)}\frac{\dot{y}^{2}+1}{y^{2}}+\dot{\rho}^{2}}, \quad\dot{y}=\frac{\mathrm{d}y}{\mathrm{d}x},\quad\dot{\rho}=\frac{\mathrm{d} \rho}{\mathrm{d}x}. \tag{113}\] The equation of motion is \[\begin{split}& e^{2A(\rho)}A^{\prime}(\rho)(1+\dot{y}^{2})^{2}-y[( 1+\dot{y}^{2})(\dot{\rho}\dot{y}+y(-2A^{\prime}(\rho)\dot{\rho}^{2}+\ddot{\rho }))-y\dot{\rho}\dot{y}\ddot{y}]=0,\\ & e^{2A(\rho)}[(1+\dot{y}^{2})(1+yA^{\prime}(\rho)\dot{\rho}\dot{ y})+y\ddot{y}]+y^{2}\dot{\rho}[2yA^{\prime}(\rho)\dot{\rho}^{2}\dot{y}-y\dot{y} \ddot{\rho}+\dot{\rho}(1-\dot{y}^{2}+y\ddot{y})]=0.\end{split} \tag{114}\] A solution to the first equation is given by \(\rho(x)=\rho^{*}\) because \(A^{\prime}(\rho^{*})=0\) and \(\dot{\rho}=0\). 
It means that the RT surface is located at the slice with the minimal warpfunction \(A^{*}=A(\rho^{*})\). Therefore, the equation for the geodesic is \[1+\dot{y}^{2}+y\ddot{y}=0, \tag{109}\] which gives a well-known solution \[y^{2}+(x-c_{1})^{2}=c_{2}^{2}, \tag{110}\] where \(c_{1,2}\) are integration constants. Therefore, the geodesic is also an arc in \(\rho=\rho^{*}\) slice. In this slice, the metric is \(AdS_{2}\) with radius \(e^{A^{*}}\). With the results before, the area of the RT surface (110) at the leading order of \(l\) is \[\mathcal{A}=2e^{A^{*}}\log l, \tag{111}\] which gives the effective central charge \(c_{\rm eff}=\frac{3e^{A^{*}}}{2G_{(3)}}\), and \(S_{l}=\frac{c_{\rm eff}}{3}\log l\). Notice that for \(\rho^{*}\neq 0\), the RT surface is not located at the time reversal invariant slice. There is also Figure 20: (a) An illustration of the foliation coordinate (108). Weak measurements occur at \(\tau=0\). For \(\tau\neq 0\), the asymptotic boundary is given by the unmeasured CFT \(\rho\to\pm\infty\). (b) The RT surface colored in orange connecting the two ends of the interval \(x\in(0,l)\) is located in the \(\rho=\pm\rho^{*}\) slice colored in blue for the weak measurement case. (c) The RT surface colored in orange connects two endpoints of \(x\in(0,l)\) for the spatial interface CFT at the \(\tau=0\) slice colored in blue. Note that the coordinate \(x\) and \(\tau\) are exchanged. a degenerate RT surface at \(\rho=-\rho^{*}\). This is illustrated in figure 20 (b). There is also a solution at the time reversal invariant slice because \(A^{\prime}(0)=0\) for the ever function \(A(\rho)\). This solution has the area given by \(2e^{A(0)}\log l>2e^{A^{*}}\log l\) because \(A(0)>A^{*}\). Now, let's consider a spatial interface CFT via a spacetime rotation \(\tau\leftrightarrow x\). The metric is \[\mathrm{d}s^{2}=\mathrm{d}\rho^{2}+e^{2A(\rho)}\frac{\mathrm{d}\tau^{2}+ \mathrm{d}y^{2}}{y^{2}}. \tag{112}\] The spatial coordinate is given by \(x=x(y,\rho)\), and it satisfies \(x(y,-\rho)=-x(y,\rho)\). The asymptotic boundary \(z\to 0\) for \(x\neq 0\) is given by two CFTs. These two boundaries are given by \(\rho\to\pm\infty\), respectively. The interface is located at \(x=0\). An illustration of this coordinate is shown in figure 20. We consider the entanglement entropy of an interval \(x\in(0,l)\). In this case, the RT surface is parametrized by \(y(\rho)\) at a fixed time slice \(\tau=0\). In the following, we use a similar approach in reference [53]. The area functional is \(\mathcal{A}=\int\mathrm{d}\rho\mathcal{L}\) with the Lagrangian \[\mathcal{L}=\sqrt{1+e^{2A}\frac{\dot{y}^{2}}{y^{2}}},\quad\dot{y}=\frac{ \mathrm{d}x}{\mathrm{d}\rho}. \tag{113}\] The Lagrangian leads to the geodesic equation \[\frac{\dot{y}}{y}=\frac{c_{s}e^{-A}}{\sqrt{e^{2A}-c_{s}^{2}}}, \tag{114}\] where \(0\leq c_{s}\leq e^{A^{*}}\) is an integration constant. In general, \(\rho\) runs from \(-\infty\) to \(\infty\), and \(c_{s}\) determines the end points of the interval \(l_{L}\) and \(l_{R}\). A spatial case is \(c_{s}=e^{A^{*}}\), the right-hand-side of (114) diverges at \(\rho=\rho^{*}\). The only possible solution is \(y=0\) at \(\rho^{*}\). Therefore, the end point of the interval is right on the interface \(l_{L}=0\), and for simplicity we denote \(l_{R}=l\). In this spatial case, the parameter runs in the range \(\rho\in(\rho_{0},\rho_{+})\) such that \(\rho_{0}\to\rho^{*}\), \(\rho_{+}\to\infty\). 19 An illustration of the RT surface is shown in figure 20 (c). 
With a solution of \(A(\rho)\), the RT surface satisfies Footnote 19: We do not directly take the limit because we need to regularize the area. \[\frac{\dot{y}}{y}=\frac{e^{A^{*}-A(\rho)}}{\sqrt{e^{2A(\rho)}-e^{2A^{*}}}} \tag{115}\] and its area reads \[\mathcal{A}=\int_{\rho_{0}}^{\rho_{+}}\mathrm{d}\rho\mathcal{L},\quad\mathcal{ L}=\frac{1}{\sqrt{1-e^{2(A^{*}-A(\rho))}}}. \tag{116}\] We are interested in the leading order of the area as a function of the interval length. To proceed, we can consider the variation of the area with respect to \(l\): \[\frac{\delta\mathcal{A}}{\delta l}=\mathcal{L}|_{\rho=\rho_{+}}\frac{\delta \rho_{+}}{\delta l}-\mathcal{L}|_{\rho=\rho_{0}}\frac{\delta\rho_{0}}{\delta l}. \tag{117}\] At \(\rho_{+}\to\infty\), \(e^{A}\approx\cosh\rho\), the metric asymptotes to an AdS\({}_{3}\). We impose a conventional regularization \(z=\varepsilon\) at \(x=l\). Then according to the coordinate transformation (6), \(\varepsilon\approx\frac{y}{\cosh\rho_{+}}\), \(l=y\tanh\rho_{+}\), and this leads to \[\rho_{+}=\log\frac{2l}{\varepsilon},\quad\frac{\delta\rho_{+}}{\delta l}=\frac{1 }{l}. \tag{110}\] At a finite \(\rho_{0}\to\rho^{*}\), the time component of the metric is \(\mathrm{d}s^{2}=e^{2A^{*}}\frac{\mathrm{d}\tau^{2}}{y^{2}}\). We again impose the conventional cutoff in an AdS\({}_{2}\) Poincare coordinate, i.e., \(\mathrm{d}s^{2}=\frac{\mathrm{d}\tau^{2}}{\tilde{z}^{2}}\) with \(\tilde{z}=\varepsilon\)[53]. This will lead to \(y(\rho_{0})\approx e^{A^{*}}\varepsilon\). Then we have \(y(\rho_{0})\approx e^{A^{*}}\varepsilon\) and \(y(\rho_{+})\approx l\) at the two ends. Integrating over the differential equation (109), \[\log\frac{l}{e^{A^{*}}\varepsilon}=\int_{\rho_{0}}^{\rho_{+}}\mathrm{d}\rho \frac{e^{A^{*}-A(\rho)}}{\sqrt{e^{2A(\rho)}-e^{2A^{*}}}}. \tag{111}\] Taking the derivative with respect to \(l\), we obtain \[\frac{1}{l}=-\frac{e^{A^{*}-A(\rho_{0})}}{\sqrt{e^{2A(\rho_{0})}-e^{2A^{*}}}} \frac{\delta\rho_{0}}{\delta l}. \tag{112}\] Consequently, the variation of \(\rho_{0}\) is \[\frac{\delta\rho_{0}}{\delta l}=-\frac{1}{l}\frac{\sqrt{e^{2A(\rho_{0})}-e^{2A ^{*}}}}{e^{A^{*}-A(\rho_{0})}}\approx-\frac{1}{l}\sqrt{e^{2A(\rho_{0})}-e^{2A^ {*}}}+\mathcal{O}(\rho_{0}-\rho^{*}). \tag{113}\] Finally, for the Lagrangian at \(\rho_{+}\to\infty\), because \(e^{A}=\cosh\rho_{+}\), \(\mathcal{L}|_{\rho=\rho_{+}}=1\). For Lagrangian at \(\rho_{0}\), we have \[\mathcal{L}_{\rho=\rho_{0}}=\frac{e^{A^{*}}}{\sqrt{e^{2A(\rho_{0})}-e^{2A^{*}} }}+\mathcal{O}(\rho_{0}-\rho^{*}). \tag{114}\] Now put everything together, the variation of the area with respect to \(l\) reads \[\frac{\delta\mathcal{A}}{\delta l}=(1+e^{A^{*}})\frac{1}{l}. \tag{115}\] Then by integration, we get the leading contribution \[\mathcal{A}=(1+e^{A^{*}})\log l, \tag{116}\] which gives the entanglement entropy \(S_{l}=\frac{c_{1}+c_{\mathrm{eff}}}{6}\log l\) with \(c_{\mathrm{eff}}=\frac{3e^{A^{*}}}{2G_{(3)}}\) and \(c_{1}=\frac{3}{2G_{(3)}}\) denoting the effective central charge induced by the defect and the central charge of the original CFT (corresponding to the unmeasured CFT in our case), respectively. A few remarks follow: * Comparing with the result of weak measurements in (109), we can see the weak measurement induced effective central charge is the same as the effective central charge from the interface. 
Clearly, the two endpoints of the interval in the measurement case are both located at \(\tau=0\), leading to a factor \(2\times e^{A^{*}}\), and in the case of spatial interface CFT, one of the end points is located at \(x=0\) while the other is located at \(x>0\), leading to a factor of \(1+e^{A^{*}}\). * While our calculation is valid for a general \(\rho^{*}\), we expect that the weak measurement will lead to a post-measurement metric with \(\rho^{*}=0\) and \(A^{\prime}(\rho)>0\) for \(\rho>0\). The intuition is that weak measurements gradually decrease the entanglement of the state. Then, according to reference [58], there is a c-theorem stating that \(c_{\rm eff}\leq c_{1}\). It means that the effective central charge induced by weak measurements is not greater than that of the unmeasured CFT. This is consistent with the calculation in the CFT side in reference [26]. ### Python's lunch In Sec. 4.5, we consider the bubble-inside-horizon phase and its geodesics. Although it has the same behavior of entanglement entropy as the bubble-outside-horizon phase, its geometry structure is different with the bubble-outside-horizon phase, and will lead to much larger complexity. The time reversal invariant slice of the bubble-inside-horizon phase is shown in figure 21 (a). We define \(\tilde{r}\) as the global coordinate that corresponds to the distance to center and increases monotonically. Then the relation between the local coordinate and the global coordinate is \[r=\begin{cases}\tilde{r}&\tilde{r}\leq r_{0},\\ 2r_{0}-\tilde{r}&r_{0}<\tilde{r}\leq 2r_{0}-r_{H},\\ \tilde{r}-2r_{0}+2r_{H}&2r_{0}-r_{H}<\tilde{r}.\end{cases} \tag{102}\] Here \(r\) is the local AdS coordinate in region I, and the local black hole coordinate in region II. Notice that \(r\) is also the metric component in the \(x\) direction, i.e., \(\sqrt{g_{xx}}=r\). The function \(r(\tilde{r})\) is monotonic in the bubble-outside-horizon phase, but is not monotonic in the bubble-inside-horizon phase. As shown in figure 21, there are two minimal points at the Figure 21: (a) The reversal invariance slice for the bubble-inside-horizon phase. The blue circle indicates the interface brane and the dashed circle indicates the horizon. (b) The metric component in the \(x\) direction as a function of a global coordinate \(\tilde{r}\). center \(r=0\) of the AdS spacetime and at the horizon \(r=r_{H}\) of the black hole spacetime, respectively, and a maximal point at the interface brane \(r=r_{0}\). This realizes a Python's Lunch geometry. The details of the definition are given in appendix D. The simple example of Python's lunch geometry is shown in figure 34, with a "min-maxmin" structure. For the complexity of this tensor network, beside the contribution from total number of tensors, the post-selection will play an important role and lead to exponentially large complexity. An intuitive understanding is that, for the tensor network from maximum to minimum, we need to apply post-selection to decrease the number of legs to realize the final state. In this process, we must select one state from the whole Hilbert space, with exponentially small probability. Crudely, to have one successful outcome, we must repeat this experiment for exponential times, which leads to exponentially large complexity. Similarly, for the bubble-inside-horizon phase, the region from maximum to minimum between \(r=r_{0}\) and \(r=r_{H}\) corresponds to the post-selection part and leads to exponentially big complexity. 
While for the bubble-outside-horizon phase, \(r(\tilde{r})\) is a monotonic function so that no post-selection is needed. Therefore, we conclude that, two irrelevant phases with and without a horizon will have completely different behaviors in terms of their complexity. Especially, the bubble-inside-horizon phase has the geometry of Python's lunch. To summarize, in this paper, we consider the holographic dual of weak measurements in conformal field theory. Because of the logarithmic scaling entanglement entropy (characterized by a distinct effective central charge) supported by post-measurement states from marginal measurements, the holographic dual involves interface branes separating spacetimes dual to the post-measurement state and the unmeasured CFT, respectively, generalizing the holographic dual of the conformal boundary state. We also establish the correspondence between the weak measurement and the spatial interface. In a finite system, while the irrelevant measurements will not change the entanglement scaling for the post-measurement state, it may create a Python's lunch. We conclude this paper by mentioning a few open questions we would like to explore in the future: (1) From an information perspective, the weak measurements result in several interesting scenarios for the post-measurement holographic dual. For the irrelevant case, the weak measurement can create a Python's lunch, greatly increasing the complexity for bulk reconstructions. How is the reconstruction map related to the measurement operators? For the marginal case, while the AdS spacetime dual to the unmeasured CFT is replaced by a new different AdS metric, is the bulk information erased by the measurements? (2) While we briefly discuss the thick-brane description, where the bulk solution is continuous, of the weak measurements and find that it is consistent with the thin-brane description, it is worthwhile to explore in more detail about the universal features of weak measurements using the general thick brane construction. Moreover, a top-down construction of weak measurements is also of great interest because we can get a handle from both sides of the theory. A classic example is the so-called Janus solution [64, 65]. While such a solution is not directly related to weak measurements via a spacetime rotation, it would be interesting to explore other deformations or scenarios that can be related to measurements. ###### Acknowledgements. We thank Zhuo-Yu Xian for fruitful discussions. We are grateful to Sanjit Shashi for helpful communication regarding the thick brane construction. XS acknowledges the support from Tsinghua Visiting Doctoral Students Foundation. XS is also supported by the Lavin-Bernick Grant during the visit to Tulane university. The work of SKJ is supported by a start-up fund at Tulane university. Geodesic with spatial point defect In this section, we will use the method in reference [56] to derive the bipartite entanglement entropy for finite or infinite systems. With AdS/CFT duality, we just need to calculate the length of geodesic in different AdS geometry. We first introduce a few coordinates which may be useful later. 
With (4), we define \[\begin{split} X^{0}=& L\frac{r_{b}}{r_{H}}\cosh \frac{r_{H}}{L}\theta,\qquad X^{1}=L\sqrt{\frac{r_{b}^{2}}{r_{H}^{2}}-1}\cos \frac{r_{H}\tau_{b}}{L^{2}},\\ X^{2}=& L\frac{r_{b}}{r_{H}}\sinh\frac{r_{H}}{L} \theta,\qquad X^{3}=L\sqrt{\frac{r_{b}^{2}}{r_{H}^{2}}-1}\sin\frac{r_{H}\tau_{ b}}{L^{2}},\end{split} \tag{103}\] with corresponding metric \[\mathrm{d}s^{2}=\left(\frac{r_{b}^{2}-r_{H}^{2}}{L^{2}}\right)\mathrm{d}\tau_{ b}^{2}+\left(\frac{r_{b}^{2}-r_{H}^{2}}{L^{2}}\right)^{-1}\mathrm{d}r_{b}^{2}+r_{ b}^{2}\mathrm{d}\theta^{2}. \tag{104}\] The parametrization of (103) requires \(\tau_{b}\sim\tau_{b}+\frac{2\pi L^{2}}{r_{H}}\), which means there is a periodic boundary condition on the \(\tau_{b}\) direction. Now we want to solve the junction condition (3a) and (3b) with coordinate (5a). Consider a brane located on \(\rho_{i}=\rho_{i}^{*}\) where \(i=1,2\). The first condition gives (8a), which leads to \(y_{1}=y_{2},\ \tau_{1}=\tau_{2}\) and \(L_{1}\cosh\left(\frac{\rho_{1}^{*}}{L_{1}}\right)=L_{2}\cosh\left(\frac{\rho_{ 2}^{*}}{L_{2}}\right)\). The second condition can be derived below. The extrinsic curvature can be expressed as \(K_{ab}=\frac{1}{2}n_{i}g^{ij}\partial_{j}g_{ab}\) where \(g_{ij}\) is metric in the original space and \(g_{ab}\) is metric on the hypersurface. We define \(\vec{n}=(n_{\rho}=1,0,0)\) with metric (5b), then \(K_{ab}=\frac{1}{2}g^{\rho\rho}\partial_{\rho}g_{ab}=\frac{1}{2}\partial_{\rho }g_{ab}=\frac{1}{2}\partial_{\rho}(\frac{L^{2}}{y^{2}}\cosh^{2}\frac{\rho}{L} )=\frac{1}{L}\tanh\frac{\rho}{L}g_{ab}\). Because for two regions 1 and 2 the directions of \(\vec{\rho}\) are opposite and here \(g_{ab}\equiv h_{ab}\), then (3b) leads to (8b). Solving these two conditions, we have (16). Finally, we can apply RT formula (17) to calculate the entanglement entropy with the length of different geodesics, and we will consider different cases in the following. In reference [56], the authors give a general geodesic solution for any \(\sigma_{1,2}\) with two regions \(1,2\) in AdS space. 
Assuming the interface brane is located at \(\chi_{1,2}=\psi_{1,2}\) and two end points are located on \(x_{1}=-\sigma_{1}\) and \(x_{2}=\sigma_{2}\), the length of the geodesic is \[d(\sigma_{1},\sigma_{2})=L_{1}\log\left[\frac{2r}{\varepsilon_{1}}\tan\left( \frac{\varphi}{2}\right)\right]+L_{2}\log\left[\frac{2R}{\varepsilon_{2}}\tan \left(\frac{\theta}{2}\right)\right], \tag{105}\] where \[\begin{split}& r=\frac{1}{2}\csc\left(\frac{\varphi}{2}\right) \sec\left(\frac{\psi_{1}+\psi_{2}}{2}\right)\left[\sigma_{2}\cos\left(\frac{ \theta}{2}\right)-\sigma_{1}\cos\left(\frac{\theta}{2}+\varphi\right)\right], \\ & R=\frac{1}{2}\csc\left(\frac{\theta}{2}\right)\sec\left(\frac{ \psi_{1}+\psi_{2}}{2}\right)\left[\sigma_{1}\cos\left(\frac{\varphi}{2} \right)-\sigma_{2}\cos\left(\frac{\varphi}{2}+\theta\right)\right],\end{split} \tag{106}\] \(\varepsilon_{1,2}\) are UV-cutoff, \(\varphi=\pi+\psi_{1}+\psi_{2}-\theta\) and \(\theta\) is expressed as \[\cos\theta=\frac{\cos\left(\frac{\psi_{1}-\psi_{2}}{2}\right)}{ \sigma_{1}^{2}+\sigma_{2}^{2}+2\sigma_{1}\sigma_{2}\cos\left(\psi_{1}+\psi_{2} \right)}\left\{-\sigma_{2}^{2}\cos\left(\frac{\psi_{1}-\psi_{2}}{2}\right)+ \sigma_{1}^{2}\cos\left(\frac{\psi_{1}+3\psi_{2}}{2}\right)\right.\] \[+2\sigma_{1}\sigma_{2}\sin\psi_{2}\sin\left(\frac{\psi_{1}+\psi_{ 2}}{2}\right)-\left[\sigma_{1}\sin\left(\frac{\psi_{1}+3\psi_{2}}{2}\right)- \sigma_{2}\sin\left(\frac{\psi_{1}-\psi_{2}}{2}\right)\right]\times \tag{100}\] \[\left.\sqrt{\left[\frac{(\sigma_{1}+\sigma_{2})^{2}-(\sigma_{1}- \sigma_{2})^{2}\cos(\psi_{1}-\psi_{2})+4\sigma_{1}\sigma_{2}\cos\left(\psi_{1 }+\psi_{2}\right)}{2\cos^{2}\frac{\psi_{1}-\psi_{2}}{2}}\right]}\right\}.\] Generally, the geodesic equation of equal time slice is \[\left(x-\frac{\sigma_{1}+\sigma_{2}}{2}\right)^{2}+z^{2}=\left(\frac{\sigma_{ 1}-\sigma_{2}}{2}\right)^{2}, \tag{101}\] and the length of the geodesic is (18), which corresponds to two end points \((\tau,x,z)\) and \((\tau^{\prime},x^{\prime},z^{\prime})\) in (101). ### Defect/CFT geodesic in an infinite system #### a.1.1 General solution of defect/CFT geodesic in an infinite system The first case is a subsystem with two end points on the defect and the boundary, respectively, as shown in figure 9. Here, we label the AdS dual of two half-plane CFT as region 1, and the dual of the defect as region 2. To show the geodesic explicitly, we extend the width of the defect, which is approximately zero in CFT. Now we want to calculate the entanglement entropy of the subsystem using the RT formula. The orange curve constituted by two arcs is a smooth geodesic which connects two end points. With (100), we can take the limit \(\sigma_{2}\to 0\) and \(\sigma_{1}=l\). Then (100) can be simplified as \(\cos\theta=-\cos\left(\psi_{2}-\psi_{1}\right)\) for \(\psi_{2}>\psi_{1}\), otherwise \(\cos\theta=-1\). Actually, \(\cos\theta=-1\) with \(\psi_{2}<\psi_{1}\) is not relevant because for \(\theta=\pi\), we have \(r=0\) in (101), which means the whole geodesic is in region 1. With (16), we have \[\sin\psi_{2}-\sin\psi_{1}=\frac{L_{2}}{2T}\left(T^{2}+\frac{1}{L_{2}^{2}}- \frac{1}{L_{1}^{2}}\right)-\frac{L_{1}}{2T}\left(T^{2}+\frac{1}{L_{1}^{2}}- \frac{1}{L_{2}^{2}}\right)=\frac{(\frac{1}{L_{1}}+\frac{1}{L_{2}})^{2}-T^{2}} {2T}(L_{1}-L_{2}). \tag{102}\] Because \(T<T_{\text{max}}=\frac{1}{L_{1}}+\frac{1}{L_{2}}\), \(\psi_{2}<\psi_{1}\) corresponds to \(L_{1}<L_{2}\), which means \(c_{1}<c_{2}\). 
Therefore, the induced effective central charge of entanglement entropy by defect cannot be larger than the original CFT without defect, which is consistent with the results we get in the main text. Now let's focus on the nontrivial case with \(\psi_{2}>\psi_{1}\) and \(L_{2}<L_{1}\). With \(\theta=\pi-\psi_{2}+\psi_{1},\ \varphi=2\psi_{2}\), (compared with reference we exchange label 1 and 2), we have \[R=\frac{\cos\psi_{1}\sigma_{1}}{\cos\psi_{1}+\cos\psi_{2}},\quad r=\frac{\sin \left(\psi_{2}-\psi_{1}\right)\sigma_{1}}{2\sin\psi_{2}(\cos\psi_{1}+\cos\psi _{2})}. \tag{103}\] So the geodesic length with the same UV-cutoff for two regions is \[d_{0,\sigma_{1}}=(L_{1}+L_{2})\log\frac{\sigma_{1}}{\varepsilon}+d_{\text{sub}}, \tag{114}\] where \(d_{\text{sub}}\) corresponding to the sub-leading term of entanglement entropy induced by the brane in AdS space. \[\begin{split} d_{\text{sub}}=& L_{2}\log\left(\frac{ \sin\left(\psi_{2}-\psi_{1}\right)}{\cos\psi_{2}(\cos\psi_{1}+\cos\psi_{2})} \right)+L_{1}\log\left(\frac{2\cos\psi_{1}}{\tan\frac{\psi_{2}-\psi_{1}}{2}( \cos\psi_{1}+\cos\psi_{2})}\right)\\ =& L_{2}\log\left(\frac{\sin\frac{\psi_{2}-\psi_{1}} {2}}{\cos\psi_{2}\cos\frac{\psi_{1}+\psi_{2}}{2}}\right)+L_{1}\log\left(\frac {\cos\psi_{1}}{\sin\frac{\psi_{2}-\psi_{1}}{2}\cos\frac{\psi_{1}+\psi_{2}}{2} }\right).\end{split} \tag{115}\] We can find that the results of (114) and (115) are consistent with the results of (28) and (29). Therefore, with (17) we have \[S_{0,\sigma_{1}}=\frac{c_{1}+c_{2}}{6}\log\frac{l}{\varepsilon}+S_{\text{sub}}, \tag{116}\] where \(S_{\text{sub}}=\frac{d_{\text{sub}}}{4G_{(3)}}\). Here, the prefactor of the log-term is \(\frac{c_{1}+c_{2}}{6}\) because the locations of each end points are in region 1 and 2, respectively. #### a.1.2 Large size limit for the original region In the following, we consider the limit case \(\sigma_{1}\rightarrow\infty\) with fixed finite \(\sigma_{2}\) with figure 22. We denote the end points of the geodesic to be Q in region 1 and P in region 2, respectively. In the limit \(\sigma_{1}\rightarrow\infty\), the figure does not show Q explicitly. To have an intuition about the limit, we can calculate the limit of (113). \[\cos\theta=\begin{cases}\cos\left(\psi_{1}+\psi_{2}\right)+\frac{4 \sigma_{2}\cos\psi_{1}(\cos\psi_{2}-\cos\psi_{1})\cos^{2}\frac{\psi_{1}+\psi_{ 2}}{2}}{-1+\cos\left(\psi_{1}-\psi_{2}\right)}\sigma_{1}^{-1},&\psi_{1}>\psi_ {2},\\ \cos\left(2\psi_{2}\right)+4\sigma_{2}\cos^{2}\psi_{2}\cos\frac{\psi_{1}+\psi_ {2}}{2}\csc\frac{\psi_{1}-\psi_{2}}{2}\sin\psi_{2}\sigma_{1}^{-1},&\psi_{1}< \psi_{2}.\end{cases} \tag{117}\] Figure 22: AdS dual of CFT with interface defect for infinite system and the defect/CFT geodesic. Therefore, the relation between \(\psi_{1}\) and \(\psi_{2}\) plays an important role for the limit case. In the following, we will discuss this property in detail. We first consider \(\psi_{2}<\psi_{1}\) case shown in figure 22 (a). The key is that, for \(\sigma_{1}\to\infty\) we have SA // CFT\({}_{\rm left}\), which can be proven that it only exists for \(\psi_{2}<\psi_{1}\) as follows. For \(\triangle\)ASO, because \(\sigma_{2}>0\), we have \(\angle\)SOA \(=\frac{\pi}{2}-\psi_{2}>\angle\)SPA \(=\frac{\pi-(\psi_{1}+\psi_{2})}{2}\), which requires \(\psi_{2}<\psi_{1}\). Then we can the geodesic. 
With the law of sines, because \(\angle\)ASO \(=\pi-(\psi_{1}+\psi_{2})-(\frac{\pi}{2}-\psi_{2})=\frac{\pi}{2}-\psi_{1}\), we have \[\frac{R-\sigma_{2}}{\sin\left(\frac{\pi}{2}-\psi_{1}\right)}=\frac{R}{\sin \left(\frac{\pi}{2}-\psi_{2}\right)}. \tag{111}\] The solution is \(R=\frac{\cos\psi_{2}\sigma_{2}}{\cos\psi_{2}-\cos\psi_{1}}\). It satisfies \(R>\sigma_{2}\) because only one of \(\psi_{1}\) and \(\psi_{2}\) can be negative, which means \(\cos\psi_{2}-\cos\psi_{1}>0\) with \(\sin\psi_{1}+\sin\psi_{2}>0\). Then we can get \(|\)SO\(|\) in \(\triangle\)SAO. With the law of sines, we have \(|\)SO\(|\)\(=\frac{R\sin\left(\psi_{1}+\psi_{2}\right)}{\cos\psi_{2}}\), which is a constant for fixed \(\sigma_{2}\). Therefore, for the limit case \(\sigma_{1}\to\infty\), the geodesic length in region 2 is a constant and contributes a sub-leading term. Now we can calculate the entanglement entropy for \(\psi_{2}<\psi_{1}\). For the left part of the geodesic QS with end points Q and S we have Q \(=(-\sigma_{1},\varepsilon)\) and S \(=(|\)OS\(|\)\(\sin\psi_{1},|\)OS\(|\)\(\cos\psi_{1}\)\()\), which gives \[d_{\rm QS}=L_{1}\cosh^{-1}\frac{(|\)OS\(|\)\(\sin\psi_{1}+\sigma_{1})^{2}+(|\)OS\(|\)\(\cos\psi_{1})^{2}+\varepsilon^{2}}{2|\)OS\(|\)\(\cos\psi_{1}\varepsilon\)}.\] (112a) Besides, in the large \[\sigma_{1}\] limit we keep the leading term \[\sigma_{1}^{2}\] in the numerator, which gives \[d_{\rm QS}\approx L_{1}\log\frac{\sigma_{1}^{2}}{\varepsilon[\sigma_{2}\sin \left(\psi_{1}+\psi_{2}\right)\cos\psi_{1}/(\cos\psi_{2}-\cos\psi_{2})]}=L_{1} \log\frac{\sigma_{1}^{2}/\sigma_{2}}{\varepsilon}+d_{\rm QS,sub},\] (112b) where \[d_{\rm QS,sub}=L_{1}\log\frac{\cos\psi_{2}-\cos\psi_{2}}{\sin\left(\psi_{1}+ \psi_{2}\right)\cos\psi_{1}}\]. For the right part of the geodesic SP we have \[{\rm S}=(-|\)OS\(|\)\(\sin\psi_{2},|\)OS\(|\)\(\cos\psi_{2})\()\] and P \[=(\sigma_{2},\varepsilon)\], which gives \[d_{\rm SP}=L_{2}\cosh^{-1}\frac{(|\)OS\(|\)\(\sin\psi_{2}+\sigma_{2})^{2}+(|\)OS\(|\)\(\cos\psi_{2})^{2}+\varepsilon^{2}}{2|\)OS\(|\)\(\cos\psi_{2}\)\(\varepsilon\)\(\approx L_{2}\log\frac{\sigma_{2}}{\varepsilon}+d_{\rm SP,sub}\). (112c) where \(d_{\rm SP,sub}=L_{2}\log\frac{\sin^{2}\left(\psi_{1}+\psi_{2}\right)+(\cos\psi_ {2}-\cos\psi_{1})^{2}+2\sin\psi_{2}\sin\left(\psi_{1}+\psi_{2}\right)(\cos\psi _{2}-\cos\psi_{1})}{\cos\psi_{2}\sin\left(\psi_{1}+\psi_{2}\right)(\cos\psi_{ 2}-\cos\psi_{1})}\). Therefore, the total length of the geodesic reads \[d=L_{1}\log\frac{\sigma_{1}^{2}/\sigma_{2}}{\varepsilon}+L_{2}\log\frac{\sigma _{2}}{\varepsilon}+d_{\rm sub}, \tag{113}\] where \(d_{\rm sub}=d_{\rm SP,sub}+d_{\rm QS,sub}\). Therefore, for \(\sigma_{1}\gg 1\) with a fixed \(\sigma_{2}\), the prefactor of \(\log\sigma_{1}\) is \(2L_{1}\), which leads to the prefactor \(\frac{c_{1}}{3}\) for the entanglement entropy. Now we consider \(\psi_{2}>\psi_{1}\) case shown in figure 22 (b). For \(\sigma_{1}\to\infty\) we expect \(|\)OS\(|\)\(\to\infty\), which means \(|\)AO\(|\)\(\approx\)\(|\)AP\(|\). Then \(\triangle\)SAO is an isosceles triangle with \(\angle\)ASP \(=\angle\)APS \(=\frac{\pi}{2}-\psi_{2}\). To solve the geodesic, let's look at \(\triangle\)SBO. With the law of sines, we have \[\frac{\sigma_{1}-r}{\sin\left(\frac{\pi}{2}-\psi_{2}\right)}=\frac{r}{\sin \left(\frac{\pi}{2}+\psi_{1}\right)}=\frac{|\)SO\(|\)\(\sin\left(\psi_{2}-\psi_{1}\right)}{\sin\left(\psi_{2}-\psi_{1} \right)}. 
\tag{114}\] he solutions are \(r=\frac{\cos\psi_{1}\sigma_{1}}{\cos\psi_{2}+\cos\psi_{1}}\rightarrow\infty\) and \(|\text{SO}|=\frac{\sin\left(\psi_{2}-\psi_{1}\right)\sigma_{1}}{\cos\psi_{2}+ \cos\psi_{1}}\). With the results above, we just need to plug new \(|\text{OS}|\) into (166a) and (166c). Finally, the total geodesic length is \[d=L_{1}\log\frac{\sigma_{1}}{\varepsilon}+L_{2}\log\frac{\sigma_{1}}{ \varepsilon}+d_{\text{sub}}, \tag{167}\] where \[d_{\text{sub}}=L_{1}\log\frac{A^{2}+1+2A\sin\psi_{1}}{A\cos\psi_{1}}+L_{2}\log \frac{A}{\cos\psi_{2}} \tag{168}\] and \(A=\frac{\sin\left(\psi_{2}-\psi_{1}\right)}{\cos\psi_{2}+\cos\psi_{1}}\). Therefore, in this case, the prefactor of \(\log\sigma_{1}\) is \(L_{1}+L_{2}\), which leads to the prefactor \(\frac{c_{1}+c_{2}}{6}\) for the entanglement entropy. Here is an additional remark about the results above. For two cases \(L_{1}<L_{2}\) and \(L_{1}>L_{2}\), the prefactor of \(\log\sigma_{1}\) are different at the large \(\sigma_{1}\) limit. The prefactor of log-term in entanglement entropy is always the smaller one: for \(c_{1}<c_{2}\) we have \(\frac{c_{1}}{6}+\frac{c_{1}}{6}\), but for \(c_{1}>c_{2}\) we have \(\frac{c_{1}}{6}+\frac{c_{2}}{6}\). In summary, the leading term of the entanglement entropy at \(\sigma_{1}\rightarrow\infty\) reads \[S_{-\sigma_{1},\sigma_{2}}=\left(\frac{c_{1}}{6}+\frac{1}{6}\min(c_{1},c_{2}) \right)\log\sigma_{1}. \tag{169}\] This property also exists in more complicated cases, which will be considered later. ### CFT/CFT geodesic in an infinite system #### a.2.1 General case for CFT/CFT geodesic in an infinite system For a 1d quantum system with a defect located at \(x=0\), we can choose a subsystem with two end points located on different sides. In the following, we will construct the geodesic with several arcs and use (2.18) to calculate its length. The diagram is shown in figure 23. Here we denote the location of two end points (\(x=-\sigma_{l},z=\varepsilon,\tau=0\)) and (\(x^{\prime}=\sigma_{r},z^{\prime}=\varepsilon,\tau^{\prime}=0\)), and the radius of three arcs are \(R_{l},\ R,\ R_{r}\). In the following, we Figure 23: AdS dual of CFT with interface defect for infinite system and the geodesic. (a) General CFT/CFT geodesic for \(\psi_{2}>\psi_{1}\). (b) Geodesic for limit case \(\sigma_{l}=0\) with \(\psi_{2}<\psi_{1}\). will start with \(R_{r}\) and \(\sigma_{r}\), and finally express \(\sigma_{l}\) with \(R_{r}\) and \(\sigma_{r}\). Then we can express the geodesic length only with \(\sigma_{l}\) and \(\sigma_{r}\). We will calculate the geodesic in several steps. (i) Assuming \(|\text{OD}|=\alpha\), and under the constraint \(R_{r}>\frac{\sigma_{r}}{2}\), we have \[\begin{split}|\text{DF}|^{2}=[(\sigma_{r}-R_{r})\cos{(\psi_{1}+ \psi_{2})}-\alpha\sin{\psi_{2}}]^{2}+[(\sigma_{r}-R_{r})\sin{(\psi_{1}+\psi_{2} )}+\alpha\cos{\psi_{2}}]^{2}=R_{r}^{2},\\ \alpha=(R_{r}-\sigma_{r})\sin{\psi_{1}}+\sqrt{R_{r}^{2}-(R_{r}- \sigma_{r})^{2}\cos^{2}{\psi_{1}}}.\end{split} \tag{115a}\] (ii) Solving the equation of the line FD, we have the point \(\text{G}=(x_{0},0)\) with \[x_{0}=\alpha\sin{\psi_{2}}+\frac{(\sigma_{r}-R_{r})\cos{(\psi_{1}+\psi_{2})}- \alpha\sin{\psi_{2}}}{-(\sigma_{r}-R_{r})\sin{(\psi_{1}+\psi_{2})}-\alpha\cos {\psi_{2}}}(-\alpha\cos{\psi_{2}}). 
\tag{115b}\] (iii) We define \(|\text{DG}|=\beta\) and \((\cdot)_{x,y}\) is \(x,y\)-coordinate of the point \((\cdot)\), then \[\beta=R_{r}\frac{\text{D}_{y}-\text{G}_{y}}{\text{D}_{y}-\text{F}_{y}}=\frac{ \alpha\cos{\psi_{2}}}{\alpha\cos{\psi_{2}}-[-(\sigma_{r}-R_{r})\sin{(\psi_{1} +\psi_{2})}]}R_{r}. \tag{115c}\] (iv) With \(|\text{OC}|=\gamma\), then \(|\text{CG}|=|\text{DG}|\) with the constraint \(\beta>x_{0}\) gives \[\gamma=\sqrt{\beta^{2}-x_{0}^{2}\cos^{2}{\psi_{2}}}-x_{0}\sin{\psi_{2}}. \tag{115d}\] (v) With E located at \((x_{1},\ y_{1}=\tan{(\psi_{1}+\psi_{2})}x_{1})\), it is also on the line CG, which requires \[x_{1}=\frac{\gamma\cos{\psi_{2}}}{\gamma\cos{\psi_{2}}+(\gamma\sin{\psi_{2}}+x _{0})\tan{(\psi_{1}+\psi_{2})}}x_{0}. \tag{115e}\] (vi) Finally we have \(|\text{CE}|=|\text{AE}|\), which gives \[\sigma_{l}=-\frac{x_{1}}{\cos{(\psi_{1}+\psi_{2})}}+\sqrt{\gamma^{2}+\frac{x _{1}}{\cos{(\psi_{1}+\psi_{2})}}\left(\frac{x_{1}}{\cos{(\psi_{1}+\psi_{2})}}- 2\gamma\sin{\psi_{1}}\right)}, \tag{115f}\] where we use the constraint \(\sigma_{l}>0\). For a fixed \(\sigma_{r}\), and any given \(R_{r}\), we can express all variables above, including \(\sigma_{l}\). Then the length of the geodesic can be considered as below. For AC, \(\text{A}=(-\sigma_{l},\ \varepsilon)\) and \(\text{C}=(\gamma\sin{\psi_{1}},\ \gamma\cos{\psi_{1}})\), so the length is \[d_{\text{AC}}=L_{1}\cosh^{-1}{\left(\frac{\sigma_{l}^{2}+\gamma^{2}+2\sigma_{ l}\gamma\sin{\psi_{1}}}{2\varepsilon\gamma\cos{\psi_{1}}}\right)}\approx L_{1} \log{\left(\frac{\sigma_{l}^{2}+\gamma^{2}+2\sigma_{l}\gamma\sin{\psi_{1}}}{ \varepsilon\gamma\cos{\psi_{1}}}\right)}.\] (115a) Similarly, for CD, \[\text{C}=(-\gamma\sin{\psi_{2}},\ \gamma\cos{\psi_{2}})\] and \[\text{D}=(\alpha\sin{\psi_{2}},\ \alpha\cos{\psi_{2}})\], so the length is \[d_{\text{CD}}=L_{2}\cosh^{-1}{\left(\frac{(\alpha+\gamma)^{2}-2\alpha\gamma\cos ^{2}{\psi_{2}}}{2\alpha\gamma\cos^{2}{\psi_{2}}}\right)}. \tag{115b}\] For DB, \(\text{D}=(-\alpha\sin{\psi_{1}},\ \alpha\cos{\psi_{1}})\) and \(\text{B}=(\sigma_{r},\ \varepsilon)\), so the length is \[d_{\text{DB}}=L_{1}\cosh^{-1}{\left(\frac{\sigma_{r}^{2}+\alpha^{2}+2\sigma_{r} \alpha\sin{\psi_{1}}}{2\varepsilon\alpha\cos{\psi_{1}}}\right)}\approx L_{1} \log{\left(\frac{\sigma_{r}^{2}+\alpha^{2}+2\sigma_{r}\alpha\sin{\psi_{1}}}{ \varepsilon\alpha\cos{\psi_{1}}}\right)}. \tag{115c}\] Therefore, the total length of the geodesic gives the entanglement entropy \[S_{R_{r}(\sigma_{l}),\sigma_{r}}=\frac{d_{\rm AC}+d_{\rm CD}+d_{\rm DB }}{4G_{(3)}}\] \[=\frac{c_{1}}{6}\log\bigg{(}\frac{\sigma_{l}^{2}+\gamma^{2}+2 \sigma_{l}\gamma\sin\psi_{1}}{\varepsilon\gamma\cos\psi_{1}}\frac{\sigma_{r}^{2 }+\alpha^{2}+2\sigma_{r}\alpha\sin\psi_{1}}{\varepsilon\alpha\cos\psi_{1}} \bigg{)}+\frac{c_{2}}{6}\cosh^{-1}\bigg{(}\frac{(\alpha+\gamma)^{2}-2\alpha \gamma\cos^{2}\psi_{2}}{2\alpha\gamma\cos^{2}\psi_{2}}\bigg{)}. \tag{111}\] From the result above, it seems that the prefactor of log-term is \(\frac{c_{1}}{3}\), and the contribution of region 2 is only a constant. Later we will show that it is not correct, and for different parameters there are also two cases. #### a.2.2 Numerical results of the general case The equations above cannot be solved analytically with general variables \(\sigma_{l}\) and \(\sigma_{r}\), and only numerical calculation is available. Here we show some numerical results. We first consider the marginal case \(L_{1}>L_{2}\) and \(\psi_{1}<\psi_{2}\). 
With (110) and (111), we consider \(L_{1}=2,\ L_{2}=1,\ T=1,\ \varepsilon=1\) and \(\sigma_{r}=10\). Numerically, we can tune \(R_{r}\in[5,20]\), and calculate the corresponding \(\sigma_{l}\) and the total geodesic length. In the calculation, we will find that there is a range \(R_{r}\in(R_{r}^{\rm min},R_{r}^{\rm max})\) for the solution to exist, and \(R_{r}^{\rm min}>\frac{\sigma_{r}}{2}\). For \(R_{r}=R_{r}^{\rm min}\), we have \(\sigma_{l}=0\). The geodesic length is shown in figure 24 (a). For \(L_{1}<L_{2}\) and \(\psi_{1}>\psi_{2}\), the geodesic length is shown in figure 24 (b). With (110) and (111), we consider \(L_{1}=1,\ L_{2}=2,\ T=1,\ \varepsilon=1\) and \(\sigma_{r}=10\). In the numerical calculation, there is no \(R_{r}^{\rm max}\) and \(R_{r}^{\rm min}=\frac{\sigma_{r}}{2}\). It is unusual that \(R_{r}^{\rm min}>\frac{\sigma_{r}}{2}\), which means in figure 23 (a) although there is a finite length of \(|{\rm DO}|\), the length of \(|{\rm AO}|=0\). In the following, we will first prove the existence of \(R_{r}^{\rm min}>\frac{\sigma_{r}}{2}\) for \(L_{1}>L_{2}\) case. Actually, there are two possibilities. One is \(|{\rm CE}|=|{\rm OE}|\neq 0\), and the other is \(|{\rm OC}|=0\). We can prove that only the second one is possible. If \(|{\rm CE}|=|{\rm OE}|\neq 0\), then \(\angle{\rm ECO}=\angle{\rm ECO}\), so \(\angle{\rm CGO}=(\frac{\pi}{2}-\psi_{2})-(\frac{\pi}{2}-\psi_{1})=\psi_{1}- \psi_{2}<0\), which means there will be no crossing point G on the positive x-axis and no solution for \({\rm F}\). For the second case we have \(|\text{DG}|=|\text{OG}|\), \(\angle\text{DGO}=\pi-2(\frac{\pi}{2}-\psi_{2})=2\psi_{2}\) and \(\angle\text{GOF}=\psi_{1}+\psi_{2}<\angle\text{DGO}\), so there always exists the crossing point F, which means the existence of \(R_{r}^{\text{min}}>\frac{\sigma_{r}}{2}\). While, for \(L_{1}<L_{2}\) and \(\psi_{1}>\psi_{2}\), we have \(\angle\text{GOF}=\psi_{1}+\psi_{2}>\angle\text{DGO}\), so there is no solution for crossing point and no \(R_{r}^{\text{min}}>\frac{\sigma_{r}}{2}\). It can also be shown as follows with figure 23 (b). With one fixed end point at the defect, we can prove that DG // \(\text{CFT}_{\text{right}}\) for another side, which means for finite \(\sigma_{r}\) we cannot have \(\sigma_{l}=0\). (i) Assuming \(|\text{CA}|=\alpha\), with \(\angle\text{CGA}=(\frac{\pi}{2}-\psi_{2})-\angle\text{GCA}=(\frac{\pi}{2}- \psi_{2})-\angle\text{EAC}=\psi_{1}-\psi_{2}\), we have the equation of line CG that \(\frac{y-\alpha\cos\psi_{2}}{x+\alpha\sin\psi_{2}}=-\tan{(\psi_{1}-\psi_{2})}\). (ii) Setting \(y=0\) we have \(x_{\text{G}}=x_{0}=\frac{\alpha\cos\psi_{2}}{\tan\psi_{1}-\psi_{2}}-\alpha \sin\psi_{2}\). (iii) Defining \(|\text{AD}|=\beta\), with \(|\text{CG}|=|\text{DG}|\) we have the equation \(\beta^{2}-2\beta x_{0}\sin\psi_{2}=\alpha^{2}+2\alpha x_{0}\sin\psi_{2}\) and the solution \(\alpha=\beta-2x_{0}\sin\psi_{2}\). (iv) Then we can calculate the slope of the line DG, which gives \[\tan\angle\text{DGA}=\frac{\beta\cos\psi_{2}}{x_{0}-\beta\sin\psi_{2}}=\tan{( \psi_{1}+\psi_{2})}. \tag{103}\] Because the slope of right CFT is \(\tan{(\psi_{1}+\psi_{2})}\), it means \(\text{CFT}_{\text{left}}\) // DG. To summarize, we have shown that for \(L_{1}>L_{2}\), \(R_{r}^{\text{min}}>\frac{\sigma_{r}}{2}\), while for \(L_{1}<L_{2}\), \(R_{r}^{\text{min}}=\frac{\sigma_{r}}{2}\). Besides, we can calculate \(R_{r}^{\text{min}}\) analytically for \(L_{1}>L_{2}\). For \(R_{r}=R_{r}^{\text{min}}\), we have \(|\text{DG}|=|\text{OG}|\). 
In figure 23 (a), for \(\angle\text{DFO}\), we have \(|\text{DF}|=R_{r}\) and \(|\text{OF}|=\sigma_{r}-R_{r}\). With \(|\text{DG}|=|\text{OG}|\), we have \(\angle\text{ODF}=\angle\text{DOG}=\frac{\pi}{2}-\psi_{2}\), and \(\angle\text{DOF}=\frac{\pi}{2}+\psi_{1}\). Then the law of sines gives \[\frac{R_{r}}{\sin{\left(\frac{\pi}{2}+\psi_{1}\right)}}=\frac{\sigma_{r}-R_{r }}{\sin{\left(\frac{\pi}{2}-\psi_{2}\right)}}, \tag{104}\] which means \(R_{r}^{\text{min}}=\frac{\sigma_{r}\cos\psi_{1}}{\cos\psi_{1}+\cos\psi_{2}}> \frac{\sigma_{r}}{2}\). If we plug the parameter above in it, we will get \(R_{r}^{\text{min}}\approx 6.6667\), which is consistent with the numerical results. Now we discuss the critical value \(R_{r}^{\text{max}}>\sigma_{r}\) for \(L_{1}>L_{2}\). The key is if the crossing point G exists in figure 25 (a), which corresponds to DF // \(x-\)axis, then \(\angle\text{DFB}=\psi_{1}+\psi_{2}\), and \(\angle\text{FDB}=\angle\text{FBD}=\frac{\pi-(\psi_{1}+\psi_{2})}{2}\). Because we require \(\angle\text{FBD}<\angle\text{FOD}=\psi_{2}+(\frac{\pi}{2}-(\psi_{1}+\psi_{2}))=\) Figure 25: AdS dual of CFT with interface defect for infinite system and its geodesic. (a) General CFT/CFT geodesic for \(\psi_{2}>\psi_{1}\). (b) Geodesic for large \(\sigma_{l}\) limit case with \(\psi_{2}>\psi_{1}\). \(\frac{\pi}{2}-\psi_{1}\), it means \(\psi_{1}<\psi_{2}\). This result is consistent with the conclusion that, for \(\psi_{1}<\psi_{2}\), there exists \(R_{r}^{\rm max}\). While for \(\psi_{1}>\psi_{2}\), we can show that there does not exist \(R_{r}^{\rm max}\) in later numerical results. Similarly, we can also calculate \(R_{r}^{\rm max}\) analytically. With figure 25 (b), consider \(\triangle\)FDO and DF \(//\)\(x-\)axis, where \(|{\rm FD}|=|{\rm DB}|=R_{r}\) and \(|{\rm FO}|=R_{r}-\sigma_{r}\). Because \(\angle{\rm FDO}=\frac{\pi}{2}-\psi_{2}\), \(\angle{\rm FOD}=\frac{\pi}{2}-\psi_{1}\), with the law of sines we have \[\frac{R_{r}}{\sin\left(\frac{\pi}{2}-\psi_{1}\right)}=\frac{R_{r}-\sigma_{r}}{ \sin\left(\frac{\pi}{2}-\psi_{2}\right)}, \tag{100}\] which gives \(R_{r}^{\rm max}=\frac{\sigma_{r}\cos\psi_{1}}{\cos\psi_{1}-\cos\psi_{2}}> \sigma_{r}\). If we plug the parameters above in it, we will get \(R_{r}^{\rm max}=20\) and it is consistent with numerical results. There are several remarks for both cases. (i) Although there is a finite range for \(R_{r}\), for any \(\sigma_{l}\) there always exists a solution, which means \(\sigma_{l}\) ranges from zero to infinity. (ii) We plot the geodesic length in a log-linear plot in figure 24. For \(L_{1}>L_{2}\) the prefactor of log term for large \(R_{r}\) is 3, while for \(L_{1}<L_{2}\) the prefactor is 2. It can be understood as \(3=L_{1}+L_{2}\) and \(2=\frac{3}{2}L_{1}+\frac{1}{2}L_{1}\). #### a.2.3 Analytical approximation of the general case In the following, we consider the limit case \(\sigma_{l}\gg\sigma_{r}\) for different phases, which are shown in figure 26. For \(\psi_{1}<\psi_{2}\) in figure 26 (a), the limit case is DF \(//\) OG. We first solve the geodesic. Considering \(\triangle\)FDB, (similar to figure 22 (a)), we have \(\angle{\rm DFB}=\psi_{1}+\psi_{2},\angle{\rm FDO}=\frac{\pi}{2}-\psi_{2}\) and \(\angle{\rm FOD}=\frac{\pi}{2}-\psi_{1}\). 
With the law of sines and \({\rm DF}=R_{r}\), we have \[\frac{R_{r}}{\sin\left(\frac{\pi}{2}-\psi_{1}\right)}=\frac{R_{r}-\sigma_{r}} {\sin\left(\frac{\pi}{2}-\psi_{2}\right)}=\frac{|{\rm OD}|}{\sin\left(\psi_{1} +\psi_{2}\right)}.\] (101a) The solutions are \[R_{r}=\frac{\cos\psi_{1}\sigma_{r}}{\left(\cos\psi_{1}-\cos\psi_{2}\right)}\] and \[|{\rm OD}|=\frac{\sin\left(\psi_{1}+\psi_{2}\right)\sigma_{r}}{\left(\cos\psi _{1}-\cos\psi_{2}\right)}=A_{2}\sigma_{r}\] It means for \[\sigma_{l}\rightarrow\infty\], \[|{\rm OD}|\sim{\cal O}(\sigma_{r})={\rm const.}\] with a fixed \[\sigma_{r}\], and \[\angle{\rm GDH}=\angle{\rm GHD}=\frac{\pi}{2}\]. So \[|{\rm OH}|\approx\frac{\pi}{2}-\psi_{1}\]. Figure 26: AdS dual of CFT with interface defect for infinite system and its geodesic. Here A is the crossing point of left CFT and geodesic, and E is the crossing point of left CFT and line CG. For the discussion above, we consider the CFT boundary of \(AdS_{2}\) is on a single line \(x=0\). \(|\text{OD}|\cos(\frac{\pi}{2}-\psi_{2})\sim\mathcal{O}(\sigma_{r})=\text{const.}\). Therefore, for \(\triangle\)EBC, we have \(\angle\text{GCH}=\angle\text{GHC}\approx\angle\text{GOC}=\frac{\pi}{2}-\psi_{2} \approx\angle\text{GCO}\) and \(\angle\text{COE}=\frac{\pi}{2}+\psi_{1}\). Now with the law of sines (similar to figure 22 (b)), we have \[\frac{R_{l}}{\sin\left(\frac{\pi}{2}+\psi_{1}\right)}=\frac{\sigma_{l}-R_{l}}{ \sin\left(\frac{\pi}{2}-\psi_{2}\right)}=\frac{|\text{OC}|}{\sin\left(\psi_{2 }-\psi_{1}\right)}. \tag{116b}\] The solutions are \(R_{l}=\frac{\cos\psi_{1}\sigma_{l}}{(\cos\psi_{1}+\cos\psi_{2})}\) and \(|\text{OC}|=\frac{\sin\left(\psi_{2}-\psi_{1}\right)\sigma_{l}}{(\cos\psi_{1} +\cos\psi_{2})}=A_{1}\sigma_{l}\). Now we can consider the length of the geodesic. 
For AC with \(\text{A}=(-\sigma_{l},\varepsilon)\) and \(\text{C}=(|\text{OC}|\sin\psi_{1},|\text{OC}|\cos\psi_{1})\), we have \[\begin{split} d_{\text{AC}}=& L_{1}\cosh^{-1}\frac{(| \text{OC}|\sin\psi_{1}+\sigma_{l})^{2}+\varepsilon^{2}+(|\text{OC}|\cos\psi_{ 1})^{2}}{2\varepsilon|\text{OC}|\cos\psi_{1}}\\ \approx& L_{1}\cosh^{-1}\frac{\sigma_{l}}{2 \varepsilon}\frac{A_{1}^{2}+1+2A^{2}\sin\psi_{1}}{A_{1}\cos\psi_{1}}\approx L_ {1}\log\frac{\sigma_{l}}{\varepsilon}\frac{A_{1}^{2}+1+2A_{1}^{2}\sin\psi_{1 }}{A_{1}\cos\psi_{1}}.\end{split}\] (117a) For CD with \[\text{C}=(-|\text{OC}|\sin\psi_{2},|\text{OC}|\cos\psi_{2})\] and \[\text{D}=(|\text{OD}|\sin\psi_{2},|\text{OD}|\cos\psi_{2})\], we have \[\begin{split} d_{\text{CD}}=& L_{2}\cosh^{-1}\frac{(| \text{OC}|+|\text{OD}|)^{2}\sin^{2}\psi_{2}+(|\text{OC}|\cos\psi_{2})^{2}+(| \text{OD}|\cos\psi_{2})^{2}}{2|\text{OC}||\text{OD}|\cos^{2}\psi_{2}}\\ =& L_{2}\cosh^{-1}\frac{A_{1}^{2}\sigma_{l}^{2}+A_{ 2}^{2}\sigma_{r}^{2}+2A_{1}A_{2}\sigma_{l}\sigma_{r}\sin^{2}\psi_{2}}{2A_{1}A_ {2}\sigma_{l}\sigma_{r}\cos^{2}\psi_{2}}.\end{split}\] (117b) Under the limit \[\sigma_{l}\gg\sigma_{r}\], we have \[d_{\text{CD}}\approx L_{2}\cosh^{-1}\frac{A_{1}^{2}\sigma_{l}^{2}}{2A_{1}A_{2} \sigma_{l}\sigma_{r}\cos^{2}\psi_{2}}\approx L_{2}\log\frac{\sigma_{l}}{ \sigma_{r}}\frac{1}{A_{2}\cos^{2}\psi_{2}}.\] (117c) For DB with \[\text{D}=(-|\text{OD}|\sin\psi_{1},|\text{OD}|\cos\psi_{1})\] and \[\begin{split} d_{\text{DB}}=& L_{1}\cosh^{-1} \frac{(|\text{OD}|\sin\psi_{1}+\sigma_{r})^{2}+\varepsilon^{2}+(|\text{OD}| \cos\psi_{1})^{2}}{2\varepsilon|\text{OD}|\cos\psi_{1}}\\ \approx& L_{1}\cosh^{-1}\frac{\sigma_{r}}{2 \varepsilon}\frac{A_{2}^{2}+1+2A_{2}^{2}\sin\psi_{1}}{A_{2}\cos\psi_{1}} \approx L_{1}\log\frac{\sigma_{r}}{\varepsilon}\frac{A_{2}^{2}+1+2A_{2}^{2} \sin\psi_{1}}{A_{2}\cos\psi_{1}}.\end{split}\] (117d) Then the total length of the geodesic is \[d\approx L_{1}\log\frac{\sigma_{l}}{\varepsilon}\frac{A_{1}^{2}+1+2A_{1}^{2} \sin\psi_{1}}{A_{1}\cos\psi_{1}}+L_{2}\log\frac{\sigma_{l}}{\sigma_{r}}\frac{1 }{A_{2}\cos^{2}\psi_{2}}+L_{1}\log\frac{\sigma_{r}}{\varepsilon}\frac{A_{2}^{2 }+1+2A_{2}^{2}\sin\psi_{1}}{A_{2}\cos\psi_{1}},\] (118) where the prefactor of \[\log\sigma_{l}\] is \[L_{1}+L_{2}\]. Actually, we can compare this analytical result (118) with the numerical results ( 117 ), and they are consistent as shown in figure 24 (a). Now we consider another case \(\psi_{1}>\psi_{2}\), which is more complicated and shown in figure 26 (b). For the limit case \(\sigma_{l}\gg\sigma_{r}\), we expect that CG // OE, which means \(|\text{OC}|\ll\sigma_{l}\). While DG will always cross OG at a finite angle \(\angle\)DGO, which means \(|\text{OD}|\gg\sigma_{r}\). Besides, with numerical results, we may expect that \(\frac{|\text{OC}|}{|\text{OD}|}\sim\mathcal{O}(1)\) for \(\sigma_{l}\to\infty\). The key difference is that here we cannot directly get \(|\text{OC}|\) with \(\sigma_{l}\) and \(|\text{OD}|\) with \(\sigma_{r}\). Instead, we expect \(\sigma_{r}\ll R_{r},\alpha,x_{0},\beta,\gamma\ll x_{1},\sigma_{l}\), which can be shown below. Here we first define \(|\text{OD}|=x\). (i) With \(|\text{OD}|\gg\sigma_{r}\), we have \(\angle\text{FDB}=\angle\text{DBF}\approx\angle\text{DOF}=\frac{\pi}{2}-\psi_{1}\), \(\angle\text{DFO}=2\psi_{1}\) and \(\angle\text{DGO}=\psi_{1}-\psi_{2}\). 
For \(\triangle\text{OGD}\) we apply the law of sines, then we have \[\frac{|\text{OG}|}{\sin\left(\frac{\pi}{2}-\psi_{1}\right)}=\frac{|\text{DG}|}{ \sin\left(\frac{\pi}{2}+\psi_{2}\right)}=\frac{|\text{DO}|}{\sin\left(\psi_{1} -\psi_{2}\right)}. \tag{111}\] The solutions are \(|\text{OG}|=\frac{\cos\psi_{1}x}{\sin\left(\psi_{1}-\psi_{2}\right)}\) and \(|\text{DG}|=\frac{\cos\psi_{2}x}{\sin\left(\psi_{1}-\psi_{2}\right)}\). (ii) With CG // OE, we have \(\angle\text{CGO}=\angle\text{GOA}=\psi_{1}+\psi_{2}\), and \(\angle\text{GCO}=\frac{\pi}{2}-\psi_{1}\). Using the law of sines for \(\triangle\text{OCG}\) we have \[\frac{|\text{OG}|}{\sin\left(\frac{\pi}{2}-\psi_{1}\right)}=\frac{|\text{OC}| }{\sin\left(\psi_{1}+\psi_{2}\right)}=\frac{|\text{CG}|}{\sin\left(\frac{\pi} {2}-\psi_{2}\right)}. \tag{112}\] The solutions are \(|\text{OC}|=\frac{\sin\left(\psi_{1}+\psi_{2}\right)x}{\sin\left(\psi_{1}-\psi _{2}\right)}\) and \(|\text{CG}|=\frac{\cos\psi_{2}x}{\sin\left(\psi_{1}-\psi_{2}\right)}\). (iii) Finally, with \(\angle\text{DFO}=2\psi_{1}\) and \(\angle\text{DOF}=\frac{\pi}{2}-\psi_{1}\), we have \(\frac{x}{\sin 2\psi_{1}}=\frac{|\text{DF}|}{\sin\left(\frac{\pi}{2}-\psi_{1}\right)}\). So \(|\text{DF}|=\frac{\cos\psi_{1}x}{\sin 2\psi_{1}}\). To be consistent with the notations above, we have \(|\text{OD}|=\alpha,|\text{OG}|=x_{0},|\text{OC}|=\gamma,|\text{CG}|=\beta,| \text{FD}|=R_{r}\). Therefore, \(|\text{OC}|=\frac{\sin\left(\psi_{1}+\psi_{2}\right)\sin 2\psi_{1}}{\sin \left(\psi_{1}-\psi_{2}\right)\cos\psi_{1}}R_{r}\) and \(|\text{OD}|=\frac{\sin 2\psi_{1}}{\cos\psi_{1}}R_{r}\). However, the results above are not enough to solve the values of all unknown variables using \(\sigma_{l}\) and \(\sigma_{r}\). We need (110). With the discussion above, \(\sigma_{l}\gg R_{r}\gg\sigma_{r}\). By symmetry, we assume that \(R_{r}\sim\mathcal{O}(\sqrt{\sigma_{l}\sigma_{r}})\sim\mathcal{O}(\sqrt{\sigma_ {l}})\) for \(\sigma_{l}\rightarrow\infty\) and a fixed \(\sigma_{r}\). This assumption can also be checked numerically. Then to simplify equations (110), we can first rescale all variables with \(\frac{1}{\sigma_{l}}\), which means \(\sigma_{l}^{\prime}=1\), \(\sigma_{r}^{\prime}=\frac{\sigma_{r}}{\sigma_{l}}\ll 1\) and \(R_{r}^{\prime}=\frac{R_{r}}{\sigma_{l}}=c\cdot\sqrt{\frac{\sigma_{r}}{\sigma_ {l}}}\), where \(c\) is an unknown variable to be determined. Now if we can get \(c\), then all variables can be solved. The constraint for \(c\) comes from \(\sigma_{l}^{\prime}=1\). For the limit case, with (110) we have \(\sigma_{l}^{\prime}\approx-\frac{2x_{1}^{\prime}}{\cos\left(\psi_{1}-\psi_{2} \right)}\) at the zero order. Then with the Taylor expansion of (110), we can finally express \(x_{1}^{\prime}\) to the zero order of \(\sigma_{r}^{\prime}\), which gives \[x_{1}^{\prime}=-c^{2}\cdot\frac{\sin^{2}\psi_{1}\sin\left[2(\psi_{1}+\psi_{2}) \right]}{\sin\left(\psi_{1}-\psi_{2}\right)}+\mathcal{O}(\sigma_{r}^{\prime}). \tag{113}\] So we have \(1=\sigma_{l}^{\prime}\approx c^{2}\cdot\frac{2\sin^{2}\psi_{1}\sin\left[2(\psi_{1}+ \psi_{2})\right]}{\sin\left(\psi_{1}-\psi_{2}\right)\cos\left(\psi_{1}+\psi_{2} \right)}\), which means \[c=\frac{1}{2\sin\psi_{1}}\sqrt{\frac{\sin\left(\psi_{1}-\psi_{2}\right)}{\sin \left(\psi_{1}+\psi_{2}\right)}}. 
\tag{114}\] Now we have \(|\text{OC}|=\frac{\sin\left(\psi_{1}+\psi_{2}\right)|\text{OD}|}{\sin\left( \psi_{1}-\psi_{2}\right)}=\frac{\sin\left(\psi_{1}+\psi_{2}\right)}{\sin\left( \psi_{1}-\psi_{2}\right)}\cdot 2\sin\psi_{2}R_{r}=\sqrt{\frac{\sin\left(\psi_{1}+\psi_{2} \right)}{\sin\left(\psi_{1}-\psi_{2}\right)}}\sqrt{\sigma_{l}\sigma_{r}}=b\sqrt{ \sigma_{l}\sigma_{r}}\) and \(|\text{OD}|=b^{-1}\sqrt{\sigma_{l}\sigma_{r}}\). Finally, we can calculate the length of the geodesic, which is similar to (109). For AC we have \[d_{\text{AC}}= L_{1}\cosh^{-1}\frac{(|\text{OC}|\sin\psi_{1}+\sigma_{l})^{2}+ \varepsilon^{2}+(|\text{OC}|\cos\psi_{1})^{2}}{2\varepsilon|\text{OC}|\cos\psi_ {1}}\] \[\approx L_{1}\log\frac{b^{2}\sigma_{l}\sigma_{r}+\sigma_{l}^{2}+2b\sigma_{ l}^{3/2}\sigma_{r}^{1/2}\sin\psi_{1}}{b(\sigma_{l}\sigma_{r})^{1/2}\cos\psi_{1} \varepsilon}\approx L_{1}\left(\log\left(\frac{b^{-1}\sigma_{r}^{-1/2}\sigma_{l}^ {3/2}}{\cos\psi_{1}\varepsilon}\right)+2b\sin\psi_{1}\sqrt{\frac{\sigma_{r}}{ \sigma_{l}}}\right). \tag{115a}\] or CD we have \[d_{\rm CD}= L_{2}\cosh^{-1}\frac{(|{\rm OC}|+|{\rm OD}|)^{2}\sin^{2}\psi_{2}+(|{ \rm OC}|\cos\psi_{2})^{2}+(|{\rm OD}|\cos\psi_{2})^{2}}{2|{\rm OC}||\,{\rm OD}| \cos^{2}\psi_{2}} \tag{114b}\] \[= L_{2}\cosh^{-1}\frac{(b^{-1}+b)^{2}\sin^{2}\psi_{2}+(b^{-2}+b^{2} )\cos^{2}\psi_{2}}{2\cos^{2}\psi_{2}}.\] For DB we have \[d_{\rm DB}= L_{1}\cosh^{-1}\frac{(|{\rm OD}|\sin\psi_{1}+\sigma_{r})^{2}+ \varepsilon^{2}+(|{\rm OD}|\cos\psi_{1})^{2}}{2\varepsilon|{\rm OD}|\cos\psi_ {1}} \tag{114c}\] \[\approx L_{1}\log\frac{b^{-2}\sigma_{l}\sigma_{r}+\sigma_{r}^{2}+2b^{-1 }\sigma_{l}^{1/2}\sigma_{r}^{3/2}\sin\psi_{1}}{b^{-1}(\sigma_{l}\sigma_{r})^{ 1/2}\cos\psi_{1}\varepsilon}\approx L_{1}\left(\log\left(\frac{b^{-1}\sigma_{ r}^{1/2}\sigma_{l}^{1/2}}{\cos\psi_{1}\varepsilon}\right)+2b\sin\psi_{1}\sqrt{ \frac{\sigma_{r}}{\sigma_{l}}}\right).\] Then the total length of the geodesic is \[d\approx L_{1}\left(\log\left(\frac{\sigma_{l}^{2}}{\varepsilon^{2}}\frac{b^{ -2}}{\cos^{2}\psi_{1}}\right)+4b\sin\psi_{1}\sqrt{\frac{\sigma_{r}}{\sigma_{l} }}\right)+L_{2}\cosh^{-1}\left(\frac{(b^{-1}+b)^{2}-2\cos^{2}\psi_{2}}{2\cos^ {2}\psi_{2}}\right)\!, \tag{115}\] where the prefactor of \(\log\sigma_{l}\) is \(\frac{3}{2}L_{1}+\frac{1}{2}L_{1}\). Actually, we can compare this analytical result (115) and numerical results (104), and they are consistent in figure 24 (b). ### ETW/defect geodesic with an ETW brane and two regions #### a.3.1 Special case with \(\sigma_{l}\to 0\) In the following, we consider the effect of the ETW brane. More specifically, the two end points of the geodesic are located at \(\sigma_{l}\to 0\) and the ETW brane, as shown in figure 27. It is obvious that there are more than one solution for the geodesic equation with an endpoint on the ETW, so we need to find a geodesic with the minimal length. Similar to appendix A.2.1, we start by assuming \(|{\rm OA}|=r\). With \(\angle{\rm ACO}=\frac{\pi}{2}-\psi_{2}\) and \(\angle{\rm COB}=\frac{\pi}{2}+\psi_{1}\), we have \(\angle{\rm CBO}=\psi_{2}-\psi_{1}\). It is obvious that for \(\psi_{2}<\psi_{1}\) the crossing point of lines CA and OB will be on the left of the point O, which means there is no solution for the geodesic we want. Figure 27: AdS dual of CFT with interface defect for half infinite system and ETW brane. Two end points of the geodesic are located on the defect and the ETW brane. Therefore, in the following, we just consider \(\psi_{2}>\psi_{1}\). 
(i) Starting from figure 27 (a), we assume \(|\text{OA}|=r\), and the equation of the line AB is \(y-r\sin{(\psi_{1}+\psi_{2})}=\tan{(\pi+\psi_{1}-\psi_{2})}(x-r\cos{(\psi_{1}+ \psi_{2})})\). Setting \(y=0\) we have \[x_{\text{B}}=\beta=\frac{-r\sin{(\psi_{1}+\psi_{2})}}{\tan{(\psi_{1}-\psi_{2}) }}+r\cos{(\psi_{1}+\psi_{2})}. \tag{114a}\] (ii) Assuming \(|\text{OC}|=\alpha\), because the point C is located at the line AB, we can solve and get \[\alpha=\frac{r\sin{(\psi_{1}+\psi_{2})}-r\tan{(\psi_{1}-\psi_{2})}\cos{(\psi_{ 1}+\psi_{2})}}{\cos{\psi_{1}+\sin{\psi_{1}}\tan{(\psi_{1}-\psi_{2})}}}. \tag{114b}\] (iii) In \(\triangle\)BCO, using the law of sines we have \(|\text{BC}|=\frac{\beta\cos{\psi_{1}}}{\cos{\psi_{2}}}\). Assuming \(\text{D}=(\sigma,\delta)\), because \(|\text{BC}|=|\text{BD}|\), we have \[\delta=\sqrt{\beta^{2}\left(\frac{\cos{\psi_{1}}}{\cos{\psi_{2}}}\right)^{2}- (\sigma-\beta)^{2}}. \tag{114c}\] Now we can express the geodesic length with the variables above. For OC, with \(\text{C}=(\alpha\sin{\psi_{2}},\alpha\cos{\psi_{2}})\) and \(\text{O}=(0,\varepsilon)\) we have \[d_{\text{OC}}=L_{2}\cosh^{-1}{\frac{(\alpha\sin{\psi_{1}})^{2}+\varepsilon^{2 }+(\alpha\cos{\psi_{1}})^{2}}{2\varepsilon\alpha\cos{\psi_{2}}}}\approx L_{2} \log{\frac{\alpha}{\varepsilon\cos{\psi_{2}}}}.\] (114a) For CD, with \[\text{C}=(-\alpha\sin{\psi_{1}},\alpha\cos{\psi_{1}})\] and \[\text{D}=(\sigma,\delta)\] we have \[d_{\text{OC}}=L_{1}\cosh^{-1}{\frac{(-\alpha\sin{\psi_{1}}-\sigma)^{2}+\delta ^{2}+(\alpha\cos{\psi_{1}})^{2}}{2\delta\alpha\cos{\psi_{1}}}}. \tag{114b}\] Therefore, the total geodesic length is \[d=L_{2}\log{\frac{\alpha}{\varepsilon\cos{\psi_{2}}}}+L_{1}\cosh^{-1}{\frac{( -\alpha\sin{\psi_{1}}-\sigma)^{2}+\delta^{2}+(\alpha\cos{\psi_{1}})^{2}}{2 \delta\alpha\cos{\psi_{1}}}}. \tag{115}\] Besides, we also mention that without a defect (region 2), the geodesic will cross the ETW brane perpendicularly. So, the length is \[d_{0}=L_{1}\cosh^{-1}{\frac{\sigma^{2}+\sigma^{2}+\varepsilon^{2}}{2\sigma \varepsilon}}\approx L_{1}\log{\frac{2\sigma}{\varepsilon}}. \tag{116}\] Now what we need to do is to minimize (115) with respect to \(r\). Actually, with numerical check we can find that, after minimizing (115), the corresponding \(r=r_{0}\) will lead \(\beta=\sigma\), which means the geodesic will cross the ETW brane perpendicularly and the point B is also located at the ETW. In the following, we will prove that the geodesic will always cross the ETW brane perpendicularly, and express the total geodesic length analytically. The calculation is tedious. With (114), we can have \(\beta=\beta(r),\ \alpha=\alpha(r),\ \delta=\delta(\beta)\). Then we plug these equations into (115) and calculate \(\frac{\partial d}{\partial r}\). For the perpendicular case we have \(\beta=\sigma\), which means \[r=r_{0}=\frac{\sigma}{\cos{(\psi_{1}+\psi_{2})}-\frac{\sin{(\psi_{1}+\psi_{2})} }{\tan{(\psi_{1}-\psi_{2})}}}. \tag{117}\] With \(r=r_{0}\) in \(\frac{\partial d}{\partial r}\) and simplifying it with \(\psi_{1}<\psi_{2}\), we have \[\frac{\partial d}{\partial r}\Big{|}_{r=r_{0}}=-\frac{2\cos\psi_{2}\sin\psi_{2}( L_{2}\csc\left(\psi_{1}-\psi_{2}\right)-L_{1}\cos\psi_{2}\csc\left(\psi_{1}-\psi_{2} \right)\sec\psi_{1})}{\sigma}. \tag{104}\] With the junction condition (8a) we have \(\frac{L_{1}}{\cos\psi_{1}}=\frac{L_{2}}{\cos\psi_{2}}\), so \(\frac{\partial d}{\partial r}\big{|}_{r=r_{0}}=0\), which means that \(r=r_{0}\) corresponds to the minimal point of \(d(r)\). 
Therefore, the minimal geodesic will cross the ETW brane perpendicularly. Actually, we will show later that this property of minimal geodesics will also be true for a more complicate case, and give an intuitive proof in appendix A.4.1. For the minimal geodesic above with \(r=r_{0}\), we can plug it into (103) and calculate the length of the minimal geodesic in figure 27 (b). Firstly, we consider \(\triangle\)COB with the law of sines, then we have \[\frac{\sigma}{\sin\left(\frac{\pi}{2}-\psi_{2}\right)}=\frac{|\text{CB}|}{ \sin\left(\frac{\pi}{2}+\psi_{1}\right)}=\frac{|\text{OC}|}{\sin\left(\psi_{2 }-\psi_{1}\right)}, \tag{105}\] which leads \(|\text{CB}|=|\text{BD}|=\frac{\sigma\cos\psi_{1}}{\cos\psi_{2}}\) and \(|\text{OC}|=\frac{\sigma\sin\left(\psi_{2}-\psi_{1}\right)}{\cos\psi_{2}}\). For \(|\text{OC}|\) we have \(\text{O}=(0,\varepsilon)\) and \(\text{C}=(|\text{OC}|\sin\psi_{2},|\text{OC}|\cos\psi_{2})\), so \[d_{\text{OC}}=L_{2}\cosh^{-1}\frac{(|\text{OC}|\sin\psi_{2})^{2}+\varepsilon^ {2}+(|\text{OC}|\cos\psi_{2})^{2}}{2\varepsilon(|\text{OC}|\cos\psi_{2})} \approx L_{2}\log\bigg{(}\frac{\sigma}{\varepsilon}\frac{\sin\left(\psi_{2} -\psi_{1}\right)}{\cos^{2}\psi_{2}}\bigg{)}.\] (106a) For \[|\text{CD}|\] we have \[\text{C}=(-|\text{OC}|\sin\psi_{1},|\text{OC}|\cos\psi_{1})\text{ and }\text{D}=(\sigma,|\text{BD}|)\text{, so} \tag{106b}\] \[d_{\text{CD}}= L_{1}\cosh^{-1}\frac{(-|\text{OC}|\sin\psi_{1}-\sigma)^{2}+(| \text{OC}|\cos\psi_{1})^{2}+|\text{BD}|^{2}}{2|\text{OC}|\cos\psi_{1}|\text{ BD}|}\] \[= L_{1}\cosh^{-1}\bigg{(}\frac{\sin^{2}\left(\psi_{2}-\psi_{1} \right)+\cos^{2}\psi_{2}+\cos^{2}\psi_{1}+2\sin\left(\psi_{2}-\psi_{1}\right) \sin\psi_{1}\cos\psi_{2}}{2\sin\left(\psi_{2}-\psi_{1}\right)\cos^{2}\psi_{1} }\bigg{)}. \tag{106b}\] Therefore, the total length is \[d=L_{2}\log\Big{(}\frac{\sigma}{\varepsilon}\Big{)}+d_{\text{sub}}, \tag{107}\] with the sub-leading term \[d_{\text{sub}}= L_{2}\log\bigg{(}\frac{\sin\left(\psi_{2}-\psi_{1}\right)}{\cos^{ 2}\psi_{2}}\bigg{)}\] \[+L_{1}\cosh^{-1}\bigg{(}\frac{\sin^{2}\left(\psi_{2}-\psi_{1} \right)+\cos^{2}\psi_{2}+\cos^{2}\psi_{1}+2\sin\left(\psi_{2}-\psi_{1}\right) \sin\psi_{1}\cos\psi_{2}}{2\sin\left(\psi_{2}-\psi_{1}\right)\cos^{2}\psi_{1} }\bigg{)}. \tag{108}\] Finally, we have corresponding entanglement entropy \(S_{\sigma}=\frac{c_{2}}{6}\log\frac{\sigma}{\varepsilon}+S_{\text{sub}}\) and \(S_{\text{sub}}=\frac{d_{\text{sub}}}{4G_{(3)}}\). #### a.3.2 General case with \(\sigma_{l}\neq 0\) Before we only considered the case with \(\sigma_{l}\to 0\), now we consider a more general case where \(\sigma_{l}\neq 0\). Similar to the method above, in figure 28 (a) we assume \(\alpha\). (i) Because \(|\text{OA}|=|\text{AC}|\), we can solve \((\sigma_{l}+r)^{2}=\left[r\cos\left(\psi_{1}+\psi_{2}\right)+\alpha\sin\psi_{1} \right]^{2}+\left[r\sin\left(\psi_{1}+\psi_{2}\right)-\alpha\cos\psi_{1}\right]^ {2}\) and get \[\alpha=r\sin\psi_{2}+\sqrt{(r+\sigma_{l})^{2}-r^{2}\cos^{2}\psi_{2}}. \tag{100a}\] (ii) Then we have the equation of the line AC and \(B=(x_{0},0)\) on it, so \[x_{0}=[-r\sin\left(\psi_{1}+\psi_{2}\right)]\frac{-\alpha\sin\psi_{1}-r\cos \left(\psi_{1}+\psi_{2}\right)}{\alpha\cos\psi_{1}-r\sin\left(\psi_{1}+\psi_{2 }\right)}+r\cos\left(\psi_{1}+\psi_{2}\right). 
\tag{100b}\] (iii) Assuming \(\text{E}=(\sigma_{r},\beta)\), with \(|\text{BC}|=|\text{BE}|\), we have \((-\alpha\sin\psi_{1}-x_{0})^{2}+(\alpha\cos\psi_{1})^{2}=(\sigma_{r}-x_{0})^ {2}+\beta^{2}\), which leads to \[\beta=\sqrt{2\alpha\sin\psi_{1}x_{0}+\alpha^{2}-\sigma_{r}^{2}+2\sigma_{r}x_ {0}}. \tag{100c}\] Now we can calculate the length of the geodesic. For DC, with \(\text{D}=(-\sigma_{l},\varepsilon)\) and \(\text{C}=(\alpha\sin\psi_{2},\alpha\cos\psi_{2})\), we have \[d_{\text{DC}}=L_{2}\cosh^{-1}\frac{(\alpha\sin\psi_{2}+\sigma_{l})^{2}+ \varepsilon^{2}+(\alpha\cos\psi_{2})^{2}}{2\varepsilon\alpha\cos\psi_{2}} \approx L_{2}\log\frac{\alpha^{2}+2\alpha\sigma_{l}\sin\psi_{2}+\sigma_{l}^{ 2}}{\varepsilon\alpha\cos\psi_{2}}.\] (100a) For CE, with \[\text{C}=(-\alpha\sin\psi_{1},\alpha\cos\psi_{1})\] and \[\text{E}=(\sigma_{r},\beta)\], we have \[d_{\text{CE}}=L_{1}\cosh^{-1}\frac{(\alpha\sin\psi_{1}+\sigma_{r})^{2}+(\alpha \cos\psi_{1})^{2}+\beta^{2}}{2\alpha\beta\cos\psi_{1}}=L_{1}\cosh^{-1}\frac{ \alpha^{2}+\beta^{2}+2\alpha\sigma_{r}\sin\psi_{1}+\sigma_{r}^{2}}{2\alpha \beta\cos\psi_{1}}. \tag{100b}\] Therefore, the total length is \[d=L_{2}\log\frac{\alpha^{2}+2\alpha\sigma_{l}\sin\psi_{2}+\sigma_{l}^{2}}{ \varepsilon\alpha\cos\psi_{2}}+L_{1}\cosh^{-1}\frac{\alpha^{2}+\beta^{2}+2 \alpha\sigma_{r}\sin\psi_{1}+\sigma_{r}^{2}}{2\alpha\beta\cos\psi_{1}}. \tag{101}\] Similar to the method above, we should minimize (101) with respect to \(r\). Numerically, it is easy to check that the minimal geodesic crosses the ETW brane perpendicularly. But analytical proof is difficult. Figure 28: AdS dual of CFT with interface defect for half infinite system and ETW brane. The two end points are located on the left CFT and the ETW brane. In the following, we will take the assumption above and calculate the length analytically. With \(x_{0}=\sigma_{r}\), expressing \(\alpha\) with \(r\) by (114b) and plugging it into (114a), we will get an effective cubic equation for \(r\). What we need to do is solving this equation which has an analytical solution. The effective cubic equation is \(ar^{3}+br^{2}+cr+d=0\) with factors \[\begin{split} a=&-2\sigma_{r}\cos\psi_{1}\cos\psi_ {2}+2\sigma_{l}\cos^{2}\psi_{2}+2\sigma_{r}\cos^{2}\psi_{2}\cos\left(\psi_{1}+ \psi_{2}\right)\!,\\ b=&\sigma_{r}^{2}\cos^{2}\psi_{1}-4\sigma_{l}\sigma _{r}\cos\psi_{1}\cos\psi_{2}+\sigma_{l}^{2}\cos^{2}\psi_{2}-\sigma_{r}^{2} \cos^{2}\psi_{2},\\ c=& 2\sigma_{l}\sigma_{r}^{2}\cos^{2}\psi_{1}-2 \sigma_{l}^{2}\sigma_{r}\cos\psi_{1}\cos\psi_{2},\\ d=&\sigma_{l}^{2}\sigma_{r}^{2}\cos^{2}\psi_{1}. \end{split} \tag{117}\] This equation has analytical solutions, and we will not show them here. After solving this equation, we also need to choose one true solution from three solutions. One criterion is that the corresponding \(\alpha,\ \beta\) and \(x_{0}\) are positive. Finally, plugging the solution into (114) and (116), we will get the final minimal length. Here we give some remarks about the results above. (i) In the discussion we don't consider the relation between \(L_{1}\) and \(L_{2}\), which plays an important role before when we take \(\sigma_{l}\to 0\). With reference [56], for any given \(\sigma_{l}\) and \(\sigma_{r}\), we can have a nontrivial solution. Then for our case, we can consider there is an ETW brane across the center of the arc in the right region 1. Therefore, the solution always exists in this case. 
(ii) However, the prefactor of log-term may be different for different relation between \(L_{1}\) and \(L_{2}\), and it may also be different at different limits. For example, \(\sigma_{r}\rightarrow\infty\) and \(\sigma_{l}\rightarrow\infty\) may have different prefactors of log-term. ### ETW/defect geodesic with an ETW brane and three regions Similar to the case above, now we consider the most complicated case with three regions and an ETW brane, which is shown in figure 29. Now we construct the equations for solving this problem. (i) Assume \(|\text{OB}|=\alpha\) and \(|\text{OE}|=\beta\), \(|\text{AE}|=|\text{BE}|\) leads to \((\beta+\sigma_{l})^{2}=[-\alpha\sin\psi_{2}-\beta\cos\left(\psi_{1}+\psi_{2} \right)]^{2}+[\alpha\cos\psi_{2}-\beta\sin\left(\psi_{1}+\psi_{2}\right)]^{2}\). The solution of equation above is \[\beta=\frac{\alpha^{2}-\sigma_{l}^{2}}{2(\sigma_{l}+\alpha\sin\psi_{1})}. \tag{118a}\] Figure 29: AdS dual of CFT with interface defect for half infinite system and ETW brane. The geodesic crosses two interface branes and ends on the ETW brane. (ii) Then with the equation of the line BE and point \(\mathrm{F}=(x_{0},0)\) on it, we can get \[x_{0}=(-\alpha\cos\psi_{2})\frac{\beta\cos{(\psi_{1}+\psi_{2})}+\alpha\sin\psi_{ 2}}{\beta\sin{(\psi_{1}+\psi_{2})}-\alpha\cos\psi_{2}}-\alpha\sin\psi_{2}. \tag{114b}\] (iii) With \(|\mathrm{OC}|=\gamma\), \(|\mathrm{CF}|=|\mathrm{BF}|\) leads to \((\gamma\sin\psi_{2}-x_{0})^{2}+(\gamma\cos\psi_{2})^{2}=(-\alpha\sin\psi_{2}- x_{0})^{2}+(\alpha\cos\psi_{2})^{2}\), which has solution \[\gamma=\alpha+2x_{0}\sin\psi_{2}. \tag{114c}\] (iv)Then assuming \(|\mathrm{OG}|=\delta\), because \(\mathrm{G}=(\delta\cos{(\psi_{1}+\psi_{2})},-\delta\sin{(\psi_{1}+\psi_{2})})\) on the line CF, we have the equation \(\frac{-\delta\sin{(\psi_{1}+\psi_{2})}}{\delta\cos{(\psi_{1}+\psi_{2})}-x_{0} }=\frac{\gamma\cos{\psi_{2}}}{\gamma\sin{\psi_{2}}-x_{0}}\). The solution is \[\delta=\frac{\gamma x_{0}\cos\psi_{2}}{\gamma\cos\psi_{1}-x_{0}\sin{(\psi_{1} +\psi_{2})}}. \tag{114d}\] (v) Finally assume \(|\mathrm{DH}|=\xi\), \(|\mathrm{CG}|=|\mathrm{DG}|\) leads \(\xi^{2}+(\sigma_{r}-\delta)^{2}=[\gamma\sin\psi_{2}-\delta\cos{(\psi_{1}+\psi _{2})}]^{2}+[\gamma\cos{\psi_{2}}\delta\cos{(\psi_{1}+\psi_{2})}]^{2}\). The solution is \[\xi=\sqrt{\gamma^{2}+(2\delta-\sigma_{r})\sigma_{r}+2\delta\gamma\sin\psi_{1}}. \tag{114e}\] Now we can calculate the length of the geodesic. For AB, with \(\mathrm{A}=(-\sigma_{l},\varepsilon)\) and \(\mathrm{B}=(\alpha\sin\psi_{1},\alpha\cos\psi_{1})\) we have \[d_{\mathrm{AB}}=L_{1}\cosh^{-1}\bigg{(}\frac{(-\sigma_{l}-\alpha\sin\psi_{1})^ {2}+\varepsilon^{2}+(\alpha\cos\psi_{1})^{2}}{2\varepsilon\alpha\cos\psi_{1}} \bigg{)}\approx L_{1}\log\frac{(\sigma_{l}+\alpha\sin\psi_{1})^{2}+(\alpha \cos\psi_{1})^{2}}{\varepsilon\alpha\cos\psi_{1}}. \tag{114a}\] For BC, with \(\mathrm{B}=(-\alpha\sin\psi_{2},\alpha\cos\psi_{2})\) and \(\mathrm{C}=(\gamma\sin\psi_{2},\gamma\cos\psi_{2})\) we have \[d_{\mathrm{BC}}=L_{2}\cosh^{-1}\bigg{(}\frac{(\gamma+\alpha)^{2}\sin^{2}\psi_ {2}+(\alpha\cos\psi_{2})^{2}+(\gamma\cos\psi_{2})^{2}}{2\gamma\alpha\cos^{2} \psi_{2}}\bigg{)}=L_{2}\cosh^{-1}\bigg{(}\frac{(\gamma+\alpha)^{2}-2\alpha \gamma\cos^{2}\psi_{2}}{2\gamma\alpha\cos^{2}\psi_{2}}\bigg{)}. 
\tag{114b}\] For CD, with \(\mathrm{C}=(-\gamma\sin\psi_{1},\gamma\cos\psi_{1})\) and \(\mathrm{C}=(\sigma_{r},\xi)\) we have \[d_{\mathrm{CD}}=L_{1}\cosh^{-1}\bigg{(}\frac{(-\gamma\sin\psi_{1}-\sigma_{r})^ {2}+(\gamma\cos\psi_{1})^{2}+\xi^{2}}{2\gamma\xi\cos\psi_{1}}\bigg{)}. \tag{114c}\] Therefore, the total length of the geodesic is \[d= L_{1}\log\frac{(\sigma_{l}+\alpha\sin\psi_{1})^{2}+(\alpha\cos \psi_{1})^{2}}{\varepsilon\alpha\cos\psi_{1}}+L_{1}\cosh^{-1}\bigg{(}\frac{(- \gamma\sin\psi_{1}-\sigma_{r})^{2}+(\gamma\cos\psi_{1})^{2}+\xi^{2}}{2\gamma \xi\cos\psi_{1}}\bigg{)}\] \[+L_{2}\cosh^{-1}\bigg{(}\frac{(\gamma+\alpha)^{2}-2\alpha\gamma \cos^{2}\psi_{2}}{2\gamma\alpha\cos^{2}\psi_{2}}\bigg{)}.\] Now what we need to do is minimizing the length of geodesic (114) with respect to \(\alpha\). Similar to the results above, we can check numerically that the minimal geodesic will cross the ETW brane perpendicularly. Therefore, in the following we will assume \(\delta=\sigma_{r}\). Plugging this condition into (114), finally we will get a 5th order equation for \(\alpha\), which can only be solved numerically. After getting the solution, we can plug it into (114) and get the total length of the minimal geodesic. #### a.4.1 Proof of perpendicular crossing of a geodesic Before, we show that for some simple cases we can prove analytically and numerically that, with ETW brane, the minimal geodesic will end on the ETW brane perpendicularly. And for more complicate cases, we may only check this property numerically. Here we give an intuitive proof without any calculation for this property. For example, considering the most complicate case shown in figure 29, the geodesic starts from the point A with fixed \(\sigma_{l}\). Now we want to find the minimal geodesic from A to the ETW brane. We can construct another geometry that, besides the original part, we add another part which is symmetric with respect to ETW brane. Then we can image that, when we minimize the geodesic with end points A and its symmetric point, the corresponding geodesic length will always twice of the original geodesic. If the original geodesic is not perpendicular to ETW brane, then the new geodesic in doubled geometry will not be smooth, which means it is not a "geodesic". Then we can have a shorter one. Therefore, with ETW brane, the geodesic will always be perpendicular to the ETW brane. ## Appendix B Boundary entropy with path integral In this section, we focus on the boundary entropy induced by the interface brane. By evaluating the partition function, we can obtain the boundary entropy without UV information, and the results are universal. ### A review of a single region boundary entropy and the conformal transformation Before discussing our cases, we first review the results in reference [18]. They consider a single CFT region with a boundary. The corresponding action is \[I=\frac{1}{16\pi G_{N}}\int_{N}\sqrt{-g}(R-2\Lambda)+\frac{1}{8\pi G_{N}}\int _{Q}\sqrt{-h}(K-T), \tag{110}\] where \(R\) denotes Ricci scalar, \(\Lambda\) is cosmological constant, \(K\) is extrinsic curvature, and \(T\) is the tension of the interface brane. Generally, we have \(\Lambda=-\frac{d(d-1)}{2L^{2}}\) where \(L\) is the AdS\({}_{d+1}\) radius (here we take \(d=2\) and \(\Lambda=-\frac{1}{L^{2}}\)). Besides, we have \(R=\frac{2d+2}{d-1}\Lambda\), so for \(d=2\) we have \(R=6\Lambda=-\frac{6}{L^{2}}\). While, for the integral on \(Q\), we need to solve the equation of motion and fix the location of brane. 
In the action we only have one region, so we only consider the second junction condition, which requires \[K_{ab}=(K-T)h_{ab}. \tag{111}\] Taking trace gives \(K=\frac{d}{d-1}T\), it means \(K=2T\) for \(d=2\). Besides, we still have \[K_{ab}=\frac{1}{L}\tanh\Big{(}\frac{\rho}{L}\Big{)}g_{ab}, \tag{112}\] which gives \(K=g^{ba}K_{ab}=\frac{d}{L}\tanh\big{(}\frac{\rho}{L}\big{)}\). For \(d=2\), we have \(LT=\tanh\big{(}\frac{\rho}{L}\big{)}\). Now we can simplify the action to (with Euclidean time \(\tau\)) \[I=\frac{-1}{16\pi G_{N}}\left(-\frac{4}{L^{2}}\int_{N}\sqrt{g}+K\int_{Q}\sqrt {h}\right). \tag{113}\] We keep \(K\) in action instead of \(T\) as because we can consider \(K\) as a variable that belongs to one region. Then, we calculate \(\int_{N}\sqrt{g}\) and \(\int_{Q}\sqrt{h}\), where \(h\) is the induced metric on the brane. Before calculating the volume, we introduce a UV cutoff to make it finite. To realize it, we apply a special conformal transformation (SCT) for the CFT. The general form of transformation is [66] \[x^{\prime}_{\mu}=\frac{x_{\mu}+c_{\mu}x^{2}}{1+2c\cdot x+c^{2}x^{2}}, \tag{112}\] where we define \(c_{\mu}=(c_{x},c_{\tau},c_{z}=0)\), \(c\cdot x=\sum_{\mu}c_{\mu}x_{\mu}\), \(c^{2}=\sum_{\mu}c_{\mu}^{2}\) and \(x^{2}=\sum_{\mu}x_{\mu}^{2}\). In the equation (102) of reference [56], the corresponding transformation is \(c_{\mu}=(0,c_{\tau}=c,0)\). While, in the equation (101) of reference [18], they consider a general transformation to \((x,\tau,z)\) \[x^{\prime} =\frac{x+c_{x}(x^{2}+\tau^{2}+z^{2})}{1+2(c_{x}x+c_{\tau}\tau)+(c _{x}^{2}+c_{\tau}^{2})(x^{2}+\tau^{2}+z^{2})}, \tag{113}\] \[\tau^{\prime} =\frac{\tau+c_{\tau}(x^{2}+\tau^{2}+z^{2})}{1+2(c_{x}x+c_{\tau} \tau)+(c_{x}^{2}+c_{\tau}^{2})(x^{2}+\tau^{2}+z^{2})},\] \[z^{\prime} =\frac{z}{1+2(c_{x}x+c_{\tau}\tau)+(c_{x}^{2}+c_{\tau}^{2})(x^{2} +\tau^{2}+z^{2})}.\] Here we briefly discuss the SCT in \(d=2\) CFT. Defining \(x_{\mu}=(x,\tau)\), \(b=b_{x}+\mathrm{i}b_{\tau}\) and \(z=x+\mathrm{i}\tau\), the transformation \(x_{\mu}=\frac{x_{\mu}+b_{\mu}x^{2}}{1+2bx+b^{2}x^{2}}\) can be expressed as \(z^{\prime}=\frac{z+bz\bar{z}}{1+(bz+bz)+bbz\bar{z}}\). On the other hand, the transformation above can be written as \(z^{\prime}=\frac{z}{1+bz}=\frac{1}{z^{-1}+\bar{b}}\). It is because \[z^{\prime}=\frac{z\bar{z}}{\bar{z}+\bar{b}z\bar{z}}=\frac{z\bar{z}(z+bz\bar{z })}{(\bar{z}+\bar{b}z\bar{z})(z+bz\bar{z})}=\frac{z\bar{z}(z+bz\bar{z})}{z\bar {z}+\bar{b}z^{2}\bar{z}+bz\bar{z}^{2}+bb\bar{z}\bar{z}z\bar{z}}=\frac{z+bz\bar {z}}{1+(b\bar{z}+\bar{b}z)+b\bar{b}z\bar{z}}. \tag{114}\] Then it is straightforward to check that the lines \(x=x_{0}\) and \(\tau=\tau_{0}\) will be mapped to circles except for some special lines that depend on \(b\). Now we consider the SCT in the AdS space. To simplify the problem, we consider that \(c_{\mu}=(c_{0},0,0)\), and the location of brane is \(x=z\sinh\delta+x_{0},\delta=\frac{\rho}{L}\). Then we can directly check that under (113), the brane is mapped to \[(x^{\prime}-d_{1})^{2}+\tau^{\prime 2}+(z^{\prime}-d_{2})^{2}-d_{3}=0, \tag{115}\] where \(d_{1}=\frac{1+2c_{0}x_{0}}{2c_{0}(1+c_{0}x_{0})}\), \(d_{2}=-\frac{\sinh\delta}{2c_{0}(1+c_{0}x_{0})}\) and \(d_{3}=\frac{\cosh^{2}\delta}{4c_{0}^{2}(1+c_{0}x_{0})^{2}}\). It is a sphere with radius \(R=\sqrt{d_{3}}=\frac{\cosh\delta}{|2c_{0}(1+c_{0}x_{0})|}\) and center \((d_{1},0,d_{2})\). 
For \(z^{\prime}=0\) and \(\tau^{\prime}=0\), we have \(x^{\prime}=\frac{1}{c_{0}},\frac{x_{0}}{1+c_{0}x_{0}}\), which means \((\frac{1}{c_{0}},0,0)\) is a fixed point for any \(x_{0}\). Besides, after translation of \(x^{\prime}\), we can set \(d_{1}=0\). And for \(x_{0}=0\) we have \(r_{D}=\frac{1}{2|c_{0}|}\) in [18]. Therefore, with a proper \(c_{\mu}\), we can transform the region \(x<x_{0}\) to a disk. After the SCT, the CFT region is on a disk with radius \(r_{D}\) in figure 30. While the disk radius is not important for boundary entropy. And the location of the center is \(z=r_{D}\sinh\frac{\rho^{*}}{L}\). It is because conformal transformation won't change the angle. The angle between the brane and the asymptotic CFT boundary is \(\frac{\pi}{2}-\psi\), and \(\tan\psi=\sinh\frac{\rho^{*}}{L}\). This is also consistent with the result before, where the location of center is \(z=\sqrt{d_{3}}\) and corresponding \(r_{D}=\frac{1}{|2c_{0}(1+c_{0}x_{0})|}\). 20 Footnote 20: In the following subsection, to simplify the problem, we may add an ETW brane artificially, which only acts as a UV-cutoff. However, from later results we will find that the value of radius \(r_{D}\) will not change the boundary entropy, which means we can consider the region bounded by an ETW brane at \(x_{0}\to+\infty\). Then we can apply the corresponding conformal transformation with \(c_{0}=-\frac{1}{x_{0}+\delta}\) and \(\delta\gtrsim 0\). It means the region on the left of \(x=-\frac{1}{c_{0}}=x_{0}+\delta\) will be mapped to a disk on the right of \(x^{\prime}=\frac{1}{c_{0}}=-(x_{0}+\delta)\), and the corresponding radius is \(r_{D}=\frac{(x_{0}+\delta)^{2}}{2\delta}\). Assuming \(\delta=1\) and \(x_{0}\gg 1\), we have \(r_{D}\approx\frac{x_{0}^{2}}{2}\). The corresponding brane after SCT will cross x-axis on \(x=\frac{1}{c_{0}}=-(x_{0}+\delta)\approx-x_{0}\) and \(x=\frac{1}{c_{0}}+2r_{D}\approx x_{0}^{2}-x_{0}\approx x_{0}^{2}\), which means the disk region will cover the whole space for \(x\to+\infty\). Now we discuss the integral \(\int_{N}\sqrt{g}\) and \(\int_{Q}\sqrt{h}\). With metric (7), we have \(\sqrt{g}=\left(\frac{L}{z}\right)^{3}\). 
Defining \(\bar{z}=r_{D}(\sinh\frac{\theta^{*}}{L}+\cosh\frac{\theta^{*}}{L})=r_{D}e^{ \rho^{*}/L}\), we have \[\begin{split}\int_{N}\sqrt{g}=& L^{3}\int_{0}^{\bar{z}}\mathrm{d}z\,\,\frac{1}{z^{3}}\int \mathrm{d}^{2}S=L^{3}\int_{0}^{\bar{z}}\mathrm{d}z\,\,\frac{1}{z^{3}}\pi\left( (r_{D}\cosh\frac{\rho^{*}}{L})^{2}-(z-r_{D}\sinh\frac{\rho^{*}}{L})^{2}\right) \\ =&\pi L^{3}\int_{0}^{\bar{z}}\mathrm{d}z\,\,\frac{1} {z^{3}}\left(r_{D}^{2}-z^{2}+2zr_{D}\sinh\frac{\rho^{*}}{L}\right)=\pi L^{3} \left(-\frac{r_{D}^{2}}{2}z^{-2}-2r_{D}\sinh\frac{\rho^{*}}{L}z^{-1}-\log z \right)\biggr{|}_{\varepsilon}^{\bar{z}}\\ =&\pi L^{3}\left(\frac{r_{D}^{2}}{2\varepsilon^{2}}- \frac{1}{2}e^{-2\rho^{*}/L}+2r_{D}\sinh\frac{\rho^{*}}{L}(\frac{1}{ \varepsilon}-\frac{1}{r_{D}}e^{-\rho^{*}/L})+\log\frac{\varepsilon}{r_{D}}e^{ -\rho^{*}/L}\right).\end{split} \tag{102}\] As for \(\int_{Q}\sqrt{h}\), because Q satisfies \[x^{2}+\tau^{2}+(z-r_{D}\sinh\frac{\rho^{*}}{L})^{2}=r_{D}^{2}\cosh^{2}\frac{ \rho^{*}}{L}, \tag{103}\] the induced metric is \[\begin{split}\mathrm{d}s^{2}=&\frac{L^{2}}{z^{2}} \left(\mathrm{d}z^{2}+\mathrm{d}\tau^{2}+\frac{1}{x^{2}}\left[(z-r_{D}\sinh \frac{\rho^{*}}{L})\mathrm{d}z+\tau\mathrm{d}\tau\right]^{2}\right)\\ =&\frac{L^{2}}{z^{2}}\left(\left(1+\frac{(z-r_{D} \sinh\frac{\rho^{*}}{L})^{2}}{x^{2}}\right)\mathrm{d}z^{2}+\left(1+\frac{\tau ^{2}}{x^{2}}\right)\mathrm{d}\tau^{2}+\frac{2\tau}{x^{2}}\left(z-r_{D}\sinh \frac{\rho^{*}}{L}\right)\mathrm{d}z\mathrm{d}\tau\right).\end{split} \tag{104}\] Figure 30: Location of the brane after conformal transformation. Therefore, we have volume form \[\begin{split}\sqrt{h}=&\frac{L^{2}}{z^{2}}\sqrt{\left(1+ \frac{(z-r_{D}\sinh\frac{\rho^{*}}{L})^{2}}{x^{2}}\right)\left(1+\frac{\tau^{2}} {x^{2}}\right)-\left(\frac{\tau}{x^{2}}\left(z-r_{D}\sinh\frac{\rho^{*}}{L} \right)\right)^{2}}\\ =&\frac{L^{2}}{z^{2}}\sqrt{\frac{r_{D}^{2}+z_{0}^{2 }}{r_{D}^{2}-\tau^{2}-z^{2}+2zz_{0}}}.\end{split} \tag{104}\] To simplify the notation, in the following we will denote \(z_{0}=r_{D}\sinh\frac{\rho^{*}}{L}\), then the integral region is \(Q:\ \tau\in[0,\sqrt{z_{0}^{2}+r_{D}^{2}-(z-z_{0})^{2}}],\ z\in[0,\sqrt{z_{0}^{2}+r_ {D}^{2}+z_{0}}]\). 
Therefore, the integral is \[\begin{split}\int_{Q}\sqrt{h}=& 4\int_{0}^{\sqrt{z_{0}^{2}+r_ {D}^{2}}+z_{0}}\mathrm{d}z\int_{0}^{\sqrt{z_{0}^{2}+r_{D}^{2}-(z-z_{0})^{2}}} \mathrm{d}\tau\frac{L^{2}}{z^{2}}\sqrt{\frac{r_{D}^{2}+z_{0}^{2}}{r_{D}^{2}- \tau^{2}-z^{2}+2zz_{0}}}\\ =& 4\int_{0}^{\sqrt{z_{0}^{2}+r_{D}^{2}}+z_{0}} \mathrm{d}z\ \frac{\pi L^{2}}{2z^{2}}\sqrt{r_{D}^{2}+z_{0}^{2}}=2\pi L^{2}\sqrt{r_{D}^{2}+ z_{0}^{2}}\left(\frac{1}{\varepsilon}+\frac{z_{0}-\sqrt{r_{D}^{2}+z_{0}^{2}}}{r_{D}^{2}}\right) \\ =& 2\pi L^{2}r_{D}\cosh\frac{\rho^{*}}{L}\left(\frac{1 }{\varepsilon}+\frac{-e^{-\rho^{*}/L}}{r_{D}}\right).\end{split} \tag{105}\] Finally we get the integral of action (103) \[\begin{split} I=&\frac{-1}{16\pi G_{N}}\left[-\frac {4}{L^{2}}\left(\pi L^{3}\left(\frac{r_{D}^{2}}{2\varepsilon^{2}}-\frac{1}{2} e^{-2\rho^{*}/L}+2r_{D}\sinh\frac{\rho^{*}}{L}(\frac{1}{\varepsilon}-\frac{1}{r_{D}} e^{-\rho^{*}/L})+\log\left(\frac{\varepsilon}{r_{D}}e^{-\rho^{*}/L}\right)\right) \right.\right.\\ &\left.\left.+K\left(2\pi L^{2}r_{D}\cosh\frac{\rho^{*}}{L}\left( \frac{1}{\varepsilon}+\frac{-e^{-\rho^{*}/L}}{r_{D}}\right)\right)\right]\\ =&\frac{4\pi L}{16\pi G_{N}}\left[\left(\frac{r_{D} ^{2}}{2\varepsilon^{2}}-\frac{1}{2}e^{-2\rho^{*}/L}+2r_{D}\sinh\frac{\rho^{*}} {L}(\frac{1}{\varepsilon}-\frac{1}{r_{D}}e^{-\rho^{*}/L})+\log\left(\frac{ \varepsilon}{r_{D}}e^{-\rho^{*}/L}\right)\right)\right.\\ &\left.\left.+\frac{KL}{2}r_{D}\cosh\frac{\rho^{*}}{L}\left(- \frac{1}{\varepsilon}+\frac{e^{-\rho^{*}/L}}{r_{D}}\right)\right]\\ =&\frac{L}{4G_{N}}\left[\frac{r_{D}^{2}}{2\varepsilon ^{2}}-\frac{1}{2}e^{-2\rho^{*}/L}+2r_{D}\sinh\frac{\rho^{*}}{L}(\frac{1}{ \varepsilon}-\frac{1}{r_{D}}e^{-\rho^{*}/L})+\log\left(\frac{\varepsilon}{r_ {D}}e^{-\rho^{*}/L}\right)+r_{D}\sinh\frac{\rho^{*}}{L}\left(-\frac{1}{ \varepsilon}+\frac{e^{-\rho^{*}/L}}{r_{D}}\right)\right]\\ =&\frac{L}{4G_{N}}\left(\frac{r_{D}^{2}}{2\varepsilon ^{2}}+\log\frac{\varepsilon}{r_{D}}-\frac{\rho^{*}}{L}-\frac{1}{2}+\frac{r_{D}} {\varepsilon}\sinh\frac{\rho^{*}}{L}\right),\end{split} \tag{106}\] where we use \(LT=\tanh\left(\frac{\rho}{L}\right)\) in the third equation. The final result is exactly the equation (101) in reference [18]. The corresponding boundary entropy is \[S_{\rm bdy}=-\left[I(\rho^{*})-I(0)\right]=\frac{\rho^{*}}{4G_{N}}. \tag{107}\] With \(LT=\tanh\left(\frac{\rho}{L}\right)\), we have \(S_{\rm bdy}=\frac{L\tanh^{-1}LT}{4G_{N}}=\frac{c\tanh^{-1}\left(LT\right)}{6}\). Later we will find it is universal and valid for other cases. Besides, we can also express boundary entropy as \(S_{\rm bdy}=\frac{c\rho^{*}}{6L}=\frac{c}{6}\tanh^{-1}\left(\sin\psi\right)\), where \(\psi\) is defined below (7). With (11b), we find that \(S_{\rm bdy}=\frac{c}{6}\log\left(\tan\left(\frac{\psi}{2}+\frac{\pi}{4}\right)\right)\), which is consistent with (10), where the prefactor \(1/3\) is because there are two branes and three regions for that case. Let's consider two branes. Under the same SCT, on the boundary CFT two branes will be mapped to two circles which are tangent at the fixed point, which is shown in figure 31. From the figure 31 (a), we can define the angle \(\rho\) which corresponds to the inner angle in the blue region. It means we define the direction of the ETW brane to be outwards to the bulk region. Then the corresponding boundary entropy is \(S_{\rm bdy}=\frac{\rho_{1}^{*}+\rho_{2}^{*}}{4G_{N}}\). We can consider the integral as two parts. 
One part is the larger disk and the other is the smaller disk, and the corresponding actions have opposite signs. For each action, we can calculate the boundary entropy with the method above. It is worth mentioning that if we set the larger ETW brane perpendicularly with angle \(\rho^{*}=0\), then it will not contribute to the boundary entropy. ### Boundary entropy for more than one region In the following, we consider two regions with different AdS radii that are separated by an interface brane, as shown in figure 32. We expect that the boundary entropy of this case is also \(S_{\rm bdy}=\frac{\rho_{1}^{*}+\rho_{2}^{*}}{4G_{N}}\), where \(\rho_{1}^{*},\ \rho_{2}^{*}\) are the corresponding angles of the brane in region 1 and 2. Actually, we can obtain a result consistent with (10). Figure 31: (a) AdS bulk dual of single CFT region before and after conformal transformation. (b) Single CFT region on the boundary before and after conformal transformation. We first consider figure 32 (a). If we connect two half planes of CFT, then obviously two bulk AdS regions will have an overlap. The proper way to understand it is in figure 32 (a), where we use some magic glue to connect two parts. Then we must discuss two bulk regions individually with a proper connection condition and a local coordinate transformation. For example, we should fix \(z\) and \(\tau\) but translate \(x\) when we go through the brane from one region to another. After special conformal transformation, we have the geometry shown in figure 32. To be concrete, we discuss the action (2.1). With connection condition (2.3b), we have \(\Delta K=2T\), and the action can be written as \[S_{\rm EH}= -\frac{1}{16\pi G_{(3)}}\left[\int_{\mathcal{M}_{1}}\mathrm{d}^ {3}x\ \sqrt{g_{1}}\left(R_{1}+\frac{2}{L_{1}^{2}}\right)+\int_{\mathcal{M}_{2}} \mathrm{d}^{3}x\ \sqrt{g_{2}}\left(R_{2}+\frac{2}{L_{2}^{2}}\right)+\int_{ \mathcal{S}}\mathrm{d}^{2}y\ \sqrt{h}(K_{1}-K_{2})\right]\] \[= -\frac{1}{16\pi G_{(3)}}\left[\int_{\mathcal{M}_{1}}\mathrm{d}^ {3}x\ \sqrt{g_{1}}\left(R_{1}+\frac{2}{L_{1}^{2}}\right)+\int_{\mathcal{S}} \mathrm{d}^{2}y\ \sqrt{h}K_{1}+\int_{\mathcal{M}_{2}}\mathrm{d}^{3}x\ \sqrt{g_{2}}\left(R_{2}+\frac{2}{L_{2}^{2}}\right)\right.\] \[\left.-\int_{\mathcal{S}}\mathrm{d}^{2}y\ \sqrt{h}K_{2}\right]=-\frac{1}{16\pi G_{(3)}}\sum_{ \alpha=1,2}\left[\int_{\mathcal{M}_{\alpha}}\mathrm{d}^{3}x\ \sqrt{g_{\alpha}}\left(R_{\alpha}+\frac{2}{L_{\alpha}^{2}}\right)+\int_{ \mathcal{S}_{\alpha}}\mathrm{d}^{2}y\ \sqrt{h}K_{\alpha}\right]=\sum_{\alpha=1,2}S_{\alpha}.\] (B.16) In reference [56] the left region is 1, the right region is 2 and the direction of brane and the extrinsic curvature \(K_{1,2}\) are from 1 to 2. But here we introduce the surface \(\mathcal{S}_{1,2}\), where \(\mathcal{S}_{1}=\mathcal{S}=-\mathcal{S}_{2}\). Therefore, directions of \(\mathcal{S}_{1,2}\) are defined from inside to outside. With this, we have \(S_{\rm EH}=\sum_{\alpha=1,2}S_{\alpha}\), and \(S_{\alpha}\) is the same as (B.4). So from the form of the action we can also show that the boundary entropy can be expressed as the sum of contributions Figure 32: (a) AdS bulk dual of two CFT regions before and after conformal transformation. (b) Two CFT regions on the boundary before and after conformal transformation. Here we show ETW with blue line, and for limit case we can take ETW to infinity. from the two regions. Besides, here we only consider the brane between two regions, but not the larger ETW brane, because we set the corresponding angle \(\rho=0\). 
## Appendix C Different phases and phase transition for measurements In this appendix, we give (i) a detailed derivation of the geodesic solutions in AdS and black hole spacetime, and (ii) more details on the geodesic length calculation in the bubble phase. ### Convention of metric and corresponding geodesics With the metric above, we can calculate the geodesic equation and its length with the following method. The geodesic equation is \[\frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\lambda^{2}}+\Gamma^{\mu}_{\alpha \beta}\frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\lambda}\frac{\mathrm{d}x^{\beta }}{\mathrm{d}\lambda}=f(\lambda)\frac{\mathrm{d}x^{\mu}}{\mathrm{d}\lambda}, \tag{102}\] where \(f(\lambda)\) is a function to be determined. 21 With the metric (101) and the coordinate \((t=t_{0},r=r(x),x=x)\), plugging Christoffel coefficient into (102), we have Footnote 21: Don’t confuse with the function \(f(r)\) in the metric (101). \[\ddot{x}+\frac{2}{r}\dot{r}\dot{x}=f(x)\dot{x}, \tag{103a}\] \[\ddot{r}+\frac{r}{L^{2}(\mu-1)-r^{2}}\dot{r}^{2}+r(\mu-1-\frac{r^{2}}{L^{2}}) \dot{x}^{2}=f(x)\dot{r}. \tag{103b}\] Therefore, with a fixed \(f(x)\), we only need to solve the differential equation \[\ddot{r}+\frac{r^{2}-2[L^{2}(\mu-1)-r^{2}]}{r[L^{2}(\mu-1)-r^{2}]}\dot{r}^{2}+ r(\mu-1-\frac{r^{2}}{L^{2}})=0. \tag{104}\] The general solution for this differential equation has two parameters, which we denote as \(c_{1},c_{2}\). For two situations \(\mu>1\) and \(\mu<1\), the general solution is \[r=\frac{1}{\sqrt{c_{1}-(\frac{1}{L^{2}(\mu-1)}-c_{1})\sinh^{2} \left[\sqrt{\mu-1}(x+c_{2})\right]}},\qquad\mu>1, \tag{105a}\] \[r=\frac{1}{\sqrt{c_{1}-(\frac{1}{L^{2}(1-\mu)}+c_{1})\sin^{2} \left[\sqrt{1-\mu}(x+c_{2})\right]}},\qquad\mu<1. \tag{105b}\] Here we only take positive branch because \(r>0\). For \(\mu>1\), we have the constraint \(0<c_{1}<\frac{1}{L^{2}(\mu-1)}\), while for \(\mu<1\), we have the constraint \(c_{1}>0\). It is obvious that for the limit \(\mu\to 1\), the geodesic equation will be a circle with redefinition \(\tilde{z}=1/r\). However, there is no unique solution for differential equation (104). With parameter shift \(x\to x+\frac{\pi}{\sqrt{\mu-1}}\), (105b) will change \(\sin\) to \(\cos\), which is equivalent to the shift \(c_{2}\to c_{2}+\frac{\pi}{2\sqrt{\mu-1}}\). While in (105a), another solution is given by parameter shift \(x\to x+\frac{\mathrm{i}\pi}{2\sqrt{\mu-1}}\), which is equivalent to changing \(\sinh\) to \(\mathrm{i}\cosh\). But we require \(c_{2}\) is real, which means these two geodesics are not equivalent. Therefore, the second solution for \(\mu>1\) is \[r^{-2}+(\frac{1}{L^{2}(\mu-1)}-c_{1}^{\prime})\sinh^{2}\left[\sqrt{\mu-1}(x+c_{2 })\right]=\frac{1}{L^{2}(\mu-1)}, \tag{111}\] where we redefine \(c_{1}^{\prime}=\frac{2}{L^{2}(\mu-1)}-c_{1}\). Now we can calculate the geodesic length, \[d=\int\mathrm{d}l\ \sqrt{g}=\int\mathrm{d}s=\int\sqrt{\frac{r^{\prime}(x)}{f}+r ^{2}}\ \mathrm{d}x, \tag{112}\] where \(f\) is the function defined in metric (109), and \(r^{\prime}(x)\) is the derivative of the geodesic. Then the integral can be solved and the final result is \[d=L\tanh^{-1}\Bigg{[}\frac{\tanh\left(\sqrt{\mu-1}(x+c_{2})\right)}{L\sqrt{c_{ 1}(\mu-1)}}\Bigg{]},\qquad\mu>1, \tag{113a}\] \[d=L\tanh^{-1}\Bigg{[}\frac{\tan\left(\sqrt{1-\mu}(x+c_{2})\right)}{L\sqrt{c_{ 1}(1-\mu)}}\Bigg{]},\qquad\mu<1. \tag{113b}\] Actually, in (110) and (113) we can notice that the solutions of two cases are related. 
For example, starting from (113a), if we now consider \(\mu<1\), then \(\sqrt{\mu-1}=\mathrm{i}\sqrt{1-\mu}\) and \(\mathrm{i}\) in \(\tanh\) will change it to \(-\mathrm{i}\tan\). Assuming two end points located at \((r=\frac{1}{\varepsilon},x=\pm\frac{1}{2}x_{0})\), for \(\mu<1\), we have parameter \(c_{1}=[\varepsilon^{2}+\frac{1}{L^{2}(1-\mu)}\sin^{2}\left(\sqrt{1-\mu}\frac{ x_{0}}{2}\right)]/\cos^{2}\left(\sqrt{1-\mu}\frac{x_{0}}{2}\right)\) and \(c_{2}=0\). Then the total geodesic length is \[\begin{split}\Delta d=& 2L\tanh^{-1}\Bigg{[}\frac{\tan \left(\sqrt{1-\mu}\frac{x_{0}}{2}\right)}{L\sqrt{c_{1}(1-\mu)}}\Bigg{]}\approx 2L \tanh^{-1}\Bigg{[}1-\frac{L^{2}(1-\mu)}{2\sin^{2}\left(\sqrt{1-\mu}\frac{x_{0} }{2}\right)}\varepsilon^{2}+\mathcal{O}(\varepsilon^{3})\Bigg{]}\\ \approx& 2L\cosh^{-1}\Bigg{(}\frac{\sin\left(\sqrt{1- \mu}\frac{x_{0}}{2}\right)}{L\sqrt{1-\mu}\varepsilon}\Bigg{)}\approx 2L \log\bigg{(}\frac{2\sin\left(\sqrt{1-\mu}\frac{x_{0}}{2}\right)}{L\sqrt{1-\mu }\varepsilon}\bigg{)},\end{split} \tag{114}\] Therefore, for \(x_{0}\ll\frac{1}{\sqrt{1-\mu}}\), we retain \(\Delta d=2L\log\frac{x_{0}/L}{\varepsilon}\). Similarly, for \(\mu>1\), we have parameter \(c_{1}=[\varepsilon^{2}+\frac{1}{L^{2}(\mu-1)}\sinh^{2}\left(\sqrt{\mu-1}\frac {x_{0}}{2}\right)]/\cosh^{2}\left(\sqrt{\mu-1}\frac{x_{0}}{2}\right)\) and \(c_{2}=0\). Then the total geodesic length is \[\begin{split}\Delta d=& 2L\tanh^{-1}\Bigg{[}\frac{\tanh \left(\sqrt{\mu-1}\frac{x_{0}}{2}\right)}{L\sqrt{c_{1}(\mu-1)}}\Bigg{]}\approx 2L \tanh^{-1}\Bigg{[}1-\frac{L^{2}(\mu-1)}{2\sinh^{2}\left(\sqrt{\mu-1}\frac{x_{0 }}{2}\right)}\varepsilon^{2}+\mathcal{O}(\varepsilon^{3})\Bigg{]}\\ \approx& 2L\cosh^{-1}\Bigg{(}\frac{\sinh\left(\sqrt{\mu-1} \frac{x_{0}}{2}\right)}{L\sqrt{\mu-1}\varepsilon}\Bigg{)}\approx 2L\log\bigg{(} \frac{2\sinh\left(\sqrt{\mu-1}\frac{x_{0}}{2}\right)}{L\sqrt{\mu-1} \varepsilon}\bigg{)}.\end{split} \tag{115}\] Therefore, for \(x_{0}\ll\frac{1}{\sqrt{\mu-1}}\), we retain \(\Delta d=2L\log\frac{x_{0}/L}{\varepsilon}\). While, for \(x_{0}\gg\frac{1}{\sqrt{\mu-1}}\), we retain \(\Delta d=2L\log\left(\frac{1}{\varepsilon L\sqrt{\mu-1}}\right)+L\sqrt{\mu-1} x_{0}\), which corresponds to volume-law entanglement. We focus on the unusual geodesic in (110). Before we argue that for (111a), \(0<c_{1}<\frac{1}{L^{2}(\mu-1)}\), which leads to a circle in some limit. While, for (110), if \(0<c_{1}<\frac{1}{L^{2}(\mu-1)}\), it is similar to (111a). But we can also have \(c_{1}<0\), it is special because corresponding derivative \(\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\) will be larger when \(c_{1}\) is smaller. Besides, there is a fixed point on (110) for \((x=-c_{2},r^{-1}=\sqrt{\frac{1}{L^{2}(\mu-1)}})\), which is on the horizon of black hole. It is one reason why we consider it unusual because usual geodesic will not touch the horizon. Analog to (111) and (111a), we have the length of geodesic for (110) that \[d=L\tanh^{-1}\Big{[}\sqrt{2-c_{1}L^{2}(\mu-1)}\mathrm{tanh}\left(\sqrt{\mu-1}( x+c_{2})\right)\Big{]}. \tag{112}\] In the following, we compare the length of geodesic for two cases with black hole metric. 
We now focus on the unusual geodesic (110). We argued before that (111a) requires \(0<c_{1}<\frac{1}{L^{2}(\mu-1)}\), which leads to a circle in some limit. For (110), the range \(0<c_{1}<\frac{1}{L^{2}(\mu-1)}\) behaves similarly to (111a); however, we can also have \(c_{1}<0\), which is special because the derivative \(\frac{\mathrm{d}r^{-1}}{\mathrm{d}x}\) becomes larger as \(c_{1}\) decreases. Moreover, (110) has a fixed point at \((x=-c_{2},r^{-1}=\sqrt{\frac{1}{L^{2}(\mu-1)}})\), which lies on the horizon of the black hole. This is one reason we call this geodesic unusual: an ordinary geodesic does not touch the horizon. In analogy with (111) and (111a), the length of the geodesic (110) is \[d=L\tanh^{-1}\Big{[}\sqrt{2-c_{1}L^{2}(\mu-1)}\tanh\left(\sqrt{\mu-1}( x+c_{2})\right)\Big{]}. \tag{112}\] In the following, we compare the geodesic lengths of the two cases in the black hole metric. If we assume the points \((r_{0},\pm\frac{x_{0}}{2})\) lie on the two geodesics, then \(c_{2}=0\), and the corresponding parameters \(c_{1}^{\mathrm{u}}\) for the usual geodesic (111a) and \(c_{1}^{\mathrm{n}}\) for the unusual geodesic (111c) are \[c_{1}^{\mathrm{u}}=\frac{r_{0}^{-2}+\frac{1}{L^{2}(\mu-1)}\sinh^{2}\left(\sqrt{ \mu-1}\frac{x_{0}}{2}\right)}{\cosh^{2}\left(\sqrt{\mu-1}\frac{x_{0}}{2} \right)},\qquad c_{1}^{\mathrm{n}}=\frac{r_{0}^{-2}+\frac{1}{L^{2}(\mu-1)}\left( \sinh^{2}\left(\sqrt{\mu-1}\frac{x_{0}}{2}\right)-1\right)}{\sinh^{2}\left( \sqrt{\mu-1}\frac{x_{0}}{2}\right)}. \tag{113}\] Plugging these into (111a) and (112), we obtain \[\Delta d^{\mathrm{u}}=2L\tanh^{-1}\Bigg{[}\Bigg{(}\frac{L^{2}(\mu-1)r_{0}^{-2 }+\sinh^{2}\left[\sqrt{\mu-1}\frac{x_{0}}{2}\right]}{\cosh^{2}\left[\sqrt{\mu- 1}\frac{x_{0}}{2}\right]}\Bigg{)}^{-\frac{1}{2}}\tanh\left(\sqrt{\mu-1}(x+c_{ 2})\right)\Bigg{]}, \tag{114a}\] \[\Delta d^{\mathrm{n}}=2L\tanh^{-1}\Bigg{[}\Bigg{(}\frac{-L^{2}(\mu-1)r_{0}^{-2 }+\cosh^{2}\left[\sqrt{\mu-1}\frac{x_{0}}{2}\right]}{\sinh^{2}\left[\sqrt{\mu- 1}\frac{x_{0}}{2}\right]}\Bigg{)}^{\frac{1}{2}}\!\tanh\left(\sqrt{\mu-1}(x+c_{ 2})\right)\Bigg{]}. \tag{114b}\] For the usual geodesic we have \(c_{1}<\frac{1}{L^{2}(\mu-1)}\), which means \(r_{0}^{-2}<c_{1}<\frac{1}{L^{2}(\mu-1)}\). For the unusual geodesic we also have \(c_{1}<\frac{1}{L^{2}(\mu-1)}\), which means \(r_{0}^{-2}<\frac{1}{L^{2}(\mu-1)}\). Therefore, with \(r_{0}^{-2}L^{2}(\mu-1)<1\), we can directly check that \[\Bigg{(}\frac{L^{2}(\mu-1)r_{0}^{-2}+\sinh^{2}\left[\sqrt{\mu-1}\frac{x_{0}}{ 2}\right]}{\cosh^{2}\left[\sqrt{\mu-1}\frac{x_{0}}{2}\right]}\Bigg{)}^{-\frac{ 1}{2}}<\Bigg{(}\frac{-L^{2}(\mu-1)r_{0}^{-2}+\cosh^{2}\left[\sqrt{\mu-1}\frac{ x_{0}}{2}\right]}{\sinh^{2}\left[\sqrt{\mu-1}\frac{x_{0}}{2}\right]}\Bigg{)}^{ \frac{1}{2}}. \tag{115}\] Therefore, \(\Delta d^{\mathrm{u}}<\Delta d^{\mathrm{n}}\): the usual geodesic is the global minimum with the shortest length. Moreover, setting \(r_{0}^{-1}=\varepsilon\), we can simplify (114b) to obtain \[\Delta d= 2L\tanh^{-1}\Bigg{[}\Bigg{(}1-\frac{L^{2}(\mu-1)\varepsilon^{2} }{\cosh^{2}\left[\sqrt{\mu-1}\frac{x_{0}}{2}\right]}\Bigg{)}^{\frac{1}{2}} \Bigg{]}\approx 2L\tanh^{-1}\Bigg{[}1-\frac{L^{2}(\mu-1)}{2\cosh^{2}\left[\sqrt{\mu-1} \frac{x_{0}}{2}\right]}\varepsilon^{2}+\mathcal{O}(\varepsilon^{3})\Bigg{]} \tag{116}\] \[\approx 2L\cosh^{-1}\Bigg{(}\frac{\cosh\left(\sqrt{\mu-1}\frac{x_{0}}{2} \right)}{L\sqrt{\mu-1}\varepsilon}\Bigg{)}\approx 2L\log\bigg{(}\frac{2\cosh\left( \sqrt{\mu-1}\frac{x_{0}}{2}\right)}{L\sqrt{\mu-1}\varepsilon}\bigg{)}.\] However, unlike (111a), for \(x_{0}\ll\frac{1}{\sqrt{\mu-1}}\) we do not recover log-law entanglement. For \(x_{0}\gg\frac{1}{\sqrt{\mu-1}}\), we find \(\Delta d=2L\log\left(\frac{1}{\varepsilon L\sqrt{\mu-1}}\right)+L\sqrt{\mu-1} x_{0}\), which corresponds to volume-law entanglement. 
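The inequality (115), and hence \(\Delta d^{\mathrm{u}}<\Delta d^{\mathrm{n}}\), can also be confirmed numerically. Below is a small sketch with illustrative parameter values (not taken from the text):

```python
# Numerical comparison of the usual and unusual geodesic lengths (114a, 114b).
import numpy as np

L, mu = 1.0, 2.0
k = np.sqrt(mu - 1.0)

def lengths(r0, x0):
    a = L**2*(mu - 1)/r0**2                     # must satisfy a < 1
    s2, c2 = np.sinh(k*x0/2)**2, np.cosh(k*x0/2)**2
    t = np.tanh(k*x0/2)
    d_u = 2*L*np.arctanh(np.sqrt(c2/(a + s2))*t)
    d_n = 2*L*np.arctanh(np.sqrt((c2 - a)/s2)*t)
    return d_u, d_n

for r0, x0 in [(10.0, 0.5), (100.0, 1.0), (50.0, 2.0)]:
    d_u, d_n = lengths(r0, x0)
    assert d_u < d_n                            # usual geodesic is shorter
    print(r0, x0, d_u, d_n)
```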
### Details of calculation for the geodesic in the bubble-inside-horizon phase Using (114) and (115), we noted that a geodesic segment running along the horizon always leads to a longer total length. Now we compare the two cases shown in figure 33. Figure 33: Two possible geodesics in region II in the bubble-inside-horizon phase. According to (100), assuming \((r,x)=(\varepsilon^{-1},\tilde{x}_{0})\) is located on the geodesic, then from (102) we have \[\Delta d=L_{2}\tanh^{-1}\Bigg{[}\bigg{(}1-\frac{L_{2}^{2}(\mu-1) \varepsilon^{2}}{\cosh^{2}\left[\sqrt{\mu-1}\tilde{x}_{0}\right]}\bigg{)}^{ \frac{1}{2}}\Bigg{]}\approx L_{2}\log\bigg{(}\frac{2\cosh\left(\sqrt{\mu-1} \tilde{x}_{0}\right)}{L_{2}\sqrt{\mu-1}\varepsilon}\bigg{)}. \tag{103}\] Now we show that \(L_{2}\log\left(\frac{2\cosh\left(\sqrt{\mu-1}\tilde{x}_{0}\right)}{L_{2}\sqrt{ \mu-1}\varepsilon}\right)<L_{2}\log\left(\frac{2\cosh\left(\sqrt{\mu-1} \tilde{x}_{1}\right)}{L_{2}\sqrt{\mu-1}\varepsilon}\right)+r_{H}(\tilde{x}_{0 }-\tilde{x}_{1})\) for \(0<\tilde{x}_{1}<\tilde{x}_{0}\); recall that \(r_{H}=L_{2}\sqrt{\mu-1}\). Define the function \(f(x)=\sqrt{\mu-1}x-\log\left[\cosh\left(\sqrt{\mu-1}x\right)\right]\); the statement above is then equivalent to \(f(\tilde{x}_{1})<f(\tilde{x}_{0})\) for \(\tilde{x}_{1}<\tilde{x}_{0}\). Up to a rescaling of \(x\), we can define \(\tilde{f}(x)=x-\log\left[\cosh\left(x\right)\right]\), with \(\tilde{f}^{\prime}(x)=1-\tanh x>0\), so \(f\) is indeed increasing. Therefore, we only consider the geodesic with three parts, as in the main text. To solve (113) in the main text, we mentioned that \(\beta\) must be of the same order as \(\delta\), with \(|\delta-\beta|\ll\delta\). Here we give a proof. The intuitive argument is that for \(|\delta-\beta|\sim\mathcal{O}(\delta)\) the geodesic corresponds to a volume law, while a log-law geodesic can be found, as we show below. We first construct a geodesic with log-law behavior for \(\varepsilon\ll r_{H}^{-1}\ll\delta\ll 1\). Assume \(\delta\approx\beta\) and \(b_{1}^{\prime}=b_{1}\) in (114) and (115); then the total length of parts \(l_{1}\) and \(l_{2}\) is smaller than the length of the full unusual geodesic in (102): \[\text{Length}[l_{1}+l_{2}]<2L_{2}\log\bigg{(}\frac{2\cosh\left( \sqrt{\mu-1}(\delta-\beta)\right)}{L_{2}\sqrt{\mu-1}\varepsilon}\bigg{)}. \tag{104}\] Assuming \((\delta-\beta)\sim\mathcal{O}(r_{H}^{-1})\), the term \(L_{2}\log\big{(}2\cosh\left(\sqrt{\mu-1}(\delta-\beta)\right)\big{)}\) on the right-hand side of (104) is \(\sim\mathcal{O}(1)\), apart from the UV cutoff and the divergence of \(\log\sqrt{\mu-1}\). For part \(l_{3}\), any point \((r_{0},x_{0})\) on it satisfies \(|\delta-x_{0}|<2|\delta-\beta|\sim\mathcal{O}(r_{H}^{-1})\), which means \(x_{0}\sim\mathcal{O}(\delta)\gg r_{0}^{-1}\). Therefore, with (100), we have \[\text{Length}[l_{3}]\approx L_{1}\log\bigg{(}\frac{2\sin x_{0}}{L _{1}r_{0}^{-1}}\bigg{)}, \tag{105}\] which corresponds to log-law entanglement for \(x_{0}\ll 1\). We have therefore constructed a geodesic with log-law entanglement. Now we show that for \(|\delta-\beta|\sim\mathcal{O}(\delta)\) the geodesic corresponds to a volume law. We consider the length of part \(l_{1}\), which corresponds to equation (100). Assume \(\beta=\eta^{\prime}\delta\) with \(\eta^{\prime}\sim\mathcal{O}(1)<1\) and \((1-\eta^{\prime})\sim\mathcal{O}(1)<1\); with (101), we have \[\text{Length}[l_{1}]\approx L_{2}\log\bigg{(}\frac{2\cosh\left(\sqrt{\mu-1}(1- \eta^{\prime})\delta\right)}{L_{2}\sqrt{\mu-1}\varepsilon}\bigg{)}. \tag{102}\] In the limit \(\sqrt{\mu-1}\delta\sim\mathcal{O}(\frac{\delta}{r_{H}^{-1}})\gg 1\), the right-hand side of (102) simplifies to \(\text{r.h.s.}\approx L_{2}\log\Big{(}\frac{\exp\left(L_{2}^{-1}r_{H}(1-\eta^{ \prime})\delta\right)}{L_{2}\sqrt{\mu-1}\varepsilon}\Big{)}=L_{2}\log\Big{(} \frac{r_{H}^{-1}}{\varepsilon}\Big{)}+r_{H}(1-\eta^{\prime})\delta\), which corresponds to volume-law entanglement. This completes the proof that \(|\delta-\beta|\ll\delta\). 
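The monotonicity argument above can be checked numerically. The sketch below uses illustrative values and the identification \(r_{H}=L_{2}\sqrt{\mu-1}\) noted above:

```python
# Direct length (103) vs. a kinked path with a segment along the horizon.
import numpy as np

L2, mu, eps = 1.0, 5.0, 1e-6
k = np.sqrt(mu - 1.0)
rH = L2*k                            # identification r_H = L2 sqrt(mu-1)

def direct(x):
    return L2*np.log(2*np.cosh(k*x)/(L2*k*eps))

x0 = 2.0
for x1 in (0.2, 0.5, 1.0, 1.9):      # 0 < x1 < x0
    kinked = direct(x1) + rH*(x0 - x1)
    assert direct(x0) < kinked       # horizon segment is never shorter
    print(x1, direct(x0), kinked)
```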
## Appendix D Tensor network realization of Python's Lunch ### Brief review of Python's Lunch property Here we briefly review some basics of complexity and Python's Lunch. The first notion is the complexity of a unitary operator \(U\), defined as the minimal number of 2-qubit gates \(g\) needed to prepare \(U\), e.g., \(U=g_{n}g_{n-1}...g_{1}\); the corresponding gate count satisfies \(\mathcal{C}(U)=\mathcal{C}(U^{\dagger})\). Similarly, we can define the relative complexity between two states \(\ket{\psi}\) and \(\ket{\phi}\) as the complexity of a unitary transformation satisfying \(\ket{\psi}=U\ket{\phi}\). Because of gauge redundancy there is more than one such \(U\), and we must minimize over all possible \(\mathcal{C}(U)\); we therefore define \(\mathcal{C}(\psi,\phi)=\min\mathcal{C}(U)=\mathcal{C}(\phi,\psi)\). In this article we always consider black holes, which can be modeled as a quantum computer with \(N\) qubits evolving under some all-to-all Hamiltonian or discrete gates. The original theory always has a Hamiltonian, but to realize certain properties we may model it with discrete gates. Under this equivalence, we can consider the complexity of a black hole or, more generally, of a configuration of spacetime geometry. In reference [63], the authors consider restricted and unrestricted complexity, and focus on the former. Here we briefly compare the two. A two-sided black hole can be mapped to a maximally entangled state, where \(N\) qubits on the left are maximally entangled with \(N\) qubits on the right, possibly followed by some unitary evolution on both sides. The difference between restricted and unrestricted complexity is whether the \(U\) that maps the state back to a maximally entangled state is required to act on only one side; for example, in the restricted case we apply \(U\) only on the right \(N\) qubits. However, for the case we are interested in, a one-sided black hole, the two definitions coincide, and it is the post-selection that increases the complexity exponentially. Now we discuss the Python's Lunch geometry and introduce the conjecture for its complexity from reference [63]. A simple example of a Python's Lunch is shown in figure 34. There are two minimal areas and one maximal area, and the corresponding numbers of qubits from left to right are \(N\), \((1+\alpha)N\) and \((1+\gamma)N\) with \(\alpha>\gamma>0\). In the language of state complexity, we want to calculate \(\mathcal{C}(\ket{I}\ket{0}^{\otimes m_{L}},\ket{\psi}\ket{0}^{\otimes m_{R}})\), where \(m_{L}=\alpha N\) and \(m_{R}=(\alpha-\gamma)N\). In that reference, the authors use the Grover algorithm to construct a decomposition of \(U\) with \(\ket{\psi}\ket{0}^{\otimes m_{R}}=U\ket{I}\ket{0}^{\otimes m_{L}}\). The total number of gates needed to construct \(U\) and apply it to the system is \(2^{\frac{m_{R}}{2}}\mathcal{C}_{\rm TN}\), where \(\mathcal{C}_{\rm TN}\) is the number of nodes in the tensor network, so the complexity is \(\mathcal{C}(U)=2^{\frac{m_{R}}{2}}\mathcal{C}_{\rm TN}\). The authors also argue that for a general state \(|I\rangle\) the corresponding complexity is unchanged, and they give a naive argument for the complexity of the unitary transformation \(U\) that leads to a larger gate count but the same main conclusion. 
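For concreteness, this counting is easy to tabulate. The numbers below are purely illustrative (they are not taken from the text):

```python
# Grover-based gate count 2^(m_R/2) C_TN vs. naive post-selection 2^(m_R) C_TN.
import math

N, alpha, gamma = 60, 0.5, 0.1       # qubit counts N, (1+alpha)N, (1+gamma)N
m_L = alpha * N                      # ancillas added on the left
m_R = (alpha - gamma) * N            # qubits projected out on the right
C_TN = N * math.log(N)               # tensor-network size, using C_TN >= N log N

print(m_L, m_R)
print("Grover:        ", 2**(m_R / 2) * C_TN)
print("post-selection:", 2**m_R * C_TN)
```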
Starting from the initial state \(|I\rangle\), we apply the unitary evolution and form the left side of figure 34. Then, by adding ancillas to the system, we can apply more unitary evolution and form the middle part of the diagram. However, for the right-hand side we have a smaller system, which means we must apply some measurements and project out some qubits. After applying further unitary evolution, we obtain the final state \(|\psi\rangle\). In total, we apply \(\mathcal{C}_{\rm TN}\) gates in this process, but it requires measurements, which are non-unitary. To project \(m_{R}\) additional qubits onto \(|0\rangle^{\otimes m_{R}}\), we must take the post-selection into account: there are \(2^{m_{R}}\) possible outcomes and only one is the desired one. Therefore, to successfully construct the state \(|\psi\rangle\) from \(|I\rangle\), on average we must repeat this process \(2^{m_{R}}\) times. Altogether, the total number of unitary gates we need is \(2^{m_{R}}\mathcal{C}_{\rm TN}\), which is almost the same as for the Grover algorithm, the only difference being the prefactor in the exponent. Figure 34: Python's Lunch geometry with a quantum circuit realization. Here we add some degrees of freedom after the unitary evolution of the initial system. Then we apply a unitary transformation to the whole system and a projective measurement in some subregion. Based on the discussion above, they propose a conjecture about the complexity of a Python's Lunch. For a Python's Lunch geometry with min-max-min areas \(\mathcal{A}_{L},\mathcal{A}_{\rm max}\) and \(\mathcal{A}_{R}\), and with \(\mathcal{A}_{L}<\mathcal{A}_{R}\), the restricted complexity of the right system is \[\mathcal{C}_{R}(U)=\text{const.}\times\mathcal{C}_{\rm TN}\exp{\left[\frac{1}{ 2}\frac{\mathcal{A}_{\rm max}-\mathcal{A}_{R}}{4G\hbar}\right]}, \tag{120}\] where \(\mathcal{C}_{\rm TN}=\frac{V}{G\hbar L_{\rm AdS}}\) is set by the volume of the wormhole. Applying this conjecture to the case in figure 34, we have \(\mathcal{A}_{L}\approx N\cdot 4G\hbar\), \(\mathcal{A}_{\rm max}\approx(1+\alpha)N\cdot 4G\hbar\) and \(\mathcal{A}_{R}\approx(1+\gamma)N\cdot 4G\hbar\), so \(\mathcal{C}_{R}(U)=\text{const.}\times\mathcal{C}_{\rm TN}\ e^{[(\alpha- \gamma)N]/2}\), with \(\mathcal{C}_{\rm TN}\geq N\log N\). To be more concrete, the two minima are well defined as local minima, but the maximum is more subtle: one first chooses a foliation of the geometry, known as a "sweepout", and finds the global maximum along it; minimizing this maximum over all foliations then gives \(\mathcal{A}_{\rm max}\). Correspondingly, they also give a covariant version of the conjecture above. For a covariant Python's Lunch geometry with min-max-min generalized entropies \(S_{L}^{\rm(gen)}\), \(S_{\rm max}^{\rm(gen)}\) and \(S_{R}^{\rm(gen)}\), with \(S_{L}^{\rm(gen)}<S_{R}^{\rm(gen)}\), the restricted complexity of the right system is \[\mathcal{C}_{R}(U)={\rm const.}\times\mathcal{C}_{\rm TN}\exp\bigg{[}\frac{1}{2} \left(S_{\rm max}^{\rm(gen)}-S_{R}^{\rm(gen)}\right)\bigg{]}, \tag{104}\] where \(\mathcal{C}_{\rm TN}\) is the size of the tensor network. 
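As a numerical illustration of the conjecture (120), with hypothetical areas expressed in units of \(4G\hbar\) as in figure 34:

```python
# Evaluate C_R(U) = const * C_TN * exp[(A_max - A_R)/(8 G hbar)] for figure 34.
import math

N, alpha, gamma = 60, 0.5, 0.1
A_L, A_max, A_R = N, (1 + alpha)*N, (1 + gamma)*N   # areas in units of 4*G*hbar
C_TN = N * math.log(N)

C_R = C_TN * math.exp(0.5*(A_max - A_R))            # conjecture (120)
print(C_R, C_TN * math.exp((alpha - gamma)*N/2))    # identical expressions
```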
### Python's Lunch realization In this section, we construct a tensor network realizing a Python's Lunch, which is related to the bubble-inside-horizon phase. We consider a state dual to such a geometry. The inner AdS space can be regarded as the initial state before measurements, and the black hole geometry as the effect of the general measurements. Therefore, we first construct the inner AdS space with a MERA, and then build the additional black hole region with a tensor network consisting of both unitary evolution and post-selection. We illustrate our construction in figure 35. The inner MERA tensor network is shown in gray, and the external tensor network of measurements is shown in blue and orange. We can construct the MERA from unitary circuits and ancilla qubits using the method of, e.g., reference [67], which makes the complexity easier to compute. The key is to design the tensor network of the quantum quench and to show that for some parameters it corresponds to the bubble-inside-horizon phase with exponentially large complexity, while for other parameters it corresponds to the bubble-outside-horizon phase with much smaller complexity. A natural construction is shown in figure 35. There are two layers of tensors for the measurements (from the interior outward, we denote them as the first and second layers). Figure 35: Python's Lunch realization with a MERA and additional layers of tensor network. Here the gray disk corresponds to the MERA, with the black dots denoting degrees of freedom on the boundary. The blue dots denote degrees of freedom after measurements. The orange dots denote ancilla qudits. The green and purple curves indicate two geodesics. The bond dimension of each tensor is to be determined below. We write the bond dimension as \(D^{(\cdot)}\) and simply use the exponent \((\cdot)\) to label it. Since we want the dimension of the Hilbert space of the boundary qubits to be invariant after the weak measurements, i.e. the dimensions of the MERA state before and after the weak measurements to be the same, for simplicity we set the exponents of the legs in the MERA and of the legs in the outermost layer of the tensor network to 1. Moreover, we expect a min-max-min structure in the tensor network for a Python's Lunch. The first minimum can be taken at the center of the MERA. For circles with the same center as the MERA, the perimeter grows with the radius, and the maximum is located near the boundary of the MERA. If we increase the radius further, the area depends on the bond dimension of the legs between the two external layers of tensors, which we define as \(\gamma\). Therefore, \(\gamma<1\) gives another minimal area, and the corresponding geometry can be regarded as a Python's Lunch. If instead \(\gamma=1\), there is no maximum and no second minimum, which means there is no Python's Lunch. (We do not consider \(\gamma>1\), because it may induce additional post-selection on the second layer.) Comparing the area of such circles with the corresponding areas in the bubble phase of appendix C, we find that the bubble-inside-horizon phase corresponds to \(\gamma<1\), while the bubble-outside-horizon phase corresponds to \(\gamma=1\). We also require the tensors in the weak measurement to be perfect tensors [68], which is helpful for computing geodesics and minimal surfaces. We have thus constructed a tensor network realizing the two phases and the Python's Lunch. Intuitively, from the discussion above, we expect that for \(\gamma<1\) the complexity is exponentially large, while for \(\gamma=1\) it cannot be larger than a power law. Now we discuss the complexity of figure 35 in detail. First, we discuss the ways of contracting the tensors in the tensor network. By the definition of complexity, we must decompose the tensor network into many small unitary tensors and apply them to the initial qubits one by one. For each tensor, there are two ways to contract it. 
One way is to apply post-selection, choosing the maximally entangled state, which can be viewed as a straight line in the tensor network, as shown in figure 36 (first panel). This method requires no further conditions. Although it is general, it may involve more post-selection and hence lead to a larger complexity. The other way is to contract the tensor by applying it as a unitary transformation. In fact, we can always divide the legs of a tensor into two groups so as to regard it as a unitary transformation. For example, for a tensor with four bonds of dimensions \((1,\beta,\beta,\gamma)\), we can still regard it as a unitary tensor by dividing the legs into two equal parts with bond dimension \(\beta+\frac{1+\gamma}{2}\). However, even though a tensor can be seen as unitary, there may be additional constraints. Consider one tensor in the measurement part: because of the rotation symmetry, we require the two bonds in the tangent (angular) direction to be one "in" and one "out" with the same bond dimension. Moreover, tensors in the first layer are connected to the boundary of the MERA, and the corresponding bond should also be "in". Therefore, if we require \(\gamma\neq 1\), the tensor is not a unitary transformation, because the total bond dimensions of "in" and "out" differ. This is why we add additional dangling bonds on each site, whose bond dimension is to be determined so as to minimize the post-selection. Since additional bonds increase the number of post-selections, there is a trade-off in the complexity. To summarize, in the first method we need no additional bonds but must post-select onto the maximally entangled state, while in the second method we need to add some dangling bonds but may have less post-selection. We therefore need to compare the two and choose the smaller one. As shown in figure 36, we first consider the tensors with bond dimensions \((1,\beta,\beta,\gamma)\) in the first layer that connect to the boundary of the MERA. Because of the rotation symmetry, we do not need to consider the two tangent bonds with dimension \(\beta\). If we naively apply post-selection on the bond with dimension 1 (to connect to the boundary of the MERA), the Hilbert space of the post-selection is \(D^{1}\cdot D^{1}=D^{2}\). However, we can use the property of perfect tensors to reduce the dimension of the post-selection. We can divide the total bond into two equal parts with dimension \(\frac{1+\gamma}{2}<1\). Then we separate the bond with dimension 1 into two groups with dimensions \(\frac{1+\gamma}{2}\) and \(\frac{1-\gamma}{2}\), and apply the tensor as a unitary transformation on the bond with dimension \(\frac{1+\gamma}{2}\). What remains is to apply post-selection on the remaining bond with dimension \(\frac{1-\gamma}{2}\), which corresponds to a Hilbert space of dimension \(D^{\frac{1-\gamma}{2}}\cdot D^{\frac{1-\gamma}{2}}=D^{1-\gamma}<D^{2}\). This means that with the first method we must at least apply post-selection on a Hilbert space of dimension \(D^{1-\gamma}\), as shown in figure 36 (second panel). Now we consider the second method, where we add a dangling bond with dimension \(\alpha\). Because we want to apply the tensor as a unitary transformation on the boundary of the MERA, this requires \(\alpha\geq 1-\gamma\). Therefore, with a dangling bond of dimension \(\alpha=1-\gamma\), we must apply post-selection on this bond after connecting the tensor to the MERA boundary as a unitary transformation. 
This means the dimension of the Hilbert space under post-selection is \(D^{\alpha}=D^{1-\gamma}\), as shown in figure 36 (fourth panel). Therefore, for both methods we must at least apply post-selection on a Hilbert space of dimension \(D^{1-\gamma}\). This equivalence is illustrated in figure 36: we can bind the two bonds with dimension \(\frac{1-\gamma}{2}+\frac{1-\gamma}{2}\) that are post-selected onto the maximally entangled state in the first method, and regard them as the additional dangling bond added in the second method. This result is also consistent with the naive argument above, where the number of post-selections is \((D^{1-\gamma})^{N}\), with \(N\) the number of dangling bonds in one layer. Figure 36: Diagram of the equivalence between post-selection onto the maximally mixed state and general post-selection on an ancilla with dimension \(D^{1-\gamma}\). Besides, the first and last tensors we add in each layer are special, and there the bond with dimension \(\beta\) is vital. As shown in figure 35, for the first layer we start constructing the network from the tensor located at A. For this tensor, the bond with dimension 1 is "in", and the bonds with dimensions \(\beta,\beta\) and \(\gamma\) are "out", so there are two additional "out" bonds. We may then use the dangling bonds to decrease the dimension of the post-selection. For example, if \(1+\alpha>\gamma+2\beta\), we can divide the dangling bond into two parts with dimensions \((-1+\alpha+\gamma+2\beta)/2\) and \((1+\alpha-\gamma-2\beta)/2\) as "in" and "out" bonds, so that the total dimensions of the "in" and "out" bonds are the same. The first tensor then needs post-selection on a Hilbert space of dimension \(D^{\frac{(1+\alpha-\gamma-2\beta)}{2}}\). For \(1+\alpha<\gamma+2\beta\), a naive method is to directly post-select the bond with dimension 1 onto the maximally entangled state (without introducing additional ancillas). For the last tensor, located at B, we find that after connecting it to the whole network there are two bonds with dimension \(\beta\) left; to connect them, we need to apply post-selection on them. Therefore, each of the two layers involves additional post-selections; nevertheless, the complexity resulting from these post-selections does not scale with the total number of qudits in one layer. The discussion above is, strictly speaking, only valid for the first layer with \(\gamma<1\). For the second layer, the method of connecting tensors is similar. By rotation symmetry, we focus on the tensors other than the first and last ones, and we need not consider the two tangent bonds with dimension \(\beta\). As for the first layer, we can set \(\alpha^{\prime}=\alpha=1-\gamma\). Then we can regard the two bonds with dimensions \(\gamma\) and \(\alpha^{\prime}\) as "in" bonds and the bond with dimension 1 as an "out" bond, so the tensor can be connected as a unitary transformation. The difference between the first and second layers is that, for the first layer, the dangling bonds act as the Hilbert space under post-selection, while for the second layer, the dangling bonds act as ancilla qudits; these ancilla qudits are coupled to the system without the need for post-selection. Post-selection is still needed for the first and last tensors, but this does not scale with the system size. 
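A toy version of this bookkeeping (with hypothetical values of \(D\), \(\gamma\) and \(N\)) is sketched below:

```python
# Post-selection cost per first-layer tensor and the resulting overall scaling.
D, gamma, N = 2, 0.5, 40

per_tensor = D**(1 - gamma)          # post-selected dimension per tensor
print(per_tensor, per_tensor**N)     # ~ D^{(1-gamma) N} repetitions overall

# gamma = 1 (bubble-outside-horizon phase): no extensive post-selection
print((D**(1 - 1))**N)               # -> 1
```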
To summarize, for \(\gamma<1\), which corresponds to the Python's Lunch geometry, the complexity of the tensor network is about \(\mathcal{C}(U)=(\text{const.})\cdot D^{(1-\gamma)N}\), where \(N\) is the total system size (the number of qudits on the boundary), and (const.) includes the complexity contribution from the MERA (and from the first and last tensors in the two external layers, which do not scale with the system size). For \(\gamma=1\), which corresponds to the bubble-outside-horizon phase without a Python's Lunch, the complexity is \(\mathcal{C}(U)=(\text{const.})^{\prime}\), which comes only from the contribution of the MERA (and from connecting the first and last tensors in the angular direction in the two external layers). Some remarks on the discussion above are in order. (i) We require the tensors in the tensor network realization of the Python's Lunch to be perfect tensors. This is because the requirement is useful when we compute geodesic lengths and regard tensors as unitary transformations. For example, in the Python's Lunch geometry, when we consider a geodesic bounded by two end points on the boundary, there is a local minimum, shown in figure 35 with label (b). The length of this geodesic is \(d=(\log D^{\gamma})\cdot L\), where \(L\) is the size of the subsystem, and it corresponds to volume-law entanglement. The minimal geodesic, however, is the one labeled (a) in figure 35, with length \(d=\log D\cdot\log L+\text{const.}\), where the \(\log L\) and const. terms correspond to the contributions from the AdS region and from the boundary region associated with the measurements; it leads to log-law entanglement. These geodesics can be obtained with the greedy algorithm for perfect tensors in reference [68]. For \(\gamma=1\), there are no locally minimal geodesics, and the only one has the length \(d=\log D\cdot\log L+\text{const.}\). These
2301.00307
Bending Deformation Driven by Molecular Rotation
In recent years, certain molecular crystals have been reported to possess surprising flexibility by undergoing significant elastic or plastic deformation in response to mechanical loads. However, despite this experimental evidence, there currently exists no atomistic mechanism to explain the physical origin of this phenomenon from numerical simulations. In this study, we investigate the mechanical behavior of three naphthalene diimide derivatives, which serve as representative examples, using direct molecular dynamics simulations. Our simulation trajectory analysis suggests that molecular rotational freedom is the key factor in determining a crystal's mechanical response, ranging from brittle fracture to elastic or plastic deformation under mechanical bending. Additionally, we propose a rotation-dependent potential energy surface as a means to classify organic materials' mechanical responses and identify new candidates for future investigation.
Pedro A. Santos-Florez, Shinnosuke Hattori, Qiang Zhu
2022-12-31T23:48:21Z
http://arxiv.org/abs/2301.00307v2
# Bending Deformation Driven by Molecular Rotation ###### Abstract Recently, some molecular crystals have been found to be surprisingly flexible, undergoing a large extent of elastic or plastic deformation under various mechanical loads. Despite the increasing number of experimental reports on mechanically flexible crystals, this phenomenon has never been reproduced in numerical simulation, and thus there is no atomistic mechanism to explain its physical origin. Using three recently reported naphthalene diimide derivatives as examples, we perform the first direct molecular dynamics simulations to model their mechanical behaviors, from brittle fracture to elastic/plastic deformation, upon mechanical bending. Our simulations reveal that molecular rotational freedom is the key factor determining a crystal's mechanical response. Furthermore, we propose the use of the rotation-dependent potential energy surface to classify the mechanical responses of organic materials and to screen new mechanically flexible candidates in the future. While most molecular crystals are known to be brittle, there exists a class of compliant organic crystals that can easily bend under large mechanical stress [1; 2]. Since the early 2000s, there has been a growing number of experimental identifications of mechanically flexible crystals [3; 4; 5; 6; 7; 8; 9]. In general, the mechanical response of an organic solid depends on both the molecular substance and the corresponding crystal packing. A remarkable example is shown in Fig. 1: three crystals, made of similar naphthalene diimide derivative molecules, were found to exhibit distinct responses, from brittle fracture to compliant deformation with either reversible (elastic) or irreversible (plastic) character [10]. The flexible nature of such organic materials is vital for a variety of applications, e.g., high-performance modular organic solar cells [11], actuators [12], photochemistry [13], electronics [14], optics [15], as well as drug tableting [16]. In recent years, various computational techniques have been introduced to characterize the observed mechanical properties of different molecular systems [17; 18; 10; 19]. They include topological analysis, elastic property calculations [17], and the simulation of shear/tensile deformations [10; 18]. These techniques are partially successful in identifying the brittle materials, which usually exhibit a complex three-dimensional packing. Within such an interlocked environment, molecular motions are largely restricted, resulting in brittleness under bending. On the other hand, the compliant class of materials is characterized by a strong anisotropy with plausible slip planes [17; 20]. Therefore, these materials become compliant over a broad range of applied stress along some specific crystallographic directions. However, all available techniques fail to explain the difference between the elastic and plastic materials. While there have been plenty of studies on the bending of metals [21; 22; 23; 24; 25; 26], to our knowledge, no attempts have been made to directly simulate the bending of organic materials at the atomistic level. Among the compliant crystals, ductile materials are often favored in engineering applications [16]. Hence, researchers have attempted to use the well-established dislocation theory to explain the observed plasticity of organic materials [2; 3]. 
Similar to the plastic deformation of ductile metals, it was found that mechanical shearing can also occur via the slippage of dislocated molecular layers in molecular crystals with layered packing [27; 28]. Based on these facile slip planes, a bending model was proposed to explain the underlying mechanism [3]. Although dislocations are not uncommon in molecular crystals [29; 30], there has been no direct experimental evidence that dislocations are present in organic crystals under bending. Furthermore, this mechanism fails to explain the observed crystals that can also bend elastically to a large extent. Figure 1: The simulated bending of three different materials based on naphthalene diimide derivatives. (a) brittle **Pr** (50.3\(\times\)7.0\(\times\)6.8 nm\({}^{3}\)), (b) elastic **Et** (50.7\(\times\)6.4\(\times\)6.6 nm\({}^{3}\)) and (c) elastic/plastic **Me** (50.2\(\times\)6.4\(\times\)6.9 nm\({}^{3}\)). These three crystals consist of very similar molecules that differ only in the side groups. In the left panel, the initial and finally deformed configurations are colored by the molecular alignment (\(\alpha\)) along the \(x\)-axis. The corresponding molecules and the definition of the rotation angles are shown in the right panel. In fact, the two crystals shown in Figs. 1b-c possess very similar crystal packing. Given the apparent similarity in both molecular structure and crystal packing, one would expect the elastic crystal (Fig. 1b) to undergo similar molecular events as the plastic crystal (Fig. 1c) by following the bending mechanism. But the actual deformation was observed to be elastic. Clearly, our current understanding of elasticity and plasticity remains limited. In this work, we present our efforts to probe the molecular bending mechanism with the aid of atomistic simulation. To achieve this goal, we start by developing a robust simulation protocol that can directly model the bending of organic crystals at the atomic level. Specifically, we employed a three-point bending model within a partial periodic boundary condition [31]. In our calculations, we performed non-equilibrium molecular dynamics simulations by applying an indentation at the center of the molecular slab at finite temperature [31]. We also carefully tested the choice of slab models and the thermal equilibration to ensure the robustness of our simulation setup. In order to automate the simulations, we developed a computational pipeline to automate the generation of molecular force fields with the AmberTools20 software [32]; a minimal sketch of such a pipeline is given below. Force field parameters are assigned by the General Amber Force Field (GAFF), with atomic charges from the semi-empirical AM1 method with bond charge correction (BCC) [35]. All simulations were performed with the LAMMPS package [34] at room temperature with a strain rate of 10 m/s. In the following, we will focus on the three naphthalene-tetracarboxylic diimide crystals discussed in Fig. 1. The three molecules share the same backbone while differing only in the side chains. The brittle crystal consists of molecules with the **propyl** group and adopts the orthorhombic space group \(Pbca\) with one molecule in the asymmetric unit. The elastic/plastic crystals have the **ethyl**/**methyl** groups, both adopting the monoclinic space group \(P2_{1}/c\) with half a molecule in the asymmetric unit. For convenience, we follow the previous literature [10] and name these systems according to their molecular functional groups (i.e., **Pr, Et, Me**). 
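The following is a minimal sketch of such a force-field pipeline (not the authors' exact implementation; file names are hypothetical), using the AmberTools programs antechamber (AM1-BCC charges, GAFF atom types) and parmchk2 (missing parameters):

```python
# Automated GAFF parameter generation for each molecule via AmberTools.
import subprocess

def build_gaff(molecule: str) -> None:
    subprocess.run(["antechamber", "-i", f"{molecule}.pdb", "-fi", "pdb",
                    "-o", f"{molecule}.mol2", "-fo", "mol2",
                    "-c", "bcc", "-at", "gaff"], check=True)   # AM1-BCC charges
    subprocess.run(["parmchk2", "-i", f"{molecule}.mol2", "-f", "mol2",
                    "-o", f"{molecule}.frcmod"], check=True)   # missing terms

for mol in ("Pr", "Et", "Me"):
    build_gaff(mol)
```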
In all three cases, the weak interactions are formed by the alkyl groups in the (001) plane. However, the overall molecular packing in the brittle **Pr** crystal is more complex, since there exist eight different types of molecular alignments due to the \(mmm\) symmetry operations. On the contrary, there are two types of molecular alignments in the **Et**/**Me** crystals, and each (001) layer contains only one type of molecular alignment (see Fig. S1 and Table S1). Fig. 2 summarizes the simulated evolution of the average molecular potential energy as a function of indentation depth for all three materials. Figure 2: The evolution of the average molecular potential energy as a function of indentation depth upon (a) loading and (b) unloading. In (b), only two samples (**Me**-elastic and **Me**-plastic) are shown for clarity. For a fair comparison, we set up the model sizes close to \(\sim\) 50.0 \(\times\) 7.0 \(\times\) 7.0 nm\({}^{3}\). Encouragingly, our calculations reproduced the experimentally observed brittle fracture, elastic deformation and plastic bending, respectively. First, **Pr** is clearly brittle, as evidenced by the abrupt drop of energy in Fig. 2a, which is also consistent with the appearance of the crack pattern in Fig. 1a when the indentation depth reaches 3.5 nm. On the other hand, **Et** is more compliant, with a maximum indentation of 6.2 nm; applying further loading would lead to crack formation as well. If we release the indentation before **Et** reaches 6.2 nm, the system roughly returns to its original state, so the deformation is elastic. Interestingly, **Me** can survive more than 10 nm of indentation without breaking in two different setups. The slab subjected to a full isobaric-isothermal equilibration bends elastically, as evidenced by the reversible energy versus indentation depth relation (denoted **Me**-elastic in Fig. 2b). When the slab has a small strain in the initial configuration (see Table S2), the corresponding energy curves upon loading and unloading are no longer reversible. Compared to **Me**-elastic, this sample reaches a lower-energy state as it approaches the maximum indentation depth upon loading. When the indentation is released, it does not return to the original state but retains a relatively higher energy. The whole deformation process is therefore irreversible and plastic, and this sample will be referred to as **Me**-plastic from now on. It is also important to note that the deformation is strongly anisotropic: for the same **Me** sample, the deformations are brittle if the indentation is applied along other directions. Such a direction dependence has also been observed in recent experiments [16]. Although several recent computational studies attempted to explain the observed mechanical properties, they were limited to indirect simulations such as pure tensile and shear tests [17; 10; 18; 19]. Here, our results provide the first direct evidence from atomistic modeling and reproduce the experimental observations of the mechanical responses upon bending deformation. Compared to the simulation results, the elastic and plastic samples are found to bend under larger deformations in real experiments [10]. This is because the material's length along the \(x\)-axis can shrink in the actual bending test to release the tensile stress, whereas our simulation model still obeys the periodic boundary condition along the \(x\)-axis. Hence we expect that the degree of bending in our simulation is
underestimated as compared to the real situation. We also tried to vary the strain rate. According to our tests, increasing the strain rate by a factor of 10 does not qualitatively change the results; however, an ultrafast strain rate (\(>\)200 m/s) is likely to trigger an unrealistic phase transition and thus change the nature of the deformation significantly. Regardless of these restrictions on the parameter choices, our simulations are robust in capturing the main physics. While analyzing the dynamic trajectories, we observed that the molecules rotate strongly upon bending. Fig. 1 defines the rotation angles (\(\alpha,\beta,\gamma\)) of each molecule about the \(x,y,z\) axes of the Cartesian coordinates. Fig. 3 plots the distribution of molecular rotations for all three directions. Given that the indentation acts along the \(z\)-axis and the three bending points are aligned along the \(x\)-axis, we expect the rotational mode about the \(y\)-axis (\(\beta\)) to be the primary motion under the loading. Indeed, Fig. 3 reveals that the rotation in \(\beta\) is more pronounced than in the other directions for all three molecules. According to the computed moments of rotational inertia in Table 1, smaller molecules rotate more easily. Therefore, **Me** has overall more rotational flexibility than **Et** and **Pr** in all directions. In Figs. S3-S5 [31], we provide a detailed analysis of each simulation trajectory. Most interestingly, there is an obvious asymmetric distribution of \(\beta\) for the plastic deformation, as shown in Fig. 3. To trace its origin, we plot a few representative structures from the corresponding trajectory in Fig. 4. Unlike the elastic deformation, in which all molecules remain symmetrically aligned about the central \(yz\) plane, we found that the region near the indenter tip undergoes a phase transition through molecular rotation. This region is also evident from the non-zero rotations of \(\alpha\) and \(\gamma\), as shown in Fig. S5. This new domain, consisting of re-aligned molecules (denoted by the red dotted ellipse), can easily slip along its interface with the parent domain. Upon indentation, the molecules in the secondary domain, located on the upper surface of the slab, do not gain enough momentum to move downward compared to the other molecules, owing to the compressive stress from the bending forces. Therefore, the relative slipping direction of the secondary domain is upward, and we observe the appearance of a bump near the indenter tip. As the tip continues to move down, the secondary domain keeps climbing up until the bump reaches its maximum. Upon further compression, the molecules in the bottom region are nearly flattened by the large tensile stress, creating empty space along the \(z\)-axis; the secondary domain then slips down and pushes the neighboring molecules down to fill this space. Clearly, this secondary domain serves as a buffer zone that helps the system maintain a relatively low-energy state and postpones crack formation. When the indentation is released, the process is expected to be irreversible at low temperature, since the back transformation requires crossing an energy barrier. It is therefore a plastic deformation; however, it is driven by molecular rotation, which is different from plasticity in metals, which requires the migration of dislocations. 
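Rotation angles of this kind can be extracted from a trajectory by fitting a best-fit rotation per molecule. The sketch below (array layouts and conventions are ours, not the authors') uses the Kabsch algorithm and a ZYX Euler decomposition, so that \(\beta\) is the rotation about the \(y\)-axis:

```python
# Per-molecule rotation angles between reference and current MD coordinates.
import numpy as np

def rotation_matrix(ref, cur):
    """Best-fit rotation R mapping centered ref coords onto centered cur coords."""
    P = ref - ref.mean(axis=0)
    Q = cur - cur.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)              # Kabsch algorithm
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # avoid improper rotations
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def xyz_angles(R):
    """Rotation angles (alpha, beta, gamma) about x, y, z (ZYX convention)."""
    beta = np.degrees(np.arcsin(-R[2, 0]))
    alpha = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    gamma = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return alpha, beta, gamma
```

Applying this to every molecule at successive frames, relative to the equilibrated reference, would yield accumulated-angle distributions analogous to those in Fig. 3.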
Due to the phase transition driven by molecular rotation, a domain of the new phase may appear near the indenter and coexist with the parent phase via a low-energy interface. Figure 3: The simulated distribution of accumulated rotation angles (with respect to the initial configurations) for all materials upon the bending loads. For clarity, the **Me**-elastic data are omitted. Figure 4: Representative snapshots from the simulation of the **Me**-plastic deformation. The molecules are colored by the \(\beta\) angle values from red to blue. The domains of the secondary phase are highlighted by the red dotted ellipses. The red dotted arrows indicate the slip direction. The grey shapes represent the contact locations in the three-point bending test. \begin{table} \begin{tabular}{c c c c c} \hline \hline System & Number of atoms & \(I_{xx}\) & \(I_{yy}\) & \(I_{zz}\) \\ \hline **Pr** & 44 & 1707.95 & 4124.74 & 5606.78 \\ **Et** & 38 & 2332.63 & 2311.36 & 3610.28 \\ **Me** & 32 & 1911.78 & 1710.74 & 2854.21 \\ \hline \hline \end{tabular} \end{table} Table 1: The computed moments of rotational inertia (Da·Å\({}^{2}\)) for each system. The newly formed secondary phase can slide freely along the interface under the external stress conditions. In the early stage, the upward movement of the new phase results in a bump shape near the indenter. We note that such a bump has actually been observed in the bending experiment [10], but it was not discussed in the literature. Our simulation suggests that the formation of the bump is a key characteristic of plastic deformation driven by molecular rotation. If the external temperature is sufficiently high to cross the phase transition barrier, the process may become reversible, similar to the previously reported superelastic organic crystals [4]. So far, we have established the relation between molecular rotation and the observed mechanical responses. Clearly, the degree of freedom of molecular rotation is the key factor that determines the mechanical flexibility of organic crystals under bending. However, it remains unclear why some materials are more compliant than others, and why we observed two different deformation behaviors for the **Me** crystal with slightly different initial configurations. To probe their physical origins, it is necessary to examine the potential energy surface (PES) with respect to the molecular rotations. Therefore, we use the relaxed crystal structure as the reference and then systematically rotate two groups of symmetrically related molecules (colored red and blue in Fig. S1) about the \(y\)-axis of the unit cell. For the **Pr** crystal, each group has four molecules with the same alignment in \(\beta\); for both the **Et** and **Me** crystals, each group contains only one molecule. The computed potential energy maps as functions of the rotation angles (\(R_{1}\) and \(R_{2}\)) are summarized in Fig. 5. As shown in Fig. 5a, **Pr** has a very stiff global minimum (GM) at (0, 0), indicating that even a slight rotation leads to a high energy penalty. The GM energy basin is aligned diagonally, suggesting that the low-energy rotation modes are synchronous due to the crystal symmetry constraint. In this energy basin, the total energy of the whole system increases by about 500 kJ/mol when it reaches (10, 10); such a high energy penalty would eventually lead to crack generation. In addition, there is a local minimum (LM) centered around (20, 20). 
However, this state is nearly impossible to reach due to a high energy barrier of up to 10\({}^{4}\) kJ/mol. Overall, **Pr** has rather limited rotational freedom, consistent with the fact that each molecule in **Pr** is surrounded by multiple types of molecular alignments. Compared to **Pr**, the **Et** sample (Fig. 5b) has a broader spread around the GM at (0, 0); therefore, the molecules can rotate more under the mechanical load. As shown in Fig. 3, two rotational peaks are symmetrically distributed at \(\pm\)20 degrees when the system reaches the elastic limit. According to Fig. 5b, a rotation around (20, 20) leads to an energy penalty of 500 kJ/mol; therefore, the **Et** molecules can rotate more than those of **Pr** before the crack event starts. Similarly, **Et** has another LM around (30, 30), but it is unreachable due to a high energy barrier. On the other hand, **Me** has an even flatter energy spread around the GM basin (Fig. 5c). Using 500 kJ/mol as the threshold, the computed area ratios are roughly 0.14 (**Pr**), 0.84 (**Et**) and 1.00 (**Me**). Hence, **Me** can sustain more elastic deformation than the other materials. These values are qualitatively consistent with our computed critical indentation depths shown in Fig. 2, and fit the experimental values even better (given that **Me** is found to be significantly more elastic than **Pr**). In addition, **Me** is remarkable because there exists a low-energy pathway connecting to its LM at (30, 30). Under the mechanical load, there are two scenarios. One is to continue to expand within the GM basin, in which case the system bends elastically, as we found in our simulation starting from the perfectly equilibrated **Me** single-crystal sample. Alternatively, it is also possible to reach the neighboring LM basin. While the latter case requires crossing a barrier on the PES map, it may be facilitated by pre-existing structural defects or activated for kinetic reasons. Indeed, we observed such a phase transition when the initial configuration was strained, and this eventually led to the plastic deformation shown in Fig. 4. Correspondingly, the presence of molecules in the LM (30, 30) region results in a stronger peak around 30 degrees, compared to the peak at \(-\)30 degrees, in the distribution of \(\beta\) in Fig. 3. In a real experiment, the latter scenario is more likely to occur, since defects are unavoidable. Figure 5: The potential energy surface as a function of molecular rotation for the three crystals with different mechanical responses: (a) **Pr**-brittle, (b) **Et**-elastic, and (c) **Me**-elastic/plastic deformations. The white region in (a) denotes rotations leading to energies exceeding 10\({}^{4}\) kJ/mol. Although the deformation process is irreversible at low temperature upon the release of the indentation, it may become reversible at an elevated temperature, when the thermal energy is sufficient to cross the barrier between the LM and the GM. 
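The area-ratio statistic used above is straightforward to evaluate on a PES map. The following sketch uses synthetic quadratic surfaces as stand-ins (purely illustrative; these are not the computed GAFF maps):

```python
# Fraction of the (R1, R2) rotation grid with energy penalty below 500 kJ/mol.
import numpy as np

r1, r2 = np.meshgrid(np.linspace(-40, 40, 201), np.linspace(-40, 40, 201))

def area_ratio(energy, threshold=500.0):
    return np.mean(energy <= threshold)

# Stand-in basins mimicking a stiff (Pr-like) and a flat (Me-like) PES map.
stiff = 5.0*(r1**2 + r2**2) + 20.0*(r1 - r2)**2
flat = 0.2*(r1**2 + r2**2)
print(area_ratio(stiff), area_ratio(flat))   # small vs. near-unity ratio
```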
In summary, we performed the first molecular dynamics simulations to directly model the mechanical bending of organic crystals. Using three recently reported naphthalene diimide derivatives as examples, our simulations successfully reproduced the experimentally observed mechanical behaviors, from brittle fracture to elastic/plastic deformation, upon mechanical bending. By analyzing the atomistic trajectories, we found that molecular rotational freedom is the key factor determining whether or not the materials are bendable. This phenomenon originates from the subtle interplay between geometric packing and intermolecular interactions. Furthermore, we found that the rotation-dependent potential energy surface map can be used to clearly explain the origin of the different mechanical responses of organic materials. Together with the recently proposed crystal packing screening model [35], our results can be used to guide the search for new mechanically flexible candidates with improved functionality for future device applications. This research is sponsored by the NSF (DMR-2142570) and Sony Group Corporation. The computing resources are provided by ACCESS (TG-DMR180040).
2309.14550
MEMO: Dataset and Methods for Robust Multimodal Retinal Image Registration with Large or Small Vessel Density Differences
The measurement of retinal blood flow (RBF) in capillaries can provide a powerful biomarker for the early diagnosis and treatment of ocular diseases. However, no single modality can determine capillary flowrates with high precision. Combining erythrocyte-mediated angiography (EMA) with optical coherence tomography angiography (OCTA) has the potential to achieve this goal, as EMA can measure the absolute 2D RBF of retinal microvasculature and OCTA can provide the 3D structural images of capillaries. However, multimodal retinal image registration between these two modalities remains largely unexplored. To fill this gap, we establish MEMO, the first public multimodal EMA and OCTA retinal image dataset. A unique challenge in multimodal retinal image registration between these modalities is the relatively large difference in vessel density (VD). To address this challenge, we propose a segmentation-based deep-learning framework (VDD-Reg) and a new evaluation metric (MSD), which provide robust results despite differences in vessel density. VDD-Reg consists of a vessel segmentation module and a registration module. To train the vessel segmentation module, we further designed a two-stage semi-supervised learning framework (LVD-Seg) combining supervised and unsupervised losses. We demonstrate that VDD-Reg outperforms baseline methods quantitatively and qualitatively for cases of both small VD differences (using the CF-FA dataset) and large VD differences (using our MEMO dataset). Moreover, VDD-Reg requires as few as three annotated vessel segmentation masks to maintain its accuracy, demonstrating its feasibility.
Chiao-Yi Wang, Faranguisse Kakhi Sadrieh, Yi-Ting Shen, Shih-En Chen, Sarah Kim, Victoria Chen, Achyut Raghavendra, Dongyi Wang, Osamah Saeedi, Yang Tao
2023-09-25T21:47:41Z
http://arxiv.org/abs/2309.14550v2
MEMO: Dataset and Methods for Robust Multimodal Retinal Image Registration with Large or Small Vessel Density Differences ###### Abstract The measurement of retinal blood flow (RBF) in capillaries can provide a powerful biomarker for the early diagnosis and treatment of ocular diseases. However, no single modality can determine capillary flowrates with high precision. Combining erythrocyte-mediated angiography (EMA) with optical coherence tomography angiography (OCTA) has the potential to achieve this goal, as EMA can measure the absolute 2D RBF of retinal microvasculature and OCTA can provide the 3D structural images of capillaries. However, multimodal retinal image registration between these two modalities remains largely unexplored. To fill this gap, we establish _MEMO_, the first public multimodal EMA and OCTA retinal image dataset. A unique challenge in multimodal retinal image registration between these modalities is the relatively large difference in vessel density (VD). To address this challenge, we propose a segmentation-based deep-learning framework (_VDD-Reg_) and a new evaluation metric (_MSD_), which provide robust results despite differences in vessel density. _VDD-Reg_ consists of a vessel segmentation module and a registration module. To train the vessel segmentation module, we further designed a two-stage semi-supervised learning framework (_LVD-Seg_) combining supervised and unsupervised losses. We demonstrate that _VDD-Reg_ outperforms baseline methods quantitatively and qualitatively for cases of both small VD differences (using the CF-FA dataset) and large VD differences (using our _MEMO_ dataset). Moreover, _VDD-Reg_ requires as few as three annotated vessel segmentation masks to maintain its accuracy, demonstrating its feasibility. Erythrocyte mediated angiography, optical coherence tomography angiography, multi-modal retinal image registration. ## I Introduction Retinal blood flow (RBF) is a key functional biomarker, implicated in three of the four major causes of blindness worldwide (glaucoma [1], diabetic retinopathy [2], and age-related macular degeneration [3]), as well as in neurodegenerative diseases such as Alzheimer's dementia [4, 5]. Specifically, RBF in capillaries may provide sensitive biomarkers for the early diagnosis of ocular diseases, and could aid in the development of novel therapies. Unfortunately, accurately measuring RBF in capillaries is challenging because it requires the precise measurement of both absolute erythrocyte velocities and capillary width. Moreover, it places high demands on sensor resolution and repeatability. Current methods of measuring RBF are limited. For instance, laser Doppler imaging [6] is limited by high variability of measured flowrates. Dynamic OCTA [7, 8] and color Doppler imaging [9] can only measure relative flowrates, leading to poor intra-platform and cross-platform measurement repeatability. Adaptive optics scanning laser ophthalmoscopy (AO-SLO) [10, 11] and AO-OCT [12, 13] have a limited field of view. Erythrocyte mediated angiography (EMA) [14], on the other hand, is a novel technique capable of determining absolute erythrocyte flowrates of arterioles and venules _in vivo_ with high precision and a large field of view. EMA determines the flowrates by following the motion of individual fluorescently labelled erythrocyte ghosts in the retinal capillary circulation, which can be visualized _in vivo_ [15, 16, 17]. 
Despite the aforementioned advantages, a major limitation of EMA is that it cannot determine the axial location of capillaries. EMA images are captured in 2D using a scanning laser ophthalmoscope, but capillaries are located across different retinal layers [18]. One potential solution to address the limitation of EMA is to combine it with other modalities providing 3D structural imaging of retinal capillaries. Optical coherence tomography angiography (OCTA) is an ideal candidate, as it can generate high-resolution 3D structural images of retinal capillaries [19, 20, 21]. Combining EMA and OCTA may enable absolute capillary RBF measurement for diagnosis and treatments of ocular diseases. A key requirement for this is accurate registration. Manual approaches to registration are time-consuming, necessitating the development of an automated approach to registration of EMA and OCTA image pairs. Multimodal retinal image registration has been extensively studied in recent years [22, 23, 24, 25, 26, 27, 28, 29]. However, current approaches have primarily utilized the public CF-FA dataset [30] (color fundus and fluorescein angiography) or private datasets with modalities other than EMA and OCTA, such as CF and fundus autofluorescence (FAF) [22, 23], and CF and infrared reflectance (IR) imaging [23, 27, 28]. The lack of new and publicly available multimodal retinal image datasets not only makes it difficult for researchers to fairly and thoroughly compare every method, but also prevents the identification of new challenges of registering image pairs from different modalities. To fill this gap, we conducted experiments on nonhuman primates to create a public dataset of EMA and OCTA pairs [31]. This dataset has the unique features of being well controlled; it is one of the few datasets that include OCTA images and the only dataset to include EMA sequences. We intend to expand the dataset to include human images. We refer to this dataset as the multimodal EMA and OCTA (_MEMO_) retinal image dataset. _MEMO_ contains EMA and OCTA image pairs with manually labeled matched points for studying multimodal retinal image registration. Additionally, _MEMO_ includes OCTA projection images [32] from three different layers (superficial vascular plexus (SVP), intermediate capillary plexus (ICP) and deep capillary plexus (DCP)) and EMA image sequences with moving erythrocytes. Using the _MEMO_ dataset, we address a unique challenge in multimodal retinal image registration between EMA and OCTA images arising from the relatively large difference in vessel density between the two modalities. In this paper, the vessel density (VD) is defined as the image area occupied by vessels divided by the entire captured area. As compared to other modalities available in public datasets, such as CF-FA [30], EMA and OCTA have a VD difference of over 30% between the two modalities (Fig. 1). Through extensive experiments, we found that large VD differences dramatically decrease registration performance, as the majority of smaller vessels in OCTA could not be visualized in EMA due to fundamental differences in image acquisition. 
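The VD statistic itself is simple to compute from a binary vessel mask; a minimal sketch (with hypothetical file names) follows:

```python
# Vessel density: (pixels occupied by vessels) / (pixels in the captured area).
import numpy as np
from imageio.v3 import imread

def vessel_density(mask: np.ndarray) -> float:
    """mask: binary vessel segmentation, nonzero = vessel pixel."""
    return float(np.count_nonzero(mask)) / mask.size

ema_vd = vessel_density(imread("ema_vessel_mask.png"))
octa_vd = vessel_density(imread("octa_vessel_mask.png"))
print(ema_vd, octa_vd, abs(ema_vd - octa_vd))  # EMA-OCTA gap can exceed 0.3
```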
For the vessel segmentation module, we also designed a novel two-stage semi-supervised learning framework, _LVD-Seg_, which requires only a few (e.g., three) labeled vessel segmentation masks from the modality with lower vessel density (EMA in our case). Specifically, _LVD-Seg_ first uses a supervised loss (i.e., MSE) to stabilize the training of the vessel segmentation module, and then uses an unsupervised loss (i.e., style loss [33]) to further improve the segmentation and registration results. Finally, we found that _Soft Dice_ [27], a recently proposed image registration metric extended from _Dice_, could not accurately distinguish which method was better when large VD differences were present and when the results of each method did not exhibit significant differences. As a result, we propose a new registration metric called _Masked Soft Dice_ (_MSD_). _MSD_ considers only the pixels within the ground truth vessel segmentation masks of the modality with lower VD, which can more accurately evaluate each method in our _MEMO_ dataset. With both _MSD_ and standard registration metrics, we show that _VDD-Reg_ can not only achieve the same level of accuracy as competing methods in datasets with similar vessel densities (CF-FA [30]), but also maintain high performance with large differences in VD, as in our _MEMO_ dataset. The contributions of our work can be summarized as follows:

1. We establish _MEMO_, the first public multimodal EMA and OCTA retinal image dataset. _MEMO_ provides registration ground truth, three layers of OCTA projection images and EMA image sequences containing moving erythrocytes. This has the potential for use in any research involving OCTA registration with other modalities that use a scanning laser ophthalmoscope. _MEMO_ is available at [https://chiaoyiwang0424.github.io/MEMO/](https://chiaoyiwang0424.github.io/MEMO/).
2. We address the unique challenge of retinal image registration between modalities with a large difference in vessel density (VD). To the best of our knowledge, _MEMO_ is the first public multimodal retinal image dataset having this characteristic.
3. We propose a segmentation-based deep-learning framework, _VDD-Reg_, for multimodal retinal image registration that is robust with respect to vessel density differences. To train the segmentation module in _VDD-Reg_, we further designed a two-stage semi-supervised learning framework, _LVD-Seg_, which requires as few as three labeled vessel segmentation masks.
4. We introduce a registration metric, _Masked Soft Dice_ (_MSD_), specifically designed for multimodal retinal image datasets with large VD differences.

The rest of the paper is organized as follows. Section II summarizes the existing public retinal image datasets with image pairs and multimodal retinal image registration methods. Section III illustrates the details of our _MEMO_ dataset. In Section IV, we describe the proposed _VDD-Reg_ framework. Section V illustrates our experimental settings. Section VI and Section VII present the results and discussion. Section VIII includes the conclusion of the paper.

## II Related Works

### _Retinal Image Datasets with Image Pairs_

Existing public retinal image datasets with image pairs are listed in Table I. Here, we focus on datasets with image pairs as they are more relevant to the main scope of this paper on multimodal retinal image registration.

Fig. 1: Sample images of (a) CF, (b) FA, (c) EMA and (d) OCTA with vessel density (VD). (a) and (b) are taken from the CF-FA dataset [30].
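Since VD differences drive much of what follows, here is a minimal sketch of the VD definition given above — the fraction of the captured area occupied by vessels. This is purely illustrative code of ours and is not part of the MEMO release:

```python
import numpy as np

def vessel_density(vessel_mask: np.ndarray) -> float:
    """Vessel density (VD): fraction of the captured image area occupied
    by vessels, given a binary vessel segmentation mask (nonzero = vessel)."""
    return float(np.count_nonzero(vessel_mask)) / vessel_mask.size
```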
Private (in-house) datasets were excluded as they cannot be freely used to evaluate methods. The datasets listed in Table I can be divided into monomodal and multimodal ones. Among the monomodal datasets, only FLORI21 [38] provides ultra-widefield fluorescein angiography images while the others [34, 35, 36, 37] provide fundus images. To be more specific, e-ophtha [34] contains 144 image pairs with large and small overlapping regions. RODREP [35] provides 1400 image pairs acquired from 140 eyes, but the overlap of each pair is limited. VARIA [36] provides 154 image pairs with only large overlapping areas due to its small FOV. The above three datasets do not provide a registration ground truth. In contrast, FIRE [37] provides registration ground truth, consisting of 134 fundus image pairs with both large and small areas of overlap. Ten corresponding points are provided for each image pair. FLORI21 [38] also provides registration ground truth and contains image pairs from 5 different subjects. Although FIRE and FLORI21 are useful for image registration research, they contain images from only one modality. Table I demonstrates that public multimodal retinal image datasets with image pairs are relatively rare. CF-FA [30] contains 59 pairs of color fundus (CF) and fluorescein angiography (FA) images. Twenty-nine pairs were collected from healthy eyes, while the rest were collected from eyes with diseases. Although CF-FA has been widely used for multimodal retinal image registration, the registration ground truth is not officially provided and the vessel density (VD) difference between the two modalities is relatively small. PRIME-FP20 [39] contains several ultra-wide field fundus photography (FP) and ultra-wide field fundus angiography (FA) retinal image pairs. However, it provides only the vessel segmentation ground truth of the ultra-wide field fundus photography without providing the registration ground truth. OCTA-500 [40] provides 500 pairs of OCT and OCTA images with vessel segmentation ground truth. Despite its large size, the image pairs of OCT and OCTA were completely aligned, limiting its usage for multimodal retinal image registration. Compared to the above datasets, _MEMO_ has three major advantages. Firstly, _MEMO_ is the first public multimodal retinal image dataset providing two modalities with relatively large VD differences. Secondly, the global registration ground truth is provided by manually labeling six corresponding point pairs per image pair. Finally, _MEMO_ additionally provides raw EMA image sequences and OCTA projection images, which may be useful for multiple research fields such as automated erythrocyte tracking.

### _Multi-Modal Retinal Image Registration_

Multimodal retinal image registration methods can be categorized into conventional and deep learning-based methods. The conventional methods can be further divided into two types: _direct_ and _indirect_ methods. The _direct_ conventional methods try to detect and match features directly on the raw images by manually designing more powerful feature descriptors or more robust matching algorithms. For example, Chen et al. [41] proposed a partial intensity invariant feature descriptor (PIIFD) and designed an image registration framework called Harris-PIIFD based on the proposed descriptor. Ghassabi et al. [42] combined UR-SIFT and PIIFD for image registration with large content or scale changes. Wang et al. [43] presented an image registration framework combining SURF, PIIFD and robust point matching. Lee et al.
[44] introduced a low-dimensional step pattern analysis method to align retinal image pairs that were poorly aligned with baseline methods. Hossein-Nejad et al. [45] adopted adaptive Random Sample Consensus (A-RANSAC) for feature matching. On the other hand, the _indirect_ conventional methods attempt to first transfer the images from different modalities into a similar "style", such as the vessel mask or the phase image, before detecting and matching features. For instance, Hernandez et al. [46] proposed line-structure segmentation with a tensor-voting approach to improve registration. Hervella et al. [47] combined feature-based and intensity-based registration methods and employed a domain-adapted similarity metric to detect vessel bifurcations and crossovers. Motta et al. [48] proposed a registration framework based on optimal transport theory for vessel extraction on retinal fundus images. Li et al. [49] proposed a two-step registration method which converted raw images into phase images and adopted log-Gabor filters for global registration. Recently, many deep learning-based multimodal retinal image registration methods have been proposed, demonstrating comparable or superior performance compared to conventional methods. Similar to the conventional methods, deep learning-based methods can also be roughly divided into _direct_ and _indirect_ methods. The _direct_ deep learning-based methods usually try to directly learn a feature matching network using raw image datasets. For example, De Silva et al. [23] proposed an end-to-end network following the conventional feature point-based registration steps, using a VGG-16 feature extractor [50] and a feature matching network for predicting patch displacements. Lee et al. [26] extracted pattern patches surrounding the intersection points and used a Convolutional Neural Network (CNN) to select matched patches. The _indirect_ deep learning-based methods, on the other hand, try to learn a transformation network to first transform the two modalities into the same domain, such as the vessel mask, instead of directly performing image registration. For instance, Arikan et al. [25] used a U-Net for vessel segmentation and a mask R-CNN for vessel junction detection based on supervised learning before multimodal image registration. Luo et al. [24] proposed a two-stage affine registration framework. The first stage used two individual U-Nets to segment the optic discs in two modalities, and the second stage adopted ResNet for fine registration. Zhang et al. [27] proposed a vessel segmentation-based two-step registration method integrating global and deformable registration. Their vessel segmentation networks were trained with a deformable registration network using ground truth registration affine matrices. Wang et al. [28] proposed a content-adaptive multimodal retinal image registration method, which adopted pixel-adaptive convolution (PAC) [51] and style loss [33] in their vessel segmentation network. In addition to transforming images into vessel masks, Santarossa et al. [22] and Sindel et al. [29] applied CycleGAN [52] to transform the images from one modality to the other before extracting features. Although many methods have been proposed for multimodal retinal image registration, none of them tackle the registration between EMA and OCTA. Compared to the modalities used in existing works, the vessel density (VD) difference between EMA and OCTA used in our _MEMO_ dataset is relatively large, making image registration much more challenging.
## III The MEMO Dataset

### Overview

A sample EMA and OCTA image pair from the _MEMO_ dataset is shown in Fig. 2. The dataset contains 30 pairs of EMA and OCTA images. For each image pair, 6 corresponding point pairs were manually annotated. The annotated points were chosen from the visually distinctive points in EMA and OCTA images, such as vessel bifurcation points and vessel bending points.

Figure 2: A sample EMA and OCTA pair from our _MEMO_ dataset. Images inside the orange boxes were used for ground truth labeling. (A-1, A-2 and A-3: frames 0, 10 and 20 in the sample EMA image sequence. A-4: the stacked image of the EMA sequence. C-1, C-2 and C-3: the sample OCTA projection images representing the DCP, ICP and SVP layers. C-4: the OCTA B-scan image. B and D: the six corresponding point pairs of the sample EMA and OCTA pair.)

All images were acquired following a protocol approved by the Institutional Animal Care and Use Committee of the University of Maryland, Baltimore. Four eyes from two rhesus monkeys (_Macaca mulatta_) were used to acquire paired EMA and OCTA images. Each pair was collected in the same session on the same date. Prior to the experimental session, the animal was sedated with ketamine and xylazine (5-10 and 0.2-0.4 mg/kg by intramuscular injection). The animal was intubated by trained veterinary technicians with an endotracheal tube and general anesthesia was maintained with 1.5% to 3% isoflurane with 100% oxygen. The animal was paralyzed with vecuronium (40-60 µg/kg, followed by 0.35-45 µg/kg/min), preventing eye movement during image acquisition. Body temperature was maintained at physiologic levels using a thermal blanket and blood pressure was monitored using a blood pressure cuff on the arm. The animal was laid in a prone position during the imaging session. A wire lid speculum was used to keep the eyelids open during imaging and tropicamide 1% was administered for pupillary dilation.

### EMA

The procedure for EMA image acquisition is shown in Fig. 3.

Figure 3: The procedure for image acquisition. The numbers shown in the figure indicate the order.

All EMA image sequences were captured by a Heidelberg Spectralis platform (Heidelberg Engineering, Heidelberg, Germany). Approximately 17 mL of blood was drawn for processing with 5,6-carboxyfluorescein diacetate succinimidyl ester (CFSE) (Molecular Probes, USA) reconstituted in anhydrous dimethyl sulfoxide. Autologous erythrocytes were isolated from whole blood and loaded with 7.5 mM of CFSE using the osmotic shock method that has been detailed in prior publications [53]. Following cell preparation, up to 1.2 mL of CFSE-loaded cells were intravenously injected during image acquisition. After the cells were injected, ten-second angiograms centered on the macula were obtained with the Heidelberg Spectralis (Heidelberg Engineering GmbH, Germany) using a high-speed 15-degree horizontal x 15-degree vertical field of view taken at 15 frames per second. All image frames from the EMA image sequences were stored in TIF format. Six image sequences had a resolution of \(512\times 512\) pixels, while the other 24 had a resolution of \(384\times 384\) pixels. The pixel size of every EMA image sequence was provided. The stacked image of each EMA image sequence was used for registration ground truth labeling.

### OCTA

The procedure for OCTA image acquisition is also shown in Fig. 3.
OCTA scans centered on the fovea were taken using the same Heidelberg Spectralis with a \(10\times 10\) degree protocol, consisting of 512 a-scans \(\times\) 512 b-scans with 5-10 microns between b-scans and 5-7 frames averaged per b-scan location. Projection images of the superficial vascular plexus (SVP), intermediate capillary plexus (ICP), and deep capillary plexus (DCP) were generated using the segmentation algorithms and slab definitions provided by the Spectralis software (Heidelberg Eye Explorer, version 1.10.3.0, Heidelberg Engineering, Germany). The SVP slab was defined as extending from the internal limiting membrane to the anterior border of the inner plexiform layer, the ICP included the entire inner plexiform layer, and the DCP ranged from the posterior border of the inner plexiform layer to the anterior border of the outer plexiform layer. The projection images were processed using projection artifact removal (PAR). All images from the OCTA image groups were stored in TIF format. Each of the OCTA image groups contained three images from the three layers (i.e., SVP, ICP and DCP). Fifteen OCTA image groups had a resolution of \(512\times 512\) pixels, while the other 15 had a resolution of \(768\times 768\) pixels. The SVP image from each OCTA image group was used for registration ground truth labeling.

## IV Proposed Method

An overview of the proposed framework, _VDD-Reg_, for multimodal retinal image registration is shown in Fig. 4; the framework consists of a vessel segmentation module and a registration module. In _VDD-Reg_, multimodal images were first transformed into binary vessel masks by the vessel segmentation module. The global registration matrix was then estimated by the registration module from the two binary vessel masks.

### Vessel Segmentation Module

#### IV-A1 LVD-Seg Background

As discussed in Section II-B, vessel segmentation has been frequently used as the first step for multimodal retinal image registration [25, 27, 28, 46, 47, 48], primarily because features of vessels are considered to be more consistent across different modalities. Recently, deep learning-based vessel segmentation methods have shown superior performance. They can be categorized into two groups, supervised [25, 54] and unsupervised methods [27, 28], which present different limitations. The supervised vessel segmentation methods [25, 54] usually require a large number of high-quality pixel-level vessel masks for training to ensure test performance. However, such high-quality pixel-level vessel masks are often difficult and time-consuming to acquire. To avoid the need for pixel-level ground truth, an unsupervised vessel segmentation method based on style transfer has been proposed [27, 28]. However, because of the lack of direct supervision, we found that training the segmentation networks with style loss alone produced unreliable results on our _MEMO_ dataset due to the large VD difference between EMA and OCTA images (see Sec. VII-A). To alleviate the limitations of both supervised and unsupervised vessel segmentation methods, we designed a novel two-stage semi-supervised learning framework, _LVD-Seg_, to train our vessel segmentation module. Details of the two stages are described as follows.

#### IV-A2 LVD-Seg Stage 1 - Supervised Loss

In this stage, we trained our vessel segmentation module on \(n\) manually-annotated EMA vessel segmentation masks, where \(n\) could be as few as three according to our experiment results (Sec. VII-B).
We used EMA vessel segmentation masks because the VD of EMA is much smaller than that of OCTA. This is because OCTA images contain a plethora of small capillaries which are not present in the corresponding EMA images and are not helpful for image registration. Moreover, labeling the less complex EMA vessel segmentation masks is far more efficient than labeling OCTA vessel segmentation masks. Following [27, 28], we adopted the DRIU [54] network for segmenting EMA images. The DRIU network used a pre-trained VGG-16 network [50] for feature extraction and was followed by a segmentation prediction layer. The mean squared error (MSE), denoted as \(L_{v}\), was adopted to train the network, which is defined as

\[L_{v}=\frac{1}{N}\sum_{i=1}^{N}(Pred(I)(i)-M(i))^{2}. \tag{1}\]

\(I\) represents the input EMA image and \(M\) represents the ground truth EMA mask. \(Pred(I)\) represents the predicted segmentation mask of \(I\). \(i\) represents the \(i^{th}\) pixel of the predicted segmentation mask or the ground truth mask. \(N\) denotes the total number of pixels. In addition to MSE, the self-comparison loss [27, 28], denoted as \(L_{sc}\), was also adopted to make the prediction robust against input image rotation. Specifically, \(L_{sc}\) is defined as

\[L_{sc}=MSE(Rot_{-90}(Pred(Rot_{90}(I))),Pred(I)), \tag{2}\]

where \(Rot_{\theta}(I)\) represents \(I\) rotated by \(\theta^{\circ}\). Here, \(L_{sc}\) can be seen as an alternative way to perform data augmentation. Overall, the training loss for stage 1, denoted as \(L_{s1}\), can be written as

\[L_{s1}=w_{v}L_{v}+w_{sc}L_{sc}, \tag{3}\]

where \(w_{v}\) and \(w_{sc}\) represent the weighting factors for the MSE and the self-comparison loss. In this paper, \(w_{v}\) and \(w_{sc}\) were set to 1 and 1e-3, respectively. The trained weights of the EMA vessel segmentation network in stage 1 were used to initialize both the EMA and OCTA vessel segmentation networks in stage 2.

#### IV-A3 LVD-Seg Stage 2 - Unsupervised Loss

Training a vessel segmentation network directly with supervised losses on very few ground truth vessel masks may not work well on a larger test set. Moreover, it is unknown whether the segmentation network trained on EMA images can extract vessels well in OCTA images for multimodal image registration, especially when a relatively large VD difference exists between the two modalities. To deal with these potential issues, we further optimized the segmentation networks using style loss [27, 28, 33] with a joint style target mask, a stand-alone EMA ground truth segmentation mask. Using a joint style target mask with style loss encouraged the EMA and OCTA segmentation networks to segment shared vessels. Style loss penalized the difference between the predicted segmentation mask and the style target mask using Gram matrices. The Gram matrix captures the style information while discarding the spatial information; its \((c,c^{\prime})\) entry can be written as

\[G_{j}(I)_{c,c^{\prime}}=\frac{1}{C_{j}H_{j}W_{j}}\sum_{h=1}^{H_{j}}\sum_{w=1}^{W_{j}}\phi_{j}(I)_{h,w,c}\,\phi_{j}(I)_{h,w,c^{\prime}}. \tag{4}\]

Here, \(\phi_{j}(I)\) denotes the feature map with shape \(C_{j}\times H_{j}\times W_{j}\) obtained from the \(j^{th}\) layer of the pre-trained VGG-16 network [50] by feeding the network with input image \(I\).
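To make Eqs. (1)-(4) concrete, here is a minimal PyTorch sketch of the Stage-1 losses and the Gram matrix. This is our own illustrative reimplementation: the function and argument names are ours and are not taken from the official code.

```python
import torch
import torch.nn.functional as F

def stage1_loss(pred, gt_mask, pred_of_rotated, w_v=1.0, w_sc=1e-3):
    """Stage-1 objective L_s1 = w_v * L_v + w_sc * L_sc (Eqs. (1)-(3)).

    pred:            Pred(I), segmentation predicted from the input image I
    gt_mask:         M, the ground truth EMA vessel mask
    pred_of_rotated: Pred(Rot_90(I)), prediction on the 90-degree-rotated input
    """
    l_v = F.mse_loss(pred, gt_mask)                                   # Eq. (1)
    # Eq. (2): rotate the prediction on the rotated input back by -90 degrees
    # and compare it with the prediction on the original input.
    l_sc = F.mse_loss(torch.rot90(pred_of_rotated, k=-1, dims=(-2, -1)), pred)
    return w_v * l_v + w_sc * l_sc                                    # Eq. (3)

def gram_matrix(feat):
    """Eq. (4): Gram matrix of a (B, C, H, W) feature map phi_j(I),
    normalized by C*H*W; spatial layout is averaged out, keeping 'style'."""
    bsz, ch, h, w = feat.shape
    f = feat.reshape(bsz, ch, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (ch * h * w)
```

The default weights above mirror the values reported for stage 1 (\(w_{v}=1\), \(w_{sc}=\)1e-3); `gram_matrix` would be applied to the VGG-16 feature maps of both the predicted mask and the style target.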
Style loss (\(L_{st}\)) is then defined as the squared Frobenius norm of the difference between the Gram matrices of the predicted segmentation mask (\(Pred(I)\)) and the style target mask (\(M_{t}\)), which can be written as

\[L_{st_{j}}(Pred(I),M_{t})=\parallel G_{j}(Pred(I))-G_{j}(M_{t})\parallel_{F}^{2}, \tag{5}\]

\[L_{st}=\sum_{j\in J}L_{st_{j}}(Pred(I),M_{t}). \tag{6}\]

\(I\) represents the input EMA or OCTA image. Style loss was computed at four different layers of the VGG-16 network. Overall, the training loss for stage 2, denoted as \(L_{s2}\), can be written as

\[L_{s2}=w_{st}^{e}L_{st}^{e}+w_{st}^{o}L_{st}^{o}+w_{sc}(L_{sc}^{e}+L_{sc}^{o}). \tag{7}\]

Note that the two different modalities used different weighting factors for style loss. The self-comparison loss was also adopted as an additional constraint on the predicted segmentation masks. In this paper, \(w_{st}^{e}\), \(w_{st}^{o}\) and \(w_{sc}\) were set to 100, 1 and 1e-3, respectively. The outputs of the segmentation module were EMA and OCTA pixel-wise probability maps, which represented the probability of each pixel belonging to a vessel. The probability maps were transformed into binary segmentation masks with the threshold set to 0.5.

### Registration Module

#### IV-B1 Feature Detection and Description

We adopted pre-trained SuperPoint [55] as our feature detector and descriptor since it demonstrated good performance on detecting feature points of binary segmentation masks [27]. SuperPoint contains a shared encoder, an interest point decoder and a descriptor decoder. It was first trained on a synthetic dataset with labeled interest points to detect feature points. Next, Homographic Adaptation was used to self-label a large unlabeled real-world image dataset. Finally, the model was jointly trained to extract feature points and their corresponding descriptors with self-supervision. We refer the readers to [55] for more details. In this paper, the non-maximum suppression distance was set to 4 and the detector confidence threshold was set to 0.015 for keypoint detection.

Figure 4: The proposed _VDD-Reg_ framework. _VDD-Reg_ includes a vessel segmentation module and a registration module. The vessel segmentation module is trained with the proposed two-stage semi-supervised learning framework (LVD-Seg). DRIU [54] and SuperPoint [55] are adopted for our segmentation networks and registration network, respectively. \(M_{reg}^{global}\) denotes the partial affine transformation matrix for global image registration.

#### IV-B2 Feature Matching and Registration

We determined the matched feature points based on a bidirectional calculation of the minimum Euclidean distance. That is, a feature point X in an OCTA image is said to match a feature point Y in the corresponding EMA image only when the Euclidean distance between their feature descriptors is smaller than (1) the distance between X's descriptor and that of any other feature point in the EMA image and (2) the distance between Y's descriptor and that of any other feature point in the OCTA image. The Random Sample Consensus (RANSAC) [56] method was applied to remove outliers and estimate the partial affine transformation matrix between the EMA and OCTA pair. Here, the partial affine transformation (i.e., 4 degrees of freedom) was adopted because the EMA and OCTA images in _MEMO_ already have the same pixel density (i.e., scale factor).

## V Experimental Settings

### _Dataset_

We used our _MEMO_ dataset and the CF-FA [30] dataset to conduct the experiments.
The two datasets were chosen to examine how the proposed and the competing methods performed for both scenarios of small VD differences (using the CF-FA dataset) and large VD differences (using our _MEMO_ dataset).

#### V-A1 The MEMO Dataset

The MEMO dataset contains 30 pairs of images. Fifteen pairs (with even indices) were selected as the training set, and the rest of the pairs (with odd indices) were used as the test set. For OCTA, the SVP layer projection images were used in our experiments because they contained clearer arterioles and venules, which could also be observed in EMA images. For EMA, the stacked image of each EMA image sequence was used for the purpose of denoising. Furthermore, we annotated the vessel segmentation mask of each EMA stacked image for the proposed _VDD-Reg_ and _MSD_. We also annotated one EMA stacked image that is not part of the _MEMO_ dataset as the style target. Note that even though we annotated the vessel masks for all EMA images, our _VDD-Reg_ actually required only three of those to maintain its performance. For the image pre-processing, the OCTA images were first resized to \(256\times 256\) pixels. Then, the EMA images were resized using the same scaling factors as the corresponding OCTA images. Next, to meet the requirement of our model, the resized EMA images were then cropped to ensure that their widths and heights were multiples of 8. Finally, the annotated EMA vessel segmentation masks and the registration ground truth were pre-processed accordingly to ensure their correct scale. To ensure the quality and consistency of annotation, all annotations were drawn by the same human annotator and checked by an experienced ophthalmologist (OJS).

#### V-A2 The CF-FA Dataset

The CF-FA dataset contains 59 pairs of color fundus (720 \(\times\) 576, RGB) and fluorescein angiography (720 \(\times\) 576, grayscale) images. We manually labeled 6 pairs of corresponding points for all image pairs as the registration ground truth. We selected 29 image pairs (with odd indices) as the training set, 29 image pairs (with even indices) as the test set, and 1 image pair (normal/1-1) as the style target. Similar to the _MEMO_ dataset, we manually annotated the vessel segmentation masks of the style target and three selected color fundus images from the training set for the proposed _VDD-Reg_.

### _Baseline Methods_

We compared _VDD-Reg_ with five baseline methods listed in Table II. For a fair comparison, we made all methods except for _SURF-PIIFD-RPM_ estimate the partial affine transformation matrix and adopt RANSAC with the same hyperparameters. For _SURF-PIIFD-RPM_, as we directly adopted the official code, the affine transformation matrix was applied and RANSAC was not used. Moreover, for methods that used SuperPoint [55] for keypoint detection and description, including _SG_, _CycleGAN_, _Content-Adaptive_ and _VDD-Reg_, the official pretrained network was adopted without fine-tuning. More details about the baseline methods are listed as follows:

* _SURF-PIIFD-RPM_ [43]: This method utilized SURF and PIIFD for more robust feature extraction and RPM for outlier rejection. The official MATLAB code was used.
* _LoFTR_ [57]: This method exploited a Transformer [59] for processing and matching the dense local features extracted from the backbone. The official pretrained model was adopted with the default setting and was applied directly to the raw images without fine-tuning.
* _SuperGlue (SG)_ [58]: This method used a graph neural network (GNN) for finding correspondences and rejecting non-matchable points between two sets of local features. SuperPoint was used for feature detection and description. The official pretrained networks were adopted, where the SuperPoint detection threshold was set as 0.015 and the SuperGlue match threshold was set as 0.1.
* _CycleGAN-based_ [29]: This method combined a keypoint detection and description network designed for retinal images (i.e., RetinaCraquelureNet [60]) with SuperGlue. The networks were trained using self-supervised learning on synthetic multimodal images generated by CycleGAN [52]. As the code was unavailable, we implemented a simplified alternative of this approach by using CycleGAN to transfer images from one modality to another and adopting SuperPoint for feature detection and description.
* _Content-Adaptive_ [28]: This method designed a content-adaptive vessel segmentation network based on the pixel-adaptive convolution (PAC) [51] guided by the phase images. The network was trained with style loss (Eq. 6) and the self-comparison loss (Eq. 2). The image registration loss based on the ground truth transformation matrix was also used. We implemented this method based on the code from [61]. Unlike the original paper [28], we ignored the outlier rejection network and did not fine-tune SuperPoint, because these are general techniques applicable to all the other competing methods and were not within the scope of this paper.

### _Training and Testing Details_

All networks in our method were implemented in PyTorch. For the vessel segmentation module, both stages took 1000 epochs for training. The trained networks in stage 1 were used to initialize the networks in stage 2. The Adam optimizer [62] with a learning rate of 1e-4 was used. A batch size of 1 was used due to the limitation of our GPU memory. OpenCV was used for data pre-processing and the RANSAC algorithm, where \(cv2.estimateAffinePartial2D()\) was adopted by setting the maximum reprojection error to 5 pixels and the maximum number of iterations to 2000.

### _Evaluation Metrics_

#### V-D1 RMSE

Based on the predicted registration matrices, the labeled points in all test OCTA images were reprojected to the corresponding test EMA images. Then, the root-mean-square error (RMSE) [42] between the reprojected points and the labeled points was computed.

#### V-D2 Success rate

The success rate was defined as the number of image pairs with successful registration over the total number of test image pairs. A registration was considered successful when its RMSE \(<\) 10, based on the clinical tolerance.

#### V-D3 Soft Dice (SD) / Masked Soft Dice (MSD)

_Dice_ is widely used to evaluate the registration quality by calculating the pixel alignment between the warped source vessel masks and the target vessel masks. _Soft Dice (SD)_ [27] has been proposed as an extension of _Dice_ for assessing registration quality. For _SD_, CLAHE [63] was first applied to enhance the contrast of the two input images and the Frangi vesselness filter [64] was then used to generate the vesselness probability masks of the two input images. We calculated _SD_ by

\[SD=\frac{2\sum_{i=1}^{N}\min(F_{i}^{s^{\prime}},F_{i}^{t})}{\sum_{i=1}^{N}F_{i}^{s^{\prime}}+\sum_{i=1}^{N}F_{i}^{t}}, \tag{8}\]

where \(F^{s^{\prime}}\) and \(F^{t}\) are the vesselness probability masks of the warped source images and the target images, and \(N\) denotes the total number of pixels.
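For reference, a short NumPy sketch of Eq. (8) and of its masked variant introduced in Eq. (9) below. This is our own illustration; `eps` is an assumption added to guard against empty masks, and the vesselness maps are assumed to have already been computed with CLAHE followed by the Frangi filter as described above:

```python
import numpy as np

def soft_dice(f_src_warped, f_tgt, mask=None, eps=1e-8):
    """Soft Dice, Eq. (8); passing the warped EMA ground truth vessel mask
    as `mask` yields Masked Soft Dice, Eq. (9).

    f_src_warped, f_tgt: vesselness probability maps in [0, 1]
                         (e.g., from CLAHE + Frangi filtering).
    """
    if mask is not None:
        f_src_warped = mask * f_src_warped
        f_tgt = mask * f_tgt
    overlap = np.minimum(f_src_warped, f_tgt).sum()
    return 2.0 * overlap / (f_src_warped.sum() + f_tgt.sum() + eps)
```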
In our experiments, FA and EMA images were viewed as the source images. Additionally, we found that _SD_ could not accurately represent performance when a relatively large VD difference was present between the two modalities, such as in our _MEMO_ dataset. Moreover, it was particularly unreliable when the results of the competing methods had relatively small differences. As most vessels in OCTA images do not exist in the corresponding EMA images, calculating _SD_ based on every pixel is not ideal. Hence, when evaluating on the _MEMO_ dataset, we extended _SD_ to _Masked Soft Dice (MSD)_, which considered only the pixels within the ground truth segmentation masks of EMA images. _MSD_ is defined as

\[MSD=\frac{2\sum_{i=1}^{N}\min(M_{i}^{e^{\prime}}F_{i}^{e^{\prime}},M_{i}^{e^{\prime}}F_{i}^{o})}{\sum_{i=1}^{N}M_{i}^{e^{\prime}}F_{i}^{e^{\prime}}+\sum_{i=1}^{N}M_{i}^{e^{\prime}}F_{i}^{o}}, \tag{9}\]

where \(F^{e^{\prime}}\) and \(F^{o}\) are the vesselness probability masks of the warped EMA images and the OCTA images, and \(M^{e^{\prime}}\) represents the warped ground truth segmentation masks of EMA images. In Fig. 5, we demonstrate that _MSD_ is better able to assess the registration performance on our _MEMO_ dataset, as it is less sensitive to the VD difference and noise.

Fig. 5: The average (a) SD and (b) MSD values over image pairs in _MEMO_ obtained by adding different x and y shifts to the ground truth registration. The top-left value in each figure represents the average SD or MSD value obtained by the ground truth registration. All values are color-coded.

## VI Results

### _The CF-FA Dataset_

Table III illustrates the quantitative results of our method and the baseline methods on the CF-FA dataset. The MSD metric was not used as the VD difference of the CF-FA dataset is relatively small. From Table III, we can observe that our _VDD-Reg_ achieved a 100% success rate and the lowest RMSE among all the methods on the CF-FA dataset. Surprisingly, _SURF-PIIFD-RPM_, the only conventional multimodal registration method in Table III, achieved decent performance (82.76%) compared to the other methods. This might indicate that a well-designed conventional method is still competitive if the target dataset is not too difficult. _LoFTR_ and _SG_ are two _direct_, deep learning-based registration methods in Table III. _LoFTR_, despite performing well on a general homography estimation dataset [57, 65], achieved the worst performance (41.28%) on the CF-FA dataset according to Table III. Fine-tuning _LoFTR_ on the CF-FA dataset might be helpful. However, as _LoFTR_ was originally trained on the ground-truth labels obtained from a large-scale synthetic indoor scene dataset [66], it is unclear how to effectively fine-tune _LoFTR_ on a multimodal retinal image registration dataset such as the CF-FA dataset. Compared to _LoFTR_, _SG_ demonstrated better generalization to the CF-FA dataset (82.76%), even though it was trained on the same synthetic dataset [66] as _LoFTR_. This was possibly because SuperPoint (SP), the feature detection and description network used by _SG_, had good generalization capability. Compared to the _direct_ methods, the _indirect_ deep learning-based methods in Table III generally achieved better registration performance on the CF-FA dataset. _CycleGAN-based (FA\(\rightarrow\)CF)_, which transformed FA images to CF images with CycleGAN before registration, also achieved a 100% success rate, like our _VDD-Reg_.
Its counterpart, _CycleGAN-based (CF\(\rightarrow\)FA)_, had a slightly lower success rate (86.21%), mainly because the transformation from FA to CF worked better than the opposite direction using CycleGAN. _Content-Adaptive_, a segmentation-based method similar to our _VDD-Reg_, performed slightly worse (89.67%) than our method. Thanks to our two-stage semi-supervised learning framework, _LVD-Seg_ could produce better vessel segmentation masks for image registration.

### _The MEMO Dataset_

Table IV shows the quantitative registration results of our method and the baseline methods on our _MEMO_ dataset. Due to the relatively large VD difference, all the methods performed worse compared to the results on the CF-FA dataset. Still, our _VDD-Reg_ outperformed all the baseline methods by a large margin (86.67%). From Table IV, we observed that _SURF-PIIFD-RPM_ performed poorly (6.67%) on our _MEMO_ dataset, which suggested that the hand-crafted features might be insufficiently powerful for this scenario. The two _direct_ deep learning-based methods, _LoFTR_ and _SG_, also produced unsatisfactory results (13.33% and 0%) due to the large distribution gap between their training dataset [66] and our _MEMO_ dataset. _CycleGAN-based_ with pretrained SuperPoint (SP) achieved relatively better performance compared to the other competing methods, demonstrating its potential for solving difficult multimodal registration problems. _Content-Adaptive_, which is also a segmentation-based method, performed much worse (33.33%) than our method on the _MEMO_ dataset. We attributed this to our use of annotated vessel segmentation masks from a single modality (EMA in our case). Different from our two-stage semi-supervised learning framework (_LVD-Seg_), _Content-Adaptive_ trained the segmentation networks naively with style loss, the self-comparison loss and the image registration loss. To improve the segmentation quality, _Content-Adaptive_ additionally guided the segmentation networks with mean phase images of the input images using pixel-adaptive convolution (PAC) [51]. However, due to the high complexity of OCTA images, the OCTA mean phase images were usually too noisy to correctly guide the segmentation networks. On the other hand, our _LVD-Seg_ framework used very few (e.g., three) annotated vessel segmentation masks from one modality to guide the segmentation networks to segment similar vessels in EMA and OCTA image pairs for both stages. In addition, the two-stage design also enhanced training stability when style loss was involved. These are particularly important for multimodal retinal image registration when a large VD difference exists between the two modalities. In Fig. 6, we also demonstrate the qualitative registration results of our method and the baseline methods on a selected image pair from our _MEMO_ dataset. Two different approaches were used to present the results, including grid images (top) and overlay images (bottom). The RMSE and MSD are listed below the overlay images. Our method demonstrated more accurate alignment compared to all the baseline methods.

## VII Discussion

### _Ablation Study on the Two-stage Learning Framework_

In this section, we investigated the benefits of each stage in the proposed two-stage semi-supervised learning framework (LVD-Seg) by removing one of the stages from the framework. The results are shown in Table V.
For _Stage 1 only_, we trained the EMA vessel segmentation network following the procedure described in Section IV and used the same EMA vessel segmentation network for segmenting OCTA images. For _Stage 2 only_, we trained the segmentation networks with style loss only. In general, the performance of both variants decreased significantly. _Stage 1 only_ performed the worst, probably due to the limited number (three in our case) of annotated vessel segmentation masks used for supervised training, making the segmentation network generalize poorly on the test images and resulting in poor registration performance. Furthermore, _Stage 1 only_ directly applied the trained EMA segmentation network on OCTA images, which may not work due to the relatively large VD difference between the two modalities. Although _Stage 2 only_ worked better than _Stage 1 only_, it still significantly lagged behind our _LVD-Seg_. This implied that relying purely on style loss could make the training unstable and result in unreliable registration. All these results emphasized the effectiveness of the proposed two-stage semi-supervised learning framework (LVD-Seg).

### _Ablation Study on Number of Required Vessel Masks_

The major advantage of our method lies in the requirement for only a few manually annotated vessel segmentation masks during stage 1 of the _LVD-Seg_ framework. In this section, we further investigated the performance of our method on the _MEMO_ dataset by using different numbers of labeled EMA vessel segmentation masks for supervised training. Specifically, during stage 1 of _LVD-Seg_, we trained our vessel segmentation module using 3, 5, 10 and 15 randomly sampled annotated EMA vessel segmentation masks. In stage 2, we used all 15 training pairs as our default setting. The results are shown in Table VI. We found that using more annotated EMA vessel segmentation masks during the supervised training (i.e., stage 1 of _LVD-Seg_) did not affect the performance significantly. For instance, there was only a 6.66% difference between the highest and the lowest success rates. In other words, the proposed method required very few (e.g., three) annotated vessel segmentation masks to maintain its accuracy, demonstrating its feasibility.

### _Ablation Study on Data Used for Supervised Training_

As mentioned in the previous section, the primary cost of our method lies in the requirement for a few manually annotated vessel segmentation masks. In this section, we further investigated whether existing retinal image segmentation datasets could potentially be used to train our segmentation module during stage 1 of _LVD-Seg_. We selected two datasets, HRF [67] and DRIVE [68], to conduct the experiments. Specifically, HRF and DRIVE are two retinal color fundus (CF) image datasets providing ground truth vessel segmentation masks. We randomly chose three images from each dataset to train our segmentation network during stage 1 of _LVD-Seg_. Other than that, the default settings were adopted. The results are shown in Table VII. Compared to the performance obtained using our _MEMO_ dataset, the performance of using the HRF and DRIVE datasets in stage 1 both decreased. Additionally, using the HRF dataset achieved superior performance compared to using the DRIVE dataset. One possible reason for this is that the HRF dataset has a VD more similar to that of our _MEMO_ dataset. The average VD of _MEMO_ (EMA), HRF and DRIVE are 4.71%, 10.05% and 11.21%, respectively.
This implies that selecting ground truth vessel segmentation masks whose VD is closer to that of the target images (EMA images in our case) might be very important for achieving better results when using the proposed framework.

### _Potential of the Proposed Method_

The proposed _VDD-Reg_ requires very little labeling. It could potentially be applied to other vessel imaging modalities, especially modalities with large differences in vessel structures. This has wider applications for any comparison of SLO images with OCTA. For instance, the registration of FAF to OCTA images may benefit from this approach [69]. Furthermore, multimodal adaptive optics devices which use both AO-SLO and AO-OCT methods could also benefit from this approach [70, 71].

Fig. 6: Registration results of our method and the baseline methods on a selected image pair from our _MEMO_ dataset. The top row shows the grid images where the EMA and OCTA images are interlaced as small grids. The bottom row shows the overlay images of the EMA (green) and OCTA (orange) vessel segmentation masks generated by each method. The RMSE and MSD of each method are listed below each overlay image.

## VIII Conclusion

In this paper, we present _MEMO_, the first public multimodal EMA and OCTA retinal image dataset. _MEMO_ provides registration ground truth, EMA image sequences and OCTA projection images, desirable for various research fields. With _MEMO_, we first uncover a unique challenge of multimodal retinal image registration between modalities with large VD differences. After that, we propose a segmentation-based deep-learning registration framework, _VDD-Reg_, and a new evaluation metric, _Masked Soft Dice_ (_MSD_), to deal with the large vessel density difference between EMA and OCTA in multimodal retinal image registration. Moreover, to train the segmentation module in our _VDD-Reg_, we design a novel two-stage semi-supervised learning framework, _LVD-Seg_, which combines supervised and unsupervised losses. Both quantitative and qualitative results demonstrate that _VDD-Reg_ outperforms the baseline methods for both small VD differences (i.e., CF-FA) and large VD differences (i.e., _MEMO_). Additionally, _VDD-Reg_ requires as few as three annotated vessel segmentation masks to maintain its performance, which demonstrates its promising potential for registering other modalities.

## Acknowledgment

The authors would like to acknowledge the funding support from the University of Maryland Strategic Partnership: MPOwering the State, a formal collaboration between the University of Maryland College Park and the University of Maryland Baltimore.
2309.10736
Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation
We consider the problem of learning a model from multiple heterogeneous sources with the goal of performing well on a new target distribution. The goal of learner is to mix these data sources in a target-distribution aware way and simultaneously minimize the empirical risk on the mixed source. The literature has made some tangible advancements in establishing theory of learning on mixture domain. However, there are still two unsolved problems. Firstly, how to estimate the optimal mixture of sources, given a target domain; Secondly, when there are numerous target domains, how to solve empirical risk minimization (ERM) for each target using possibly unique mixture of data sources in a computationally efficient manner. In this paper we address both problems efficiently and with guarantees. We cast the first problem, mixture weight estimation, as a convex-nonconcave compositional minimax problem, and propose an efficient stochastic algorithm with provable stationarity guarantees. Next, for the second problem, we identify that for certain regimes, solving ERM for each target domain individually can be avoided, and instead parameters for a target optimal model can be viewed as a non-linear function on a space of the mixture coefficients. Building upon this, we show that in the offline setting, a GD-trained overparameterized neural network can provably learn such function to predict the model of target domain instead of solving a designated ERM problem. Finally, we also consider an online setting and propose a label efficient online algorithm, which predicts parameters for new targets given an arbitrary sequence of mixing coefficients, while enjoying regret guarantees.
Yuyang Deng, Ilja Kuzborskij, Mehrdad Mahdavi
2023-09-19T16:29:34Z
http://arxiv.org/abs/2309.10736v2
# Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation ###### Abstract We consider the problem of learning a model from multiple heterogeneous sources with the goal of performing well on a new target distribution. The goal of learner is to mix these data sources in a target-distribution aware way and simultaneously minimize the empirical risk on the mixed source. The literature has made some tangible advancements in establishing theory of learning on mixture domain. However, there are still two unsolved problems. Firstly, how to estimate the optimal mixture of sources, given a target domain; Secondly, when there are numerous target domains, how to solve empirical risk minimization (ERM) for each target using possibly unique mixture of data sources in a computationally efficient manner. In this paper we address both problems efficiently and with guarantees. We cast the first problem, mixture weight estimation, as a convex-nonconcave compositional minimax problem, and propose an efficient stochastic algorithm with provable stationarity guarantees. Next, for the second problem, we identify that for certain regimes, solving ERM for each target domain individually can be avoided, and instead parameters for a target optimal model can be viewed as a non-linear function on a space of the mixture coefficients. Building upon this, we show that in the offline setting, a GD-trained overparameterized neural network can provably learn such function to _predict_ the model of target domain instead of solving a designated ERM problem. Finally, we also consider an online setting and propose a label efficient online algorithm, which predicts parameters for new targets given an arbitrary sequence of mixing coefficients, while enjoying regret guarantees. ## 1 Introduction With a rapidly increasing amount of decentralized data, multiple source domain adaptation has been an important learning scheme in modern machine learning, e.g., in learning with data collected from multiple sources (e.g. crowdsourcing) or learning in distributed systems where the data can be highly heterogeneous such as federated learning. In this learning scenario, given an input space \(\mathcal{X}\) and output space \(\mathcal{Y}\), we assume access to \(N\) sources of data, each with its own underlying distributions \(\mathcal{D}_{j},j\in[N]\) over \(\mathcal{X}\times\mathcal{Y}\). Then, given i.i.d. training samples \(\widehat{\mathcal{D}}_{1},\ldots,\widehat{\mathcal{D}}_{N}\), and a hypothesis space \(\mathcal{H}\), our goal is to learn a model on the combination of these sources, for instance through the Empirical Risk Minimization (ERM) procedure \(\widehat{h}_{\mathbf{\alpha}}=\arg\min_{h\in\mathcal{H}}\sum_{j=1}^{N}\alpha(j) \mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)\), where \(\mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)\) is the empirical loss of a model \(h\in\mathcal{H}\) over data samples in \(\widehat{\mathcal{D}}_{j}\), and \(\mathbf{\alpha}\in\Delta^{N}\) is some mixing parameter, such that predictor \(\widehat{h}_{\mathbf{\alpha}}\) entails a good generalization performance on a target domain characterized by a distribution \(\mathcal{T}\), i.e., yielding a small true risk \(\mathcal{L}_{\mathcal{T}}(\widehat{h}_{\mathbf{\alpha}})=\int\ell(\widehat{h}_{ \mathbf{\alpha}}(\mathbf{x}),y)\,\mathrm{d}\mathcal{T}(\mathbf{x},y)\). 
It is natural to measure the quality of \(\widehat{h}_{\mathbf{\alpha}}\) in terms of the excess risk -- namely, the difference between the risk of the optimal model for the target domain, \(h_{\mathcal{T}}^{*}=\arg\min_{h\in\mathcal{H}}\mathcal{L}_{\mathcal{T}}(h)\), and that achieved by \(\widehat{h}_{\mathbf{\alpha}}\). Clearly, the performance of \(\widehat{h}_{\mathbf{\alpha}}\) will be influenced by several factors, such as the choice of mixing coefficients \(\mathbf{\alpha}\) used to aggregate the empirical losses, the capacity of \(\mathcal{H}\), and the discrepancy between target and source data distributions. So, in order to design a good procedure for learning \(\widehat{h}_{\mathbf{\alpha}}\) we need to understand the aforementioned trade-offs. Over the years the literature on multiple source learning has dedicated considerable attention to this problem [25, 5, 35, 17, 12]. To this end, we consider the following bound on the excess risk of \(\widehat{h}_{\mathbf{\alpha}}\):

**Theorem 1** (Multi-source learning bound [17]).: _Given \(N\) source data distributions \(\mathcal{D}_{1},\ldots,\mathcal{D}_{N}\) and a target data distribution \(\mathcal{T}\), let \(\widehat{h}_{\mathbf{\alpha}}=\arg\min_{h\in\mathcal{H}}\sum_{j=1}^{N}\alpha(j)\mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)\) be the ERM solution with fixed mixture weights \(\mathbf{\alpha}\in\Delta^{N}\). Then for any \(\nu\geq 0\), with probability at least \(1-4e^{-\nu}\) it holds that_

\[\mathcal{L}_{\mathcal{T}}(\widehat{h}_{\mathbf{\alpha}})\leq\mathcal{L}_{\mathcal{T}}(h_{\mathcal{T}}^{*})+\mathcal{C}(\mathcal{H},\mathbf{\alpha})+\sup_{h\in\mathcal{H}}\sum\nolimits_{j=1}^{N}\alpha(j)|\mathcal{L}_{\widehat{\mathcal{T}}}(h)-\mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)|+C\sqrt{\frac{\nu}{2}\sum\nolimits_{j=1}^{N}\frac{\alpha^{2}(j)}{m_{j}}}\]

_where \(C\) is some constant, the complexity term is \(\mathcal{C}(\mathcal{H},\mathbf{\alpha}):=\sum_{j=1}^{N}\alpha(j)\mathfrak{R}_{j}(\mathcal{H})\) with \(\mathfrak{R}_{j}(\mathcal{H})\) being the Rademacher complexity of \(\mathcal{H}\) w.r.t. data source \(j\), and \(m_{j}\) is the number of samples from source \(j\)._

The above bound indicates that the generalization ability of a model learnt by ERM on \(\mathbf{\alpha}\)-combined sources depends on the \(\mathbf{\alpha}\)-weighted sum of target-source discrepancies and on the number of samples drawn from each source. To entail good generalization on the target domain, it is natural to minimize the right-hand side of the bound over \(\mathbf{\alpha}\in\Delta^{N}\) to get a _good_ mixture parameter. In this paper we cast this idea as solving the following minimax optimization problem:

\[\min_{\mathbf{\alpha}\in\Delta^{N}}\max_{h\in\mathcal{H}}\sum\nolimits_{j=1}^{N}\alpha(j)|\mathcal{L}_{\widehat{\mathcal{T}}}(h)-\mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)|+C\sqrt{\sum\nolimits_{j=1}^{N}\frac{\alpha^{2}(j)}{m_{j}}}, \tag{1}\]

where we drop the complexity term, as it becomes identical for all sources once the hypothesis space \(\mathcal{H}\) is fixed and bounded by a computable distribution-independent quantity such as the VC dimension [31], or it can be controlled by the choice of \(\mathcal{H}\) or through data-dependent regularization. [17] gave a simple algorithm to minimize the bound of Theorem 1 for binary classifiers and 0-1 loss; however, their algorithm does not extend to a more general setting.
[24] also looked at minimization of a similar bound with the goal of finding weights for an optimal mixture, but they did not give a practical algorithm, nor a provable convergence guarantee. However, none of these works aimed to solve (1) because of its complex structure, and so an efficient algorithm for solving (1) has so far not been proposed. In particular, the first difficulty with (1) is that it is a convex-nonconcave objective, which means that all minimax algorithms that require inner concavity [23, 22, 27, 28] or a PL condition [30] will fail to converge to a stationary point. However, the literature on optimization of this type of objective has recently made tangible progress: the first provable convex-nonconcave algorithm was proposed by [34], where they consider an alternating gradient descent ascent algorithm. Their algorithm is deterministic, but in practice we favor a stochastic gradient method. The second difficulty in solving (1) is its compositional structure, which means that simply replacing gradients with stochastic gradients in [34] will not retain convergence guarantees. To tackle these two difficulties, we propose a _stochastic corrected gradient descent ascent_ algorithm, with a provable convergence guarantee for solving (1). Our method can be viewed as a variant of the Stochastic Gradient Descent-Ascent (SGDA) algorithm, and moreover here we give a positive answer to the question posed by [34], on _whether an algorithm performing simultaneous updates can optimize a convex-nonconcave problem_, which could be interesting in its own right. The discussion above concerns learning with one target domain, but in practice a more common scenario is that we have multiple target domains to adapt to. For example, in federated learning [26], millions of users might wish to learn a good model from multiple sources that performs well on their own data distributions. Hence, we propose to study the _Multi-source Multi-target Domain Adaptation_ scenario (M\({}^{2}\)DA). Here we assume that we have \(M\) target domains, each of them characterized by its own distribution \(\mathcal{T}_{i},i\in[M]\) over \(\mathcal{X}\times\mathcal{Y}\). Adapting to \(M\) different target domains requires different mixture weights \(\mathbf{\alpha}_{1},\ldots,\mathbf{\alpha}_{M}\), either obtained by solving (1) or supplied by the user. Equipped with mixing parameters, we next have to solve \(M\) weighted ERM problems to tailor solutions for each target domain \(\mathcal{T}_{i}\), that is

\[\widehat{h}_{i}=\arg\min_{h\in\mathcal{H}}\sum\nolimits_{j=1}^{N}\alpha_{i}(j)\mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)\;,\qquad i\in[M].\]

Notice that these \(M\) ERM objectives share the same component functions, so we call this problem _co-component empirical risk minimization_. A straightforward and naive approach is to solve all \(M\) weighted ERMs individually, which becomes computationally inefficient when dealing with a large number of data sources. Nevertheless, given the benign structure of these \(M\) ERM problems, we may inquire whether there is a computationally efficient method for discovering all solutions without the necessity of solving each one individually. We give an affirmative answer to this question by replacing the _learning_ of the target model with _prediction_ of the target model, and propose two efficient strategies to learn such predictors (for instance, a neural network).
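As a toy illustration of this prediction idea — entirely ours, with made-up quadratic risks, and with a small Adam-trained MLP standing in for the two-layer network analyzed later — consider sources whose \(\boldsymbol{\alpha}\)-weighted ERM has a closed-form solution, so that a network can simply be fit to the map \(\boldsymbol{\alpha}\mapsto\mathbf{w}^{*}(\boldsymbol{\alpha})\):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, d = 5, 3  # number of sources, parameter dimension

# Toy strongly convex source risks f_j(w) = 0.5 w^T A_j w - b_j^T w, so the
# alpha-weighted ERM solution has the closed form
# w*(alpha) = (sum_j alpha_j A_j)^{-1} (sum_j alpha_j b_j).
A, b = [], []
for _ in range(N):
    R = torch.randn(d, d)
    A.append(torch.eye(d) + 0.1 * R @ R.T)   # positive definite
    b.append(torch.randn(d))

def w_star(alpha):
    Aw = sum(a * Aj for a, Aj in zip(alpha, A))
    bw = sum(a * bj for a, bj in zip(alpha, b))
    return torch.linalg.solve(Aw, bw)

# Fit a small network to the Lipschitz map alpha -> w*(alpha); it then
# predicts parameters for a new target without solving that target's ERM.
net = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, d))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
dirichlet = torch.distributions.Dirichlet(torch.ones(N))
for _ in range(2000):
    alphas = dirichlet.sample((128,))                  # batch of mixtures
    targets = torch.stack([w_star(a) for a in alphas])
    loss = ((net(alphas) - targets) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

In this toy setting the regression targets \(\mathbf{w}^{*}(\boldsymbol{\alpha})\) are free to generate; in the setting studied below they come from (approximately) solving a limited number of ERM problems, but the object being learned — a Lipschitz map from mixture weights to optimal parameters — is the same.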
Our algorithm designs are based on the following observation: if we assume that each empirical risk \(\mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)\) is strongly convex and smooth in the parameters of a hypothesis \(h\), then the optimal parameters are given by _a Lipschitz function_ of the mixture weights \(\boldsymbol{\alpha}\). More formally, denoting by \(\mathbf{w}^{*}(\boldsymbol{\alpha})\) the optimal parameters of a hypothesis \(h\) for the \(\boldsymbol{\alpha}\)-weighted ERM, \(\mathbf{w}^{*}(\boldsymbol{\alpha})\) is Lipschitz in \(\boldsymbol{\alpha}\). This means that we can learn the function \(\mathbf{w}^{*}(\cdot)\) with, say, a neural network and gradient descent, with provably small generalization error. Moreover, analysis of the generalization error allows us to understand when such target model prediction is more efficient than direct learning. In particular, we look at such a _phase transition of efficiency_, and conclude that when the number of targets \(M\) is much larger than the number of sources \(N\), i.e., \(M\geq\Omega((1/\epsilon)^{N/2})\), learning to predict solutions is more efficient than optimizing to solve all \(M\) ERMs. When \(M\) is relatively small, optimizing to solve the ERMs is more efficient than learning to predict. Finally, as a second learning scenario we consider an online learning setting, where the mixture weights \(\boldsymbol{\alpha}_{1},\ldots,\boldsymbol{\alpha}_{M}\) arrive sequentially and may not even originate from the same distribution. We cast this problem as an online non-parametric regression problem with inexact labels and propose a label-efficient online algorithm to predict models.

**Our contributions.** The main contributions of this paper are summarized as follows:

* We study the multi-source domain adaptation problem where there are multiple source domains and we wish to learn, from a mixture of the source domains, a new model that can perform well on a given target domain. We build upon existing learning-theoretic results on multi-source domain adaptation and design a new algorithm for weighting the source domains that casts this problem as a convex-nonconcave minimax optimization problem. In Section 2 we give the first stochastic optimization algorithm for this problem which provably converges to a stationary point. The proposed algorithm is the first provably convergent algorithm for a stochastic _compositional_ convex-nonconcave minimax problem.
* We further consider the above adaptation problem with multiple target domains, the Multi-source Multi-target Domain Adaptation (M\({}^{2}\)DA). We observe that these multiple adaptation problems might share a common structure, which allows us to avoid solving the adaptation problem for each target domain individually; instead, we can replace it by direct prediction of the parameters for a new problem. We consider offline and online settings for the prediction of target parameters, and propose computationally efficient algorithms for both. For the offline setting, in Section 3.1 we propose to use a two-layer neural network to learn the optimal parameters \(\mathbf{w}^{*}(\boldsymbol{\alpha})\) using bilevel gradient descent. We show that our algorithm can achieve \(O(n^{-\frac{2}{2+N}})\) excess risk. We also identify the regime where our learning-based approach is more efficient compared to directly solving each target problem individually.
* Finally, in Section 3.2 we focus on the scenario where target problems arrive sequentially (and could be dependent) and extend our study of direct target parameter prediction to the online setting.
We propose a label-efficient algorithm which enjoys \(O(n^{-\frac{1}{1+N}})\) average regret.

**Notation.** We introduce some basic definitions and notation that will be used throughout the paper. Let \(\mathbb{B}_{q}^{d}(r)\) be the ball in the \(\ell_{q}\) metric centered at the origin and of radius \(r>0\), let \(\mathbb{S}^{d-1}(r)=\{\mathbf{x}\in\mathbb{R}^{d}\ :\ \|\mathbf{x}\|_{2}=r\}\subset\mathbb{R}^{d}\) be the \(\ell_{2}\) sphere of radius \(r\) centered at the origin, and let \(\mathbb{S}^{d-1}=\mathbb{S}^{d-1}(1)\). In addition, the probability simplex is defined as \(\Delta^{N}=\{\boldsymbol{\alpha}\in[0,1]^{N}:\|\boldsymbol{\alpha}\|_{1}=1\}\). Concatenation of vectors is denoted by parentheses, that is \((\mathbf{w}_{1},\ldots,\mathbf{w}_{m})=[\mathbf{w}_{1}^{\top},\ldots,\mathbf{w}_{m}^{\top}]^{\top}\). A vector norm \(\|\cdot\|\) is understood as the Euclidean norm, while \(\|\mathbf{x}\|_{\infty}=\max_{i}|x_{i}|\). For a matrix \(\mathbf{M}\), \(\|\mathbf{M}\|_{\mathrm{op}}\) denotes its spectral norm while \(\|\mathbf{M}\|_{F}\) is its Frobenius norm. For \(f:\mathbb{S}^{d-1}\rightarrow\mathbb{R}\) the _empirical semi-norm_ is defined as \(\|f\|_{n}^{2}=\frac{1}{n}(f(\mathbf{x}_{1})^{2}+\cdots+f(\mathbf{x}_{n})^{2})\) and is always taken w.r.t. the training sample \(S\). In addition, for \(g:\mathbb{S}^{d-1}\rightarrow\mathbb{R}\), we define the empirical inner product \(\left\langle f,g\right\rangle_{n}=\frac{1}{n}(f(\mathbf{x}_{1})g(\mathbf{x}_{1})+\cdots+f(\mathbf{x}_{n})g(\mathbf{x}_{n}))\). Finally, \(\|f\|_{2}^{2}=\|f\|_{L^{2}(P_{\mathcal{X}})}^{2}\).

## 2 Mixture Weights Estimation via Convex-nonconcave Minimax Optimization

In this section we focus on a single target domain and present Algorithm 1, designed to solve the minimax problem (1) and estimate the mixture weights. We assume that the hypothesis \(h\) is parameterized by a vector \(\mathbf{w}\in\mathcal{W}\subseteq\mathbb{R}^{d}\), and use \(f_{j}(\mathbf{w})=\mathcal{L}_{\widehat{\mathcal{D}}_{j}}(h)\) to denote the empirical risk over data source \(j\). Similarly we define \(f_{\widehat{\mathcal{T}}}(\mathbf{w})=\mathcal{L}_{\widehat{\mathcal{T}}}(h)\). We make the following standard relaxations. First, for computational simplicity, we drop the square root on the quadratic term w.r.t. \(\boldsymbol{\alpha}\). Second, since the absolute value function is non-smooth, we use a smooth approximation \(g\) to replace it, e.g., \(g(x)=\sqrt{x^{2}+c}\) where \(c\) is some small number (here \(g(\cdot)\) is a smooth approximation of \(|\cdot|\)). These relaxations lead to solving the following compositional convex-nonconcave minimax optimization problem: \[\min_{\alpha\in\Delta^{N}}\max_{\mathbf{w}\in\mathcal{W}}F(\boldsymbol{\alpha},\mathbf{w}):=\sum\nolimits_{j=1}^{N}\alpha(j)g(f_{\widehat{\mathcal{T}}}(\mathbf{w})-f_{j}(\mathbf{w}))+C\boldsymbol{\alpha}^{\top}\mathbf{M}\boldsymbol{\alpha}\;, \tag{2}\] where \(\mathbf{M}=\mathrm{diag}\{\frac{1}{m_{1}},\ldots,\frac{1}{m_{N}}\}\). We are interested in developing a stochastic optimization algorithm to solve (2). It is a strongly-convex-nonconcave minimax problem, one of the most difficult types of minimax problems due to the absence of inner concavity. ``` Input: Target domain \(\mathcal{T}\), Source domains \(\mathcal{D}_{1},...,\mathcal{D}_{N}\), Initialization variables \(\mathbf{w}^{0}=\mathbf{w}^{-1}\), \(z_{1}^{0},...,z_{N}^{0}\), Positive hyper-parameters \((B,\beta,\eta,\gamma)\) (see theorem 2).
for\(t=0,...,T-1\)do Sample a minibatch \(\xi_{\mathcal{T}}^{t}\) of size \(B\) from target domain \(\mathcal{T}\), and \(\xi_{1}^{t},\ldots,\xi_{N}^{t}\) from source domains \(\mathcal{D}_{1},\ldots,\mathcal{D}_{N}\) \(z_{j}^{t+1}=(1-\beta^{t})\left(z_{j}^{t}+f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t};\xi_{\mathcal{T}}^{t})-f_{j}(\mathbf{w}^{t};\xi_{j}^{t})-(f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t-1};\xi_{\mathcal{T}}^{t})-f_{j}(\mathbf{w}^{t-1};\xi_{j}^{t}))\right)\) \(+\beta^{t}(f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t};\xi_{\mathcal{T}}^{t})-f_{j}(\mathbf{w}^{t};\xi_{j}^{t}))\). Compute gradient for \(\mathbf{w}\): \(\mathbf{g}_{\mathbf{w}}^{t}=\sum_{j=1}^{N}\alpha^{t}(j)\nabla g(z_{j}^{t+1})(\nabla f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t};\xi_{\mathcal{T}}^{t})-\nabla f_{j}(\mathbf{w}^{t};\xi_{j}^{t}))\) \(\mathbf{w}^{t+1}=\mathcal{P}_{\mathcal{W}}\left(\mathbf{w}^{t}+\gamma\mathbf{g}_{\mathbf{w}}^{t}\right)\) Make vector \(\mathbf{v}\in\mathbb{R}^{N}\) whose \(j\)th coordinate is \(g(z_{j}^{t})\). Compute gradient for \(\boldsymbol{\alpha}\): \(\mathbf{g}_{\boldsymbol{\alpha}}^{t}=\mathbf{v}+2C\mathbf{M}\boldsymbol{\alpha}^{t}\). \(\boldsymbol{\alpha}^{t+1}=\mathcal{P}_{\Delta^{N}}\left(\boldsymbol{\alpha}^{t}-\eta\mathbf{g}_{\boldsymbol{\alpha}}^{t}\right)\) ``` **Algorithm 1** Mixture Weight Estimation

To the best of our knowledge, only Xu _et al._ [34] proposed a deterministic algorithm to solve it, and it was previously unknown whether a stochastic algorithm can solve it with a provable guarantee. We give an affirmative answer to this question by proposing an algorithm built on the celebrated stochastic gradient descent-ascent [23]. In addition to the nonconcave nature of (2), another difficulty arising from the compositional structure of the objective is that we cannot simply compute stochastic gradients, namely (with \(\mathbb{E}[\cdot]\equiv\mathbb{E}[\cdot\mid\mathrm{data}]\)): \[\mathbb{E}[g^{\prime}(f_{\widehat{\mathcal{T}}}(\mathbf{w};\xi_{\mathcal{T}})-f_{j}(\mathbf{w};\xi_{j}))(\nabla f_{\widehat{\mathcal{T}}}(\mathbf{w};\xi_{\mathcal{T}})-\nabla f_{j}(\mathbf{w};\xi_{j}))]\neq g^{\prime}(f_{\widehat{\mathcal{T}}}(\mathbf{w})-f_{j}(\mathbf{w}))(\nabla f_{\widehat{\mathcal{T}}}(\mathbf{w})-\nabla f_{j}(\mathbf{w})),\] where \(\xi_{1},\xi_{2},\ldots\) are independent random elements in the sample space \(\Xi=\mathcal{X}\times\mathcal{Y}\) that capture the stochasticity of the algorithm. To alleviate this issue, we borrow the 'stochastic corrected gradient' idea from [7] and maintain an auxiliary variable \[z_{j}^{t+1}=(1-\beta^{t})\left(z_{j}^{t}+f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t};\xi_{\mathcal{T}}^{t})-f_{j}(\mathbf{w}^{t};\xi_{j}^{t})-(f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t-1};\xi_{\mathcal{T}}^{t})-f_{j}(\mathbf{w}^{t-1};\xi_{j}^{t}))\right)\] \[+\beta^{t}(f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t};\xi_{\mathcal{T}}^{t})-f_{j}(\mathbf{w}^{t};\xi_{j}^{t})).\] In words, for each source \(j\), we maintain a variable \(z_{j}^{t}\) serving as a correction term, ensuring that \(z_{j}^{t}\) tracks the inner function component \(f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t})-f_{j}(\mathbf{w}^{t})\).
Relying on the auxiliary variable, we then have the gradient estimates \[\mathbf{g}_{\mathbf{w}}^{t}=\sum_{j=1}^{N}\alpha^{t}(j)\nabla g(z_{j}^{t+1})(\nabla f_{\widehat{\mathcal{T}}}(\mathbf{w}^{t};\xi_{\mathcal{T}}^{t})-\nabla f_{j}(\mathbf{w}^{t};\xi_{j}^{t})),\quad\mathbf{g}_{\alpha}^{t}=[g(z_{1}^{t}),...,g(z_{N}^{t})]+2C\mathbf{M}\boldsymbol{\alpha}^{t}.\] Then, denoting the projection operator onto a convex set \(\mathcal{C}\) by \(\mathcal{P}_{\mathcal{C}}(\cdot)\), the update rule becomes \[\mathbf{w}^{t+1}=\mathcal{P}_{\mathcal{W}}\left(\mathbf{w}^{t}+\gamma\mathbf{g}_{w}^{t}\right),\qquad\boldsymbol{\alpha}^{t+1}=\mathcal{P}_{\Delta^{N}}\left(\boldsymbol{\alpha}^{t}-\eta\mathbf{g}_{\boldsymbol{\alpha}}^{t}\right)\.\]

### Convergence Analysis

In this section we present the convergence guarantee for Algorithm 1. We make the following standard assumptions on the objective in (2). **Assumption 1**.: _We make the following assumptions on \(g\) and \(f\):_ 1. \(g(z)\) _is_ \(G_{g}\) _Lipschitz and_ \(L_{g}\) _smooth._ \(f_{j}(\mathbf{w};\xi)\) _is_ \(G_{f}\) _Lipschitz and_ \(L_{f}\) _smooth,_ \(\forall\mathbf{w}\in\mathcal{W},j\in[N],\,\xi\in\Xi\)__ 2. \(\mathbb{E}\left\|\nabla f_{j}(\mathbf{w};\xi)-\nabla f_{j}(\mathbf{w})\right\|^{2}\leq\sigma^{2},\forall\mathbf{w}\in\mathcal{W}\)_._ 3. \(\max_{\boldsymbol{\alpha}\in\Delta^{N},\mathbf{w}\in\mathcal{W}}F(\boldsymbol{\alpha},\mathbf{w})\leq F_{max}\)_,_ \(\max_{\mathbf{w}\in\mathcal{W}}g(f_{\widehat{\mathcal{T}}}(\mathbf{w})-f_{j}(\mathbf{w}))\leq B_{g},\forall j\in[N]\)_._ Points 1 and 2 of Assumption 1 are standard in the literature on compositional optimization [7]. Point 3 guarantees boundedness of the objective value, which can be ensured since we work over a bounded parameter domain. Assumption 1 also implies the following property of \(F\). **Proposition 1**.: _Under Assumption 1, \(F(\boldsymbol{\alpha},\mathbf{w})\) is \(L:=\max\left\{4G_{f}^{2}L_{g}+2G_{g}L_{f},\frac{2C}{m_{\min}}\right\}\) smooth, and \(\mu=\frac{2C}{m_{\max}}\) strongly convex in \(\boldsymbol{\alpha}\)._ Next, we consider the following convergence measure: **Definition 1** (Convergence Measure [34]).: _Given two parameters, \(\boldsymbol{\alpha}\) and \(\mathbf{w}\), we define the following quantity as a stationary gap_ \[\nabla G(\boldsymbol{\alpha},\mathbf{w})=\left(\begin{array}{l}\frac{1}{\eta}\left(\boldsymbol{\alpha}-\mathcal{P}_{\Delta^{N}}\left(\boldsymbol{\alpha}-\eta\nabla_{\boldsymbol{\alpha}}F(\boldsymbol{\alpha},\mathbf{w})\right)\right)\\ \frac{1}{\gamma}\left(\mathbf{w}-\mathcal{P}_{\mathcal{W}}\left(\mathbf{w}+\gamma\nabla_{\mathbf{w}}F(\boldsymbol{\alpha},\mathbf{w})\right)\right)\end{array}\right)\.\] Given the nonconcave nature of (2), we are only able to show convergence to a stationary point. Definition 1 measures the stationarity of a parameter pair \((\boldsymbol{\alpha},\mathbf{w})\) by examining how much the parameters change if we run one step of projected gradient descent-ascent on them. Alternatively, one could consider the widely employed _primal function_ [23] as a convergence measure, \(\|\nabla\Phi(\boldsymbol{\alpha})\|\) with \(\Phi(\boldsymbol{\alpha})=\max_{\mathbf{w}\in\mathcal{W}}F(\boldsymbol{\alpha},\mathbf{w})\), but it is ill-suited to express stationarity since \(F(\boldsymbol{\alpha},\cdot)\) is non-concave.
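For concreteness, the following is a minimal NumPy sketch of the updates of Algorithm 1 on synthetic least-squares sources. It is our illustration under simplifying assumptions (squared loss, the smooth absolute value \(g(x)=\sqrt{x^{2}+c}\), the projection onto \(\mathcal{W}\) omitted, and the freshest \(z\) used in both updates); the toy data and step sizes are not the paper's choices. The simplex projection is the standard sort-based algorithm.

```
import numpy as np

def g(z, c=1e-3):                 # smooth |.|: g(z) = sqrt(z^2 + c)
    return np.sqrt(z**2 + c)

def g_prime(z, c=1e-3):
    return z / np.sqrt(z**2 + c)

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

# Toy sources and target with squared loss f(w; xi) = (x^T w - y)^2 / 2.
rng = np.random.default_rng(0)
N, d, m, B = 4, 5, 200, 32
Xs = [rng.normal(size=(m, d)) for _ in range(N)]
ys = [x @ rng.normal(size=d) for x in Xs]
Xt = rng.normal(size=(m, d)); yt = Xt @ rng.normal(size=d)

def loss_and_grad(X, y, w, idx):
    r = X[idx] @ w - y[idx]
    return 0.5 * np.mean(r**2), X[idx].T @ r / len(idx)

C, beta, eta, gamma, T = 1.0, 0.1, 0.05, 0.05, 300
Mdiag = np.full(N, 1.0 / m)          # M = diag(1/m_j)
alpha = np.full(N, 1.0 / N)
w = np.zeros(d); w_prev = w.copy()
z = np.zeros(N)
for t in range(T):
    it = rng.integers(0, m, B)       # target minibatch indices
    ft, gt = loss_and_grad(Xt, yt, w, it)
    ft_prev, _ = loss_and_grad(Xt, yt, w_prev, it)
    gw = np.zeros(d)
    for j in range(N):
        ij = rng.integers(0, m, B)   # source-j minibatch indices
        fj, gj = loss_and_grad(Xs[j], ys[j], w, ij)
        fj_prev, _ = loss_and_grad(Xs[j], ys[j], w_prev, ij)
        delta, delta_prev = ft - fj, ft_prev - fj_prev
        z[j] = (1 - beta) * (z[j] + delta - delta_prev) + beta * delta  # corrected estimate
        gw += alpha[j] * g_prime(z[j]) * (gt - gj)
    w_prev = w.copy()
    w = w + gamma * gw               # ascent in w (projection onto W omitted)
    alpha = project_simplex(alpha - eta * (g(z) + 2 * C * Mdiag * alpha))
```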
One of our main results, proved in appendix A, establishes the convergence rate of Algorithm 1: **Theorem 2**.: _Consider Assumption 1 and let \(L\) and \(\mu\) be defined in Proposition 1. Then, letting \(B=\Theta\left(\max\left\{\frac{G_{g}^{2}N\sigma^{2}}{\epsilon^{2}},\frac{\kappa L\sigma^{2}}{\epsilon^{2}}\right\}\right)\), \(\beta=0.1\), \(\eta=\Theta\left(\frac{\mu}{L^{2}}\right)\), \(\gamma=\Theta\left(\frac{\mu^{3}}{NG_{g}^{2}G_{f}^{2}L^{2}}\right)\), Algorithm 1 guarantees that_ \[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\left\|\nabla G(\boldsymbol{\alpha}^{t},\mathbf{w}^{t})\right\|^{2}\leq\epsilon^{2}\] _with the gradient complexity bounded by:_ \[O\left(\frac{\kappa LF_{\max}}{\epsilon^{2}}\cdot\max\left\{\frac{\kappa L\sigma^{2}}{\epsilon^{2}},\frac{G_{g}^{2}N\sigma^{2}}{\epsilon^{2}},1\right\}\right).\] To the best of our knowledge, this is the first convergence proof for a stochastic algorithm solving a strongly-convex-nonconcave problem. We achieve the \(O(\epsilon^{-4})\) gradient complexity required to reach an \(\epsilon\)-stationary point. The most closely related result is that of [34], who show an \(O(\epsilon^{-2})\) rate for a _deterministic_ Alternating Gradient Projection (AGP) method in the strongly-convex-nonconcave setting. Note that our result also positively answers the question posed by [34] of whether an algorithm performing simultaneous instead of alternating updates can optimize a strongly-convex-nonconcave minimax problem. Finally, compared to the \(O(\epsilon^{-4})\) rate of SGDA for nonconvex-strongly-concave problems [23], we need roughly the same number of stochastic gradient evaluations.

## 3 Multiple Target Domains: Learning to Solve Co-component ERM

Up to now, the main focus was on the problem of learning _good_ mixture parameters given a single target domain. Now we turn to a more general setting where we have \(M\) target domains, each associated with a different data distribution, which necessitates per-target mixture weights \(\mathbf{\alpha}_{1},\ldots,\mathbf{\alpha}_{M}\), either obtained by our algorithm or provided by the user, to guarantee good generalization on the individual domains. Next, to get personalized models for these \(M\) domains, we have to solve \(M\) different ERM problems based on these mixture weights: \[\min_{\mathbf{w}\in\mathcal{W}}f_{\mathbf{\alpha}_{1}}(\mathbf{w}):=\sum\nolimits_{j=1}^{N}\alpha_{1}(j)f_{j}(\mathbf{w}),\quad\cdots,\quad\min_{\mathbf{w}\in\mathcal{W}}f_{\mathbf{\alpha}_{M}}(\mathbf{w}):=\sum\nolimits_{j=1}^{N}\alpha_{M}(j)f_{j}(\mathbf{w});\] A naive way is to solve each of them, which results in a computational complexity of \(M\) multiplied by the cost required to minimize each individual \(f_{\mathbf{\alpha}_{i}}\) to a desired precision. Such a solution does not exploit the benign structure of these ERM problems: they share the same component functions \((f_{j})_{j}\), and the only difference is in the mixture weights. This naturally motivates us to ask: can we design an efficient algorithm which avoids solving all these \(M\) co-component ERM problems from scratch? Consider the solution of \(\min_{\mathbf{w}\in\mathcal{W}}f_{\alpha}(\mathbf{w}):=\sum_{j=1}^{N}\alpha(j)f_{j}(\mathbf{w})\) as a function of \(\mathbf{\alpha}\), \[\mathbf{w}^{*}(\mathbf{\alpha}):=\arg\min_{\mathbf{w}\in\mathcal{W}}\sum\nolimits_{j=1}^{N}\alpha(j)f_{j}(\mathbf{w}).
\tag{3}\] Fortunately, if we assume that each source empirical risk \(f_{j}\) is strongly convex and smooth in the model parameters, we have the following Lipschitz property (shown in appendix B.1): **Lemma 1**.: _If each \(f_{j}\) is \(\mu_{f}\) strongly convex and \(L_{f}\) smooth, then \(\mathbf{w}^{*}(\cdot)\) is \(\kappa^{*}=\sqrt{N}G_{f}/\mu_{f}\) Lipschitz._ Some basic algebra shows that \(\mathbf{w}^{*}(\boldsymbol{\alpha})\) is indeed Lipschitz in \(\boldsymbol{\alpha}\) with respect to the \(\ell^{2}\) metric. The Lipschitz property allows us to learn \(\mathbf{w}^{*}\) efficiently. In particular, learning an arbitrary Lipschitz (and bounded) vector-valued function \(\mathbf{w}^{*}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{d}\) is an instance of the well-studied _nonparametric regression_ problem [11]. In the following we consider algorithms for learning \(\mathbf{w}^{*}\) in both the offline and the online setting, which provably estimate \(\mathbf{w}^{*}\) at an almost optimal rate. In the offline setting, we assume that we have access to a subset of the \(M\) mixture weights, say \(\boldsymbol{\alpha}_{1},...,\boldsymbol{\alpha}_{n}\), and we use a two-layer neural network \(\mathbf{h}_{\boldsymbol{\theta}}(\cdot)\) to learn \(\mathbf{w}^{*}(\cdot)\). Our algorithm is GD-based empirical risk minimization with adaptive label refining. In a nutshell, given an \(\boldsymbol{\alpha}\), since we do not have access to \(\mathbf{w}^{*}(\boldsymbol{\alpha})\), we use gradient descent both to solve the \(\boldsymbol{\alpha}\)-weighted ERM, obtaining an approximation of \(\mathbf{w}^{*}(\boldsymbol{\alpha})\), and to optimize the neural network parameters. Under a mild distributional assumption on \(\boldsymbol{\alpha}\), we show that our algorithm guarantees that the two-layer network learns \(\mathbf{w}^{*}(\boldsymbol{\alpha})\), that is, it achieves a small excess risk \(\mathbb{E}_{\boldsymbol{\alpha}}\left\|\mathbf{h}_{\boldsymbol{\theta}}(\boldsymbol{\alpha})-\mathbf{w}^{*}(\boldsymbol{\alpha})\right\|^{2}\). In the online setting, we assume that we observe an arbitrary sequence \(\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2},...\) on the simplex, and we wish to predict parameters close to \(\mathbf{w}^{*}(\boldsymbol{\alpha}_{1}),\mathbf{w}^{*}(\boldsymbol{\alpha}_{2}),...\). As a baseline we consider a well-known online nonparametric regression algorithm that greedily covers the simplex with local online learners and enjoys almost-optimal regret [13]. However, in the considered online protocol the algorithm needs access to labels, and revealing each label requires solving (3) to some desired accuracy. Here we explore the possibility that in practice we might be satisfied with an \(\epsilon\)-average regret while saving labelling cost. To this end we propose a modification of the algorithm that randomly skips some labels, at the price of a slightly larger regret.

### Offline Setting: Learning Lipschitz function with ReLU Neural Network

In this section we consider offline learning of \(\mathbf{w}^{*}\). Lipschitzness guarantees that the function \(\mathbf{w}^{*}(\boldsymbol{\alpha})\) can be learnt efficiently from finitely many \(\boldsymbol{\alpha}\)s and generalizes to unseen \(\boldsymbol{\alpha}\). Hence, we propose to use a vector-valued two-layer ReLU neural network \(\mathbf{h}_{\boldsymbol{\theta}}\) to learn \(\mathbf{w}^{*}(\boldsymbol{\alpha})\).
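Before turning to the network, a quick numerical sanity check of Lemma 1 can be done in a setting where \(\mathbf{w}^{*}\) is available in closed form; the quadratic sources below are an illustrative assumption, not the paper's model.

```
import numpy as np

# Sanity check of Lemma 1 for quadratic sources f_j(w) = 0.5 * ||w - c_j||^2
# (so mu_f = L_f = 1), where the weighted ERM minimizer is w*(alpha) = sum_j alpha_j c_j.
rng = np.random.default_rng(1)
N, d = 4, 3
c = rng.normal(size=(N, d))        # source "centers"

def w_star(alpha):
    return alpha @ c               # closed-form minimizer

for _ in range(5):
    a1 = rng.dirichlet(np.ones(N)) # random points on the simplex
    a2 = rng.dirichlet(np.ones(N))
    ratio = np.linalg.norm(w_star(a1) - w_star(a2)) / np.linalg.norm(a1 - a2)
    print(f"||w*(a1)-w*(a2)|| / ||a1-a2|| = {ratio:.3f}")  # bounded by ||c||_op
```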
We consider a two-layer vector-valued neural network \(\mathbf{h}_{\mathbf{\theta}}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{d}\), \(\mathbf{h}_{\mathbf{\theta}}(\mathbf{x})=[\mathbf{a}_{1}^{\top}(\mathbf{U}^{1}\mathbf{x})_{+},...,\mathbf{a}_{d}^{\top}(\mathbf{U}^{d}\mathbf{x})_{+}]\), where the parameters of the _hidden layer_ are matrices \(\mathbf{U}^{i}\in\mathbb{R}^{m\times N}\), collectively captured by the parameter vector \(\mathbf{\theta}=(\operatorname{vec}(\mathbf{U}^{1}),\ldots,\operatorname{vec}(\mathbf{U}^{d}))\in\mathbb{R}^{dmN}\). Here \(\mathbf{a}_{i}\in\{\pm 1/\sqrt{m}\}^{m}\) are the parameters of the _output layer_. In the following the hidden layer is tuned by Algorithm 2, while the parameters of the output layer are fixed throughout training. We assume that at initialization, for each \(\mathbf{U}^{i}\), the first half of its rows is drawn i.i.d. from an isotropic standard Gaussian and the remaining half is identical to the first half. Similarly, for each \(\mathbf{a}_{i}\), half of the entries are set to \(-1/\sqrt{m}\) and the rest to \(1/\sqrt{m}\) (we assume that \(m\) is even). This initialization ensures that each output coordinate is \(0\), and so the empirical risk is bounded by a constant at initialization. We assume that we observe mixture weights \(\mathbf{\alpha}_{1},\ldots,\mathbf{\alpha}_{n}\in\Delta^{N}\) i.i.d. according to some underlying distribution \(\mathcal{U}\). Such mixture weights can be obtained by Algorithm 1, and their independence means that samples originating from target domains are independent of each other. We learn the neural network by solving the following **Bi-level** ERM: \[\min_{\mathbf{\theta}}\widehat{\mathcal{R}}(\mathbf{\theta}):=\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{h}_{\mathbf{\theta}}(\mathbf{\alpha}_{i})-\mathbf{w}^{*}(\mathbf{\alpha}_{i})\|^{2},\quad\text{s.t.}\quad\mathbf{w}^{*}(\mathbf{\alpha}_{i})=\arg\min_{\mathbf{w}\in\mathcal{W}}\sum_{j=1}^{N}\alpha_{i}(j)f_{j}(\mathbf{w}). \tag{4}\] The parameters \(\mathbf{\theta}\) of the neural network should then have well-controlled excess risk on unseen \(\mathbf{\alpha}\): \[\mathcal{R}(\mathbf{\theta})=\int_{\Delta^{N}}\|\mathbf{h}_{\mathbf{\theta}}(\mathbf{\alpha})-\mathbf{w}^{*}(\mathbf{\alpha})\|^{2}\,\mathrm{d}\mathcal{U}(\mathbf{\alpha})\.\] To solve (4), we use a nested-loop procedure which performs a GD step on the neural network objective \(\widehat{\mathcal{R}}\), while in the inner loop we approximately find 'labels' \(\mathbf{w}^{*}(\mathbf{\alpha}_{1}),...,\mathbf{w}^{*}(\mathbf{\alpha}_{n})\) using \(K\) steps of GD. The entire procedure for solving (4) is described in Algorithm 2. The following theorem shows that the two-layer neural net optimized by Algorithm 2 learns \(\mathbf{w}^{*}(\mathbf{\alpha})\): ``` Input: Number of global iterations \(T\), number of iterations \(K\) for the inner problem. for\(t=1,...,T\)do \(\mathbf{\theta}^{t+1}=\mathbf{\theta}^{t}-\frac{\eta}{n}\sum_{i=1}^{n}\nabla_{\mathbf{\theta}^{t}}\mathbf{h}_{\mathbf{\theta}^{t}}(\mathbf{\alpha}_{i})(\mathbf{h}_{\mathbf{\theta}^{t}}(\mathbf{\alpha}_{i})-\mathbf{w}_{i}^{t})\) \(\triangleright\) Neural network parameter update for\(i=1,...,n\)do \(\mathbf{w}_{i}^{t+1}=\texttt{GD}(\mathbf{w}_{i}^{t},\mathbf{\alpha}_{i},K)\)\(\triangleright\) Label refining by \(K\)-step gradient descent ``` **Algorithm 2** Learning the \(\mathbf{w}^{*}\) function by a neural network **Theorem 3**.: _Let \(\lambda_{0}=N\cdot\mathrm{polylog}(N,n)\). Consider a neural network of width \(m\geq\Omega\left(n^{8+\frac{2}{2+N}}\right)\)._
Then, for Algorithm 2 with \(\eta\leq\frac{1}{2}\), \(\gamma=\frac{1}{L_{f}}\), \(T\geq\Omega\left(\frac{n}{N\lambda_{0}^{2}n}\log(n)\right)\), and \(K\geq\Omega\left(\kappa\log\left(\frac{nTnD}{\lambda_{0}}\right)\right)\), the following excess risk bound holds with probability at least \(0.99\):_ \[\mathcal{R}(\mathbf{\theta}_{T+1})\leq O\left((\kappa^{*})^{2}dn^{-\frac{2}{2+N}}\right).\] The proof, given in appendix B.5, is based on a more-or-less standard Neural Tangent Kernel (NTK) approximation argument [15, 4]: we use the key fact that predictions made by a GD-trained overparameterized neural network are close to those made by a Kernelized Least-Squares (KLS) predictor (given that the width of the network is sufficiently large). Such a GD-trained KLS predictor can learn Lipschitz target functions: it is well known that by learning on a sufficiently large Reproducing Kernel Hilbert Space (RKHS) (with polynomial spectral decay), one can approximate Lipschitz functions well [8, 2]. Here our goal is to approximate a vector-valued function; however, by treating each output independently we follow existing proofs [14, 21] for scalar-valued Lipschitz regression by GD-trained neural networks and arrive at the same excess risk times \(d\). **Optimality of our rate.** We have shown that a two-layer neural network trained by bi-level Gradient Descent (GD) can learn a vector-valued Lipschitz function with \(O(dn^{-\frac{2}{2+N}})\) excess risk. If we ignore the dependency on \(d\), our result matches the minimax rate of learning a scalar-valued Lipschitz function [11]. ``` Data: Radii schedule \(\varepsilon_{1},\varepsilon_{2}\ldots\) with \(\varepsilon_{t}\in\mathbb{R}_{+}\), Label efficiency parameter \(p\in[0,1]\) \(S\leftarrow\varnothing\) ; \(\triangleright\) Set of centers for\(t=1,2,\ldots\)do Observe \(\boldsymbol{\alpha}_{t}\); if\(S=\varnothing\)then \(S\leftarrow\{t\},\quad T_{t}\leftarrow\varnothing\) ; \(\triangleright\) Create initial ball \(s\leftarrow\arg\min_{s\in S}\|\boldsymbol{\alpha}_{t}-\boldsymbol{\alpha}_{s}\|\) ; \(\triangleright\) Find active center if\(T_{s}=\varnothing\)then \(\hat{\mathbf{w}}_{t}=\frac{1}{d}\boldsymbol{1}\) else \(\hat{\mathbf{w}}_{t}\leftarrow\frac{1}{|T_{s}|}\sum_{t^{\prime}\in T_{s}}\mathbf{w}_{t^{\prime}}\) ; \(\triangleright\) Predict using active center if\(\|\boldsymbol{\alpha}_{t}-\boldsymbol{\alpha}_{s}\|\leq\varepsilon_{t}\)then \(T_{s}\gets T_{s}\cup\{t\}\) ; \(\triangleright\) Update list for active center else \(S\gets S\cup\{t\},\quad T_{t}\leftarrow\varnothing\) ; \(\triangleright\) Create new center Draw a Bernoulli random variable \(Z_{t}\) such that \(\mathbb{P}(Z_{t}=1)=p\); Observe \(\mathbf{w}_{t}=\mathbb{I}\left\{Z_{t}=1\right\}\texttt{GD}(\mathbf{0},\boldsymbol{\alpha}_{t},K)\) ; \(\triangleright\) Update pseudo label for \(\boldsymbol{\alpha}_{t}\) ``` **Algorithm 4** Label-efficient nonparametric online regression **Efficiency of our learning-based approach.** Given \(M\) mixture weights \((\boldsymbol{\alpha}_{i})_{i=1}^{M}\), the naive baseline is to solve all \(M\) weighted ERMs with gradient descent to accuracy level \(\epsilon\), which requires \(\Theta(M\kappa\log(1/\epsilon))\) time.
Using our approach, we first need to learn a neural network with \(\epsilon\) excess risk, which needs \(n=\Theta\left((\kappa^{*2}d/\epsilon)^{1+N/2}\right)\) samples; this means that we have to solve this many weighted ERM problems, resulting in a complexity of \(\Theta\left((\kappa^{*2}d/\epsilon)^{1+N/2}\cdot\kappa\log(1/\epsilon)\right)\). Once the neural network is learnt, we only pay the inference cost to predict \(\mathbf{w}^{*}\) for each \(\boldsymbol{\alpha}_{i}\). Putting things together, the total time complexity is \(\Theta\left((\kappa^{*2}d/\epsilon)^{1+N/2}\cdot\kappa\log(1/\epsilon)+M\right)\). We observe the following regimes: * When \(M\geq\Omega\left(\frac{(\kappa^{*2}d/\epsilon)^{1+N/2}\kappa\log(1/\epsilon)}{\kappa\log(1/\epsilon)-1}\right)\), learning is more efficient than solving \(M\) ERMs. * Otherwise, directly solving the \(M\) ERMs is more efficient than learning a model predictor. Intuitively, when the number of target domains is much larger than the number of source domains, our learning-based approach is strictly more efficient. It is also interesting to note that our learning-based approach avoids the computational overhead in \(M\) but suffers an exponential cost in the number of sources \(N\), while directly solving the ERMs avoids the price for \(N\) but has a computational cost that increases linearly in \(M\).

### Online Setting: Label Efficient Nonparametric Online Regression

In the previous section we discussed nonparametric offline learning of \(\mathbf{w}^{*}\) under a distributional assumption on \(\boldsymbol{\alpha}_{1},...,\boldsymbol{\alpha}_{M}\). In this section we consider the following online learning protocol with an oblivious adversary. Given a known and fixed parameter \(p\in[0,1]\) and an unknown sequence \((\boldsymbol{\alpha}_{1},\mathbf{w}^{*}(\boldsymbol{\alpha}_{1})),(\boldsymbol{\alpha}_{2},\mathbf{w}^{*}(\boldsymbol{\alpha}_{2})),\cdots\in\Delta^{N}\times\mathbb{B}_{2}^{d}(D)\) of inputs and labels, at every round \(t=1,2,\ldots\): 1. the environment reveals mixture weights \(\boldsymbol{\alpha}_{t}\in\Delta^{N}\); 2. the learner selects a label \(\hat{\mathbf{w}}_{t}\in\mathbb{B}_{2}^{d}(D)\) and incurs loss \(\ell_{t}\big{(}\hat{\mathbf{w}}_{t}\big{)}=\|\hat{\mathbf{w}}_{t}-\mathbf{w}^{*}(\boldsymbol{\alpha}_{t})\|^{2}\); 3. the learner samples \(Z_{t}\sim\mathrm{Bern}(p)\) and observes \(\mathbb{I}\left\{Z_{t}=1\right\}\mathbf{w}_{t}\), where \(\mathbf{w}_{t}\) is a GD-optimized approximation of \(\mathbf{w}^{*}(\boldsymbol{\alpha}_{t})\). In particular, we introduce Algorithm 4, a modified version of the online nonparametric regression algorithm proposed by [13]. Algorithm 4 iteratively constructs a packing of \(\Delta^{N}\) using \(\ell^{2}\) balls centered on a subset of previously observed inputs. At each step \(t\), the label (parameters) associated with the current input \(\boldsymbol{\alpha}_{t}\) is predicted by averaging the labels of past inputs within the ball whose center \(\boldsymbol{\alpha}_{s}\) is closest to \(\boldsymbol{\alpha}_{t}\) in the \(\ell^{2}\) metric (note that labels are vector-valued). If \(\boldsymbol{\alpha}_{t}\) lies outside the nearest ball, a new ball with center \(\boldsymbol{\alpha}_{t}\) is created. The radii \(\varepsilon_{t}\) of all balls shrink at a rate \(t^{-1/(1+N)}\). Note that efficient (logarithmic in the number of centers) algorithms for approximate nearest-neighbor search are well known [19], and highly optimized open-source packages are readily available [16].
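A minimal sketch of the ball construction and the label-skipping logic could look as follows; it uses a brute-force nearest-center search, and `oracle` is a hypothetical stand-in for the GD solve of (3). This is our illustration, not the paper's implementation.

```
import numpy as np

def online_predict(alphas, oracle, d, p=0.5, N=3, seed=0):
    """Sketch of Algorithm 4; oracle(alpha) stands in for the GD solve of (3)."""
    rng = np.random.default_rng(seed)
    centers, labels, preds = [], [], []
    for t, a in enumerate(alphas, start=1):
        eps_t = t ** (-1.0 / (1 + N))                 # shrinking radius schedule
        if not centers:
            centers.append(a)                         # create initial ball
            labels.append([])
        dists = [np.linalg.norm(a - c) for c in centers]
        s = int(np.argmin(dists))                     # active center
        if labels[s]:
            preds.append(np.mean(labels[s], axis=0))  # average stored labels
        else:
            preds.append(np.full(d, 1.0 / d))         # default prediction (1/d)*1
        if dists[s] <= eps_t:
            owner = s                                 # point joins the active ball
        else:
            centers.append(a)                         # open a new ball centered at a
            labels.append([])
            owner = len(centers) - 1
        if rng.random() < p:                          # reveal a label only w.p. p
            labels[owner].append(oracle(a))
    return preds
```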
In contrast to the original algorithm of [13], where all sequentially observed labels are used to generate predictions (update local learners), our variant uses only a \(p\)-fraction of the labels on average. This reduces the label complexity at the cost of increased regret. This approach is referred to as _label-efficient prediction_ in online learning [6, Sec. 6.2] and is beneficial when accessing labels is costly. The following theorem, shown in appendix C, establishes the regret bound of Algorithm 4. **Theorem 4** (Regret bound).: _Let \(f:\Delta^{N}\rightarrow\mathbb{R}^{d}\) be an arbitrary \(\kappa^{*}\)-Lipschitz function with respect to the \(\|\cdot\|\) metric and let \(C_{N}>0\) be a metric-dependent constant. Then, Algorithm 4 with \(\varepsilon_{t}=t^{-\frac{1}{1+N}}\) satisfies_ \[\mathbb{E}\left[\sum_{t=1}^{T}\left(\ell_{t}(\hat{\mathbf{w}}_{t})-\ell_{t}(f(\boldsymbol{\alpha}_{t}))\right)\right]\leq(pC_{N}\ln(eT)+4D\kappa^{*})T^{\frac{N}{1+N}}+(1-p)TD+4pDT(1-\kappa^{-1})^{K}D\] _Moreover, by definition \(\ell_{t}(\mathbf{w}^{*}(\boldsymbol{\alpha}_{t}))=0\), so choosing \(f(\cdot)=\mathbf{w}^{*}\), the above bound implies that:_ \[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\ell_{t}(\hat{\mathbf{w}}_{t})]\leq 8(pC_{N}\ln(eT)+4D\kappa^{*})T^{-\frac{1}{1+N}}+(1-p)D+4pD\left(1-\kappa^{-1}\right)^{K}D.\] The core idea behind the proof (deferred to appendix C) involves maintaining a balance between the regret contribution of each ball and an added regret term arising from approximating the target function using a Voronoi partitioning. The regret contribution of each ball is logarithmic in the number of predictions made, due to the regret of online quadratic optimization [6, p. 42]. Ignoring log factors, the overall regret contribution equals the number of balls, which is essentially governed by the packing number with respect to the \(\ell^{2}\) metric. The additional term in the regret comes from the algorithm's prediction being constant within the Voronoi cells of \(\Delta^{N}\) induced by the current centers (considering that we predict using the nearest neighbor); thus, an extra term equal to the product of the balls' radius and the Lipschitz constant is incurred. Finally, the label-efficient algorithm we present here incurs additional \(p\)-dependent terms, which account for the missed labels. **Corollary 1**.: _If our desired average regret is \(\epsilon>0\), then Algorithm 4 has label complexity:_ \[\tilde{O}\left(\max\left\{\left(\frac{D\kappa^{*}}{\epsilon}\right)^{1+N},\left(\frac{1-\epsilon/D}{\epsilon}\right)^{1+N}\right\}\right)\left(1-\frac{\epsilon}{D}\right).\] Note that we recover the standard (non-label-efficient) version of the algorithm by trivially setting \(p=1\); it has label complexity of order \(\tilde{O}(\max\{\left(D\kappa^{*}/\epsilon\right)^{1+N},\left(1/\epsilon\right)^{1+N}\})\), which is strictly larger than that of the label-efficient version as long as \(\epsilon\) is not zero. As the desired regret goes to zero, the label complexities of the two algorithms tend asymptotically to the same value.

## 4 Experiments

To demonstrate the effectiveness of our proposed mixture-weight estimation algorithm, we conducted an experiment on the MNIST dataset [1] with the following specifications. We consider a scenario with 15 source distributions, divided into 3 groups.
Group 1 contains 5 source distributions, each containing 100 samples (80 for training and 20 for testing) drawn uniformly at random from classes '0', '1' and '2'. Groups 2 and 3 share similar settings, but their data are drawn from classes '3', '4', '5' and classes '6', '7', '8', '9' respectively. The data generation process is summarized in Table 1.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Group** & **Classes** & **Domains per Group** & **Samples per Domain** \\ \hline 1 & 0, 1, 2 & 5 & 100 \\ \hline 2 & 3, 4, 5 & 5 & 100 \\ \hline 3 & 6, 7, 8, 9 & 5 & 100 \\ \hline \end{tabular} \end{table} Table 1: Classes and samples per domain for each group

To demonstrate the effectiveness of our Algorithm 1, we implemented it and ran experiments with a two-layer MLP neural network. We choose four different target settings: (1) target distribution from Group 1, (2) target distribution from Group 2, (3) target distribution from Group 3, (4) target distribution from the mix of Groups 1 and 2. We compare three algorithms: 1. weighted ERM using our learnt weights, 2. ERM on averaged weights, and 3. ERM solely on the target domain, and we present our findings in Table 2. The results indicate that the accuracy achieved using the learnt weights outperforms the other two approaches.

## 5 Discussion and Conclusions

In this paper we studied the multi-source multi-target domain adaptation problem. In the first part of the paper we gave an algorithm addressing a minimax problem that provably finds good mixture weights of the source domains given a single target domain. In the second part we studied domain adaptation with multiple target domains and introduced the co-component ERM problem. We gave two concrete algorithms to solve the co-component ERM problem, in the offline and online settings. There are several potential avenues for future work, which we briefly discuss below.

**Online mixture weight prediction.** Throughout Section 3, we assumed that the target domains' \(\mathbf{\alpha}\)s are given. However, it would be interesting if, given a new target domain, one could predict _good_ mixture weights in an online fashion; our Algorithm 1 could serve as an oracle providing inexact labels. Meanwhile, since Algorithm 1 takes considerable time to converge, a desirable online algorithm should also be label-efficient.

**The complexity of solving co-component ERM.** The co-component ERM problem we introduced in section 3 is interesting from a pure optimization perspective. Even though we proposed a learning-based approach to avoid heavy computation, an alternative direction is to develop efficient algorithms that directly solve the \(M\) co-component ERM problems, and to give upper and lower complexity bounds.

**More structure in \(\mathbf{w^{\star}}\) and better phase transition laws.** In this work we considered a basic structure in co-component ERM problems (strong convexity and smoothness), which gave rise to Lipschitzness of \(\mathbf{w^{\star}}\). The Lipschitz class of functions is very large and in general can only be learned at a rate \(\Theta(n^{-\frac{2}{2+N}})\). As discussed in section 3.1, this allowed us to argue that the learning approach is more efficient than solving the co-component ERM whenever \(M\geq\Omega((1/\epsilon)^{N/2})\). However, we could potentially obtain better rates (and better laws) for learning \(\mathbf{w^{\star}}\) given more structure in \(\mathbf{w^{\star}}\).
For example, assuming that \(\mathbf{w^{\star}}\) is \(H\)-times differentiable, the excess risk would behave as \(\Theta(n^{-\frac{2H}{2H+N}})\) [11]. Thus, the learning approach would be more efficient when \(M\geq\Omega((1/\epsilon)^{N/(2H)})\), that is, already for a much smaller number of targets. It is an intriguing question to figure out which co-component ERM problems allow for such nicer structure in \(\mathbf{w^{\star}}\).

**Alternative learning-based approaches for co-component ERM.** There may be several other learning-based methods to solve co-component ERM. One potential approach is meta-learning [10]: the idea is to train a meta model \(\mathbf{w}_{\mathrm{meta}}\) by optimizing a pre-defined meta objective based on some sampled mixture weights, with the goal of finding a model that can quickly adapt to tasks with different mixture weights \(\mathbf{\alpha}\). We leave this as a promising open problem.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline & Target & Target & Target & Target \\ & (Group 1) & (Group 2) & (Group 3) & (mix of Group 1 and 2) \\ \hline Average ERM & 69.9 \% & 40.0 \% & 34.9 \% & 59.9 \% \\ \hline Pure target training & 69.9 \% & 55.0\% & 40.0 \% & 55.0\% \\ \hline Our method & 80.0 \% & 69.9 \% & 55.0 \% & 65.0 \% \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy comparison with two baseline algorithms. Each row reports the accuracy of the model learnt by Average ERM, pure target training, or our method on each target domain. The models learnt using mixture weights from our algorithm (Algorithm 1) always yield the best accuracy.

## Acknowledgement

The work of YD and MM was partially supported by NSF CAREER Award #2239374 and CNS Award #1956276.
2309.11428
Off-shell selfenergy for 1-D Fermi liquids
The selfenergy in Born approximation including exchange of interacting one-dimensional systems is expressed in terms of a single integral about the potential which allows a fast and precise calculation for any potential analytically. The imaginary part of the self energy as damping of single-particle excitations shows a rich structure of different areas limited by single-particle and collective excitation lines. The corresponding spectral function reveals a pseudogap, a splitting of excitation into holons and antiholons as well as bound states.
Klaus Morawetz, Vinod Ashokan, Kare Narain Pathak, Neil Drummond
2023-09-20T16:03:44Z
http://arxiv.org/abs/2309.11428v2
# Off-shell selfenergy for 1-D Fermi liquids

###### Abstract

The selfenergy in Born approximation including exchange of interacting one-dimensional systems is expressed in terms of a single integral about the potential which allows a fast and precise calculation for any potential analytically. The imaginary part of the self energy as damping of single-particle excitations shows a rich structure of different areas limited by single-particle and collective excitation lines. The corresponding spectral function reveals a pseudogap, a splitting of excitation into holons and antiholons as well as bound states.

## I Introduction

Interacting Fermi systems in one dimension appear in many physical situations. Among them let us mention only quantum wires in carbon nanotubes [1; 2; 3; 4], edge states in the quantum Hall liquid [5; 6; 7], semiconducting nanowires [8; 9], cold atomic gases [10; 11; 12], conducting molecules [13], or even crystalline ion beams in storage rings [14; 15]. Though exact solutions are known for the Luttinger [16; 17; 18], Tomonaga [19], and Gaudin-Yang models [20; 21] of contact interaction by the Bethe ansatz [22; 23], interacting Fermi systems in quantum wires are still a theoretical challenge. Among the available methods, bosonization techniques in [24; 25] and out of [26] equilibrium are employed, which are based on the similar behaviour of long-distance correlations of Fermi and Bose systems [27]. The underlying model is the continuum limit, which has been discussed with the help of correlation functions [28]. Already the width dependence [29] of quantum wires escapes exact solutions, and perturbation methods have been used to investigate analytically and numerically the ground-state properties [30; 31; 32]. The question is how relevant results from perturbation theory are for such strongly correlated one-dimensional Fermi systems. In [33] it was shown that the Random Phase Approximation becomes exact in the high-density limit for one-dimensional systems. This peculiar feature of a perturbation series becoming exact is due to the fact that in one dimension the ratio of kinetic to interaction energy is proportional to the density. Therefore the weak-coupling regime corresponds to the high-density regime and the strong-coupling regime to low densities [34]. Contrary to three dimensions, one can therefore describe the high-density limit by a weak-coupling theory, i.e. perturbation theory. Though we cannot expect quantitatively correct results from the Born approximation as first-order perturbation theory of collisional damping, we might get insight into high-density correlations. Conventionally, perturbation theory is considered to fail in one dimension due to divergences at the Fermi energy. Recently this was cured by a Padé approximation, and an extended quasiparticle picture does indeed work [35]. The selfenergy represents the fundamental quantity for studying single-particle correlation and excitation effects. This is best described within the Green function technique; for an overview see [37; 38; 39; 40]. Green functions allow one to investigate interacting models beyond exactly solvable cases and in various approximations [41; 42]. The transition between Tomonaga-Luttinger and Fermi liquids has been studied [43; 44], and the resulting non-Fermi-liquid behaviour has been shown numerically for Tomonaga-Luttinger processes in [36]. For contact interaction the exact Green function has been known for a long time [41], where even the finite-size effect of the potential has been discussed.
The exact impurity Green function for contact interaction was presented in [45]. The elastic two-particle collision in one dimension can only lead to an exchange of the momenta of the two particles due to energy-momentum conservation, which means that the on-shell selfenergy vanishes. In contrast, the off-shell selfenergy can provide interesting insight into the physics of strongly correlated one-dimensional systems. Therefore we provide analytical expressions for the off-shell selfenergy for electron-electron interactions in Born approximation including exchange. Due to numerous analytic reductions we present a scheme with a single integral over the potential which can be applied in a variety of situations. The outline of the paper is as follows. Next we present the analytical result for the imaginary part of the selfenergy in Born approximation including exchange. The nontrivial integration is shifted to the appendix, and the particle and hole contributions to the selfenergy are discussed. Multiple ranges appear in the plot of momentum versus off-shell energy. They can be understood as originating from collective and single-particle excitations, completely nested since in one dimension the Fermi surface consists of only two points. The real part of the selfenergy as a Hilbert transform is presented in chapter III, where the details are again moved to the appendix. Both the imaginary and real parts of the selfenergy are expressed by a single integral over any chosen potential, which allows a precise and fast calculation with a wide range of applications. Taking additionally the Hartree-Fock selfenergy into account, in chapter IV the spectral function is discussed up to quadratic order in the potential or Brueckner coupling parameter. Chapter V summarizes and gives some conclusions.

## II Self energy in Born approximation

The real part of the selfenergy is the Hilbert transform \[\sigma(k,\omega)=\mathrm{Re}\,\sigma^{R}(k,\omega)=\int\frac{d\bar{\omega}}{2\pi}\frac{\gamma(k,\bar{\omega})}{\omega-\bar{\omega}} \tag{1}\] of the selfenergy spectral function or imaginary part \[\gamma=\sigma^{>}+\sigma^{<}=i(\sigma^{R}-\sigma^{A})=-2\mathrm{Im}\,\sigma^{R}. \tag{2}\] Both specify the retarded selfenergy \[\sigma^{R}(k,\omega)=\sigma(k,\omega)-\frac{i}{2}\gamma(k,\omega)=\int\frac{d\bar{\omega}}{2\pi}\frac{\gamma(\bar{\omega})}{\omega-\bar{\omega}+i\eta}. \tag{3}\] Here the electron momentum is \(k\) and the off-shell energy is denoted by \(\omega\). From the collision integral \[I=n_{k}\sigma^{>}-(1-n_{k})\sigma^{<} \tag{4}\] one sees that \(\sigma^{>}\) describes the contribution to the damping due to particles, characterized by the Fermi distribution \(n_{k}\), and \(\sigma^{<}\) the contribution to the damping due to holes \(1-n_{k}\).

### Hole contribution to the damping

The selfenergy in Born approximation reads \[\sigma^{<}(k,\omega) = \sum_{qp}2\pi\delta(\omega+\epsilon_{p}-\epsilon_{p\text{-}q}-\epsilon_{k\text{-}q})n_{p\text{-}q}n_{k\text{-}q}(1-n_{p}) \tag{5}\] \[\times V_{q}\left[sV_{q}-V_{p-k-q}\right]\] where the spin degeneracy \(s\) applies only to the direct and not to the exchange term. The \(\sigma^{>}\) selfenergy is obtained by interchanging the distribution or occupation factors \(n\leftrightarrow 1-n\). In the following we understand all energies \(\omega,\gamma,\sigma\), etc., in units of the Fermi energy \(\epsilon_{f}\) and the momenta \(k\) in units of the Fermi momentum given by the free-particle density \(n_{f}\) as \(k_{F}=n_{f}\hbar\pi/s\).
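As a numerical aside (ours, not the authors' scheme), the principal-value integral in the Hilbert transform (1) can be approximated on a frequency grid; the toy damping function below is an illustrative assumption.

```
import numpy as np

# Numerical Hilbert transform (1): sigma(w) = PV int dw'/(2pi) gamma(w')/(w - w').
# The principal value is handled crudely by dropping the singular grid point.
w = np.linspace(-10, 10, 2001)
dw = w[1] - w[0]
gamma = np.exp(-(w - 1.0) ** 2)      # toy damping function (illustrative)

def hilbert_pv(w, gamma, dw):
    sigma = np.empty_like(w)
    for i, wi in enumerate(w):
        diff = wi - w
        diff[i] = np.inf             # exclude the singular point
        sigma[i] = np.sum(gamma / diff) * dw / (2 * np.pi)
    return sigma

sigma = hilbert_pv(w, gamma, dw)
```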
The interaction strength we express in terms of \(a_{B}\), the Bohr-radius equivalent \[V_{q}=\frac{\hbar^{2}}{ma_{B}}v_{q} \tag{6}\] which allows us to discuss charged and neutral impurities on the same footing. The Brueckner parameter \(r_{s}\) is the ratio of the inter-particle distance \(d=1/ns\) to this length, \(r_{s}=d/a_{B}\). The \(\delta\)-function in (5) is carried out, which means replacing \[p=q+k-\frac{\Omega}{2q},\quad\Omega=\omega-k^{2} \tag{7}\] and provides an additional \(m/|q|\) prefactor. Together with the potential, it becomes \[\delta(\omega+\epsilon_{p}-\epsilon_{p\text{-}q}-\epsilon_{k\text{-}q})V_{q}\left[sV_{q}-V_{p-k-q}\right]\] \[=\frac{\hbar^{4}}{ma_{B}^{2}}\delta\left(p-q-k+\frac{\Omega}{2q}\right)\frac{v_{q}}{|q|}\left(sv_{q}-v_{\frac{\Omega}{2q}}\right). \tag{8}\] The implications of the occupation factors for the range of the \(q\)-integration turn out to be quite non-trivial and are discussed in appendix A. We abbreviate \[a^{2}=\frac{|\Omega|}{2}=\frac{k^{2}-\omega}{2}>0 \tag{9}\] in the following, since it is convenient when we calculate the Hilbert transform for the real part of the selfenergy in the next chapter. The result for (5) reads \[0<k <1\] \[\sigma^{<} =\Phi(1-k)-\Phi\left(\frac{a^{2}}{1-k}\right) \text{for}\,0<a<1-k \leftrightarrow 2-(k-2)^{2}<\omega<k^{2} \text{area}:1,2,3\] \[\sigma^{<} =\Phi(q_{2})-\Phi(q_{1}) \text{for}\,0<a<\frac{1-k}{2} \leftrightarrow \frac{(k+1)^{2}}{2}-1<\omega<k^{2} \text{area}:1\] \[\sigma^{<} =\Phi(1+k)-\Phi\left(\frac{a^{2}}{1+k}\right) \text{for}\,0<a<1+k \leftrightarrow 2-(k+2)^{2}<\omega<k^{2} \text{area}:1,3,5\] \[\sigma^{<} =\Phi(\bar{q}_{2})-\Phi(\bar{q}_{1}) \text{for}\,0<a<\frac{1+k}{2} \leftrightarrow \frac{(k-1)^{2}}{2}-1<\omega<k^{2} \text{area}:1,2,3,4,5\] \[1<k <3\] \[\sigma^{<} =\Phi\left(\frac{a^{2}}{-1+k}\right)-\Phi(k-1) \text{for}\,\sqrt{-2(1-k)}<a<\sqrt{k^{2}-1} \leftrightarrow 2-k^{2}<\omega<(k-2)^{2} \text{area}:6,7\] \[\sigma^{<} =\Phi(1+k)-\Phi\left(\frac{a^{2}}{1+k}\right) \text{for}\,\sqrt{k^{2}-1}<a<1+k \leftrightarrow 2-(k+2)^{2}<\omega<2-k^{2} \text{area}:4,5\] \[\sigma^{<} =\Phi(\bar{q}_{2})-\Phi(\bar{q}_{1}) \text{for}\,\sqrt{-2(1-k)}<a<\frac{1+k}{2} \leftrightarrow \frac{(k-1)^{2}}{2}-1<\omega<(k-2)^{2} \text{area}:5,6\] \[3<k\] \[\sigma^{<} =\Phi\left(\frac{a^{2}}{-1+k}\right)-\Phi(-1+k) \text{for}\,1-k<a<\sqrt{k^{2}-1} \leftrightarrow 2-k^{2}<\omega<2-(k-2)^{2} \text{area}:7\] \[\sigma^{<} =\Phi(1+k)-\Phi\left(\frac{a^{2}}{1+k}\right) \text{for}\,\sqrt{k^{2}-1}<a<1+k \leftrightarrow 2-(k+2)^{2}<\omega<2-k^{2} \text{area}:4 \tag{10}\] with (A8) \[q_{1/2}=\frac{1-k}{2}\pm\sqrt{\frac{(1-k)^{2}}{4}-a^{2}} \tag{11}\] and \(\bar{q}_{1/2}\) given by (11) with \(k\to-k\). This expression (10) is solely given in terms of a single integral over the potential \[\Phi(q)=\frac{s^{4}r_{s}^{2}}{\pi^{3}}\int d\bar{q}\frac{v_{\bar{q}}}{|\bar{q}|}\left(sv_{\bar{q}}-v_{\frac{\Omega}{2\bar{q}}}\right) \tag{12}\] which can be calculated analytically (A9) for the contact potential and for the model potential we will use, \[v_{q}=\frac{1}{\sqrt{q^{2}+\kappa^{2}}}. \tag{13}\] Here the finite-size parameter \(\kappa\) describes the width of the wire or, alternatively, the screening of the Coulomb potential \(v_{q}\sim 1/q\). Seven different areas in frequency \(\omega\) appear and are plotted in figure 1. The border curves between these areas have a physical meaning. The part \(\sigma^{<}\) describes the damping due to holes and lies below the on-shell line \(\omega=k^{2}\).
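As a quick illustration, the following sketch (ours, not the authors' code) uses the contact-interaction result \(\Phi(q)\propto\ln q\) of (A9) to assemble \(\sigma^{<}\) in the first region of (10); the parameter values are arbitrary assumptions.

```
import numpy as np

# Sketch: sigma^< for contact interaction (v_q = 1) in the first region of (10),
# using Phi(q) = (s^4 r_s^2 / pi^3) * ln q from (A9). Units: epsilon_f and k_F.
s, r_s = 2, 1.0
pref = s**4 * r_s**2 / np.pi**3

def Phi(q):
    return pref * np.log(q)

def sigma_less_region1(k, w):
    """First line of (10): valid for 0 < k < 1 and 2-(k-2)^2 < w < k^2."""
    a2 = (k**2 - w) / 2.0            # a^2 = (k^2 - omega)/2 > 0
    if not (0 < k < 1 and 0 < np.sqrt(a2) < 1 - k):
        return np.nan                # outside this region
    return Phi(1 - k) - Phi(a2 / (1 - k))

print(sigma_less_region1(k=0.5, w=0.1))
```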
Since the one-dimensional system is maximally nested we have all borders also with \(k\to k\pm 2\), i.e., also below the shell \((k-2)^{2}\). The borders \((1\pm k)^{2}/2-1\) can be understood as arising from the collective behaviour. Since the Fermi surface shrinks at \(T=0\) to two points, the single-particle excitations turn into collective ones [46]. The lowest-order polarization in random phase approximation reads \[\Pi_{0}(q,\omega)=\frac{ms}{\pi p_{f}\hbar}\int dp\frac{n_{p+\frac{q}{2}}-n_{p-\frac{q}{2}}}{(p+\frac{q}{2})^{2}-(p-\frac{q}{2})^{2}-\omega-i0} \tag{14}\]

Figure 1: The 7 different areas for the selfenergy \(\sigma^{<}\) according to (10).

which becomes for zero temperature [33; 47] \[{\rm Im}\Pi_{0} = -\frac{ms}{2\hbar p_{f}q}\Theta(\omega-|\omega_{-}|)\Theta(|\omega_{+}|-\omega),\] \[{\rm Re}\Pi_{0} = \frac{ms}{2\pi\hbar p_{f}q}\ln\left|\frac{\omega^{2}-\omega_{-}^{2}}{\omega^{2}-\omega_{+}^{2}}\right| \tag{15}\] with \(\omega_{\pm}=q(q\pm 2)\) in units of the Fermi energy. This gives the limiting lines where collective excitations occur. Subtracting the Fermi energy and considering the reduced mass due to two-particle scattering translates into the \((1\pm q)^{2}/2-1\) lines of figure 1. These borders indicate the divergence of the polarization known as the Kohn anomaly, which is the reason for the Peierls instability. The second class of lines are \(2-k^{2}\), or nested as \(2-(k\pm 2)^{2}\), arising from single-particle excitations [21] due to off-shell scattering. For an explanation we consider a simple model in figure 2 similar to [38]. The excitation of a particle with momentum \(k\) out of the Fermi sea, \(\omega_{k}(q)=\epsilon_{k+q}-\epsilon_{k}\), due to scattering with momentum \(q\) arises for possible particle momenta \(k\in(k_{f}-q,k_{f})\) for \(q<2k_{f}\) and \(k\in(-k_{f},k_{f})\) for \(q>2k_{f}\). Averaging this excitation \(\omega_{k}(q)\) over the possible interval of \(k\) and choosing as fluctuation the maximum and minimum possible excitation in this interval, we obtain in units of the Fermi energy \[\omega(q)=\left\{\begin{array}{ll}2q\pm 2q^{2};&q<2\\ q^{2}\pm 4q;&q>2\end{array}\right. \tag{16}\] the second case of which yields, after subtracting from the two-fermion threshold, the curves \(2-(k\pm 2)^{2}\) in figure 1. For contact interaction we plot in figure 3 the selfenergy \(\sigma^{<}\) for different momentum cuts according to figure 1, covering the crossing of the various areas as a function of frequency. One sees a continuous but non-differentiable behaviour.

### Particle contribution to the damping

The second part of the damping (2) due to particles reads \[\sigma^{>}(k,\omega) = \sum_{qp}V_{q}\left[sV_{q}-V_{p-k-q}\right]2\pi\delta(\omega+\epsilon_{p}-\epsilon_{p-q}-\epsilon_{k+q})(1-n_{p-q})(1-n_{k+q})n_{p}, \tag{17}\] whose integration is presented in appendix B. We have to distinguish two cases.

Figure 3: The selfenergy \(\sigma^{<}\) of contact interaction for different momentum cuts and region numbers of figure 1.

Figure 2: The scheme of possible single-particle excitations due to scattering out of the Fermi sea.

In the first case, \(\Omega=\omega-k^{2}<0\), we obtain
\[1<k <3\] \[\sigma^{>} =\Phi(-1+k)-\Phi\left(\frac{a^{2}}{-1+k}\right) \text{for}\,0\!<\!a\!<\!k-1 \leftrightarrow 2-(k-2)^{2}<\omega<k^{2} \text{area}:4,5\] \[\sigma^{>} =\Phi(\bar{q}_{1})-\Phi(\bar{q}_{2}) \text{for}\,0\!<\!a\!<\!\frac{(k\!-\!1)}{2} \leftrightarrow \frac{(k+1)^{2}}{2}-1<\omega<k^{2} \text{area}:5,7\] \[3<k\] \[\sigma^{>} =\Phi(\bar{q}_{1})-\Phi(\bar{q}_{2}) \text{for}\,0\!<\!a\!<\!\frac{(k\!-\!1)}{2} \leftrightarrow \frac{(k+1)^{2}}{2}-1<\omega<k^{2} \text{area}:4,5,7\] \[\sigma^{>} =\Phi(-1+k)-\Phi(\frac{a^{2}}{-1+k}) \text{for}\,0\!<\!a\!<\!\sqrt{2(k-1)} \leftrightarrow (k-2)^{2}<\omega<k^{2} \text{area}:4,5\] \[\sigma^{>} =\Phi(\bar{q}_{4})-\Phi(\bar{q}_{3}) \text{for}\,\sqrt{2(k-1)}\!<\!a\!<\!\frac{1+k}{2} \leftrightarrow \frac{(k-1)^{2}}{2}-1<\omega<(k-2)^{2} \text{area}:6,7 \tag{18}\] with (12) and \[q_{3/4}=\frac{-1-k}{2}\pm\sqrt{\frac{(1+k)^{2}}{4}-a^{2}}, \tag{19}\] and the corresponding ranges \(4-7\) are plotted in figure 4. In the second case, \(\Omega=\omega-k^{2}>0\), we obtain different areas and arguments \[0<k <1\] \[\sigma^{>} =\Phi\left(\frac{a^{2}}{1+k}\right)-\Phi(k+1) \text{for}\,\sqrt{1-k^{2}}\!<\!a\!<\!\sqrt{2(1+k)} \leftrightarrow 2-k^{2}<\omega<(k+2)^{2} \text{area}:4\] \[\sigma^{>} =\Phi\left(\frac{a^{2}}{1-k}\right)-\Phi(1-k) \text{for}\,\sqrt{1-k^{2}}\!<\!a\!<\!\sqrt{2(1-k)} \leftrightarrow 2-k^{2}<\omega<(k-2)^{2} \text{area}:1\] \[\sigma^{>} =\Phi\left(\bar{q}_{7}\right)-\Phi(q_{8}) \text{for}\,\sqrt{2(1-k)}\!<\!a\!<\!\infty \leftrightarrow (k-2)^{2}<\omega<\infty \text{area}:2,3,4\] \[\sigma^{>} =\Phi\left(q_{7}\right)-\Phi(\bar{q}_{8}) \text{for}\,\sqrt{2(1+k)}\!<\!a\!<\!\infty \leftrightarrow (k+2)^{2}<\omega<\infty \text{area}:3\] \[1<k\] \[\sigma^{>} =\Phi\left(\frac{a^{2}}{1+k}\right)-\Phi(k+1) \text{for}\,0<a<\sqrt{2(1\!+\!k)} \leftrightarrow k^{2}<\omega<(k+2)^{2} \text{area}:4\] \[\sigma^{>} =\Phi\left(\bar{q}_{7}\right)-\Phi(q_{8}) \text{for}\,0<a<\infty \leftrightarrow k^{2}<\omega<\infty \text{area}:2,3,4\] \[\sigma^{>} =\Phi\left(q_{7}\right)-\Phi(\bar{q}_{8}) \text{for}\,\sqrt{2(1+k)}\!<\!a\!<\!\infty \leftrightarrow \frac{(k+1)^{2}}{2}-1<\omega<k^{2} \text{area}:3 \tag{20}\] with (12), \(a^{2}=|\Omega|/2=(\omega-k^{2})/2\), \[q_{7/8}=\frac{\pm 1-k}{2}+\sqrt{\frac{(1\mp k)^{2}}{4}+a^{2}}, \tag{21}\] and \(\bar{q}_{7/8}\) given by (21) with \(k\to-k\). This case contributes to the ranges \(1-4\) plotted in figure 4, which summarizes the different areas of (18) and (20). The occurring border lines are the same as in figure 1 and were explained there. In figure 5 the selfenergy \(\sigma^{>}\) is presented for different momentum cuts covering the crossing of the various areas of figure 4.
In figure 6 the imaginary part of the selfenergy is presented for the contact interaction and three values of the finite-size potential (13). One sees the same borderlines of the areas as discussed above but different quantitative values depending on the width parameter \(\kappa\). For smaller \(\kappa\) we approach the Coulomb potential and one sees that the peaks become enhanced.
## III Real part of selfenergy

The Hilbert transform (1) we perform in appendix C and obtain the result with the help of a single integral over arbitrary potentials \[\phi(x)=\frac{s^{5}r_{s}^{2}}{2\pi^{4}}\int\limits_{\frac{x}{2}+x}^{x}dq\left(sv_{q}-v_{\frac{x}{q}}\right) \tag{22}\] with the abbreviations \[\Theta_{ij} = \Psi_{ij}^{0}-\Psi_{ij}\] \[\Psi_{ij} = \phi(ikq+jq-q^{2}),\quad\Psi_{ij}^{0}=\phi(ikq+jq) \tag{23}\] we finally obtain \[\sigma=\int\limits_{0}^{2}\!dq\frac{v_{q}}{q}\left(\Theta_{+-}+\Theta_{--}\right)+\int\limits_{2}^{\infty}\!\!dq\frac{v_{q}}{q}\left(\Psi_{++}\!+\!\Psi_{-+}-\Psi_{+-}\!-\!\Psi_{--}\right)+\left\{\begin{array}{ll}\int\limits_{k-1}^{k+1}\!\!dq\frac{v_{q}}{q}\left(\Theta_{++}\!-\!\Theta_{+-}\right)&k>1\\ \int\limits_{0}^{k+1}\!\!dq\frac{v_{q}}{q}\left(\Theta_{++}\!-\!\Theta_{+-}\right)\!+\!\int\limits_{0}^{1-k}\!\!dq\frac{v_{q}}{q}\left(\Theta_{-+}\!-\!\Theta_{--}\right)&k<1\end{array}\right. \tag{24}\] The real and imaginary parts of the selfenergy are plotted in figure 7. One sees that for \(k>2k_{f}\) the damping vanishes in a range \(\omega\approx 2\), which is accompanied by a gap as seen in figure 6. Further, a splitting into two excitation lines for positive frequencies and one for negative frequencies appears. Which of them finally survive and describe a real excitation in the system is decided by the spectral function in the next chapter. In figure 8 the finite-size potential (13) is compared with the contact interaction for different values of the screening parameter \(\kappa\). For small \(\kappa\) we approach Coulombic behaviour and see that the peak at the Fermi energy becomes enhanced.

## IV Spectral function

Now that we have the real and imaginary parts of the selfenergy (2) we can calculate the spectral function as a measure for the single-particle excitations in the system \[a(\omega,k)=\frac{\gamma(\omega,k)}{[\omega-k^{2}-\sigma^{F}(k)-\sigma(\omega,k)]^{2}+\frac{\gamma^{2}(\omega,k)}{4}}. \tag{25}\]

Figure 4: The 7 different areas for the selfenergy \(\sigma^{>}\) according to (18) and (20).

Figure 5: The selfenergy \(\sigma^{>}\) of contact interaction for different momentum cuts of figure 4.

Figure 6: The imaginary part of the selfenergy for contact interaction and three values of the finite-size potential (13).

The still missing part is the Hartree-Fock meanfield selfenergy as the part of lower order than Born in perturbation theory; it is necessary to include the meanfield in order to have all results systematically up to second order in the potential. The Hartree selfenergy proportional to the number of electrons, \(\sigma^{H}=nV_{0}\), is compensated by a neutralizing background.
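Given numerical values for \(\sigma(\omega,k)\), \(\gamma(\omega,k)\) and the meanfield shift \(\sigma^{F}(k)\) (derived next), evaluating (25) on a grid is straightforward; in the following sketch the inputs are illustrative stand-ins rather than the Born-approximation results.

```
import numpy as np

# Toy evaluation of the spectral function (25) on an omega grid at fixed k,
# with illustrative stand-ins for sigma (real part), gamma (damping) and the
# Fock shift sigmaF, which in the full calculation come from (12), (24), (26).
w = np.linspace(-4, 8, 600)
k = 1.2
gamma = 0.3 * np.exp(-(w - k**2) ** 2)            # stand-in damping
sigma = 0.1 * (w - k**2) / (1 + (w - k**2) ** 2)  # stand-in real part
sigmaF = -0.2                                     # stand-in Fock shift

a = gamma / ((w - k**2 - sigmaF - sigma) ** 2 + gamma**2 / 4)
print(w[np.argmax(a)])                            # location of the quasiparticle peak
```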
For contact interaction a splitting of the quasiparticle excitation pole appears at higher momenta, which is absent in the finite-size or Coulombic potential. The characteristic feature of the contact potential is more clearly visible for higher coupling in figure 10. A gap opens with the borders \(\omega\approx\pm 2k\) (or in units \(\hbar\omega\approx\frac{p_{f}}{m}k\)), which would be the exact borders for a Luttinger liquid [38; 48]. Here we do not have a Luttinger liquid but see similar features. The two peaks are related to holon and antiholon excitations, i.e. the excitation of a particle out of the Fermi sea [22], schematically illustrated in figure 2. The corresponding threshold singularities have been discussed in [49]. The deviation from the Luttinger liquid can be seen by the boundary of the gap in figure 7, which should be linear \(\pm kv_{0}\) with the charge velocity [50; 51] \(v_{0}=\sqrt{v_{f}^{2}+g^{2}}\). A spin-polarized system would additionally show a splitting of the peak into spin and charge velocities [52; 38]. Furthermore, the peaks of the spectral function have a finite width, in contrast to the Tomonaga-Luttinger model [51]. The appearance of the gap is also related to a pseudogap in the density of states [53]. Up to the momentum \(k=2k_{f}\) an excitation appears at negative frequencies which one interprets as a bound state. Momenta above \(2k_{f}\) correspond to nesting, which means that this bound state is destroyed when nesting occurs. The occurrence of this bound state is puzzling since in 3D it appears only in higher-order approximations like the ladder summation. Since we consider the weak-coupling limit, which is the high-density limit, we probably see here a precursor of bound states in the off-shell selfenergy. Figure 7: The real part \(\sigma\) (above) and imaginary part \(\gamma\) (below) of the selfenergy for contact interaction. Figure 8: The real part of the selfenergy for contact interaction and three values of the finite-size potential (13), corresponding to the imaginary part in figure 6. Figure 9: The spectral function (25) with coupling parameter \(r_{s}=0.5\) for contact interaction (above) and for finite size (13) with \(\kappa=0.5\) (below). ## V Summary and conclusions The Born selfenergy including exchange is expressed analytically with a remaining single integral for the imaginary part (12) and the real part (22) respectively. This allows one to calculate the selfenergy precisely and fast for any interaction potential. Therefore these expressions can be applied widely. The momentum-frequency range of different parts of the selfenergy turns out to be astonishingly complex, consisting of single-particle excitations and border lines of collective modes. This leads to a non-differentiable behaviour of the imaginary part of the selfenergy. The real part is likewise worked down to a single integral, which provides a fast scheme. Given the Born selfenergy together with the meanfield, the spectral function as a measure of single-particle excitations is calculated for the illustrative examples of contact interaction and a finite-size potential. The opening of the Luttinger gap is seen with increasing momenta. Two excitation lines due to holon and antiholon excitations are observed. An excitation at negative frequencies is interpreted as a precursor of bound states; it vanishes for momenta exceeding \(2p_{f}\), indicating nesting instead. ###### Acknowledgements. K.N.P. 
acknowledges the grant of an honorary senior scientist position by the National Academy of Sciences of India (NASI), Prayagraj. K.M. acknowledges support from DFG-project MO 621/28-1. Figure 10: The spectral function (25) for contact interaction and a coupling parameter \(r_{s}=2\) in different views. ## Appendix A q-integration of \(\sigma^{<}\) First we observe that it is only necessary to integrate half of the range \(q>0\) in (5), since the area \(q<0\) can be mapped to the \(q>0\) expression if we set \(k\to-k\). This can be seen in (5) since the \(p\) integration allows one to set \(p\to-p\). From the occupation factors we get the conditions, with \(\Omega=\omega-k^{2}\), \[n_{p-q}: -1+\frac{\Omega}{2q}<k<1+\frac{\Omega}{2q}\] \[n_{k+q}: -1-q<k<1-q\] \[1-n_{p}: k<-1-q+\frac{\Omega}{2q}\quad\text{or}\quad 1-q+\frac{\Omega}{2q}<k \tag{10}\] which allows two possibilities for the range of \(k\) \[\operatorname{max}\!\left(\!-1\!-\!q,\!-\!1\!+\!\frac{\Omega}{2q},\!1\!+\!\frac{\Omega}{2q}\!-\!q\right)\!<\!k\!<\!\min\!\left(\!1\!-\!q,\!1\!+\!\frac{\Omega}{2q}\!-\!q\right)\] or \[\operatorname{max}\!\left(\!-1\!-\!q,\!-\!1\!+\!\frac{\Omega}{2q}\!\right)\!<\!k\!<\!\min\!\left(\!1\!-\!q,\!1\!+\!\frac{\Omega}{2q},\!-\!1\!+\!\frac{\Omega}{2q}\!-\!q\right). \tag{11}\] Since \(q>0\) we see that the second line is not possible to complete since it would require \(-1+\frac{\Omega}{2q}<-1+\frac{\Omega}{2q}-q\). From the first line we see that \(\Omega<0\) since otherwise \(1+\frac{\Omega}{2q}-q<1-q\) would be impossible. Therefore setting \[a=\sqrt{-\Omega/2} \tag{10}\] we have to discuss \[\max\!\left(\!-1\!-\!q,\!-\!1\!-\!\frac{a^{2}}{q},\!1\!-\!\frac{a^{2}}{q}\!-\!q\right)\!<\!k\!<\!\min\!\left(\!1\!-\!q,\!1\!-\!\frac{a^{2}}{q}\!\right) \tag{11}\] which is plotted in figure 11. The allowed region is in a four-polygon and additionally above the curve \(1-a^{2}/q-q\). Due to its maximum at \((a,1-2a)\) we have 3 cases: 1. \(1-2a<-1-a\), which means \(a>2\), and we have \[-1-a<k<-\sqrt{1+a^{2}}:-1-k<q<-\frac{a^{2}}{1+k}\] \[-\sqrt{1+a^{2}}<k<1-a:\frac{a^{2}}{1-k}<q<1-k\] (12) 2.1 \(-1-a<1-2a\), which means \(\frac{4}{3}<a<2\), and \[-1-\frac{a^{2}}{2}<k<1-2a:\] \[-1-k<q<q_{2}\quad\text{or}\quad q_{1}<q<-\frac{a^{2}}{1+k}\] \[1-2a<k<-\sqrt{1+a^{2}}:-1-k<q<-\frac{a^{2}}{1+k}\] \[-\sqrt{1+a^{2}}<k<1-a:\frac{a^{2}}{1-k}<q<1-k\] (13) 2.2 \(-\sqrt{1+a^{2}}<1-2a<1-a\), which yields \(0<a<\frac{4}{3}\), and \[-1-\frac{a^{2}}{2}<k<-\sqrt{1+a^{2}}:\] \[-1-k<q<q_{2}\quad\text{or}\quad q_{1}<q<-\frac{a^{2}}{1+k}\] \[-\sqrt{1+a^{2}}<k<1-2a:\] \[\frac{a^{2}}{1-k}<q<q_{2}\quad\text{or}\quad q_{1}<q<1-k\] \[1-2a<k<1-a:\frac{a^{2}}{1-k}<q<1-k\] (14) with the two crossing points of the \(1-q-a^{2}/q\) curve with the horizontal \(k\)-line \[q_{1/2}=\frac{1-k}{2}\pm\sqrt{\frac{(1-k)^{2}}{4}-a^{2}}. \tag{15}\] The cases (12)-(14) provide the integration limits for \(q\) of \[\Phi(q) =\frac{s^{4}r_{s}^{2}}{\pi^{3}}\int\limits^{q}d\bar{q}\,\frac{v_{\bar{q}}}{|\bar{q}|}\left(sv_{\bar{q}}-v_{\frac{a^{2}}{\bar{q}}}\right)\] \[=\frac{s^{4}r_{s}^{2}}{\pi^{3}}\left\{\begin{array}{cc}\ln q&\text{contact}\\ \frac{\ln\left(\frac{q^{2}}{s^{4}+q^{2}}\right)}{\pi^{2}}+\frac{iF\left(\operatorname{arsinh}\left(\frac{q}{\kappa}\right)\middle|\frac{a^{4}}{\kappa^{4}}\right)}{a^{2}}&\text{pot.}\end{array}\right. \tag{16}\] with the elliptic integral of the first kind \[F(a|m)=\int\limits_{0}^{a}\frac{d\theta}{\sqrt{1-m\sin^{2}\theta}}. \tag{17}\] 
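The crossing points \(q_{1/2}\) of (15) can be checked directly: they are the two roots of \(q^{2}-(1-k)q+a^{2}=0\), i.e. the solutions of \(k=1-q-a^{2}/q\), and they are real exactly when \((1-k)^{2}/4\geq a^{2}\). The short Python check below is our own illustration, not code from this work:

```python
# Verify that q_{1/2} of Eq. (15) are the crossings of 1 - q - a^2/q
# with the horizontal k-line.
import math

def q12(k, a):
    disc = (1 - k) ** 2 / 4 - a ** 2
    assert disc >= 0, "no real crossing: the k-line misses the curve"
    r = math.sqrt(disc)
    return (1 - k) / 2 + r, (1 - k) / 2 - r

k, a = -0.5, 0.6   # arbitrary point inside the allowed region
for q in q12(k, a):
    assert abs((1 - q - a ** 2 / q) - k) < 1e-12
print("q_1, q_2 =", q12(k, a))
```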
Plotting the cases 1., 2.1, and 2.2 and regrouping with respect to \(k\), one sees in figure 12 that 4 areas \(A-D\) appear, with 3 combinations of \[\mathrm{area}\,A,B: \Phi(1-k)-\Phi\left(\frac{a^{2}}{1-k}\right)\] \[\mathrm{area}\,A,D: \Phi(q_{2})-\Phi(q_{1})\] \[\mathrm{area}\,C,D: \Phi\left(\frac{a^{2}}{-1-k}\right)-\Phi(-1-k) \tag{101}\] Remembering that in order to include the \(q<0\) part we have to add all expressions for \(k\to-k\), which provides finally the cases (10). Figure 11: The condition (11) for the allowed region (light gray) bounded by \(-1-q,1-q,1-a^{2}/q,-1-a^{2}/q\) and additionally to be larger than \(1-q-a^{2}/q\) (gray). Depending on the maximum of the latter function (red) at \((a,1-2a)\) there are three cases 1., 2.1, and 2.2. Figure 12: The condition (11) of figure 11 rearranged in \(a-k\) plot yielding 4 different regions. ## Appendix B \(q\)-integration for \(\sigma^{>}\) The occupation factors in (17) lead after \(\delta\)-integration for \(p\) to the conditions \[1-n_{p-q}: \!\!\!\!\!k<-1+\frac{\Omega}{2q}\quad\mathrm{or}\quad 1+\frac{\Omega}{2q}<k\] \[n_{k+q}: \!\!\!\!\!k<-1-q\quad\mathrm{or}\quad 1-q<k\] \[n_{p}: \!\!\!\!\!\!-1-q+\frac{\Omega}{2q}<k<1-q+\frac{\Omega}{2q} \tag{102}\] which together allow four possibilities for the range of \(k\) \[\mathrm{max}\!\left(\!-1\!+\!\frac{\Omega}{2q}\!-\!q\right)\!<\!k\!<\!\min\!\left(\!-1\!-\!q,\!1\!+\!\frac{\Omega}{2q}\!-\!q,-1\!+\!\frac{\Omega}{2q}\!\right) \tag{103}\] or \[\mathrm{max}\!\left(\!-1\!+\!\frac{\Omega}{2q}\!-\!q,1\!+\!\frac{\Omega}{2q}\!\right)\!<\!k\!<\!\min\!\left(\!-1\!-\!q,\!1\!+\!\frac{\Omega}{2q}\!-\!q\right)\] \[\mathrm{or} \tag{104}\] where the second and fourth lines are not possible to complete due to the last expressions on the left and right sides, \(1\!+\!\frac{\Omega}{2q}\!<\!k\!<\!1\!+\!\frac{\Omega}{2q}\!-\!q\). The first line is only possible to complete for \(\Omega<0\) since \(-1\!+\!\frac{\Omega}{2q}\!-\!q\!<\!k\!<\!-1\!-\!q\), and the third line only for \(\Omega>0\) due to \(1\!-\!q\!<\!k\!<\!1\!+\!\frac{\Omega}{2q}\!-\!q\). Therefore we have two cases: #### 1. \(\Omega=\omega-k^{2}<0\) Discussing first the case (103) and setting \[a=\sqrt{-\Omega/2} \tag{105}\] the area is plotted in figure 13. Figure 13: The condition (103) for the allowed region (light gray) bounded by \(-1-q,-1-a^{2}/q,-1-a^{2}/q-q\) and additionally to be smaller than \(1-q-a^{2}/q\) (red line). Depending on the maximum of the latter function (red) at \((a,1\!-\!2a)\) there are three cases (1)-(3). Figure 14: The condition (103) of figure 13 rearranged in \(a-k\) plot yielding 4 different regions. The allowed region is additionally below the curve \(1-q-a^{2}/q\). Due to its maximum at \((a,1-2a)\) we have 3 cases: (1) \(-1-a<1-2a\), which means \(a<2\), and we have \[-\infty<k<-1-2a:\] \[-\frac{a^{2}}{1+k}<q<q_{4}\quad\mathrm{or}\quad q_{3}<q<-1-k\] \[-1\!-\!2a<k<-1\!-\!a:-\frac{a^{2}}{1\!+\!k}<q<-1\!-\!k \tag{106}\] (2) \(1-2a<-1-a\) and \(-1-\frac{a^{2}}{2}>-1-2a\), which 
means \(2<a<4\), and \[-\infty<k<-1-2a:\] \[-\frac{a^{2}}{1+k}<q<q_{4}\quad\mbox{or}\quad q_{3}<q<-1-k\] \[-1-2a<k<-1-\frac{a^{2}}{2}:-\frac{a^{2}}{1+k}<q<-1-k\] \[-1-\frac{a^{2}}{2}<k<1-2a:q_{2}<q<q_{1} \tag{3.3}\] (3) \(-1-\frac{a^{2}}{2}<-1-2a\), which yields \(4<a\), and \[-1-\frac{a^{2}}{2}<k<-1-2a:\] \[q_{2}<q<q_{4}\quad\mbox{or}\quad q_{3}<q<q_{1}\] \[-1-2a<k<1-2a:q_{2}<q<q_{1}\] \[-\infty<k<-1-\frac{a^{2}}{2}:\] \[-\frac{a^{2}}{1-k}<q<q_{4}\quad\mbox{or}\quad q_{3}<q<-1-k \tag{3.4}\] with the two crossing points of the \(\pm 1-q-a^{2}/q\) curves with the horizontal \(k\)-line \[q_{3/4}=\frac{-1-k}{2}\pm\sqrt{\frac{(1+k)^{2}}{4}-a^{2}}. \tag{3.5}\] In figure 14 we plot these ranges and obtain 4 areas \(A-D\) with 3 combinations of \[\mbox{area}\,A,B: \Phi(-1-k)-\Phi\left(\frac{a^{2}}{-1-k}\right)\] \[\mbox{area}\,A,D: \Phi(q_{4})-\Phi(q_{3})\] \[\mbox{area}\,C,D: \Phi(q_{1})-\Phi(q_{2}) \tag{3.6}\] Again we have to add all expressions for \(k\to-k\) to get finally the cases (18). #### 2. \(\Omega=\omega-k^{2}>0\) In the last case, \(1-a^{2}/2<-1+a^{2}/2\), which means \(\sqrt{2}<a\), we have \[-\infty<k<1-\frac{a^{2}}{2}:1-k<q<q_{7}\] \[1-\frac{a^{2}}{2}<k<-1+\frac{a^{2}}{2}:q_{8}<q<q_{7}\] \[-1+\frac{a^{2}}{2}<k<\infty:q_{8}<q<\frac{a^{2}}{1+k} \tag{101}\] with the two crossing points of the \(\pm 1-q+a^{2}/q\) curves with the horizontal \(k\)-line \[q_{7/8}=\frac{\pm 1-k}{2}+\sqrt{\frac{(1\mp k)^{2}}{4}+a^{2}}. \tag{102}\] In figure 16 we plot these ranges and obtain 4 areas \(A-D\) with the combinations \[\mathrm{area}\,A\,: \Phi\left(q_{7}\right)-\Phi(1-k)\] \[\mathrm{area}\,B\,: \Phi\left(\frac{a^{2}}{1+k}\right)-\Phi(q_{8})\] \[\mathrm{area}\,C\,: \Phi\left(\frac{a^{2}}{1+k}\right)-\Phi(1-k)\] \[\mathrm{area}\,D\,: \Phi\left(q_{7}\right)-\Phi(q_{8}). \tag{103}\] Again we have to add all expressions for \(k\rightarrow-k\) to get the cases (20). ## Appendix C Real part of selfenergy We calculate the Hilbert transform (1) by interchanging integration orders \[\sigma(q,\omega) = \int\limits_{-\infty}^{\infty}\frac{d\bar{\omega}}{2\pi}\frac{\sigma^{<}(q,\bar{\omega})+\sigma^{>}(q,\bar{\omega})}{\omega-\bar{\omega}} \tag{104}\] \[= \int\limits_{0}^{\infty}\frac{dx}{2\pi}\frac{\sigma^{<}(q,x)}{\mathcal{O}+x}+\int\limits_{-\infty}^{\infty}\frac{dx}{2\pi}\frac{\sigma^{>}(q,x)}{\mathcal{O}-x}\] \[= \frac{s^{4}r_{s}^{2}}{2\pi^{4}}\int\limits_{q_{u}}^{q_{o}}dq\frac{v_{q}}{q}\int\limits_{x_{u}}^{x_{o}}\frac{dx}{\mathcal{O}+x}\left(sv_{q}-v_{\frac{x}{q}}\right)\] where the integration limits of \(q\) according to (10) are interchanged with the \(a^{2}=x=(k^{2}-\omega)/2\) integration, leading to the limits summarized in table 1. We have abbreviated \[\mathcal{O}=(\omega-k^{2})/2\gtrless 0 \tag{105}\] and used a transformation \(x\rightarrow-x\) in the part for \(\sigma^{>}\). As an example, the case \[0<k<1\quad\Phi(1-k)-\Phi\left(\frac{a^{2}}{1-k}\right)\,\mathrm{for}\,0\!<\!a^{2}\!<\!(1\!-\!k)^{2} \tag{106}\] leads to \[\int\limits_{0}^{(1-k)^{2}}\frac{dx}{\mathcal{O}+x}\int\limits_{\frac{x}{1-k}}^{1-k}\frac{dq}{q}v_{q}(sv_{q}-v_{\frac{x}{q}}) \tag{107}\] \[= \int\limits_{0}^{1-k}\frac{dq}{q}v_{q}\int\limits_{0}^{(1-k)q}\frac{dx}{\mathcal{O}+x}(sv_{q}-v_{\frac{x}{q}})\] as represented in the first line of table 1. Working out all cases of (10) is tedious but straightforward by just drawing all the corresponding curves. Summing up the contributions according to the range of \(k\) we obtain (24), which shows that the ranges \(1<k<3\) and \(k>3\) are identical, as it should be, since we have only \(k=1\) (\(k_{f}\)) as an exceptional point. 
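The interchange of integration orders in (107) is easy to verify numerically for the contact interaction \(v_{q}=1\), where the bracket \(sv_{q}-v_{x/q}=s-1\) is constant. The following sketch is our own check (assuming scipy is available), with arbitrary placeholder values for \(s\), \(k\) and \(\mathcal{O}\):

```python
# Numerical sanity check of Eq. (107) for the contact interaction.
from scipy.integrate import dblquad

s, k, O = 2.0, 0.3, 0.7          # O = (omega - k^2)/2 > 0 avoids the pole

# x-first ordering: x in (0, (1-k)^2), q in (x/(1-k), 1-k)
lhs = dblquad(lambda q, x: (s - 1.0) / (q * (O + x)),
              0.0, (1 - k) ** 2,
              lambda x: x / (1 - k), lambda x: 1 - k)[0]

# q-first ordering: q in (0, 1-k), x in (0, (1-k) q)
rhs = dblquad(lambda x, q: (s - 1.0) / (q * (O + x)),
              0.0, 1 - k,
              lambda q: 0.0, lambda q: (1 - k) * q)[0]

print(lhs, rhs)                   # agree to quadrature accuracy
```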
Figure 16: The condition (105) of figure 15 rearranged in \(a-k\) plot yielding 4 different regions.
2309.09267
Toric sheaves and flips
Any toric flip naturally induces an equivalence between the associated categories of equivariant reflexive sheaves, and we investigate how slope stability behaves through this functor. On one hand, for a fixed toric sheaf, and natural polarisations that make the exceptional loci small, we provide a simple numerical criterion that characterizes when slope stability is preserved through the flip. On the other hand, for a given flip, we introduce full subcategories of logarithmic toric sheaves and characterize when polystability is preserved for all toric sheaves in those subcategories at once.
Andrew Clarke, Achim Napame, Carl Tipler
2023-09-17T13:07:54Z
http://arxiv.org/abs/2309.09267v2
# Toric Sheaves and Flips ###### Abstract. Any toric flip naturally induces an equivalence between the associated categories of equivariant reflexive sheaves, and we investigate how slope stability behaves through this functor. On one hand, for a fixed toric sheaf, and natural polarisations that make the exceptional loci small, we provide a simple numerical criterion that characterizes when slope stability is preserved through the flip. On the other hand, for a given flip, we introduce full subcategories of logarithmic toric sheaves and characterize when polystability is preserved for all toric sheaves in those subcategories at once. 2010 Mathematics Subject Classification: Primary: 53C07, Secondary: 53C55, 14J60 ## 1. Introduction Introduced by Mumford [16] and generalized by Takemoto [23], slope stability of vector bundles, and more generally of torsion-free coherent sheaves, can be used as a device to produce moduli spaces. While slope stability is not a GIT notion in higher dimension, it behaves well with respect to tensor products and restrictions, and has a differential geometric interpretation in gauge theory through the Hitchin-Kobayashi correspondence (see e.g. [14] and references therein). In particular, stable bundles, and more generally stable reflexive sheaves ([1]), are of particular interest in gauge theory and mathematical physics (see e.g. [13] for a survey on stable sheaves on toric varieties addressed to the mathematical physics community). Despite its usefulness, checking stability in practice remains a difficult problem. Our goal is to add to the list of known methods to produce stable sheaves via transformations of the underlying polarised manifold. In the equivariant context of toric geometry, the behaviour of slope stability through descent under GIT quotients was studied in [4], while the problem of pulling back stable sheaves on fibrations was considered in [18]. In this paper, we study how slope stability is affected through a toric flip between polarised (simplicial) toric varieties. Those transformations are of particular interest in several respects. From the complex geometry point of view, they form building blocks for the toric Minimal Model Program (see [6, Chapter 15] and references therein), and together with the fibrations and blow-ups addressed in [18], our results complete the study of slope stability through any type of extremal contraction arising in the toric MMP. From the mathematical physics point of view, toric flips can be seen as singular transitions between toric varieties. Those transitions are of particular importance given the construction of Calabi-Yau hypersurfaces in Fano reflexive toric varieties ([2]) and the connections between various Calabi-Yau vacua through conifold transitions ([3, 21]). Our results then provide a toy model in the study of stable sheaves through singular transitions between toric varieties (see [5] for a differential geometric approach to stability of the tangent bundle through conifold transitions). Consider a toric flip (see Section 2 for precise definitions) between two simplicial toric varieties \(X\) and \(X^{\prime}\). There is a \(\mathbb{Q}\)-Cartier divisor \(D_{+}\subset X\) naturally attached to the flip, such that \(-D_{+}\) is \(\phi\)-ample and restricts to the anticanonical divisor of the \(\phi\)-contracted fibers (see Section 2.2). By abuse of notation, we will still denote by \(D_{+}\) the divisor \(\psi_{*}(D_{+})\subset X^{\prime}\). 
Then, for some ample Cartier divisor \(L_{0}\) on \(X_{0}\), there exists \(\varepsilon_{0}>0\) such that the divisors \[L_{-\varepsilon}:=\phi^{*}L_{0}-\varepsilon D_{+}\subset X\] and \[L_{\varepsilon}:=(\phi^{\prime})^{*}L_{0}+\varepsilon D_{+}\subset X^{\prime}\] define ample \(\mathbb{Q}\)-Cartier divisors for \(\varepsilon\in(0,\varepsilon_{0})\). Then, our first result (Theorem 4.4) relates slope stability of toric sheaves on \((X_{0},L_{0})\), \((X,L_{-\varepsilon})\) and \((X^{\prime},L_{\varepsilon})\) : **Theorem 1.1**.: _Let \(\mathcal{E}\) be a torus equivariant reflexive sheaf on \(X\). Then, up to shrinking \(\varepsilon_{0}\), we have for all \(\varepsilon\in(0,\varepsilon_{0})\) :_ 1. _If_ \(\phi_{*}\mathcal{E}\) _is_ \(L_{0}\)_-stable, then_ \(\mathcal{E}\) _(resp._ \(\psi_{*}\mathcal{E}\)_) is_ \(L_{-\varepsilon}\)_-stable on_ \(X\) _(resp._ \(L_{\varepsilon}\)_-stable on_ \(X^{\prime}\)_)._ 2. _If_ \(\phi_{*}\mathcal{E}\) _is_ \(L_{0}\)_-unstable, then_ \(\mathcal{E}\) _(resp._ \(\psi_{*}\mathcal{E}\)_) is_ \(L_{-\varepsilon}\)_-unstable on_ \(X\) _(resp._ \(L_{\varepsilon}\)_-unstable on_ \(X^{\prime}\)_)._ 3. _If_ \(\phi_{*}\mathcal{E}\) _is_ \(L_{0}\)_-semistable, let_ \(\mathfrak{E}\) _be the finite set of equivariant and saturated reflexive subsheaves_ \(\mathcal{F}\subseteq\phi_{*}\mathcal{E}\) _arising in a Jordan-Holder filtration of_ \(\phi_{*}\mathcal{E}\)_. If for any_ \(\mathcal{F}\in\mathfrak{E}\)_,_ (1.1) \[\frac{c_{1}(\mathcal{E})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{rank}(\mathcal{E})}<\frac{c_{1}((\phi^{*}\mathcal{F})^{\vee\vee})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{rank}(\mathcal{F})}\] _then_ \(\mathcal{E}\) _is_ \(L_{-\varepsilon}\)_-stable on_ \(X\)_._ A statement similar to \((iii)\) holds for \(\psi_{*}\mathcal{E}\) with the reverse inequalities in (1.1) (see Theorem 4.4). The intersection number that appears in (1.1) is the first order term in the \(\varepsilon\)-expansion of the \(L_{-\varepsilon}\)-slope. As Theorem 1.1 shows, if \(\phi_{*}\mathcal{E}\) is strictly semistable, this term will never allow for both \(\mathcal{E}\) and \(\psi_{*}\mathcal{E}\) to be stable at the same time, for the considered polarisations. As \(\mathfrak{E}\) is finite, the numerical criterion in \((iii)\) can be used in practice to produce examples of toric sheaves that go from being unstable to stable through a toric flip (see Section 4.3). **Remark 1.2**.: Theorem 1.1 actually holds uniformly for flat families of toric sheaves with fixed characteristic function (see the discussion at the end of Section 4.2). While our first result focuses on specific flat families of sheaves for a given flip, our second result describes toric flips that preserve slope polystability for _all_ equivariant reflexive sheaves at once, in some adapted full subcategories. Let us denote by \(\mathfrak{Ref}^{T}(X)\) the category of torus equivariant reflexive sheaves on \(X\). For any given torus invariant divisor \(D\subset X\), we introduce in Section 3.3 a full subcategory \(\mathfrak{Ref}^{T}(X,D)\) of _logarithmic toric sheaves_. Our terminology is inspired by the fact that the logarithmic tangent sheaf \(\mathcal{T}_{X}(-\log D)\) belongs to \(\operatorname{Obj}(\mathfrak{Ref}^{T}(X,D))\). Setting \(D^{\prime}=\psi_{*}D\), the birational transformation \(\psi:X\dashrightarrow X^{\prime}\) induces an equivalence of categories (still denoted \(\psi_{*}\)) between \(\mathfrak{Ref}^{T}(X,D)\) and \(\mathfrak{Ref}^{T}(X^{\prime},D^{\prime})\). 
We will say that the functor \(\psi_{*}\) preserves polystability for a pair of ample classes \((\alpha,\alpha^{\prime})\in\operatorname{Pic}(X)_{\mathbb{Q}}\otimes\operatorname{Pic}(X^{\prime})_{\mathbb{Q}}\) if for any \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X,D))\), \(\mathcal{E}\) is polystable on \((X,\alpha)\) if and only if \(\psi_{*}\mathcal{E}\) is polystable on \((X^{\prime},\alpha^{\prime})\). Then, our result is as follows (see Theorem 5.1) : **Theorem 1.3**.: _Let \(\Sigma\) be the fan of \(X\), and let_ \[D:=\sum_{\rho\in\Delta}D_{\rho}\subset X\] _be a torus invariant divisor, for \(\Delta\subset\Sigma(1)\). Then, the following assertions are equivalent, for a pair of ample classes \((\alpha,\alpha^{\prime})\in\operatorname{Pic}(X)_{\mathbb{Q}}\otimes\operatorname{Pic}(X^{\prime})_{\mathbb{Q}}\)._ 1. _The functor_ \(\psi_{*}:\mathfrak{Ref}^{T}(X,D)\to\mathfrak{Ref}^{T}(X^{\prime},D^{\prime})\) _preserves polystability for_ \((\alpha,\alpha^{\prime})\)_._ 2. _There is_ \(c\in\mathbb{Q}_{>0}\) _such that for all_ \(\mathcal{E}\in\mathfrak{Ref}^{T}(X,D)\)_,_ \[\mu_{\alpha}(\mathcal{E})=c\;\mu_{\alpha^{\prime}}(\psi_{*}\mathcal{E}).\] 3. _There is_ \(c\in\mathbb{Q}_{>0}\) _such that for all_ \(\rho\notin\Delta\)_,_ \[\deg_{\alpha}D_{\rho}=c\;\deg_{\alpha^{\prime}}D_{\rho}^{\prime}.\] In the above statement, \(D_{\rho}\) stands for the torus invariant divisor associated to a ray \(\rho\in\Sigma(1)\), and \(\mu_{\alpha}\) stands for the slope on \((X,\alpha)\). We should point out that condition \((iii)\) becomes very restrictive when \(\Delta\) is small, while \(\mathfrak{Ref}^{T}(X,D)\) becomes smaller for larger \(\Delta\)'s. Nevertheless, Theorem 1.3 provides a simple numerical criterion for pairs of classes on \(X\) and \(X^{\prime}\) to preserve polystability of specific toric sheaves through the toric flip. ### Acknowledgments The authors are partially supported by the grant BRIDGES ANR-FAPESP ANR-21-CE40-0017. AN and CT also benefited from the grant MARGE ANR-21-CE40-0011. ### Notations and conventions All varieties we will consider will be normal toric varieties over the complex numbers. For such a variety \(X\), we denote by \(T\) its torus, \(N\) its lattice of one parameter subgroups, \(M\) its lattice of characters. For \(\mathbb{K}\) a field of characteristic zero, and a lattice \(W\), we denote \(W_{\mathbb{K}}:=W\otimes_{\mathbb{Z}}\mathbb{K}\). The fan of \(X\) will be denoted \(\Sigma_{X}\), or simply \(\Sigma\) when the situation is clear enough, and we will also use the notation \(X_{\Sigma}\) for the variety associated to the fan \(\Sigma\). For a given cone \(\sigma\in\Sigma\), we let \(U_{\sigma}=\operatorname{Specm}(\mathbb{C}[\sigma^{\vee}\cap M])\) the affine chart in \(X\), \(O(\sigma)\subset X\) the orbit associated to \(\sigma\) by the orbit-cone correspondence, and \(V(\sigma)\) the closure in \(X\) of \(O(\sigma)\). Finally, for a fan \(\Sigma\) and a subset \(S\subset N_{\mathbb{R}}\), we denote by \(\Sigma_{S}=\{\sigma\in\Sigma\,|\,\sigma\subset S\}\). ## 2. Background on toric flips ### Toric flips We recall in this section the basics on toric flips (and refer the reader to [6, Section 3.3] for the definition of toric morphisms and their fan description). While our presentation differs slightly from [6, Chapter 15], we will keep most of the notations from this book, and the properties that we list here can be recovered from [6, Lemma 15.3.7, Lemma 15.3.11 and Theorem 15.3.13]. Let \(N\) be a rank \(n\) lattice, with \(n\geq 3\). 
**Definition 2.1**.: A full dimensional strictly convex cone \(\sigma_{0}\subset N_{\mathbb{R}}\) will be called a _flipping cone_ if there exist primitive elements \(\{\nu_{1},\ldots,\nu_{n+1}\}\in N^{n+1}\) such that 1. The cone \(\sigma_{0}\) is spanned by the \(\nu_{i}\)'s : \[\sigma_{0}=\operatorname{Cone}(\nu_{1},\ldots,\nu_{n+1}),\] 2. there exists \((b_{1},\ldots,b_{n+1})\in\mathbb{Z}^{n+1}\) such that \[\sum_{i=1}^{n+1}b_{i}\nu_{i}=0,\] 3. the sets \(J_{-}=\{i\mid b_{i}<0\}\) and \(J_{+}=\{i\mid b_{i}>0\}\) both contain at least \(2\) elements. For a given flipping cone \(\sigma_{0}\subset N_{\mathbb{R}}\) as in Definition 2.1, we will denote \[J_{0}=\{i\mid b_{i}=0\}.\] We also introduce the notation, for any \(J\subset\{1,\ldots,n+1\}\), \[\sigma_{J}=\operatorname{Cone}(\nu_{i}\mid i\in J)\] together with the fans \[\Sigma_{-}=\{\sigma_{J}\mid J_{+}\nsubseteq J\}\] and \[\Sigma_{+}=\{\sigma_{J}\mid J_{-}\nsubseteq J\}.\] Identifying \(\sigma_{0}\) with the fan of its faces, we see that \(\Sigma_{-}\) and \(\Sigma_{+}\) provide refinements of \(\sigma_{0}\). Those refinements induce toric morphisms \(\phi_{\pm}:X_{\Sigma_{\pm}}\to U_{\sigma_{0}}\), whose properties are listed below (see [6, Lemma 15.3.11.(c)]) : **Lemma 2.2**.: _The morphisms \(\phi_{\pm}:X_{\Sigma_{\pm}}\to U_{\sigma_{0}}\) are surjective and birational. Their exceptional loci are given by \(V(\sigma_{J_{\pm}})\subset X_{\Sigma_{\pm}}\) and satisfy \(\phi_{\pm}(V(\sigma_{J_{\pm}}))=V(\sigma_{J_{-}\cup J_{+}})\) and \(\dim V(\sigma_{J_{\pm}})=n-|J_{\pm}|\), while \(\dim V(\sigma_{J_{-}\cup J_{+}})=|J_{0}|\)._ We now introduce the notion of toric flips that we will use in this paper. **Definition 2.3**.: Let \(X\) and \(X^{\prime}\) be two \(n\)-dimensional simplicial toric varieties with fans \(\Sigma\) and \(\Sigma^{\prime}\) and common lattice of one-parameter subgroups \(N\). We will say that they are related by a _toric flip_ if there exists a normal toric variety \(X_{0}\) with fan \(\Sigma_{0}\) containing a flipping cone \(\sigma_{0}\in\Sigma_{0}\) such that \[\Sigma_{|\sigma_{0}}=\Sigma_{+},\,\Sigma^{\prime}_{|\sigma_{0}}=\Sigma_{-}\ \text{ and }\Sigma_{|N_{\mathbb{R}}\setminus\sigma_{0}}=\Sigma^{\prime}_{|N_{\mathbb{R}}\setminus\sigma_{0}}=(\Sigma_{0})_{|N_{\mathbb{R}}\setminus\sigma_{0}}.\] In that situation, the refinements \(\Sigma\) and \(\Sigma^{\prime}\) of \(\Sigma_{0}\) induce toric morphisms \(\phi:X\to X_{0}\) and \(\phi^{\prime}:X^{\prime}\to X_{0}\), the latter being called the _flip_ of the former. In Definition 2.3, the fans \(\Sigma_{\pm}\) are the ones associated to the flipping cone \(\sigma_{0}\) as described above. Note that the definition, together with Lemma 2.2, implies that \(X\) and \(X^{\prime}\) are birational and isomorphic in codimension \(2\), and that \(X_{0}\setminus V(\sigma_{J_{-}\cup J_{+}})\) is simplicial. The situation of Definition 2.3 can be better summarized in the following commutative diagram : The maps \(\phi\) and \(\phi^{\prime}\) are the toric morphisms induced by the refinements \(\Sigma\) and \(\Sigma^{\prime}\) of \(\Sigma_{0}\), while the maps \(\iota_{+}\), \(\iota_{-}\) and \(\iota_{0}\) denote inclusions. Finally, \[\psi=(\phi^{\prime})^{-1}\circ\phi:X\dashrightarrow X^{\prime}\] is the birational morphism that is an isomorphism away from \(V(\sigma_{J_{\pm}})\). From now on, we fix \(X\) and \(X^{\prime}\) two simplicial toric varieties related by a toric flip \(\psi\), and keep the notations from the previous diagram. 
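The combinatorics of Definition 2.1 are easy to encode in coordinates. The following Python sketch is purely illustrative (not code from the paper): it uses the wall relation of the example treated in Section 4.3, \(\nu_{2}+\nu_{4}-\nu_{1}-\nu_{3}=0\), i.e. \(b=(-1,1,-1,1)\), and enumerates the index sets of \(\Sigma_{\pm}\):

```python
# J_-, J_+ and the refinements Sigma_{+-} from a wall relation b.
from itertools import combinations

b = {1: -1, 2: 1, 3: -1, 4: 1}                 # b_i from sum b_i nu_i = 0
J_minus = {i for i, bi in b.items() if bi < 0}  # {1, 3}
J_plus = {i for i, bi in b.items() if bi > 0}   # {2, 4}

all_J = [set(J) for r in range(5) for J in combinations(b, r)]
Sigma_plus = [J for J in all_J if not J_minus <= J]   # sigma_J with J_- not in J
Sigma_minus = [J for J in all_J if not J_plus <= J]   # sigma_J with J_+ not in J

# Each refinement drops the flipping cone itself and one "diagonal" wall:
assert {1, 2, 3, 4} not in Sigma_plus and J_minus not in Sigma_plus
assert {1, 2, 3, 4} not in Sigma_minus and J_plus not in Sigma_minus
print(len(Sigma_plus), len(Sigma_minus))   # 12 index sets each
```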
### The exceptional loci and relatively ample divisors We will be interested in stability later on, whose definition requires polarisations. We hence turn to the description of the fibers of \(\phi_{\pm}\), and describe some \(\phi\)-ample (and \(\phi^{\prime}\)-ample) divisors. For a cone \(\sigma\subset N_{\mathbb{R}}\), we denote \(N_{\sigma}\subset N\) the sublattice \[N_{\sigma}=\operatorname{Span}(\sigma)\cap N,\] and the quotient map \[\pi_{\sigma}:N\to N(\sigma):=N/N_{\sigma}.\] Recall that from the orbit-cone correspondence (see [6, Section 3.2, Proposition 3.2.7]), the toric variety \(V(\sigma)\) can be obtained as the toric variety associated to the fan of cones in \((N/N_{\sigma})_{\mathbb{R}}\) : \[\operatorname{Star}(\sigma)=\{\pi_{\sigma}(\tau)\mid\sigma\preceq\tau\}.\] In particular, the lattices of one parameter subgroups of \(V(\sigma_{J_{-}})\) and \(V(\sigma_{J_{-}\cup J_{+}})\) are respectively \(N/N_{\sigma_{J_{-}}}\) and \(N/N_{\sigma_{J_{-}\cup J_{+}}}\). One can show that the projection map \[N/N_{\sigma_{J_{-}}}\to N/N_{\sigma_{J_{-}\cup J_{+}}}\] is compatible with the fans \(\operatorname{Star}(\sigma_{J_{-}})\) and \(\operatorname{Star}(\sigma_{J_{-}\cup J_{+}})\) and induces the toric morphism \[\phi_{-}:V(\sigma_{J_{-}})\to V(\sigma_{J_{-}\cup J_{+}}).\] Those lattices fit naturally in the sequence \[0\to N_{\sigma_{J_{-}\cup J_{+}}}/N_{\sigma_{J_{-}}}\to N/N_{\sigma_{J_{-} }}\to N/N_{\sigma_{J_{-}\cup J_{+}}}\to 0.\] As we are interested in the fibers of \(\phi_{-}\), we introduce the quotient lattice \[N_{\mathcal{R}}:=N_{\sigma_{J_{-}\cup J_{+}}}/N_{\sigma_{J_{-}}},\] and denote the projection \(N_{\sigma_{J_{-}\cup J_{+}}}\to N_{\mathcal{R}}\), and its \(\mathbb{R}\)-linear extension, by \(u\mapsto\overline{u}\). We finally introduce the fan \[\Sigma_{\mathcal{R}}:=\{\overline{\sigma}_{J}\mid J\subsetneq J_{+}\},\] and the associated toric variety \(X_{\mathcal{R}}\). **Remark 2.4**.: We keep the notation \(X_{\mathcal{R}}\) to be consistent with [6], where the \(\mathcal{R}\) stands for an _extremal ray_ being responsible for the flip in the context of toric MMP. For the following, see [6, Lemma 15.4.2 and Proposition 15.4.5.(c)]. **Proposition 2.5**.: _The fibers of \(\phi_{-}\) are isomorphic to the \(\mathbb{Q}\)-Fano toric variety \(X_{\mathcal{R}}\). Moreover, \(X_{\mathcal{R}}\) has dimension \(|J_{+}|-1\) and Picard rank one._ From the above, we deduce that the anticanonical divisor of \(X_{\mathcal{R}}\) \[-K_{\mathcal{R}}=\sum_{\rho\in\Sigma_{\mathcal{R}}(1)}D_{\rho}\] is \(\mathbb{Q}\)-Cartier and ample. Note that by construction it can be written \[-K_{\mathcal{R}}=\sum_{i\in J_{+}}D_{\overline{\mathbb{R}_{+}\cdot\nu_{i}}}= \sum_{i\in J_{+}}D_{\overline{\rho}_{i}}\] where we set \[\rho_{i}=\mathbb{R}_{+}\cdot\nu_{i},\] and \(D_{\rho}\) stands for the torus-invariant divisor associated to the ray \(\rho\). 
An easy exercise, using the orbit-cone correspondence, together with [6, Proposition 15.5.1], shows that **Proposition 2.6**.: _The \(\mathbb{Q}\)-Cartier torus invariant divisor_ \[-D_{J_{+}}:=-\sum_{i\in J_{+}}D_{\rho_{i}}\in\operatorname{Div}(X) \tag{2.1}\] _is \(\phi\)-ample, while the \(\mathbb{Q}\)-Cartier invariant divisor_ \[D^{\prime}_{J_{+}}:=\psi_{*}(D_{J_{+}})=\sum_{i\in J_{+}}D^{\prime}_{\rho_{i}}\in\operatorname{Div}(X^{\prime}) \tag{2.2}\] _is \(\phi^{\prime}\)-ample._ **Remark 2.7**.: In the above proposition, we attach a \({}^{\prime}\) to \(D^{\prime}\) to record that the torus invariant divisor \(D^{\prime}\) is taken as the orbit closure of some ray _in_ \(X^{\prime}\). As \(\psi:X\dashrightarrow X^{\prime}\) is an isomorphism in codimension 2, it induces an isomorphism \(\psi_{*}\) between the groups of torus invariant \(\mathbb{Q}\)-Cartier divisors on \(X\) and \(X^{\prime}\), which can be written on a basis \(D_{\rho}\mapsto D^{\prime}_{\rho}\) for any \(\rho\in\Sigma(1)=\Sigma^{\prime}(1)\). Then, to ease notation later on, we will omit the \({}^{\prime}\) on \(D^{\prime}_{\rho}\), the context making it clear whether \(D_{\rho}\) is considered as a divisor on \(X\) or \(X^{\prime}\). We will also simply denote \(D_{J_{+}}\) and \(D^{\prime}_{J_{+}}\) by \[D_{+}=\sum_{i\in J_{+}}D_{\rho_{i}},\] so that the conclusion of Proposition 2.6 is that \(-D_{+}\) is \(\phi\)-ample on \(X\) and \(D_{+}\) is \(\phi^{\prime}\)-ample on \(X^{\prime}\). Finally, we note that we could have considered the divisor \[D_{-}=\sum_{i\in J_{-}}D_{\rho_{i}},\] which is \(\phi\)-ample (while \(-D_{-}\) is \(\phi^{\prime}\)-ample). However, the _wall relation_ \[\sum_{i\in J_{-}}b_{i}\nu_{i}+\sum_{i\in J_{+}}b_{i}\nu_{i}=0\] from Definition 2.1 implies that from the intersection theory point of view, computing slopes with \(D_{+}\) or \(D_{-}\) will produce the same results regarding stability notions (see e.g. [6, Section 6.4, Proposition 6.4.4]). ## 3. The flip functor ### Equivariant sheaves and Klyachko's equivalence We turn now to the description of the flip functor, which requires first introducing the categories of torus equivariant reflexive sheaves. For a given toric variety \(X\) with fan \(\Sigma\), a _torus equivariant reflexive sheaf_ is a reflexive sheaf \(\mathcal{E}\) on \(X\) together with an isomorphism \[\varphi:\alpha^{*}\mathcal{E}\to\pi_{2}^{*}\mathcal{E}\] satisfying some cocycle conditions, where \(\alpha:T\times X\to X\) and \(\pi_{2}:T\times X\to X\) stand for the torus action and the projection onto \(X\) respectively (see for example [20, Section 5]). **Definition 3.1**.: A _toric sheaf_ is a torus equivariant reflexive sheaf. Klyachko has shown (see [12] for locally free sheaves and [20] in general) that any toric sheaf is uniquely described by a _family of filtrations_, denoted \[(E,E^{\rho}(i))_{\rho\in\Sigma(1),i\in\mathbb{Z}}.\] Here, \(E\) stands for a finite dimensional complex vector space of dimension \(\operatorname{rank}(\mathcal{E})\), and for each ray \(\rho\in\Sigma(1)\), \((E^{\rho}(i))_{i\in\mathbb{Z}}\) is a bounded _increasing_ filtration of \(E\) (we will use increasing filtrations as in [20], rather than decreasing ones as in [12]). 
Then, the equivariant reflexive sheaf \(\mathcal{E}\) is recovered from the formula, for \(\sigma\in\Sigma\) : \[\Gamma(U_{\sigma},\mathcal{E}):=\bigoplus_{m\in M}\bigcap_{\rho\in\sigma(1)}E^{\rho}(\langle m,u_{\rho}\rangle)\otimes\chi^{m}\] where \(u_{\rho}\in N\) is the primitive generator of \(\rho\), \(\langle\cdot,\cdot\rangle\) the duality pairing and \(\chi^{m}\) the weight \(m\) character function. A morphism between two families of filtrations \[b:(E_{1},E_{1}^{\rho}(i))_{\rho\in\Sigma(1),i\in\mathbb{Z}}\to(E_{2},E_{2}^{\rho}(i))_{\rho\in\Sigma(1),i\in\mathbb{Z}}\] is a linear map \(b:E_{1}\to E_{2}\) that satisfies \[b(E_{1}^{\rho}(i))\subset E_{2}^{\rho}(i)\] for any \(\rho\in\Sigma(1)\) and \(i\in\mathbb{Z}\). Any such morphism corresponds uniquely to a morphism between the associated reflexive sheaves, and a fundamental result of Klyachko and Perling ([12, 20]) asserts that the categories of families of filtrations and of toric sheaves are equivalent. For our purposes, it seems more natural to restrict a bit the definition of morphisms. We will consider morphisms between toric sheaves \(\mathcal{E}_{i}\) to be equivariant morphisms of coherent sheaves \(\beta:\mathcal{E}_{1}\to\mathcal{E}_{2}\) that satisfy that \(\operatorname{Im}(\beta)\) is a saturated reflexive subsheaf of \(\mathcal{E}_{2}\). Those morphisms correspond through Klyachko's equivalence to linear maps \(b:E_{1}\to E_{2}\) such that \[b(E_{1}^{\rho}(i))=b(E_{1})\cap E_{2}^{\rho}(i)\] for any \((\rho,i)\), as can be seen via [18, Lemma 2.15] for example. We denote by \(\mathfrak{Ref}^{T}(X)\) on one hand and by \(\mathfrak{Filt}(X)\) on the other hand, the categories of toric sheaves and of families of filtrations, endowed with those classes of morphisms. We will denote by \[\mathfrak{Fil}:\mathfrak{Filt}(X)\to\mathfrak{Ref}^{T}(X)\] Klyachko's functor as described above. Then, Klyachko and Perling's work readily implies the following : **Theorem 3.2** ([12, 20]).: _The functor \(\mathfrak{Fil}\) is an equivalence of categories._ **Remark 3.3**.: A nice feature of the categories \(\mathfrak{Filt}(X)\) and \(\mathfrak{Ref}^{T}(X)\) is that they are abelian. This is no longer true when we consider all morphisms of reflexive sheaves, as for example the quotient of \(\mathcal{O}_{\mathbb{P}^{1}}\) by the subsheaf \(\mathcal{O}_{\mathbb{P}^{1}}(-1)\) is torsion, and hence not reflexive. ### Flip functor Assume now that \(\phi^{\prime}:X^{\prime}\to X_{0}\) is the flip of \(\phi:X\to X_{0}\) as in the previous section. As \(\phi\) and \(\phi^{\prime}\) are isomorphisms in codimension \(2\), we have \[\Sigma(1)=\Sigma_{0}(1)=\Sigma^{\prime}(1).\] We deduce that there are equivariant injections \(i\) (resp. \(i_{0}\) and \(i^{\prime}\)) of the \(T\)-invariant Zariski open set \[U:=\bigcup_{\tau\in\Sigma(0)\cup\Sigma(1)}U_{\tau}\] into \(X\) (resp. \(X_{0}\) and \(X^{\prime}\)). Then, from [10, Proposition 1.6], we deduce that for any toric sheaf \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X))\) (resp. \(\mathcal{F}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X_{0}))\) and \(\mathcal{G}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X^{\prime}))\)), we have \[i_{*}(\mathcal{E}_{|U})\simeq\mathcal{E}\] (resp. \((i_{0})_{*}(\mathcal{F}_{|U})\simeq\mathcal{F}\) and \(i^{\prime}_{*}(\mathcal{G}_{|U})\simeq\mathcal{G}\)). As reflexive sheaves are normal, meaning their sections extend over codimension \(2\) Zariski closed sets, we have : **Proposition 3.4**.: _The pushforward \(i_{*}\) (resp. 
\(i^{\prime}_{*}\) and \((i_{0})_{*}\)) induces an equivalence of categories_ \[i_{*}:\mathfrak{Ref}^{T}(U)\to\mathfrak{Ref}^{T}(X)\] _(resp. \(\mathfrak{Ref}^{T}(U)\simeq\mathfrak{Ref}^{T}(X_{0})\) and \(\mathfrak{Ref}^{T}(U)\simeq\mathfrak{Ref}^{T}(X^{\prime})\)). Hence, we have equivalences_ \[\mathfrak{Ref}^{T}(X)\simeq\mathfrak{Ref}^{T}(X_{0})\simeq\mathfrak{Ref}^{T}(X^{\prime}).\] It is straightforward to check that the equivalence \[\mathfrak{Ref}^{T}(X)\simeq\mathfrak{Ref}^{T}(X_{0})\] is induced by the pushforward \(\phi_{*}\) while the equivalence \[\mathfrak{Ref}^{T}(X^{\prime})\simeq\mathfrak{Ref}^{T}(X_{0})\] is induced by \(\phi^{\prime}_{*}\). Moreover, the categories of families of filtrations on \(U\), \(X\), \(X^{\prime}\) and \(X_{0}\) are readily the same, and the above equivalence of categories simply corresponds to the self-equivalence of \(\mathfrak{Filt}(X)\) induced by the identity on objects and morphisms. **Definition 3.5**.: We define the _flip functor_ \[\psi_{*}:\mathfrak{Ref}^{T}(X)\to\mathfrak{Ref}^{T}(X^{\prime})\] to be the composition of functors induced by \(((\phi^{\prime})^{-1})_{*}\) and \(\phi_{*}\). We conclude this section by noting that the flip functor sends the tangent sheaf \(\mathcal{T}_{X}\) of \(X\) to the tangent sheaf \(\mathcal{T}_{X^{\prime}}\) of \(X^{\prime}\): **Lemma 3.6**.: _We have_ \[\phi_{*}\mathcal{T}_{X}=\mathcal{T}_{X_{0}}=\phi^{\prime}_{*}\mathcal{T}_{X^{\prime}}.\] Proof.: It follows directly from the description of the family of filtrations associated to the tangent sheaf, that only depends on \(N\), \(\Sigma(1)\) and the primitive generators of elements \(\rho\in\Sigma(1)\) (see [12, Example 2.3(5) on page 350]). As this data is the same for \(X\), \(X^{\prime}\) and \(X_{0}\), and as the functors induced by \(\phi_{*}\) and \(\phi^{\prime}_{*}\) correspond to the identity functor on \[\mathfrak{Filt}(X)=\mathfrak{Filt}(X_{0})=\mathfrak{Filt}(X^{\prime}),\] the result follows. ### The logarithmic subcategories In Section 5, we will be interested in specific subcategories of \(\mathfrak{Ref}^{T}(X)\) and \(\mathfrak{Ref}^{T}(X^{\prime})\). For any \(\Delta\subset\Sigma(1)\), we introduce the torus invariant divisor \[D_{\Delta}:=\sum_{\rho\in\Delta}D_{\rho}\] and the full subcategory \(\mathfrak{Ref}^{T}(X,D_{\Delta})\) of \(\mathfrak{Ref}^{T}(X)\) whose objects are the toric sheaves on \(X\) whose associated families of filtrations \((E,E^{\rho}(i))_{\rho\in\Sigma(1),i\in\mathbb{Z}}\) satisfy for all \(\rho\in\Delta\): \[E^{\rho}(i)=\left\{\begin{array}{ll}0&\text{if}\quad i<0\\ E&\text{if}\quad i\geq 0.\end{array}\right. \tag{3.1}\] **Remark 3.7**.: From [17, Theorem 1.1], the logarithmic tangent sheaf \(\mathcal{T}_{X}(-\log D_{\Delta})\) belongs to \(\operatorname{Obj}(\mathfrak{Ref}^{T}(X,D_{\Delta}))\), which justifies our choice of terminology. It is then straightforward to see that the flip functor \(\psi_{*}\) induces an equivalence between \(\mathfrak{Ref}^{T}(X,D_{\Delta})\) and \(\mathfrak{Ref}^{T}(X^{\prime},D^{\prime}_{\Delta})\), where we use \(D^{\prime}_{\Delta}\) to denote \(\sum_{\rho\in\Delta}D^{\prime}_{\rho}\). Note also that \(\psi_{*}\) sends \(\mathcal{T}_{X}(-\log D_{\Delta})\) to \(\mathcal{T}_{X^{\prime}}(-\log D^{\prime}_{\Delta})\). ## 4. Flips and stability for a given sheaf ### Slope stability of toric sheaves Let \((X,L)\) be a polarised complex variety. Recall that a reflexive sheaf \(\mathcal{E}\) on \((X,L)\) is said to be _slope stable_ (resp. _slope semistable_), or simply stable (resp. 
semistable) for short, if for any coherent and saturated subsheaf \(\mathcal{F}\subset\mathcal{E}\) of strictly smaller rank, one has \[\mu_{L}(\mathcal{F})<\mu_{L}(\mathcal{E}),\] (resp. \(\mu_{L}(\mathcal{F})\leq\mu_{L}(\mathcal{E})\)), where for any coherent torsion-free sheaf \(\mathcal{F}\), the _slope_ \(\mu_{L}(\mathcal{F})\) is defined by \[\mu_{L}(\mathcal{F})=\frac{c_{1}(\mathcal{F})\cdot L^{n-1}}{\operatorname{rank}(\mathcal{F})}\in\mathbb{Q}.\] A _polystable sheaf_ is a direct sum of stable ones with the same slope. A sheaf will be called _unstable_ if it is _not semistable_. **Remark 4.1**.: When referring to a specific polarisation \(L\) used to define stability notions, we will use the terminology \(L\)-stable (resp. \(L\)-unstable, \(L\)-semistable, etc). A remarkable fact, proved by Kool ([15, Proposition 4.13]), is that if we assume \(X\) and \(\mathcal{E}\) to be toric, to check stability for \(\mathcal{E}\), it is enough to compare slopes with equivariant and saturated reflexive subsheaves, that is sub objects of \(\mathcal{E}\) in \(\mathfrak{Ref}^{T}(X)\) (note that this was proved in the smooth case by Kool, but it was noticed in [4] that the proof extends to the normal case). If \((E,E^{\rho}(\bullet))_{\rho\in\Sigma(1)}\) stands for the family of filtrations of \(\mathcal{E}\), any saturated equivariant reflexive subsheaf of \(\mathcal{E}\) is associated to a family of filtrations of the form \((F,F\cap E^{\rho}(i))_{\rho\in\Sigma(1),i\in\mathbb{Z}}\) for some vector subspace \(F\subsetneq E\) (see [18, Lemma 2.15]). Moreover, Klyachko's formula for the first Chern class ([4, Corollary 2.18]) gives \[\mu_{L}(\mathcal{E})=-\frac{1}{\operatorname{rank}(\mathcal{E})}\sum_{\rho\in\Sigma(1)}\iota_{\rho}(\mathcal{E})\,\deg_{L}(D_{\rho}), \tag{4.1}\] where \(\deg_{L}(D_{\rho})\) is the degree with respect to \(L\), and \[\iota_{\rho}(\mathcal{E}):=\sum_{i\in\mathbb{Z}}i\left(\dim(E^{\rho}(i))-\dim(E^{\rho}(i-1))\right).\] To sum up, we have **Proposition 4.2**.: _The toric sheaf associated to \((E,E^{\rho}(i))_{\rho\in\Sigma(1),i\in\mathbb{Z}}\) is stable if and only if for any subspace \(F\subsetneq E\), we have_ \[\frac{1}{\dim(F)}\sum_{\rho\in\Sigma(1)}\iota_{\rho}(F)\,\deg_{L}(D_{\rho})>\frac{1}{\dim(E)}\sum_{\rho\in\Sigma(1)}\iota_{\rho}(\mathcal{E})\,\deg_{L}(D_{\rho}),\] _where_ \[\iota_{\rho}(F):=\sum_{i\in\mathbb{Z}}i\left(\dim(F\cap E^{\rho}(i))-\dim(F\cap E^{\rho}(i-1))\right).\] _The similar statement holds for semistability by replacing the strict inequality by a large inequality._ **Remark 4.3**.: As observed in Section 3.2, for a given flip as in Section 2.1, the families of filtrations for \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X))\), \(\psi_{*}\mathcal{E}\) and \(\phi_{*}\mathcal{E}\) are the same. We thus have the equalities \(\iota_{\rho}(\mathcal{E})=\iota_{\rho}(\phi_{*}\mathcal{E})=\iota_{\rho}(\psi_{*}\mathcal{E})\). Then, to compare slopes on \(X\), \(X_{0}\) and \(X^{\prime}\), only the terms coming from the degrees of the invariant divisors \(D_{\rho}\)'s will vary according to the polarisations on each variety. ### Main result and its proof Consider now \(\phi:X\to X_{0}\) and its toric flip \(\phi^{\prime}:X^{\prime}\to X_{0}\) as defined in Section 2. 
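Before stating the main result, we illustrate how the slope comparison of Proposition 4.2 is carried out in practice. The sketch below is our own toy example: the filtration data and the degrees \(\deg_{L}(D_{\rho})\) are made up for illustration, and only the formulas \(\iota_{\rho}\) and (4.1) come from the text.

```python
def iota(jumps):
    # jumps: {i: dim E^rho(i)} at the finitely many jump values;
    # this computes iota_rho = sum_i i (dim E^rho(i) - dim E^rho(i-1)).
    total, prev = 0, 0
    for i, d in sorted(jumps.items()):
        total += i * (d - prev)
        prev = d
    return total

E = {"rho1": {-1: 1, 0: 2}, "rho2": {-1: 1, 0: 2}}   # a rank-2 toric sheaf
F = {"rho1": {0: 1}, "rho2": {0: 1}}                 # a line F with F cap E^rho(-1) = 0
deg = {"rho1": 1, "rho2": 1}                         # deg_L(D_rho), made up

mu_E = -sum(iota(E[r]) * deg[r] for r in deg) / 2    # Klyachko's formula (4.1)
mu_F = -sum(iota(F[r]) * deg[r] for r in deg) / 1
print(mu_F, mu_E)   # mu_F = 0 < 1 = mu_E: this particular F does not destabilize
```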
From Proposition 2.6 (recall also Remark 2.7), for any ample Cartier divisor \(L_{0}\) on \(X_{0}\), there exists \(\varepsilon_{0}>0\) such that the divisors \[L_{-\varepsilon}:=\phi^{*}L_{0}-\varepsilon D_{+}\] on \(X\) and \[L_{\varepsilon}:=(\phi^{\prime})^{*}L_{0}+\varepsilon D_{+}\] on \(X^{\prime}\) define ample \(\mathbb{Q}\)-Cartier divisors for \(\varepsilon\in(0,\varepsilon_{0})\). We will then be interested in the behaviour of stability for toric sheaves related by the flip functor on \((X,L_{-\varepsilon})\) and \((X^{\prime},L_{\varepsilon})\), for \(0<\varepsilon<\varepsilon_{0}\). Note that a necessary condition for stability of an element \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{R}\mathfrak{e}\mathfrak{f}^{T}(X))\) under those polarisations is \(L_{0}\)-semistability of \(\phi_{*}\mathcal{E}\). Conversely, if the sheaf \(\phi_{*}\mathcal{E}\) is \(L_{0}\)-semistable, it then admits a Jordan-Holder filtration \[0=\mathcal{E}_{1}\subseteq\mathcal{E}_{2}\subseteq\ldots\subseteq\mathcal{E }_{\ell}=\phi_{*}\mathcal{E}\] by slope semistable coherent and saturated subsheaves with stable quotients of same slope as \(\phi_{*}\mathcal{E}\) (see e.g. [11]). We denote by \[\operatorname{Gr}(\phi_{*}\mathcal{E}):=\bigoplus_{i=1}^{\ell-1}\mathcal{E}_ {i+1}/\mathcal{E}_{i}\] the graded object of \(\phi_{*}\mathcal{E}\) and by \(\mathfrak{E}\) the finite set of equivariant and saturated reflexive subsheaves \(\mathcal{F}\subseteq\phi_{*}\mathcal{E}\) arising in a Jordan-Holder filtration of \(\phi_{*}\mathcal{E}\). Note that by Proposition 3.4, for any \(\mathcal{F}\in\mathfrak{E}\), \((\phi^{*}\mathcal{F})^{\vee\vee}\) (resp. \(((\phi^{\prime})^{*}\mathcal{F})^{\vee\vee}\)) is saturated in \(\mathcal{E}\) (resp. in \(\psi_{*}\mathcal{E}\)). **Theorem 4.4**.: _Let \(\mathcal{E}\) be a toric sheaf on \(X\). Then, up to shrinking \(\varepsilon_{0}\), we have for all \(\varepsilon\in(0,\varepsilon_{0})\) :_ 1. _If_ \(\phi_{*}\mathcal{E}\) _is_ \(L_{0}\)_-stable, then_ \(\mathcal{E}\) _(resp._ \(\psi_{*}\mathcal{E}\)_) is_ \(L_{-\varepsilon}\)_-stable on_ \(X\) _(resp._ \(L_{\varepsilon}\)_-stable on_ \(X^{\prime}\)_)._ 2. _If_ \(\phi_{*}\mathcal{E}\) _is_ \(L_{0}\)_-unstable, then_ \(\mathcal{E}\) _(resp._ \(\psi_{*}\mathcal{E}\)_) is_ \(L_{-\varepsilon}\)_-unstable on_ \(X\) _(resp._ \(L_{\varepsilon}\)_-unstable on_ \(X^{\prime}\)_)._ 3. _If_ \(\phi_{*}\mathcal{E}\) _is_ \(L_{0}\)_-semistable, and if for any_ \(\mathcal{F}\in\mathfrak{E}\)_,_ \[\frac{c_{1}(\mathcal{E})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname {rank}(\mathcal{E})}<\frac{c_{1}((\phi^{*}\mathcal{F})^{\vee\vee})\cdot D_{+} \cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{rank}(\mathcal{F})}\] _then_ \(\mathcal{E}\) _(resp._ \(\psi_{*}\mathcal{E}\)_) is_ \(L_{-\varepsilon}\)_-stable on_ \(X\) _(resp._ \(L_{\varepsilon}\)_-unstable on_ \(X^{\prime}\)_)._ 4. 
_If_ \(\phi_{*}\mathcal{E}\) _is_ \(L_{0}\)_-semistable, and if for any_ \(\mathcal{F}\in\mathfrak{E}\)_,_ \[\frac{c_{1}(\mathcal{E})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{rank}(\mathcal{E})}>\frac{c_{1}((\phi^{*}\mathcal{F})^{\vee\vee})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{rank}(\mathcal{F})}\] _then_ \(\mathcal{E}\) _(resp._ \(\psi_{*}\mathcal{E}\)_) is_ \(L_{-\varepsilon}\)_-unstable on_ \(X\) _(resp._ \(L_{\varepsilon}\)_-stable on_ \(X^{\prime}\)_)._ **Remark 4.5**.: Note that, given semistability of \(\phi_{*}\mathcal{E}\), the numerical criterion in points \((iii)\) and \((iv)\) only requires testing a _finite_ number of inequalities, as \(\mathfrak{E}\) is finite. This makes this criterion useful in practice. Before proving this theorem, we first recall some facts on intersection products in toric varieties that will be used. Let \(\{u_{1},\ldots,u_{k}\}\) be a set of primitive elements of \(N\) such that \(\sigma=\operatorname{Cone}(u_{1},\ldots,u_{k})\) is simplicial. We define \(\operatorname{mult}(\sigma)\) as the index of the sublattice \(\mathbb{Z}u_{1}+\ldots+\mathbb{Z}u_{k}\) in \(N_{\sigma}=\operatorname{Span}(\sigma)\cap N\). If \(\Sigma\) is simplicial, according to [9, Section 5.1], one can consider intersections of cycles or cycle classes with _rational_ coefficients. The Chow group \[A^{\bullet}(X)_{\mathbb{Q}}=\bigoplus_{p=0}^{n}A^{p}(X)\otimes\mathbb{Q}=\bigoplus_{p=0}^{n}A_{n-p}(X)\otimes\mathbb{Q}\] has the structure of a graded \(\mathbb{Q}\)-algebra and by [6, Lemma 12.5.2], if \(\rho_{1},\ldots,\rho_{d}\in\Sigma(1)\) are distinct, then in \(A^{\bullet}(X)_{\mathbb{Q}}\), we have \[[D_{\rho_{1}}]\cdot[D_{\rho_{2}}]\cdots[D_{\rho_{d}}]=\left\{\begin{array}{ll}\frac{1}{\operatorname{mult}(\sigma)}[V(\sigma)]&\text{if }\sigma=\rho_{1}+\ldots+\rho_{d}\in\Sigma\\ 0&\text{otherwise.}\end{array}\right. \tag{4.2}\] If \(\chi^{m}\) is the weight \(m\) character function on \(X_{\Sigma}\), then by [6, Proposition 4.1.2 and (12.5.4)], the divisor of \(\chi^{m}\) is given by \[\operatorname{div}(\chi^{m})=\sum_{\rho\in\Sigma(1)}\langle m,u_{\rho}\rangle D_{\rho} \tag{4.3}\] and \(\operatorname{div}(\chi^{m})\sim_{\operatorname{lin}}0\) in \(A^{1}(X_{\Sigma})\). Proof of Theorem 4.4.: We first prove that for any \(\rho\in\Sigma(1)\), \[\psi_{*}(D_{\rho}\cdot D_{+})=D^{\prime}_{\rho}\cdot D^{\prime}_{+}. \tag{4.4}\] We use the notations of Section 2.1 for the toric flip, and set \[\Delta=\{\operatorname{Cone}(\nu_{i}):i\in J_{+}\}.\] Recall from Definition 2.1 that \(J_{+}\) and \(J_{-}\) have at least two elements. It then follows from the definition of \(\Sigma_{\pm}\) that for any \(\rho\in\sigma_{0}(1)\setminus\Delta\), and any \(j\in J_{+}\), \(\rho+\operatorname{Cone}(\nu_{j})\) is a two-dimensional cone of \(\Sigma_{+}\) and \(\Sigma_{-}\). 
Together with the definition of \(\psi\), we deduce that in the Chow ring \(A^{\bullet}(X^{\prime})_{\mathbb{Q}}\), \[\psi_{*}(D_{\rho}\cdot D_{\rho_{j}})=D^{\prime}_{\rho}\cdot D^{\prime}_{\rho_ {j}}.\] If \(\rho\in\Sigma(1)\setminus\sigma_{0}(1)\), then for any \(j\in J_{+}\), \[\rho+\operatorname{Cone}(\nu_{j})\notin\{\tau:\tau\preceq\sigma_{0}\}.\] As by Definition 2.3, \[\Sigma_{|N_{\mathbb{R}}\setminus\sigma_{0}}=\Sigma^{\prime}_{|N_{\mathbb{R}} \setminus\sigma_{0}}=(\Sigma_{0})_{|N_{\mathbb{R}}\setminus\sigma_{0}},\] we deduce that: * either \(\rho+\operatorname{Cone}(\nu_{j})\in\Sigma_{0}\setminus\{\tau:\tau\preceq \sigma_{0}\}\), and then \(\psi_{*}(D_{\rho}\cdot D_{\rho_{j}})=D^{\prime}_{\rho}\cdot D^{\prime}_{\rho_ {j}}\) in \(A^{\bullet}(X^{\prime})_{\mathbb{Q}}\); * or \(\rho+\operatorname{Cone}(\nu_{j})\notin\Sigma_{0}\setminus\{\tau:\tau\preceq \sigma_{0}\}\), in which case \(D_{\rho}\cdot D_{\rho_{j}}=0\) in \(A^{\bullet}(X)_{\mathbb{Q}}\) and \(D^{\prime}_{\rho}\cdot D^{\prime}_{\rho_{j}}=0\) in \(A^{\bullet}(X^{\prime})_{\mathbb{Q}}\). This proves that for any \(\rho\in\Sigma(1)\setminus\Delta\) and any \(j\in J_{+}\), \[\psi_{*}(D_{\rho}\cdot D_{\rho_{j}})=D^{\prime}_{\rho}\cdot D^{\prime}_{\rho_ {j}}\] and then by linearity and definition of \(D_{+}\) : \[\psi_{*}(D_{\rho}\cdot D_{+})=D^{\prime}_{\rho}\cdot D^{\prime}_{+}.\] We now assume that \(\rho\in\Delta\). By Lemma 2.2, one has \(\dim V(\sigma_{J_{+}})=n-|J_{+}|\); therefore \(\sigma_{J_{+}}\) is a simplicial cone and then \(\{\nu_{j}:j\in J_{+}\}\) form a part of a \(\mathbb{Q}\)-basis of \(N\otimes_{\mathbb{Z}}\mathbb{Q}\). Let \(\{\nu^{*}_{j}:j\in J_{+}\}\) be a part of a \(\mathbb{Q}\)-basis of \(M\otimes_{\mathbb{Z}}\mathbb{Q}\) such that for any \(i,j\in J_{+}\), \[\langle\nu^{*}_{j},\nu_{i}\rangle=\left\{\begin{array}{ll}0&\text{if }i\neq j \\ 1&\text{if }i=j\end{array}\right..\] For any \(j\in J_{+}\), there is \(a_{j}\in\mathbb{N}^{*}\) such that \(a_{j}\nu^{*}_{j}\in M\). By using (4.3) with \(m=a_{j}\nu^{*}_{j}\), we get \[a_{j}D_{\rho_{j}}\sim_{\operatorname{lin}}-\sum_{\rho\in\Sigma(1)\setminus \Delta}\langle a_{j}\nu^{*}_{j},u_{\rho}\rangle D_{\rho}.\] on \(X\) and \(X^{\prime}\). By the first cases, we deduce that for any \(j\in J_{+}\), \[\psi_{*}(D_{\rho_{j}}\cdot D_{+}) =-\psi_{*}\left(\sum_{\rho\in\Sigma(1)\setminus\Delta}\langle\nu^ {*}_{j},u_{\rho}\rangle D_{\rho}\cdot D_{+}\right)\] \[=-\sum_{\rho\in\Sigma(1)\setminus\Delta}\langle\nu^{*}_{j},u_{ \rho}\rangle D^{\prime}_{\rho}\cdot D^{\prime}_{+}\] \[=D^{\prime}_{\rho_{j}}\cdot D^{\prime}_{+}\] in \(A^{\bullet}(X^{\prime})_{\mathbb{Q}}\). This concludes the proof of (4.4). We can now compute the slopes. 
We have \[(L_{-\varepsilon})^{n-1}=(\phi^{*}L_{0})^{n-1}-(n-1)\varepsilon D_{+}\cdot( \phi^{*}L_{0})^{n-2}+o(\varepsilon)\quad\text{and}\] \[(L_{\varepsilon})^{n-1}=((\phi^{\prime})^{*}L_{0})^{n-1}+(n-1)\varepsilon D_{+} \cdot((\phi^{\prime})^{*}L_{0})^{n-2}+o(\varepsilon).\] By the Projection formula [8, Proposition 2.3], for any \(\rho\in\Sigma(1)\), one has \[D_{\rho}\cdot(\phi^{*}L_{0})^{n-1}=(\phi_{*}D_{\rho})\cdot(L_{0})^{n-1}=\deg_{L _{0}}(D_{\rho})\] and \[D_{\rho}\cdot((\phi^{\prime})^{*}L_{0})^{n-1}=(\phi^{\prime}_{*}D_{\rho})\cdot(L_{ 0})^{n-1}=\deg_{L_{0}}(D_{\rho}).\] By (4.4), we get \[D_{\rho}\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}=D_{\rho}\cdot D_{+}\cdot((\phi^{ \prime})^{*}L_{0})^{n-2}.\] Hence, for any coherent sheaf \(\mathcal{E}\) on \(X\), \[\mu_{L_{-\varepsilon}}(\mathcal{E})=\mu_{L_{0}}(\phi_{*}\mathcal{E})-\frac{c_ {1}(\mathcal{E})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{rank}( \mathcal{E})}(n-1)\varepsilon+o(\varepsilon)\] and \[\mu_{L_{\varepsilon}}(\psi_{*}\mathcal{E})=\mu_{L_{0}}(\phi_{*}\mathcal{E})+ \frac{c_{1}(\mathcal{E})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname {rank}(\mathcal{E})}(n-1)\varepsilon+o(\varepsilon).\] Assume now \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Re}\mathfrak{f}^{T}(X))\) is given by the family of filtrations \((E,E^{\rho}(j))\). By Proposition 4.2, to check stability of \(\mathcal{E}\), it is enough to compare the slope of \(\mathcal{E}\) with the slope of any equivariant reflexive sheaf \(\mathcal{F}\) given by the family of filtrations \((F,F\cap E^{\rho}(j))\) for \(F\subsetneq E\) a subspace. As the set of vector subspaces \(F\subset E\) that are necessary to test slopes on is actually finite (see [18, Lemma 2.17]), we deduce from the above \(\varepsilon\)-expansions for the slopes that there is \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\): * if \(\phi_{*}\mathcal{E}\) is \(L_{0}\)-stable, then \(\mathcal{E}\) (resp. \(\psi_{*}\mathcal{E}\)) is \(L_{-\varepsilon}\)-stable on \(X\) (resp. \(L_{\varepsilon}\)-stable on \(X^{\prime}\)); * and if \(\phi_{*}\mathcal{E}\) is \(L_{0}\)-unstable, then \(\mathcal{E}\) (resp. \(\psi_{*}\mathcal{E}\)) is \(L_{-\varepsilon}\)-unstable on \(X\) (resp. \(L_{\varepsilon}\)-unstable on \(X^{\prime}\)). We now consider the case where \(\phi_{*}\mathcal{E}\) is \(L_{0}\)-semistable. We first observe, as in the (un)stable case, that there is \(\varepsilon_{1}>0\), such that for all \(\varepsilon\in(0,\varepsilon_{1})\) and for all \(\mathcal{F}\in\operatorname{Obj}(\mathfrak{Re}\mathfrak{f}^{T}(X_{0}))\) with \(\mathcal{F}\subsetneq\phi_{*}\mathcal{E}\) such that \(\mu_{L_{0}}(\mathcal{F})<\mu_{L_{0}}(\phi_{*}\mathcal{E})\) one has \[\mu_{L_{-\varepsilon}}((\phi^{*}\mathcal{F})^{\vee\vee})<\mu_{L_{-\varepsilon }}(\mathcal{E})\quad\text{and}\quad\mu_{L_{\varepsilon}}(((\phi^{\prime})^{*} \mathcal{F})^{\vee\vee})<\mu_{L_{\varepsilon}}(\psi_{*}\mathcal{E}).\] If \(\mathcal{F}\subsetneq\phi_{*}\mathcal{E}\) is a sub object such that \(\mu_{L_{0}}(\mathcal{F})=\mu_{L_{0}}(\phi_{*}\mathcal{E})\), then there is Jordan-Holder filtration \[0=\mathcal{E}_{1}\subseteq\ldots\subseteq\mathcal{E}_{l}=\phi_{*}\mathcal{E}\] with \(l\geq 1\) such that \(\mathcal{F}=\mathcal{E}_{i}\) for some \(i\in\{1,\ldots,l\}\) (see [11, Section 1.6]) and we deduce that \(\mathcal{F}\in\mathfrak{E}\). 
Therefore, from the expansions of the slopes, to get the points \((iii)\) and \((iv)\) of the theorem, it suffices to compare \[\frac{c_{1}(\mathcal{E})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{ rank}(\mathcal{E})}\quad\text{and}\quad\frac{c_{1}((\phi^{*}\mathcal{F})^{ \vee\vee})\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}}{\operatorname{rank}(\mathcal{ F})}\] for any \(\mathcal{F}\in\mathfrak{E}\). By uniqueness of the reflexive hull of the graded object of a Jordan-Holder filtration ([11, Theorem 1.6.7]), \(\mathfrak{E}\) is finite, and the result follows. **Remark 4.6**.: In the proof, we have shown that for any \(\rho\in\Sigma(1)\), \[D_{\rho}\cdot D_{+}\cdot(\phi^{*}L_{0})^{n-2}=D^{\prime}_{\rho}\cdot D^{\prime} _{+}\cdot((\phi^{\prime})^{*}L_{0})^{n-2}\] If \(\ell\geq 2\), the equality \[D_{\rho}\cdot(D_{+})^{\ell}\cdot(\phi^{*}L_{0})^{n-1-\ell}=D^{\prime}_{\rho} \cdot(D^{\prime}_{+})^{\ell}\cdot((\phi^{\prime})^{*}L_{0})^{n-1-\ell}\] is no longer true in general (see for example Equations (4.5) and (4.6) in Section 4.3). The arguments used to prove Theorem 4.4 are very close to those used in [18]. One should be careful though that the results from [18] do not imply directly Theorem 4.4, as \(X_{0}\) is not \(\mathbb{Q}\)-factorial. **Remark 4.7**.: While the case when \(\phi_{*}\mathcal{E}\) is semistable on \(X_{0}\) is not fully covered by Theorem 4.4, items \((iii)\) and \((iv)\), one can easily adapt the numerical criterion of [18, Theorem 1.3] to take into account higher order terms in the \(\varepsilon\)-expansions of the \(L_{-\varepsilon}\) and \(L_{\varepsilon}\) slopes, and obtain a full description of the stability behaviour of \(\mathcal{E}\) in terms of that of \(\phi_{*}\mathcal{E}\), for the considered polarisations. Actually, Theorem 4.4 holds for some specific flat families of toric sheaves. We recall that the _characteristic function_\(\chi\) of an equivariant reflexive sheaf \(\mathcal{F}\) with family of filtrations \((F,\{F^{\rho}(j)\})\) is the function \[\begin{array}{rcl}\chi(\mathcal{F}):&M&\longrightarrow&\mathbb{Z}^{\sharp \Sigma(n)}\\ &m&\longmapsto&\left(\dim\left(\bigcap_{\rho\in\sigma(1)}F^{\rho}(\langle m,u_ {\rho}\rangle)\right)\right)_{\sigma\in\Sigma(n)}\.\end{array}\] Let \(S\) be a scheme of finite type over \(\mathbb{C}\) and \(\mathcal{E}=(\mathcal{E}_{s})_{s\in S}\) be an \(S\)-family of equivariant reflexive sheaves over \(X\) (see [18, Section 3.5] for more details). We denote by \((E_{s},E_{s}^{\rho}(i))\) the family of filtrations of \(\mathcal{E}_{s}\). There is a collection of increasing filtrations of reflexive sheaves \[(\mathcal{F},\{\mathcal{F}_{m}^{\rho}:m\in M\}_{\rho\in\Sigma(1)})\] such that for any \(s\in S\) and all \(m\in M\), \[E_{s}=\mathcal{F}(s)\quad\text{and}\quad E_{s}^{\rho}(\langle m,u_{\rho} \rangle)=\mathcal{F}_{m}^{\rho}(s)\] where \(\mathcal{F}(s)\) and \(\mathcal{F}_{m}^{\rho}(s)\) are respectively the fibers of \(\mathcal{F}\) and \(\mathcal{F}_{m}^{\rho}\) at \(s\). **Lemma 4.8**.: _Let \(X\) be a toric variety given by a simplicial fan \(\Sigma\) and let \(\mathcal{E}=(\mathcal{E}_{s})_{s\in S}\) be an \(S\)-family of equivariant reflexive sheaves over \(X\) such that:_ 1. \(\mathcal{E}\) _is locally free over_ \(X\times S\)_, or_ 2. 
_the map_ \(s\mapsto\chi(\mathcal{E}_{s})\) _is constant._ _Then, for all \(\rho\in\Sigma(1)\) and \(j\in\mathbb{Z}\), the map \(s\mapsto\dim(E_{s}^{\rho}(j))\) is constant._ Proof.: If \(\mathcal{E}\) is locally free over \(X\times S\), by [19, Proposition 3.13] (Klyachko's compatibility condition for \(S\)-families of locally free sheaves), for any \(\sigma\in\Sigma(n)\), there is a multiset \(A_{\sigma}\subseteq M\) of size \(\operatorname{rank}(\mathcal{E})\) such that for any \(m\in M\), \(\mathcal{F}_{m}^{\rho}\) is a locally free sheaf of rank \[\left|\{\alpha\in A_{\sigma}:\langle\alpha,u_{\rho}\rangle\leq\langle m,u_{\rho}\rangle\}\right|.\] As for any \(s\in S\) and \(m\in M\), \(\dim(\mathcal{F}_{m}^{\rho}(s))=\operatorname{rank}(\mathcal{F}_{m}^{\rho})\), we deduce that the map \[s\longmapsto\dim(E_{s}^{\rho}(\langle m,u_{\rho}\rangle))\] is constant. We now assume that the map \(s\mapsto\chi(\mathcal{E}_{s})\) is constant. For any \(\rho\in\Sigma(1)\), we denote by \(i_{\rho}\) the smallest integer such that for any \(j\geq i_{\rho}\) and any \(s\in S\), \[E_{s}^{\rho}(j)=E_{s}.\] Fix \(j\in\mathbb{Z}\) and let \(\sigma\in\Sigma(n)\). The set \(\{u_{\rho}:\rho\in\sigma(1)\}\) is a \(\mathbb{Q}\)-basis of \(N\otimes_{\mathbb{Z}}\mathbb{Q}\); we denote by \(\{u_{\rho}^{*}:\rho\in\sigma(1)\}\) its dual basis. For any \(\rho^{\prime}\in\sigma(1)\), there is \(m^{\prime}\in M\) such that \(\langle m^{\prime},u_{\rho^{\prime}}\rangle=j\). Let \(m\in M\) be given by \[m=m^{\prime}+\sum_{\rho\in\sigma(1)\setminus\{\rho^{\prime}\}}a_{\rho}u_{\rho}^{*}\] where for any \(\rho\in\sigma(1)\setminus\{\rho^{\prime}\}\), \(a_{\rho}\in\mathbb{Z}\) satisfies \(a_{\rho}u_{\rho}^{*}\in M\) and \(a_{\rho}+\langle m^{\prime},u_{\rho}\rangle\geq i_{\rho}\). By construction of \(m\), one has \[\bigcap_{\rho\in\sigma(1)}E_{s}^{\rho}(\langle m,u_{\rho}\rangle)=E_{s}^{\rho^{\prime}}(j).\] As \(s\mapsto\chi(\mathcal{E}_{s})\) is constant, we deduce that the map \(s\mapsto\dim(E_{s}^{\rho^{\prime}}(j))\) is constant for any \(\rho^{\prime}\in\sigma(1)\) and any \(j\in\mathbb{Z}\). If \(\mathcal{E}\) is an \(S\)-family of equivariant reflexive sheaves on \(X\) which satisfies the conditions of Lemma 4.8, then by [18, Lemma 3.12], for any ample \(\mathbb{Q}\)-Cartier divisor \(L\) on \(X\), the set \[\{\mu_{L}(\mathcal{G}_{s}):s\in S,\,\mathcal{G}_{s}\text{ is an equivariant and saturated reflexive subsheaf of }\mathcal{E}_{s}\}\] is finite. Therefore, in that case, the \(\varepsilon_{0}\) in Theorem 4.4 can be taken uniformly for \((\mathcal{E}_{s})_{s\in S}\). ### An example We denote by \((e_{1},e_{2},e_{3})\) the standard basis of \(\mathbb{Z}^{3}\). Let \[u_{1}=e_{1},\quad u_{2}=e_{1}+e_{2}-e_{3},\quad u_{3}=e_{2},\quad u_{4}=e_{3},\quad u_{0}=-(e_{1}+e_{2}+e_{3})\] and \(\Sigma_{0}\) be a fan in \(\mathbb{R}^{3}\) given by \[\Sigma_{0}=\{\operatorname{Cone}(u_{1},u_{2},u_{3},u_{4})\}\cup\bigcup_{i=1}^{4}\{\operatorname{Cone}(A):A\subseteq\{u_{0},u_{i},u_{i+1}\}\}\] where \(u_{5}=u_{1}\).
We denote by \(\sigma_{0}\) the flipping cone \(\operatorname{Cone}(u_{1},u_{2},u_{3},u_{4})\); we have \[u_{2}+u_{4}-u_{1}-u_{3}=0.\] Let \[\Sigma=(\Sigma_{0}\setminus\{\sigma_{0}\})\cup\Sigma_{+}\quad\text{and}\quad\Sigma^{\prime}=(\Sigma_{0}\setminus\{\sigma_{0}\})\cup\Sigma_{-}\] where \[\Sigma_{+}=\{\operatorname{Cone}(u_{j}:j\in J):J\subset\{1,\dots,4\}\text{ and }\{1,3\}\nsubseteq J\}\quad\text{and}\] \[\Sigma_{-}=\{\operatorname{Cone}(u_{j}:j\in J):J\subset\{1,\dots,4\}\text{ and }\{2,4\}\nsubseteq J\}.\] We denote by \(D_{i}\) the torus invariant divisor associated to the ray \(\operatorname{Cone}(u_{i})\). By using (4.3) with \(m\in\{e_{1},e_{2},e_{3}\}\), we get the following linear equivalences of divisors on \(X_{0}\), \(X\) and \(X^{\prime}\): \[D_{1}\sim_{\operatorname{lin}}D_{3}\sim_{\operatorname{lin}}D_{0}-D_{2}\quad\text{and}\quad D_{4}\sim_{\operatorname{lin}}D_{0}+D_{2}.\] By [6, Theorem 4.2.8.(d)], the divisor \(D_{0}\) generates the set of invariant Cartier divisors of \(X_{0}\). As \(\Sigma\) (resp. \(\Sigma^{\prime}\)) is simplicial, by [6, Proposition 4.2.7] any invariant divisor of \(X\) (resp. \(X^{\prime}\)) is \(\mathbb{Q}\)-Cartier.

**Lemma 4.9** (Intersections of divisors).:

1. _On_ \(X\) _and_ \(X^{\prime}\)_, we have:_ \[D_{1}\cdot D_{0}^{2}=\frac{1}{2}\qquad D_{2}\cdot D_{0}^{2}=\frac{1}{4}\qquad D_{4}\cdot D_{0}^{2}=1\qquad D_{0}\cdot D_{0}^{2}=\frac{3}{4}\] \[D_{1}\cdot(D_{0}\cdot D_{2})=\frac{1}{2}\qquad D_{4}\cdot(D_{0}\cdot D_{2})=0\qquad D_{2}\cdot(D_{0}\cdot D_{2})=-\frac{1}{4}.\]
2. _On_ \(X\)_, we have_ \[D_{1}\cdot D_{2}^{2}=\frac{1}{2}\qquad D_{4}\cdot D_{2}^{2}=-1\qquad D_{0}\cdot D_{2}^{2}=\frac{-1}{4}\qquad D_{2}\cdot D_{2}^{2}=\frac{-3}{4}.\]
3. _On_ \(X^{\prime}\)_, we have_ \[D_{1}\cdot D_{2}^{2}=\frac{-1}{2}\qquad D_{4}\cdot D_{2}^{2}=0\qquad D_{0}\cdot D_{2}^{2}=\frac{-1}{4}\qquad D_{2}\cdot D_{2}^{2}=\frac{1}{4}.\]

Proof.: The lemma follows from Formula (4.2) and Table 1. We show the first point to illustrate the computations; the other intersection numbers follow in the same way. \[D_{1}\cdot D_{0}^{2}=D_{1}\cdot D_{0}\cdot(D_{2}+D_{3})=D_{1}\cdot D_{0}\cdot D_{2}+D_{1}\cdot D_{0}\cdot D_{3}=\frac{1}{2}\] \[D_{2}\cdot D_{0}^{2}=\frac{1}{2}D_{2}\cdot D_{0}\cdot(D_{3}+D_{4})=\frac{1}{4}\] \[D_{4}\cdot D_{0}^{2}=D_{4}\cdot D_{0}\cdot(D_{2}+D_{3})=1\] \[D_{0}\cdot D_{0}^{2}=\frac{1}{2}(D_{3}+D_{4})\cdot D_{0}^{2}=\frac{1}{4}+\frac{1}{2}=\frac{3}{4}.\]

We assume that \(\mathcal{E}=\mathcal{T}_{X}\). By [7, Corollary 2.2.17], the family of filtrations of the tangent sheaf \(\mathcal{T}_{X}\) of \(X\) is given by: \[E^{\rho}(j)=\left\{\begin{array}{ll}0&\text{if }j<-1\\ \operatorname{Span}(u_{\rho})&\text{if }j=-1\\ N\otimes_{\mathbb{Z}}\mathbb{C}&\text{if }j>-1\end{array}\right..\]

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\mathbb{Z}u_{0}+\mathbb{Z}u_{1}+\mathbb{Z}u_{2}\) & \(\mathbb{Z}u_{0}+\mathbb{Z}u_{2}+\mathbb{Z}u_{3}\) & \(\mathbb{Z}u_{0}+\mathbb{Z}u_{3}+\mathbb{Z}u_{4}\) & \(\mathbb{Z}u_{0}+\mathbb{Z}u_{4}+\mathbb{Z}u_{1}\) \\ \hline 2 & 2 & 1 & 1 \\ \hline \(\mathbb{Z}u_{2}+\mathbb{Z}u_{4}+\mathbb{Z}u_{1}\) & \(\mathbb{Z}u_{2}+\mathbb{Z}u_{4}+\mathbb{Z}u_{3}\) & \(\mathbb{Z}u_{1}+\mathbb{Z}u_{3}+\mathbb{Z}u_{2}\) & \(\mathbb{Z}u_{1}+\mathbb{Z}u_{3}+\mathbb{Z}u_{4}\) \\ \hline 1 & 1 & 1 & 1 \\ \hline \end{tabular} \end{table} Table 1. Sublattices and their index in \(\mathbb{Z}^{3}\)

Figure 1. Fans of varieties given in Section 4.3
In this case, the inequalities of Proposition 4.2 become \[\frac{1}{\dim(F)}\sum_{u_{\rho}\in F}\deg_{L}(D_{\rho})\stackrel{(\leq)}{<}\frac{1}{n}\sum_{\rho\in\Sigma(1)}\deg_{L}(D_{\rho}).\] Let \(L_{0}=D_{0}\) be an ample Cartier divisor on \(X_{0}\). We have \[\mu_{L_{0}}(\mathcal{E})=1=\frac{1}{\dim(F_{1})}\sum_{u_{\rho}\in F_{1}}\deg_{L_{0}}(D_{\rho})=\frac{1}{\dim(F_{2})}\sum_{u_{\rho}\in F_{2}}\deg_{L_{0}}(D_{\rho})\] where \(F_{1}=\operatorname{Span}(u_{4})\) and \(F_{2}=\operatorname{Span}(u_{0},u_{2},u_{4})\). As for any vector subspace \(F\subsetneq E\) such that \(F\notin\{F_{1},F_{2}\}\), one has \[\frac{1}{\dim(F)}\sum_{u_{\rho}\in F}\deg_{L_{0}}(D_{\rho})<1,\] we deduce that \(\mathcal{E}\) is semistable. We denote by \(\mathcal{F}\) the subsheaf of \(\mathcal{E}\) given by the family of filtrations \((F_{1},F_{1}\cap E^{\rho}(i))\). Let \[D_{+}=D_{2}+D_{4}.\] We have \[D_{+}\sim_{\operatorname{lin}}D_{0}+2D_{2}.\] So \[L_{-\varepsilon}=\phi^{*}L_{0}-\varepsilon D_{+}\sim_{\operatorname{lin}}(1-\varepsilon)D_{0}-2\varepsilon D_{2}=(1-\varepsilon)\left(D_{0}-\frac{2\varepsilon}{1-\varepsilon}D_{2}\right)\] and \[L_{\varepsilon}=(\phi^{\prime})^{*}L_{0}+\varepsilon D_{+}\sim_{\operatorname{lin}}(1+\varepsilon)D_{0}+2\varepsilon D_{2}=(1+\varepsilon)\left(D_{0}+\frac{2\varepsilon}{1+\varepsilon}D_{2}\right).\] According to Lemma 4.9, on \(X\) we have \[\deg_{L_{-\varepsilon}}(D_{1})=\frac{1}{2}-3\varepsilon+\frac{9}{2}\varepsilon^{2}\qquad\deg_{L_{-\varepsilon}}(D_{2})=\frac{1}{4}+\frac{1}{2}\varepsilon-\frac{15}{4}\varepsilon^{2}\] \[\deg_{L_{-\varepsilon}}(D_{4})=1-2\varepsilon-3\varepsilon^{2}\qquad\deg_{L_{-\varepsilon}}(D_{0})=\frac{3}{4}-\frac{5}{2}\varepsilon+\frac{3}{4}\varepsilon^{2}; \tag{4.5}\] for instance, \(\deg_{L_{-\varepsilon}}(D_{4})=D_{4}\cdot L_{-\varepsilon}^{2}=(1-\varepsilon)^{2}D_{4}\cdot D_{0}^{2}-4\varepsilon(1-\varepsilon)D_{4}\cdot(D_{0}\cdot D_{2})+4\varepsilon^{2}D_{4}\cdot D_{2}^{2}=1-2\varepsilon-3\varepsilon^{2}\). On \(X^{\prime}\) we have \[\deg_{L_{\varepsilon}}(D_{1})=\frac{1}{2}+3\varepsilon+\frac{1}{2}\varepsilon^{2}\qquad\deg_{L_{\varepsilon}}(D_{2})=\frac{1}{4}-\frac{1}{2}\varepsilon+\frac{1}{4}\varepsilon^{2}\] \[\deg_{L_{\varepsilon}}(D_{4})=1+2\varepsilon+\varepsilon^{2}\qquad\deg_{L_{\varepsilon}}(D_{0})=\frac{3}{4}+\frac{5}{2}\varepsilon+\frac{3}{4}\varepsilon^{2}. \tag{4.6}\] By Formulas (4.5), we get \[\mu_{L_{-\varepsilon}}(\mathcal{E})=1-\frac{10}{3}\varepsilon+\varepsilon^{2}\quad\text{and}\quad\mu_{L_{-\varepsilon}}(\mathcal{F})=1-2\varepsilon-3\varepsilon^{2}.\] Hence, there is \(\varepsilon_{0}\) such that for any \(\varepsilon\in(0,\varepsilon_{0})\cap\mathbb{Q}\), the tangent sheaf \(\mathcal{T}_{X}\) is \(L_{-\varepsilon}\)-unstable. By Formulas (4.6), we have \[\mu_{L_{\varepsilon}}(\mathcal{E})=1+\frac{10}{3}\varepsilon+\varepsilon^{2}\quad\text{and}\quad\mu_{L_{\varepsilon}}(\mathcal{F})=1+2\varepsilon+\varepsilon^{2}.\] Hence, there is \(\varepsilon_{0}\) such that for any \(\varepsilon\in(0,\varepsilon_{0})\cap\mathbb{Q}\), the tangent sheaf \(\mathcal{T}_{X^{\prime}}\) is \(L_{\varepsilon}\)-stable.

## 5. Flips and stability for logarithmic subcategories

Consider a toric flip as in Section 2. Fix \(\Delta\subset\Sigma(1)\), and introduce the divisor \[D:=\sum_{\rho\in\Delta}D_{\rho}\] seen as a divisor on \(X\), \(X^{\prime}\) and \(X_{0}\). We consider the equivalent categories \(\mathfrak{Ref}^{T}(X,D)\) and \(\mathfrak{Ref}^{T}(X^{\prime},D)\) as in Section 3.3.
We will say that \[\psi_{*}:\mathfrak{Ref}^{T}(X,D)\to\mathfrak{Ref}^{T}(X^{\prime},D)\] _preserves polystability_ for the pair of polarisations \((\alpha,\alpha^{\prime})\in\operatorname{Pic}(X)_{\mathbb{Q}}\times\operatorname{Pic}(X^{\prime})_{\mathbb{Q}}\) if for any \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X,D))\), \(\mathcal{E}\) is \(\alpha\)-polystable if and only if \(\psi_{*}\mathcal{E}\) is \(\alpha^{\prime}\)-polystable.

**Theorem 5.1**.: _The following assertions are equivalent, for a pair of ample classes \((\alpha,\alpha^{\prime})\in\operatorname{Pic}(X)_{\mathbb{Q}}\times\operatorname{Pic}(X^{\prime})_{\mathbb{Q}}\)._

1. _The functor_ \(\psi_{*}:\mathfrak{Ref}^{T}(X,D)\to\mathfrak{Ref}^{T}(X^{\prime},D)\) _preserves polystability for_ \((\alpha,\alpha^{\prime})\)_._
2. _There is_ \(c\in\mathbb{Q}_{>0}\) _such that for all_ \(\mathcal{E}\in\mathfrak{Ref}^{T}(X,D)\)_,_ \(\mu_{\alpha}(\mathcal{E})=c\;\mu_{\alpha^{\prime}}(\psi_{*}\mathcal{E})\)_._
3. _There is_ \(c\in\mathbb{Q}_{>0}\) _such that for all_ \(\rho\notin\Delta\)_,_ \(\deg_{\alpha}D_{\rho}=c\;\deg_{\alpha^{\prime}}D_{\rho}^{\prime}\)_._

Proof.: Recall the formula \[\mu_{\alpha}(\mathcal{E})=-\frac{1}{\operatorname{rank}(\mathcal{E})}\sum_{\rho\in\Sigma(1)}\iota_{\rho}(\mathcal{E})\;\deg_{\alpha}(D_{\rho})\] for the slope, with \[\iota_{\rho}(\mathcal{E}):=\sum_{i\in\mathbb{Z}}i\left(\dim(E^{\rho}(i))-\dim(E^{\rho}(i-1))\right),\] where \((E,E^{\rho}(\bullet))_{\rho\in\Sigma(1)}\) stands for the family of filtrations associated to \(\mathcal{E}\). Then, by definition of the logarithmic category \(\mathfrak{Ref}^{T}(X,D)\), for any \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X,D))\), and any \(\rho\in\Delta\), we have \[\iota_{\rho}(\mathcal{E})=0,\] so the slope reads \[\mu_{\alpha}(\mathcal{E})=-\frac{1}{\operatorname{rank}(\mathcal{E})}\sum_{\rho\notin\Delta}\iota_{\rho}(\mathcal{E})\;\deg_{\alpha}(D_{\rho}).\] Note also that by construction, for any \(\rho\), \[\iota_{\rho}(\mathcal{E})=\iota_{\rho}(\psi_{*}\mathcal{E}).\] Then, \((iii)\Rightarrow(ii)\Rightarrow(i)\) is straightforward. To prove \((i)\Rightarrow(iii)\), we argue as in [4, proof of Proposition 4.8], and consider for any pair \((\rho_{1},\rho_{2})\in\Sigma(1)^{2}\) the polystable toric sheaf \[\mathcal{E}=\mathcal{O}_{X}(d\deg_{\alpha}(D_{\rho_{2}})D_{\rho_{1}})\oplus\mathcal{O}_{X}(d\deg_{\alpha}(D_{\rho_{1}})D_{\rho_{2}}),\] where \(d\) is the common denominator of \(\deg_{\alpha}(D_{\rho_{2}})\) and \(\deg_{\alpha}(D_{\rho_{1}})\). Its image by \(\psi_{*}\) is \[\psi_{*}\mathcal{E}=\mathcal{O}_{X^{\prime}}(d\deg_{\alpha}(D_{\rho_{2}})D^{\prime}_{\rho_{1}})\oplus\mathcal{O}_{X^{\prime}}(d\deg_{\alpha}(D_{\rho_{1}})D^{\prime}_{\rho_{2}}).\] As \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X,D))\) (see e.g. [7, Example 2.2.13] for the family of filtrations of rank one toric sheaves), if \(\psi\) preserves polystability, we must have \[\deg_{\alpha}(D_{\rho_{2}})\deg_{\alpha^{\prime}}(D^{\prime}_{\rho_{1}})=\deg_{\alpha}(D_{\rho_{1}})\deg_{\alpha^{\prime}}(D^{\prime}_{\rho_{2}}).\] The result follows.

**Remark 5.2**.: The reason for considering logarithmic subcategories is the following.
If one considers the full category \(\mathfrak{Ref}^{T}(X)\) and assumes that \(\psi_{*}\) preserves polystability for all \(\mathcal{E}\in\operatorname{Obj}(\mathfrak{Ref}^{T}(X))\), then by \((iii)\) of Theorem 5.1, for any \(\rho\in\Sigma(1)\), one must have \[\deg_{\alpha}D_{\rho}=c\,\deg_{\alpha^{\prime}}D^{\prime}_{\rho}.\] Up to scale, we can assume \(c=1\). But then, a result due to Minkowski ([22, p. 455]), translated into the toric setting in [4, Proposition 5.3 and Corollary 5.4], implies that the polytope associated to \((X,\alpha)\) equals the polytope associated to \((X^{\prime},\alpha^{\prime})\), which is absurd, as \(X\) and \(X^{\prime}\) are not isomorphic.
2306.17804
Solving Edge Clique Cover Exactly via Synergistic Data Reduction
The edge clique cover (ECC) problem -- where the goal is to find a minimum cardinality set of cliques that cover all the edges of a graph -- is a classic NP-hard problem that has received much attention from both the theoretical and experimental algorithms communities. While small sparse graphs can be solved exactly via the branch-and-reduce algorithm of Gramm et al. [JEA 2009], larger instances can currently only be solved inexactly using heuristics with unknown overall solution quality. We revisit computing minimum ECCs exactly in practice by combining data reduction for both the ECC \emph{and} vertex clique cover (VCC) problems. We do so by modifying the polynomial-time reduction of Kou et al. [Commun. ACM 1978] to transform a reduced ECC instance to a VCC instance; alternatively, we show it is possible to ``lift'' some VCC reductions to the ECC problem. Our experiments show that combining data reduction for both problems (which we call \emph{synergistic data reduction}) enables finding exact minimum ECCs orders of magnitude faster than the technique of Gramm et al., and allows solving large sparse graphs on up to millions of vertices and edges that have never before been solved. With these new exact solutions, we evaluate the quality of recent heuristic algorithms on large instances for the first time. The most recent of these, \textsf{EO-ECC} by Abdullah et al. [ICCS 2022], solves 8 of the 27 instances for which we have exact solutions. It is our hope that our strategy rallies researchers to seek improved algorithms for the ECC problem.
Anthony Hevia, Benjamin Kallus, Summer McClintic, Samantha Reisner, Darren Strash, Johnathan Wilson
2023-06-30T17:06:04Z
http://arxiv.org/abs/2306.17804v2
# Solving Edge Clique Cover Exactly via Synergistic Data Reduction

###### Abstract

The edge clique cover (ECC) problem--where the goal is to find a minimum cardinality set of cliques that cover all the edges of a graph--is a classic NP-hard problem that has received much attention from both the theoretical and experimental algorithms communities. While small sparse graphs can be solved exactly via the branch-and-reduce algorithm of Gramm et al. [1], larger instances can currently only be solved inexactly using heuristics with unknown overall solution quality. We revisit computing minimum ECCs exactly in practice by combining data reduction for both the ECC _and_ vertex clique cover (VCC) problems. We do so by modifying the polynomial-time reduction of Kou et al. [12] to transform a reduced ECC instance to a VCC instance; alternatively, we show it is possible to "lift" some VCC reductions to the ECC problem. Our experiments show that combining data reduction for both problems (which we call _synergistic data reduction_) enables finding exact minimum ECCs orders of magnitude faster than the technique of Gramm et al., and allows solving large sparse graphs on up to millions of vertices and edges that have never before been solved. With these new exact solutions, we evaluate the quality of recent heuristic algorithms on large instances for the first time. The most recent of these, EO-ECC by Abdullah et al. [13], solves 8 of the 27 instances for which we have exact solutions. It is our hope that our strategy rallies researchers to seek improved algorithms for the ECC problem.

Keywords: Edge clique cover, Vertex clique cover, Data reduction, Degeneracy

Footnote †: Corresponding author.

## 1 Introduction

In the _edge clique cover (ECC) problem_, also called the _clique cover_ problem, we are given an unweighted, undirected, simple graph \(G=(V,E)\) and asked to find a minimum cardinality set of cliques that cover the edges of \(G\). The ECC problem is NP-hard; however, its decision variant did not appear in Karp's original list of NP-complete problems [23], though the _vertex clique cover (VCC) problem_ did. Compared to the VCC problem, the ECC problem has received the lion's share of attention from researchers, in part because it has many applications. For instance, edge clique covers can be used to succinctly represent constraints for integer program solvers [5] and to detect communities in networks [12]. Data reduction rules, which allow one to transform an input instance to a smaller equivalent instance of the same problem, are powerful tools for solving NP-hard problems in practice [4, 26]. Of particular interest in the field of parameterized algorithms is whether the repeated application of data reduction rules produces a _kernel_--which is a problem instance that has size bounded by a function \(O(f(k))\) of some parameter \(k\) of the input. Gramm et al. [19] show that repeated application of four simple reduction rules produces a kernel of size \(2^{k}\), where the parameter \(k\) is the number of cliques in the cover. When intermixed with branch-and-bound (a so-called _branch-and-reduce_ algorithm), these reduction rules enable solving sparse graphs of up to 10,000 vertices quickly in practice. Since their seminal work, no progress has been made on solving larger instances exactly. Indeed, the prospect of doing so is grim since polynomial kernels are unlikely to exist for the ECC problem when parameterized on the solution size [13].
Although researchers have found further FPT algorithms (and smaller kernels) with other parameters [6, 32], these algorithms are still only able to solve relatively small instances in practice. The outlook for the VCC problem is even worse in theory: it is unlikely to have any problem kernel when parameterized on the number of cliques \(k\) in the cover, as it is already NP-hard for \(k=3\) (since it is equivalent to 3-coloring the complement graph). However, recent data reductions for the _VCC_ problem have been shown to significantly accelerate computing minimum VCCs exactly in practice. Strash and Thompson [31] introduce a suite of reduction rules and show that data reduction can solve real-world sparse graphs with up to millions of vertices in seconds. ### Our Results We show that combining VCC and ECC data reductions enables the ECC problem to be solved exactly on large instances not previously solvable by Gramm et al. [19]. We do so by modifying the polynomial-time transformation of Kou et al. [25] to transform a reduced ECC instance to a VCC instance, but also show that some VCC reductions can be "lifted" to ECC reductions. Their combined reduction power (which we call _synergistic data reduction_) reduces an ECC instance significantly more than Gramm et al.'s reductions alone, enabling us to exactly solve graphs with millions of vertices and edges. With these exact results, we objectively evaluate the quality of heuristic algorithms recently introduced in the literature. On instances not solvable exactly with our method, we give upper and lower bounds for use by future researchers. ## 2 Related Work We now briefly review the relevant previous work on the ECC and VCC problems, as well as practical data reduction in related problems. ### Edge Clique Cover The goal of the edge clique cover (ECC) problem is to cover the edges of the graph \(G\) with a minimum number of cliques, denoted \(\theta_{E}(G)\). That is, to find a set of cliques \(\mathcal{C}=\{C_{1},C_{2},\ldots,C_{k}\}\) such that each edge is in at least one clique in \(\mathcal{C}\) and \(k=\theta_{E}(G)\). Although closely related to the VCC problem (to cover _vertices_ with a minimum number of cliques, denoted \(\theta(G)\)), Brigham and Dutton [8] showed that \(\theta(G)\leq\theta_{E}(G)\), and that these cover numbers can differ significantly: \(\theta_{E}(G)\) can be as large as \(\theta(G)(n-\theta(G))\). Gramm et al. [19] introduced four data reductions for the ECC problem, which they show can solve real-world sparse graphs of hundreds of vertices, as well as synthetic instances on up to 10K vertices in practice, when interleaved with branch and bound. Furthermore, they showed that their data reductions produce a kernel of size \(2^{k}\), where \(k\) is the number of cliques. Cygan et al. [13] showed that it is unlikely that a polynomial-size kernel exists when parameterized by the number of cliques in the cover, as otherwise the polynomial hierarchy collapses to its third level. However, Blanchette et al. [6] gave a linear-time algorithm having running time \(O(2^{\binom{k}{2}}n)\) where \(k\) is the treewidth of the graph. In practice, their algorithm is effective on graphs with hundreds of vertices and small treewidth. For larger graphs, heuristic methods are used to compute inexact ECCs [12, 2, 1] in practice. No heuristic algorithm performs best on all instances, and their overall quality is unclear. 
### Vertex Clique Cover

The vertex clique cover (VCC) problem is NP-hard and closely related to the maximum independent set and graph coloring problems. The size of a minimum VCC (also called the clique cover number) \(\theta(G)\) is lower bounded by the size of a maximum independent set (the independence number \(\alpha(G)\)) and equivalent to the chromatic number of the complement graph, \(\chi(\overline{G})\). There is a rich line of research on the graph coloring problem, which seeks to compute the chromatic number; many of the theoretical results for the VCC problem come via the graph coloring problem. The fastest exact exponential-space algorithm for computing the chromatic number on an \(n\)-vertex graph has time \(O^{*}(2^{n})\) (where \(O^{*}\) hides polynomial factors) using a generalization of the inclusion-exclusion principle [24], and in polynomial space the problem can be solved in time \(O(2.2356^{n})\) [18]. Furthermore, there exists no polynomial-time algorithm with approximation ratio better than \(n^{1-\epsilon}\) for \(\epsilon>0\) unless \(P=NP\) [34]. In terms of data reduction, we note that it is unlikely that a kernel exists when parameterized on the (vertex) clique cover number. Deciding if a cover with even 3 cliques exists is NP-complete (since 3-coloring the complement is NP-hard). A polynomial kernel would have size \(O(1)\) and could be computed in polynomial time. Solving the kernel with brute-force computation would solve the VCC problem in polynomial time, implying \(P=NP\). However, in practice, the VCC problem can be solved on large, sparse real-world graphs using the data reductions by Strash and Thompson [31].

### Data Reduction in Practice for Related Problems

Other classical NP-hard problems have large suites of data reductions that are effective in practice, including minimum vertex cover [4, 15], maximum cut [16], and cluster editing [7]. Popular data reductions include variations of simplicial vertex removal, degree-2 folding, twin, domination, unconfined, packing, crown, and linear-programming-relaxation-based reductions [4]. Even the simplest reductions can be highly effective when combined with other techniques [10, 30]. Data reductions are most effective in sparse graphs, which are the graphs that we consider here. Finally, similar to what we propose here, other NP-hard problems are solved by first applying a problem transformation. In particular, algorithms for the minimum dominating set problem first transform the problem to an instance of the set cover problem [33].

## 3 Preliminaries

We consider a simple finite undirected graph \(G=(V,E)\) with vertex set \(V\) and edge set \(E\subseteq\{\{u,v\}\mid u,v\in V\}\). For brevity, we denote by \(n=|V|\) and \(m=|E|\) the number of vertices and edges in the graph, respectively. When more specificity is needed, we denote the vertex and edge set of a graph \(G\) by \(V(G)\) and \(E(G)\) respectively. We say two vertices \(u,v\in V\) are _adjacent_ (or _neighbors_) when \(\{u,v\}\in E\). The _open neighborhood_ of a vertex \(v\in V\) is the set of its neighbors \(N(v):=\{u\mid\{u,v\}\in E\}\), and the _degree_ of \(v\) is \(|N(v)|\). We further define the _closed neighborhood_ of a vertex \(v\in V\) to be \(N[v]:=N(v)\cup\{v\}\). Extending these definitions, the open neighborhood of a set \(A\subseteq V\) is \(N(A):=\bigcup_{v\in A}N(v)\setminus A\) and the closed neighborhood of \(A\) is \(N[A]:=\bigcup_{v\in A}N[v]\).
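To fix a concrete representation for a few illustrative sketches later in this paper, we render these set-based definitions in Python (the solvers evaluated in Section 6 are written in C++ and OCaml; this snippet is purely illustrative, and the helper names are our own):

```python
from itertools import combinations

# A simple undirected graph as an adjacency-set dictionary:
# adj[v] is the set of neighbors N(v) of vertex v.

def open_neighborhood(adj, A):
    """N(A): all neighbors of vertices in A, excluding A itself."""
    return set().union(*(adj[v] for v in A)) - set(A)

def closed_neighborhood(adj, A):
    """N[A]: A together with all neighbors of its vertices."""
    return set(A).union(*(adj[v] for v in A))

def is_clique(adj, C):
    """True iff every pair of distinct vertices of C is adjacent."""
    return all(v in adj[u] for u, v in combinations(C, 2))
```

For example, with `adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}`, `is_clique(adj, {1, 2, 3})` returns `True` and `open_neighborhood(adj, {1})` returns `{2, 3}`.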
The subgraph of \(G\) induced by a vertex set \(V^{\prime}\subseteq V\), denoted \(G[V^{\prime}]\), has vertex set \(V^{\prime}\) and edge set \(E^{\prime}=\{\{u,v\}\in E\mid u,v\in V^{\prime}\}\). The _degeneracy_ \(d\) of a graph \(G\) is the smallest value such that every nonempty subgraph of \(G\) has a vertex of degree at most \(d\) [27]. It is possible to order the vertices of a graph \(G\) in time \(O(n+m)\) so that every vertex has \(d\) or fewer neighbors later in the ordering; such an ordering is called a _degeneracy_ ordering [14]. A vertex set \(C\subseteq V\) is called a _clique_ if, for each pair of distinct vertices \(u,v\in C\), \(\{u,v\}\in E\). A set of cliques \(\mathcal{C}\) is called an _edge clique cover (ECC)_ (or just a _clique cover_) of \(G\) if for every edge \(\{u,v\}\in E\) there exists at least one \(C\in\mathcal{C}\) such that \(\{u,v\}\subseteq C\). That is, there is some clique in \(\mathcal{C}\) that _covers_ \(\{u,v\}\). The set of cliques \(\mathcal{C}\) is said to cover the graph \(G\). An ECC of minimum cardinality is called a _minimum ECC_, and its cardinality is denoted by \(\theta_{E}(G)\), called the edge clique cover number. Similarly, in a _vertex clique cover (VCC)_, every _vertex_ \(v\in V\) is covered by some clique. The cardinality of a minimum VCC is the _clique cover number_, denoted by \(\theta(G)\).

## 4 Existing Tools Discussion

In this section, we discuss basic tools that we will use to solve the ECC problem, together with insights into their behavior on sparse graphs. We begin by describing the existing ECC data reductions by Gramm et al. [19]. We then discuss how to convert an input ECC instance to an equivalent VCC instance using the technique of Kou et al. [25]. We will extend these tools to develop our full algorithm combining ECC and VCC reductions in the next section.

### ECC Reduction Rules

Gramm et al. [19] introduce four data reduction rules that either cover edges by a clique known to be in a minimum cardinality ECC or add edges to the input graph \(G\). Once all of a vertex \(v\)'s incident edges are covered, \(v\) can be removed from the graph. With each edge \(\{u,v\}\), Gramm et al. store the common neighbors in \(G\), denoted by \(N_{\{u,v\}}\), as well as a count \(c_{\{u,v\}}=|E(G[N_{\{u,v\}}])|\) of the edges between common neighbors. These values are updated in ECC Reduction 1, and are used in ECC Reduction 2. Throughout the application of data reductions, vertices are removed from \(G\) and edges are covered. Figure 1 illustrates an example of the reductions. Let the edge set \(E^{\prime}\subseteq E\) be the set of uncovered edges (by extension, \(E\setminus E^{\prime}\) are the covered edges). The graph \(G\) only changes when a vertex is removed. We note that the data reductions by Gramm et al. [19] are particularly effective for sparse graphs; however, the original data reductions were not written with efficiency in mind. Although these reductions have (very) slow theoretical running times, we offer insights as to why their reductions are faster in practice than indicated by the theoretical running time from Gramm et al. [19].

**ECC Reduction 1** ([19]). Let \(v\in V\) be a vertex whose incident edges are all covered (i.e., in \(E\setminus E^{\prime}\)). Then remove \(v\) from the graph \(G\), along with its incident edges, and update values \(c_{\{w,x\}}\) and \(N_{\{w,x\}}\) for all uncovered edges \(\{w,x\}\in E^{\prime}\) whose endpoints are both adjacent to \(v\), i.e., \(\{w,x\}\subseteq N(v)\).

As noted by Gramm et al.
[19], this step can be applied to all vertices in running time \(O(n^{2}m)\) by iterating over each vertex \(v\) and updating \(N_{\{u,w\}}\) for all edges \(\{u,w\}\in E^{\prime}\) whose endpoints are adjacent to \(v\). However, in sparse graphs the maximum degree in \(G\), denoted \(\Delta\), is significantly smaller than \(n\). Each edge \(\{u,w\}\) has its set \(N_{\{u,w\}}\) updated at most \(\Delta\) times, taking \(O(\Delta)\) time to update each time, giving a more reasonable running time of \(O(\Delta^{2}m)\). We note that with adjustments, this can be run faster by enumerating all triangles in \(G\) in time \(O(dm)\) using the triangle listing algorithm by Chiba and Nishizeki [11] and updating \(N_{\{u,w\}}\) for edge \(\{u,w\}\) in each triangle; however, this is a different implementation than that done by Gramm et al. [19] and not our focus here.

**ECC Reduction 2** ([19]). Let edge \(\{u,v\}\in E^{\prime}\) be an uncovered edge such that \(c_{\{u,v\}}=\binom{|N_{\{u,v\}}|}{2}\) (i.e., the edge is in exactly one maximal clique in \(G^{\prime}\)). Then \(C=N_{\{u,v\}}\cup\{u,v\}\) is a maximal clique of \(G\) in some minimum ECC. Add the clique \(C\) to the clique cover, and cover any uncovered edges in \(C\) in \(G\).

As noted by Gramm et al. [19], ECC Reduction 2 can be implemented in time \(O(n^{2}m)\) by iterating over each edge \(\{u,v\}\in E^{\prime}\), checking if \(c_{\{u,v\}}=\binom{|N_{\{u,v\}}|}{2}\) in \(O(1)\) time, and covering the edges of \(\{u,v\}\)'s clique in \(O(n^{2})\) time. However, when run on sparse graphs, which tend to have low degeneracy \(d\) [14], this rule is much faster. Graphs with degeneracy \(d\) have cliques of at most \(d+1\) vertices; therefore, the reduction is only triggered when \(|N_{\{u,v\}}|<d\). Hence, in practice, we should observe the much faster running time of \(O(d^{2}m)\). Gramm et al. introduce two more ECC reductions; however, they are more complex and we choose not to run them here. Experiments by Gramm et al. show that these reductions are very slow in practice, and only improve the search tree size by a constant factor when incorporated in branch and reduce [19]. We invite the interested reader to see ECC Reductions 3 and 4 in Appendix A.

Figure 1: Illustrating Gramm et al. [19]'s data reductions: (left) edge \(\{v,x\}\) is in exactly one maximal clique \(C_{1}\), triggering ECC Reduction 2 and covering edges \(\{v,x\}\), \(\{v,z\}\), and \(\{x,z\}\) (middle). Vertex \(v\) can then be removed with ECC Reduction 1. The remaining triangle is covered by clique \(C_{2}\) by applying ECC Reduction 2 to either \(\{x,y\}\) or \(\{y,z\}\).

### Transforming an ECC Instance to a VCC Instance

Kou et al. [25] showed that the ECC problem is NP-hard via a polynomial-time reduction from the VCC problem. Furthermore, they gave a polynomial-time reduction _to_ the VCC problem, which we use as the basis of our transformation. We describe their transformation and briefly justify why it works. Given an input graph \(G=(V,E)\) for the ECC problem, Kou et al. [25] transform \(G\) to a new graph \(G_{VCC}=(V_{VCC},E_{VCC})\) that is an equivalent VCC instance as follows. For each edge \(\{x,y\}\in E\), create a new vertex \(v_{xy}\in V_{VCC}\), then add an edge \(\{v_{xy},v_{wz}\}\) to \(E_{VCC}\) if and only if there exists a clique \(C\) in \(G\) containing both \(\{x,y\}\) and \(\{w,z\}\). Now, for any given subset \(C\subset V_{VCC}\), \(C\) is a clique in \(G_{VCC}\) iff its vertices' corresponding edges in \(E\) also induce a clique in \(G\).
Hence, a minimum cardinality VCC in \(G_{VCC}\) corresponds to a minimum cardinality ECC in \(G\). (See Figure 2.)

Figure 2: Example graph \(G\) and its transformed graph \(G_{VCC}\), with minimum clique covers.

To determine if two edges are in a clique together in \(G\), Kou et al. [25] make the following observation:

**Observation 1** ([25]).: _Two distinct edges \(\{x,y\},\{w,z\}\) are in a clique together in \(G\) iff \(\{x,y\}\) and \(\{w,z\}\) are incident and \(\{x,y\}\cup\{w,z\}\) induce a triangle, or \(\{x,y\}\) and \(\{w,z\}\) are not incident and \(\{w,x,y,z\}\) form a 4-clique._

However, there is a clear issue when using this transformation: how large can \(G_{VCC}\) be? We briefly discuss its size and sparsity.

#### The Effect of Transformation on Graph Size and Sparsity

In the worst case, the size of \(G_{VCC}\) is a quadratic factor larger than \(G\). Indeed, if the graph \(G\) is itself the complete graph \(K_{n}\), on \(n\) vertices and \(\Theta(n^{2})\) edges, then the transformed graph is the complete graph \(K_{n(n-1)/2}\) having \(\Theta(n^{2})\) nodes and \(\Theta(n^{4})\) edges. However, we show that the size of the graph only increases by a factor of \(O(d^{2})\), where \(d\) is the degeneracy of the graph. Real-world sparse graphs have low degeneracy [14], and thus this is a significant improvement over the worst case.

**Theorem 2**.: _Let the degeneracy of \(G=(V,E)\) be \(d\). Then \(|V_{VCC}|=m\leq dn\) and \(|E_{VCC}|=O(d^{2}m)\)._

Proof.: By construction \(|V_{VCC}|=m\); hence, to bound \(|V_{VCC}|\), we bound the number of edges in \(G\). In a degeneracy ordering of the graph, each vertex has at most \(d\) later neighbors in the ordering. Therefore, \(|V_{VCC}|=m\leq dn\). To bound \(|E_{VCC}|\), we compute an upper bound on the number of triangles and 4-cliques in \(G\). Following Observation 1, each edge in \(E_{VCC}\) corresponds to a pair of edges in \(E\) contained in a triangle or a pair of non-incident edges in a 4-clique. Each triangle has 3 edges, and each 4-clique has 3 pairs of non-incident edges. Therefore, an asymptotic upper bound of the number of triangles and 4-cliques in \(G\) gives an upper bound for \(|E_{VCC}|\). In any triangle, some vertex must come first in a degeneracy ordering, and can form a triangle with at most \(\binom{d}{2}\) pairs of its at most \(d\) later neighbors. Therefore each vertex is in \(O(d^{2})\) triangles with its later neighbors and, summing up over all vertices, contributes at most \(O(d^{2}n)\) edges to \(E_{VCC}\). Similarly, for each edge \(\{u,v\}\) we count the number of 4-cliques it is in with (non-incident) edges that come lexicographically after it in the degeneracy ordering. The number of triangles the second vertex \(v\) can be in with later neighbors is \(\binom{d}{2}\) and hence the edge is in at most \(O(d^{2})\) 4-cliques with \(v\)'s at most \(d\) later neighbors, giving at most \(O(d^{2}m)\) 4-cliques total. Thus, we conclude that \(|E_{VCC}|=O(d^{2}m)\).

Thus, \(G_{VCC}\) has size at most \(O(d^{2}m)\), a factor \(O(d^{2})\) larger than \(G\). As a consequence, the average degree of the graph may increase, but by no more than a factor \(d\): whereas \(G\) has average degree \(2|E|/|V|=O(dn)/n=O(d)\), graph \(G_{VCC}\) has average degree \(2|E_{VCC}|/|V_{VCC}|=O(d^{2}m)/m=O(d^{2})\). Therefore, for input graphs with small degeneracy, the transformed graph is expected to be sparse as well. However, even if the degeneracy \(d\) is small, the graph \(G_{VCC}\) may be very large in practice.
Hence, to use this transformation, we require techniques to keep the graph size manageable.

## 5 Synergistic Reductions: Applying ECC and VCC Reductions

We propose to handle the blow-up of the transformation of Kou et al. [25] by applying both ECC and VCC reductions to the problem, which we call _synergistic_ data reduction. We first show how to adjust the transformation to work on reduced ECC instances, after which we can apply VCC reductions. We also explore the possibility of "lifting" VCC reductions to ECC reductions.

### Transforming a Partially-Covered ECC Problem Kernel

Recall that the data reductions from Gramm et al. [19] result in a graph in which some edges are covered, which is not supported by the transformation of Kou et al. [25]. While it is tempting to modify the transformation to operate on _only_ the uncovered edges \(E^{\prime}\), this does not necessarily result in an equivalent instance, as already-covered edges may still be needed to compute a minimum number of cliques covering \(E^{\prime}\). For instance, in Figure 1, covering edges \(\{x,y\}\) and \(\{y,z\}\) with the single clique \(C_{2}\) uses the already-covered edge \(\{x,z\}\). One way to correct for this is to first perform the transformation on the entire graph \(G=(V,E)\), and then take the subgraph induced by the vertices corresponding to uncovered edges in \(E^{\prime}\). However, this strategy is slow when the edge set \(E\) is significantly larger than \(E^{\prime}\). We show that it is possible to perform the transformation without making vertices for all edges in \(E\). Note that since all that remains is to cover the edges in \(E^{\prime}\), we now focus on covering all of \(E^{\prime}\) using a minimum number of cliques in \(G\). Taken together with already-chosen cliques from ECC reductions, this gives us a covering of all of \(G\). (See Figure 3.) We transform \(G\) to a graph \(G^{\prime}_{VCC}=(V^{\prime}_{VCC},E^{\prime}_{VCC})\), where \(V^{\prime}_{VCC}=\{v_{xy}\mid\{x,y\}\in E^{\prime}\}\) and \(E^{\prime}_{VCC}=\{\{v_{xy},v_{wz}\}\mid\{x,y\},\{w,z\}\in E^{\prime}\text{ and }\{x,y\}\cup\{w,z\}\text{ is a clique in }G\}\). This transformation preserves cliques in \(G\) that cover edges in \(E^{\prime}\), which we capture with the following observation.

**Observation 3**.: _If \(C^{\prime}\) is a clique in \(G^{\prime}_{VCC}\) then \(C=\cup_{v_{xy}\in C^{\prime}}\{x,y\}\) is a clique covering \(|C^{\prime}|\) edges of \(E^{\prime}\) in \(G\)._

Furthermore, the transformation gives a correspondence between cliques covering \(E^{\prime}\) in \(G\) and VCCs in \(G^{\prime}_{VCC}\).

**Theorem 4**.: _If \(\mathcal{C}^{\prime}\) is a VCC in \(G^{\prime}_{VCC}\) then \(\mathcal{C}=\{\cup_{v_{xy}\in C^{\prime}}\{x,y\}\mid C^{\prime}\in\mathcal{C}^{\prime}\}\) is a set of cliques covering \(E^{\prime}\) in \(G\)._

Proof.: By Observation 3, every clique \(C^{\prime}\in\mathcal{C}^{\prime}\) in \(G^{\prime}_{VCC}\) corresponds to a clique \(C=\cup_{v_{xy}\in C^{\prime}}\{x,y\}\) in \(G\) that covers its corresponding edges of \(E^{\prime}\). Hence, a VCC that covers all of \(V^{\prime}_{VCC}\) corresponds to a collection of cliques covering all edges of \(E^{\prime}\) in \(G\).

Note that in Theorem 4, \(|\mathcal{C}|=|\mathcal{C}^{\prime}|\). Hence, a minimum VCC in \(G^{\prime}_{VCC}\) corresponds to a minimum-cardinality set of cliques covering \(E^{\prime}\) in \(G\).
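To make the construction concrete, the sketch below builds \(G^{\prime}_{VCC}\) from the adjacency-set representation of Section 3; passing all of \(E\) as the uncovered set recovers the original transformation of Kou et al. [25]. It compares edge pairs naively, in time quadratic in \(|E^{\prime}|\), whereas an implementation in the spirit of Theorem 2 would enumerate triangles and 4-cliques instead; the function names are ours and are not drawn from any of the cited implementations.

```python
from itertools import combinations

def edges_share_clique(adj, e1, e2):
    # Two edges of G lie in a common clique iff every vertex of e1 is
    # adjacent to every vertex of e2, skipping shared vertices: for
    # incident edges this asks for the closing triangle edge, and for
    # disjoint edges for the four cross edges of a 4-clique (Observation 1).
    return all(u == v or v in adj[u] for u in e1 for v in e2)

def transform(adj, uncovered):
    """Build G'_VCC: one node per uncovered edge of G, joined whenever
    the two edges lie in a common clique of the *full* graph G."""
    nodes = [frozenset(e) for e in uncovered]
    vcc_adj = {e: set() for e in nodes}
    for e1, e2 in combinations(nodes, 2):
        if edges_share_clique(adj, e1, e2):
            vcc_adj[e1].add(e2)
            vcc_adj[e2].add(e1)
    return vcc_adj
```

By Observation 3, a clique \(C^{\prime}\) of the returned graph flattens back to the clique `set().union(*C_prime)` of \(G\), which covers exactly the uncovered edges collected in \(C^{\prime}\).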
This transformation gives us a technique for computing a minimum ECC: First apply the data reductions of Gramm et al., then compute \(G^{\prime}_{VCC}\) and use VCC reductions combined with any VCC solver to compute a minimum VCC in \(G^{\prime}_{VCC}\), giving us cliques covering \(E^{\prime}\) in \(G\) and, ultimately, an entire ECC of \(G\). While applying VCC reductions to \(G^{\prime}_{VCC}\) may produce a smaller instance, these data reductions are not actually producing a smaller _ECC instance_. However, as we now show, we can also "lift" some VCC reductions to ECC reductions, by keeping the equivalence between cliques in the transformation in mind.

### Lifting VCC Reduction Rules to ECC

Unlike the ECC problem, the VCC problem has many data reduction rules [31]. These include reductions based on simplicial vertices, dominance, twins, degree-2 folding, and crowns. We briefly discuss two classes of VCC reductions: clique-removal-based rules and folding-based rules. We place them in the context of the ECC problem, and discuss whether it is viable to "lift" them to the ECC problem, and if the graph transformation is needed. By combining existing ECC reductions with VCC reductions, we aim to reduce ECC instances even further.

#### Clique-Removal-Based VCC Reductions

We call a VCC reduction that removes a set of cliques from the graph a _clique-removal-based_ rule. Four VCC reductions (simplicial vertex, dominance, twin removal, and crown) are clique-removal-based rules [31]. Such rules can be easily transformed into an ECC reduction: By the equivalence between cliques in the problem transformation, stated in Observation 3, removing a clique in \(G^{\prime}_{VCC}\) is equivalent to covering its corresponding clique in \(G\). Thus, to apply clique-removal-based VCC reductions directly to the ECC problem, we can compute \(G^{\prime}_{VCC}\), apply any clique-removal-based rules, and then cover these cliques in \(G\). We capture this with the following theorem: _Any clique-removal-based VCC reduction can be lifted to an ECC reduction._

Figure 3: A partially-covered graph \(G\) with cliques \(C_{1}\), \(C_{2}\) already added to the cover, and its transformed graph \(G^{\prime}_{VCC}\). Grayed vertices and (dotted) edges are those in \(G_{VCC}\), but not \(G^{\prime}_{VCC}\).

Of course, we could try to apply these reductions more efficiently to \(G\) directly. We discuss two clique-removal-based VCC reductions and discuss whether they are worth implementing for ECC directly, or if we should transform \(G\) to \(G^{\prime}_{VCC}\) first.

### Simplicial Vertex Reduction

A vertex \(v\) is _simplicial_ if \(N[v]\) forms a clique. In this case, the clique \(C=N[v]\) is in some minimum VCC. (See Figure 4(a).)

**VCC Reduction 1** (Simplicial Vertex Reduction [31]). Let \(v\in V\) be a simplicial vertex. Then \(C=N[v]\) is a clique in some minimum VCC. Add \(C\) to the clique cover and remove \(C\) from the graph.

Applying VCC Reduction 1 on \(G^{\prime}_{VCC}\) is reminiscent of applying ECC Reduction 2 on the untransformed graph \(G\). While it is true that for an edge \(\{u,w\}\in E^{\prime}\), if \(N_{\{u,w\}}\) is a clique in \(G\), then \(v_{uw}\) is simplicial in \(G^{\prime}_{VCC}\), the converse is not true in general. Hence, VCC Reduction 1 is more powerful. Consider the counterexample in Figure 4(b). Vertex \(v_{uw}\) is simplicial in \(G^{\prime}_{VCC}\), but \(\{u,w\}\in E^{\prime}\) is in two cliques of \(G\).
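The following sketch makes VCC Reduction 1 concrete on the same adjacency-set representation (a minimal version of our own; the implementation of [31] is far more engineered). Applied to the graph returned by `transform` above, each removed clique translates back to a covering clique of \(G\) by Observation 3.

```python
from itertools import combinations

def is_simplicial(adj, v):
    """v is simplicial iff N[v] is a clique."""
    return all(w in adj[u] for u, w in combinations(adj[v], 2))

def exhaust_simplicial(adj):
    """Exhaustively apply the simplicial vertex reduction; returns the
    cliques removed from the graph (adj is modified in place)."""
    cover, queue = [], list(adj)
    while queue:
        v = queue.pop()
        if v not in adj or not is_simplicial(adj, v):
            continue
        clique = adj[v] | {v}
        cover.append(clique)
        touched = set()
        for u in clique:                 # delete N[v] and its incident edges
            for w in adj.pop(u):
                if w in adj:
                    adj[w].discard(u)
                    touched.add(w)
        queue.extend(touched - clique)   # removals may create new simplicial vertices
    return cover
```

Note that, run on \(G^{\prime}_{VCC}\), this check also fires in situations like the counterexample above, where ECC Reduction 2 on \(G\) does not.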
Thus, we have a new data reduction for the ECC problem, which subsumes ECC Reduction 2:

**ECC Reduction 5** (Lifted Simplicial Vertex Reduction). Let edge \(\{u,w\}\in E^{\prime}\) and let set \(C=\{x,y\in V\mid\{x,y\}\in E^{\prime}\text{ and }\{u,w\}\cup\{x,y\}\text{ is a clique in }G\}\) be the set of vertices of edges in some clique with \(\{u,w\}\). If \(C\) is a clique, then add \(C\) to the clique cover, and cover any uncovered edges of \(C\) in \(G\).

To apply our lifted reduction, we could of course first compute \(G^{\prime}_{VCC}\) and then apply VCC Reduction 1. However, we can also apply it directly to \(G\) with a slight modification to ECC Reduction 2. For each edge \(\{u,w\}\in E^{\prime}\) compute the common neighborhood \(N_{\{u,w\}}\). Instead of checking that the common neighborhood is a clique, collect the uncovered edges between vertices in \(N_{\{u,w\}}\), and check if they induce a clique. Since \(|N_{\{u,w\}}|\leq\Delta\), it takes \(O(d\Delta)\) to collect uncovered edges by iterating through the at most \(d\) later neighbors of each vertex, which dominates the running time of this step. Exhaustively applying the reduction to all edges takes time \(O(d\Delta m)\), which is slightly slower than the \(O(d^{2}m)\) time for ECC Reduction 2.

Figure 4: The simplicial vertex VCC reduction can be applied after transforming \(G\) to \(G^{\prime}_{VCC}\).

Is it worth applying ECC Reduction 5 directly to \(G\), or should we first transform \(G\) and run VCC Reduction 1 instead? The transformation can be done in time \(O(d^{2}m)\) by enumerating all of the triangles and 4-cliques of \(G\) [11]; hence, performing the transformation is faster in theory than applying ECC Reduction 5 to \(G\) directly. However, in \(G^{\prime}_{VCC}\) the largest clique may have as many as \(\Theta(d^{2})\) vertices and \(\Theta(d^{4})\) edges since a clique of size \(d+1\) in \(G\) has \(\Theta(d^{2})\) edges in \(G\). Therefore, the time to apply VCC Reduction 1 for each of the \(m\) vertices of \(G^{\prime}_{VCC}\) is \(O(d^{4}m)\). Thus, in theory, it is more efficient to apply ECC Reduction 5 directly, rather than first applying a conversion. However, there are compelling reasons to perform the conversion. For one, most implementations of simplicial vertex reductions limit the degree of the vertex considered - in some cases to as small as two - since large-degree simplicial vertices rarely appear in sparse graphs. Therefore, in practice, it is unlikely that we would observe this large running time. However, a more compelling reason to perform the transformation is that there are two highly effective VCC reductions that we do not know how to apply directly to \(G\). The first is the crown removal reduction (a clique-removal-based reduction) and the second is the degree-2 folding-based reduction.

### Crown Removal Reduction

The crown removal reduction is arguably one of the most powerful data reductions, successfully reducing sparse instances for the minimum vertex cover and VCC problems [3, 4, 10]. In a pair of vertex sets \((H,I)\), \(H\) is called a _head_ and \(I\) a _crown_ if: \(I\) is an independent set, \(N(I)=H\), and there exists a matching from \(H\) to \(I\) of size \(|H|\). Figure 5(a) shows a crown structure. Note that, due to the matching requirement, \(|I|\geq|H|\). If \(|I|=|H|\), the crown is called _straight_, otherwise it is _flared_. Strash and Thompson [31] give the following data reduction for the VCC problem, adapting a data reduction for the dual coloring problem [17].
**Crown Removal Reduction** ([31]). Let \((H,I)\) be a head and crown with matching \(M\) and unmatched vertices \(I^{\prime}\subseteq I\). Then add cliques in \(M\) and \(I^{\prime}\) to the clique cover and remove \(N[I]\) from the graph. (See Figure 5(a).)

Note that it is possible to identify flared crowns by applying a reduction based on an LP relaxation, originally introduced for the minimum vertex cover problem by Nemhauser and Trotter [29]. A variant of this algorithm due to Iwata et al. [21] identifies and removes _all_ flared crowns at once by computing a maximum matching on a bipartite graph with \(2n\) vertices and \(2m\) edges using the Hopcroft-Karp algorithm [20] with running time \(O(m\sqrt{n})\). As Figure 5(b) illustrates, after exhaustively applying Gramm et al.'s [19] ECC reductions it is possible to have a crown structure after transforming to \(G^{\prime}_{VCC}\). Thus, lifting the crown removal reduction can further reduce an ECC instance. However, algorithms for computing a maximum matching for the LP relaxation use an explicit representation of \(G^{\prime}_{VCC}\) and therefore it is unclear how to run this reduction without first transforming \(G\) to \(G^{\prime}_{VCC}\). The transformation and maximum matching can be computed in time \(O(d^{2}m+d^{2}m\sqrt{m})=O(d^{2}m^{3/2})\), since there are \(O(m)\) vertices and \(O(d^{2}m)\) edges in \(G^{\prime}_{VCC}\). We leave the question of whether the LP relaxation reduction can be more efficiently lifted to an ECC reduction as an open problem.

Figure 5: The crown removal VCC reduction can be applied after transforming \(G\) to \(G^{\prime}_{VCC}\).

#### Folding-Based VCC Reductions

In contrast to clique-removal-based reductions, _folding-based_ reductions contract a subset \(S\subseteq V\) of vertices into a single vertex \(v^{\prime}\). _Folding \(S\)_ produces a new graph \(G^{f}=(V^{f},E^{f})\) with \(V^{f}=(V\setminus S)\cup\{v^{\prime}\}\) and \(E^{f}=(E\setminus\{\{v,x\}\in E\mid v\in S\})\cup\{\{v^{\prime},x\}\mid\exists v\in S,x\notin S,\{v,x\}\in E\}\). We discuss the connections between the ECC problem and the simplest folding-based reduction, folding vertices of degree two.

### Degree-2 Folding

The degree-2 folding reduction for VCC contracts a degree-2 vertex \(v\) with non-adjacent neighbors \(u\) and \(w\) that are _crossing independent_ [31]. That is, for each edge \(\{x,y\}\subseteq N(u)\cup N(w)\) either \(\{x,y\}\subseteq N(u)\) or \(\{x,y\}\subseteq N(w)\). This condition ensures that no spurious cliques are formed after folding. A vertex \(v\) meeting these conditions is _foldable_.

**Degree-2 Folding Reduction** ([31]). Let \(v\in V\) be a foldable degree-2 vertex with non-adjacent neighbors \(N(v)=\{u,w\}\). Let \(G^{f}\) be the graph obtained by folding \(\{v,u,w\}\). Let \(\mathcal{C}^{f}\) be a minimum VCC of \(G^{f}\) with clique \(C_{v^{\prime}}\in\mathcal{C}^{f}\) covering vertex \(v^{\prime}\) and let \(C=C_{v^{\prime}}\setminus\{v^{\prime}\}\). Then, the clique cover \[\mathcal{C}=\begin{cases}(\mathcal{C}^{f}\setminus\{C_{v^{\prime}}\})\cup\{C\cup\{u\},\{v,w\}\}&\text{if $C\subseteq N(u)$,}\\ (\mathcal{C}^{f}\setminus\{C_{v^{\prime}}\})\cup\{C\cup\{w\},\{v,u\}\}&\text{otherwise,}\end{cases}\] is a minimum VCC of \(G\).

See Figure 6(a) for an example of the degree-2 VCC reduction. We note that the transformation from an ECC instance to a VCC instance by Kou et al.
[25] does not produce any degree-2 vertices with non-adjacent neighbors, as edges forming a triangle or 4-clique in \(G\) form a triangle or 6-clique in \(G_{VCC}\). However, our transformation with covered edges can result in such vertices (see Figure 6(b)). Thus, the degree-2 folding VCC reduction can be used to further reduce the instance when applied to \(G^{\prime}_{VCC}\). We leave as an open problem whether folding-based rules can be lifted to new ECC reductions; we conjecture that it is possible to lift at least degree-2 folding. However, given how effective the degree-2 folding reduction is in practice for the VCC problem, we highly recommend applying it, even though it incurs the overhead of the transformation to \(G^{\prime}_{VCC}\).

Figure 6: The degree-2 folding VCC reduction can be applied after transforming \(G\) to \(G^{\prime}_{VCC}\).

### Wrapping It All Up

With the tools in this section in hand, we have a clear path to solving the ECC problem on sparse graphs: first apply the data reductions due to Gramm et al. [19], then transform the partially-covered graph into a VCC instance, which can then be reduced further and solved with any VCC solver. We next perform experiments to evaluate this method.

## 6 Experimental Evaluation

We now compare our technique to the state of the art through extensive experiments on both synthetic instances and real-world graphs.

### Experimental Setup

We implemented the ECC reductions and ECC to VCC transformation in C++ and integrated our methods with the VCC reductions and VCC algorithms by Strash and Thompson2[31], which we then compiled with g++ version 11 using the -O3 optimization flag. Our source code will be made available under the open source MIT license. All experiments were conducted on Hamilton College's High Performance Computing Cluster (HPCC), on a machine running CentOS Linux 7.8.2003, with four Intel Xeon Gold 6248 processors running at 2.50GHz with 20 cores each, and 1.5TB of memory. Each algorithm is run sequentially on its own core.

Footnote 2: [https://github.com/darrenstrash/ReduVCC](https://github.com/darrenstrash/ReduVCC)

We run experiments on six different algorithms. Gramm is the original branch-and-reduce code by Gramm et al. [19] written in OCaml, which we compiled with ocamlc version 3.10.2, and provided a sufficiently large stack size due to its heavy use of recursion. We implement three algorithms in C++ that first exhaustively apply ECC data reductions, perform a problem reduction to a VCC instance, apply VCC reductions, and then run a VCC solver: \(\mathsf{Redu}^{3}\mathsf{BnR}\) solves with the VCC branch-and-reduce algorithm by Strash and Thompson [31], \(\mathsf{Redu}^{3}\mathsf{IG}\) solves with the VCC iterated greedy (IG) heuristic algorithm by Chalupa [9], and \(\mathsf{Redu}^{3}\mathsf{ILP}\) solves with an assignment-based ILP formulation [22, 28] for VCC and Gurobi version 9.5.1. Finally, the two heuristic algorithms \(\mathsf{Conte}\) [12] and \(\mathsf{EO}\)-\(\mathsf{ECC}\) [1] are from their respective authors and are compiled with javac version 8 and g++ version 11 with -O3, respectively. Unless stated otherwise, we run each algorithm with a 24-hour time limit. Our stated running times do not include I/O time such as graph reading and writing. In our tables, 'Kernel' denotes the relevant size of the graph after reductions as either uncovered edges (\(\mathsf{Gramm}\)) or vertices remaining (for VCC-based algorithms). 'Time' is the time (in seconds) the solver takes to exactly solve the instance.
A '-' indicates that the solver did not finish in the 24-hour time limit. **Bold** values indicate the value is the smallest among all algorithms in the table. We run our experiments on randomly-generated instances as well as real-world graphs.

**Erdős-Rényi Graphs.** We generate 70 instances of varying density using the \(G(n,p)\) model of generating an \(n\)-vertex graph where each edge is selected independently with probability \(p\). We use values of \(n\) that are powers of two from 64 to 2048, with two different values of \(p\) for each to show the effect of density on the tested algorithms. We generate 5 graphs with each \(n\), \(p\) pair using different random seeds to observe the behavior of algorithms on multiple instances of similar size and density. (See Tables 5 and 6 in Appendix B for the full statistics.)

**Real-World Instances.** We run our experiments on 52 large, sparse, complex networks from the Stanford Network Data Repository (SNAP)3, the Laboratory for Web Algorithmics (LAW)4, and the Koblenz Network Collection (KONECT)5. These graphs include citation networks, web-crawl graphs, and social networks; the largest graph has 18M vertices, and most graphs follow a scale-free degree distribution: there are many low degree vertices and few high degree vertices. The number of vertices and edges for each instance can be found with experimental results in Tables 2 and 3.

Footnote 3: [https://snap.stanford.edu/data/](https://snap.stanford.edu/data/)

Footnote 4: [http://law.di.unimi.it/datasets.php](http://law.di.unimi.it/datasets.php)

Footnote 5: [http://konect.cc/](http://konect.cc/)

### Results on Synthetic Instances

We begin by comparing the performance of Gramm and Redu\({}^{3}\)BnR on synthetic instances generated with the Erdős-Rényi \(G(n,p)\) model. We present the average kernel size and running time from the execution of Gramm and Redu\({}^{3}\)BnR on the 5 instances of each pair of \(n\) and \(p\) in Table 1. (Individual results can be found in Tables 5 and 6 in Appendix B.) Focusing on running time, Gramm and Redu\({}^{3}\)BnR are equally matched on very sparse graphs, quickly solving many instances in significantly less than one second. However, as the density increases even slightly, which can be seen when fixing \(n\) but increasing \(p\), Gramm is no longer able to solve even small instances within the 24-hour time limit. On all instances, in contrast, Redu\({}^{3}\)BnR easily computes exact solutions. The reason is clear: on problems that Gramm is unable to solve, the ECC kernel is large (for the highest density instance with \(n=64,p=0.200\), even a kernel of average size 50 is too large for Gramm to solve), whereas the VCC kernels for Redu\({}^{3}\)BnR are significantly smaller in all cases. Indeed, for the densest graphs of each value of \(n\), Gramm is unable to solve every instance in 24 hours, but Redu\({}^{3}\)BnR solves all graphs in less than a second. This illustrates that the combined reduction power of ECC and VCC reductions is able to handle denser instances than running ECC reductions alone.

### Solving Large Real-World Instances Exactly

We now see which graphs can be solved exactly by one of three algorithms: Gramm, Redu\({}^{3}\)BnR, and Redu\({}^{3}\)ILP. The results are presented in Table 2. Gramm was able to solve 12 of the 27 instances exactly; 10 of these graphs were solved because the kernel had 0 uncovered edges and the other two instances (ca-CondMat and ca-GrQc) had small kernels of less than 100 uncovered edges.
However, Gramm exceeds the 24-hour time limit on the 15 other instances, even those with as few as 176 uncovered edges. In contrast, Redu\({}^{3}\)BnR solves 18 of the instances. On all instances, the kernel computed by Redu\({}^{3}\)BnR was smaller than that of Gramm; the most extreme case is zhishi-hudong-int, whose kernel is reduced to 2% of the size of Gramm's kernel. With the exception of three instances (email-EuAll, web-NotreDame, and web-Stanford), every instance was reduced to at most 10% of Gramm's kernel size. However, the limitations of branch and reduce for the VCC problem begin to show on these instances. Similar to Gramm, Redu\({}^{3}\)BnR only finishes within the 24-hour time limit on graphs with kernel size less than 100, and therefore its success is largely due to the reduction of the input instance (a pattern observed in other problems [30]). On the other hand, the Gurobi solver with an ILP formulation is able to solve kernels of much larger size, even up to 536,196 vertices (in the case of eu-2005).

### Solving Remaining Instances Heuristically

We now look at the instances that could not be solved in the 24-hour time limit by any exact method. The results are presented in Table 3. Nine instances were reduced to VCC within the time limit of 24 hours; the remaining instances were too large to finish in the time limit (not in the table). After fully transforming the input ECC instance to a reduced VCC instance, we ran the iterated greedy approach IG due to Chalupa [9], which we call \(\mathsf{Redu}^{3}\mathsf{IG}\), and compare its best solution with a lower bound from KaMIS, a state-of-the-art evolutionary algorithm for finding near-maximum independent sets on huge networks [26]. Four instances were solved to within 300 vertices of optimum, two of which (soc-Slashdot0811 and soc-Slashdot0902) are within 100 vertices. The remaining instances are solved to within 6,000 vertices of optimum.

### Summarizing the Quality of Existing Heuristic Solvers

Finally, using our exact results, we evaluate the quality of two heuristic solvers designed for large sparse graphs: \(\mathsf{Conte}\), an algorithm by Conte et al. [12], and \(\mathsf{EO-ECC}\) by Abdullah et al. [1]. We run \(\mathsf{Conte}\) and \(\mathsf{EO-ECC}\) on all instances that were solved exactly (i.e., those from Table 2). The results are presented in Table 4. From among the 27 graphs, \(\mathsf{Conte}\) solves five instances exactly. A further nine instances are solved within 50 cliques of optimal, and eight additional graphs are solved within 2,000 of optimal. \(\mathsf{EO-ECC}\), on the other hand, solves eight instances exactly (a superset of \(\mathsf{Conte}\)'s five) and solves these faster than \(\mathsf{Conte}\). Furthermore, \(\mathsf{EO-ECC}\) finds 14 smaller solutions faster than \(\mathsf{Conte}\) (\(\mathsf{Conte}\) only finds four smaller solutions faster). However, a distinct negative is \(\mathsf{EO-ECC}\)'s running time and solution quality on cnr-2000, eu-2005, and web-BerkStan, which is much worse than \(\mathsf{Conte}\)'s. We conclude that \(\mathsf{Conte}\) gives consistently fast results with reasonable solutions, while \(\mathsf{EO-ECC}\) is sometimes very fast and accurate, and other times not.
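For intuition about the object all of these solvers compute, the following self-contained Python sketch covers every edge of a small graph with cliques greedily. It is an illustration only: none of the evaluated algorithms works this naively, and all names in it are ours.

```
# Toy greedy edge clique cover, for intuition only; it uses no reductions
# and can overshoot the optimum theta_E(G), which is why data reductions
# and exact solvers matter.
from itertools import combinations

def greedy_ecc(n, edges):
    """Cover every edge of an n-vertex graph with cliques, greedily."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    uncovered = {frozenset(e) for e in edges}
    cover = []
    while uncovered:
        u, v = tuple(next(iter(uncovered)))      # pick any uncovered edge
        clique = {u, v}
        for w in sorted(adj[u] & adj[v]):        # grow with common neighbours
            if all(w in adj[x] for x in clique):
                clique.add(w)
        cover.append(clique)
        for e in combinations(clique, 2):        # mark its edges as covered
            uncovered.discard(frozenset(e))
    return cover

print(greedy_ecc(4, [(0, 1), (0, 2), (1, 2), (2, 3)]))  # e.g. [{0, 1, 2}, {2, 3}]
```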
\begin{table} \begin{tabular}{r r r r r r r} \hline \hline \multicolumn{3}{c}{Graph} & \multicolumn{3}{c}{\(\mathsf{Gramm}\)} & \multicolumn{2}{c}{\(\mathsf{Redu}^{3}\mathsf{BnR}\)} \\ \hline \(n\) & \(p\) & \(m\) & Kernel & Time (s) & Kernel & Time (s) \\ \hline 64 & 0.150 & 151 & 1 & \(<\)**0.01** & **0** & \(<\)**0.01** \\ 64 & 0.200 & 203 & 50 & 1,324.52\({}^{*}\) & **10** & \(<\)**0.01** \\ 128 & 0.100 & 404 & **0** & \(<\)**0.01** & **0** & \(<\)**0.01** \\ 128 & 0.150 & 610 & 245 & – & **51** & **0.03** \\ 256 & 0.075 & 1,217 & 21 & 0.02 & **0** & \(<\)**0.01** \\ 256 & 0.100 & 1,633 & 552 & – & **12** & **0.02** \\ 512 & 0.050 & 3,279 & 69 & 0.21 & **1** & **0.02** \\ 512 & 0.065 & 4,258 & 1,140 & – & **5** & **0.04** \\ 1,024 & 0.037 & 9,537 & 629 & 153.21\({}^{*}\) & **4** & **0.08** \\ 1,024 & 0.038 & 9,799 & 852 & – & **4** & **0.07** \\ 2,048 & 0.025 & 26,123 & 1,574 & 5.18 & **4** & **0.18** \\ 2,048 & 0.028 & 28,745 & 3,618 & – & **5** & **0.20** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on small Erdős-Rényi graphs of varying density. A ‘\({}^{*}\)’ indicates that not all runs finished in the 24-hour time limit, ‘\(\sim\)’ indicates that no runs finished in the 24-hour time limit. \begin{table} \begin{tabular}{l r r r r r r r} \hline \hline & Graph & \multicolumn{3}{c}{Gramm} & \multicolumn{3}{c}{Redu\({}^{3}\)BnR} & \multicolumn{1}{c}{Redu\({}^{3}\)ILP} \\ \hline Name & \(n\) & \(m\) & Kernel & Time (s) & Kernel & Time (s) & Time (s) \\ \hline ca-AstroPh & 18,772 & 198,050 & 2,837 & – & **0** & **0.33** & **0.33** \\ ca-CondMat & 23,133 & 93,439 & 62 & 1.74 & **0** & **0.10** & **0.10** \\ ca-GrQc & 5,242 & 14,484 & 9 & 0.15 & **0** & **0.02** & **0.02** \\ ca-HepPh & 12,008 & 118,489 & 491 & – & **0** & **0.16** & **0.16** \\ ca-HepTh & 9,877 & 25,973 & 176 & – & **0** & **0.03** & **0.03** \\ cnr-2000 & 325,557 & 2,738,969 & 755,617 & – & **23,880** & – & **10,727.29** \\ dblp-2010 & 326,186 & 807,700 & 868 & – & **0** & **1.98** & **1.98** \\ dblp-2011 & 986,324 & 3,353,618 & 8,898 & – & **50** & **9.13** & 9.88 \\ email-EALL & 265,214 & 364,481 & 20,648 & – & **5,064** & – & **6.99** \\ eu-2005 & 862,664 & 16,138,468 & 5,555,826 & – & **536,209** & – & **12,966.59** \\ p2p-Gnutella04 & 10,876 & 39,994 & **0** & 0.34 & **0** & \(0.05^{*}\) & \(0.05^{*}\) \\ p2p-Gnutella05 & 8,846 & 31,839 & **0** & 0.23 & **0** & \(0.05^{*}\) & \(0.05^{*}\) \\ p2p-Gnutella06 & 8,717 & 31,525 & **0** & 0.33 & **0** & \(0.04^{*}\) & \(0.04^{*}\) \\ p2p-Gnutella08 & 6,301 & 20,777 & 261 & – & **17** & **0.04** & **0.06** \\ p2p-Gnutella09 & 8,114 & 26,013 & 214 & – & **5** & **0.04** & **0.08** \\ p2p-Gnutella24 & 26,518 & 65,369 & **0** & 0.91 & **0** & \(0.10^{*}\) & \(0.10^{*}\) \\ p2p-Gnutella25 & 22,687 & 54,705 & **0** & 0.63 & **0** & \(0.08^{*}\) & \(0.08^{*}\) \\ p2p-Gnutella30 & 36,682 & 88,328 & **0** & 1.27 & **0** & \(0.09^{*}\) & \(0.09^{*}\) \\ p2p-Gnutella31 & 62,586 & 147,892 & **0** & 2.14 & **0** & \(0.23^{*}\) & \(0.23^{*}\) \\ roadNet-CA & 1,965,206 & 2,766,607 & **0** & 115.17 & **0** & \(5.60^{*}\) & \(5.60^{*}\) \\ roadNet-PA & 1,088,092 & 1,541,898 & **0** & 45.75 & **0** & \(2.94^{*}\) & \(2.94^{*}\) \\ roadNet-TX & 1,379,917 & 1,921,660 & **0** & 73.21 & **0** & \(3.64^{*}\) & \(3.64^{*}\) \\ web-BerkStan & 685,230 & 6,649,470 & 2,096,936 & – & **152,581** & – & **6,753.27** \\ web-Google & 875,713 & 4,322,051 & 266,455 & – & **16,440** & – & **35.58** \\ web-NotreDame & 325,729 & 1,090,108 & 98,861 & – & **14,553** & – & **20.10** \\ 
web-Stanford & 281,903 & 1,992,636 & 523,480 & – & **57,463** & – & **981.82** \\ zhishi-hudong-int & 1,984,484 & 14,428,382 & 1,175,068 & – & **26,536** & – & **568.26** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparing exact algorithms Gramm, Redu\({}^{3}\)BnR, and Redu\({}^{3}\)ILP on real-world instances solved by at least one of the algorithms in a 24-hour time limit. Times marked with a ‘\({}^{*}\)’ indicate that the algorithm’s speed was due to programming language differences and not algorithmic improvements. \begin{table} \begin{tabular}{l r r r r r r} \hline \hline & Graph & \multicolumn{3}{c}{KaMIS} & \multicolumn{3}{c}{Redu\({}^{3}\)IG} \\ \hline Name & \(n\) & \(m\) & lb & ub & Time (s) \\ \hline as-skitter & 1,696,415 & 11,095,298 & 5,843,072 & 5,847,591 & 20,848.17 \\ email-Enron & 36,692 & 183,831 & 42,141 & 42,207 & 2,201.00 \\ soc-Epinions1 & 75,879 & 405,740 & 185,544 & 186,384 & 18,064.79 \\ soc-pokec-relationships & 1,632,803 & 22,301,964 & 12,222,248 & 12,227,949 & 21,451.91 \\ soc-Slashdot0811 & 77,360 & 469,180 & 328,018 & 328,079 & 3,073.75 \\ soc-Slashdot0902 & 82,168 & 504,230 & 351,012 & 351,072 & 3,125.21 \\ wiki-Talk & 2,394,385 & 4,659,565 & 3,645,692 & 3,648,312 & 21,088.53 \\ wiki-Vote & 7,115 & 100,762 & 34,789 & 35,004 & 21,424.48 \\ zhishi-baidu-relatedpages & 415,641 & 2,374,044 & 1,372,941 & 1,373,912 & 9,989.00 \\ \hline \hline \end{tabular} \end{table} Table 3: Heuristic solutions for graphs that could not be solved exactly in 24 hours. ‘lb’ is a lower bound on \(\theta_{E}(G)\) from KaMIS, ‘ub’ is the smallest clique cover computed by Redu\({}^{3}\)IG, and ‘Time’ is the time in seconds for Redu\({}^{3}\)IG to reach this result.

## 7 Conclusion and Future Work

We introduced a technique to further reduce ECC problem instances via VCC data reductions, enabling us to solve sparse real-world graphs that could not be solved before. Critical to this technique is the ability to transform reduced ECC instances to the VCC problem, through a modification of the polynomial-time reduction of Kou et al. [25]. The combined reduction power of ECC and VCC reductions, which we call _synergistic_ data reduction, produces significantly smaller kernels than ECC reductions alone. Of particular interest for future work is integrating data reduction rules with existing heuristic algorithms for the ECC problem, implementing a more efficient LP-relaxation ECC reduction without a transformation, and determining whether folding-based reductions can be lifted to the ECC problem.
\begin{table} \begin{tabular}{l r r r r r r r} \hline \hline & \multicolumn{2}{c}{Graph \(G\)} & \multicolumn{2}{c}{\(\mathsf{Conte}\)} & \multicolumn{2}{c}{\(\mathsf{EO-ECC}\)} \\ \hline Name & \(n\) & \(m\) & \(\theta_{E}(G)\) & ub & Time (s) & ub & Time (s) \\ \hline ca-AstroPh & 18,772 & 198,050 & 15,134 & 15,481 & 0.92 & _15,373_ & _0.50_ \\ ca-CondMat & 23,133 & 93,439 & 16,283 & 16,378 & 0.54 & _16,307_ & _0.07_ \\ ca-GrQc & 5,242 & 14,484 & 3,737 & 3,749 & 0.15 & _3,739_ & _0.01_ \\ ca-HepPh & 12,008 & 118,489 & 10,031 & 10,142 & 0.69 & _10,097_ & _0.35_ \\ ca-HepTh & 9,877 & 25,973 & 9,190 & 9,264 & 0.19 & _9,212_ & _0.02_ \\ cnr-2000 & 325,557 & 2,738,969 & 752,118 & _756,905_ & _14.92_ & 763,365 & 2,820.97 \\ dblp-2010 & 326,186 & 807,700 & 186,834 & 187,395 & 2.22 & _186,968_ & _0.44_ \\ dblp-2011 & 986,324 & 3,353,618 & 707,773 & 713,219 & 13.56 & _709,156_ & _3.48_ \\ email-EuAll & 265,214 & 364,481 & 297,092 & _298,943_ & 2.58 & 299,257 & 2.14 \\ eu-2005 & 862,664 & 16,138,468 & 2,832,059 & 2,883,585 & 108.67 & 3,032,337 & 8,458.21 \\ p2p-Gnutella04 & 10,876 & 39,994 & 38,491 & **38,491** & 0.29 & **38,491** & **0.04** \\ p2p-Gnutella05 & 8,846 & 31,839 & 30,523 & 30,527 & 0.25 & _30,525_ & _0.04_ \\ p2p-Gnutella06 & 8,717 & 31,525 & 30,322 & 30,327 & 0.26 & _30,324_ & _0.04_ \\ p2p-Gnutella08 & 6,301 & 20,777 & 19,000 & 19,042 & 0.20 & _19,012_ & _0.03_ \\ p2p-Gnutella09 & 8,114 & 26,013 & 24,117 & 24,150 & 0.24 & _24,133_ & _0.03_ \\ p2p-Gnutella24 & 26,518 & 65,369 & 63,725 & 63,726 & 0.41 & **63,725** & **0.06** \\ p2p-Gnutella25 & 22,687 & 54,705 & 53,367 & **53,367** & 0.33 & **53,367** & **0.05** \\ p2p-Gnutella30 & 36,682 & 88,328 & 85,821 & **85,823** & 0.52 & **85,821** & **0.10** \\ p2p-Gnutella31 & 62,586 & 147,892 & 144,478 & **144,478** & 0.83 & **144,478** & **0.15** \\ roadNet-CA & 1,965,206 & 2,766,607 & 2,537,936 & 2,537,945 & 17.90 & **2,537,936** & **1.02** \\ roadNet-PA & 1,088,092 & 1,541,898 & 1,413,370 & **1,413,370** & 10.62 & **1,413,370** & **0.69** \\ roadNet-TX & 1,379,917 & 1,921,660 & 1,763,295 & 1,763,298 & 13.48 & **1,763,295** & **0.89** \\ web-BerkStan & 685,230 & 6,649,470 & 1,834,074 & _1,850,605_ & _54.34_ & 1,903,872 & 2,089.25 \\ web-Google & 875,713 & 4,322,051 & 1,242,770 & 1,254,107 & 24.96 & _1,251,672_ & 33.10 \\ web-NotreDame & 325,729 & 1,090,108 & 451,424 & 453,864 & 7.09 & _453,805_ & 7.31 \\ web-Stanford & 281,903 & 1,992,636 & 562,417 & _570,958_ & _16.85_ & 591,957 & 326.92 \\ zhishi-hudong-int & 1,984,484 & 14,428,382 & 10,557,244 & 10,698,424 & 123.45 & _10,678,121_ & 322.89 \\ \hline Summary (\(\mathsf{\#optimal}\) / \(\#smaller\) and faster) & \multicolumn{2}{c}{\(\mathbf{(5\ /\ 4)}\)} & \multicolumn{2}{c}{\(\mathbf{(8\ /\ 14)}\)} \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation of the quality of heuristic solvers \(\mathsf{Conte}\) and \(\mathsf{EO-ECC}\) on all graphs with known edge clique cover number \(\theta_{E}(G)\). ‘ub’ is the solution found by the given algorithm, and ‘Time’ is the algorithm’s time in seconds. Values of ‘ub’ marked in \(\mathbf{bold}\) indicates the algorithm found an optimal solution, with its time in \(\mathbf{bold}\) if it did so faster than its competitor. Values of ‘ub’ in _italics_ indicate that an algorithm found an ECC smaller than its competitor, with its time in _italics_ if it did so faster than its competitor.
2302.14622
Now It Compiles! Certified Automatic Repair of Uncompilable Protocols
Choreographic programming is a paradigm where developers write the global specification (called choreography) of a communicating system, and then a correct-by-construction distributed implementation is compiled automatically. Unfortunately, it is possible to write choreographies that cannot be compiled, because of issues related to an agreement property known as knowledge of choice. This forces programmers to reason manually about implementation details that may be orthogonal to the protocol that they are writing. Amendment is an automatic procedure for repairing uncompilable choreographies. We present a formalisation of amendment from the literature, built upon an existing formalisation of choreographic programming. However, in the process of formalising the expected properties of this procedure, we discovered a subtle counterexample that invalidates the original published and peer-reviewed pen-and-paper theory. We discuss how using a theorem prover led us to both finding the issue, and stating and proving a correct formulation of the properties of amendment.
Luís Cruz-Filipe, Fabrizio Montesi
2023-02-28T15:02:27Z
http://arxiv.org/abs/2302.14622v1
# Now It Compiles! Certified Automatic Repair of Uncompilable Protocols

###### Abstract

Choreographic programming is a paradigm where developers write the global specification (called choreography) of a communicating system, and then a correct-by-construction distributed implementation is compiled automatically. Unfortunately, it is possible to write choreographies that cannot be compiled, because of issues related to an agreement property known as knowledge of choice. This forces programmers to reason manually about implementation details that may be orthogonal to the protocol that they are writing. Amendment is an automatic procedure for repairing uncompilable choreographies. We present a formalisation of amendment from the literature, built upon an existing formalisation of choreographic programming. However, in the process of formalising the expected properties of this procedure, we discovered a subtle counterexample that invalidates the original published and peer-reviewed pen-and-paper theory. We discuss how using a theorem prover led us to both finding the issue, and stating and proving a correct formulation of the properties of amendment.

choreographic programming, theorem proving, compilation, program repair

## 1 Introduction

Programming correct implementations of protocols for communicating systems is challenging, because it requires writing a correct program for each participant that performs the right send and receive actions at the right times [21]. _Choreographic programming_ [24] is an emerging paradigm that offers a direct solution: protocols are written in a "choreographic" programming language, and then automatically compiled to correct implementations by means of an operation known as _Endpoint Projection_ (EPP, or projection for short) [4, 12, 14, 15, 18, 22, 23]. Choreographic languages are inspired by the Alice and Bob notation of security protocols [26], in the sense that they offer primitives for expressing communications between different processes. Implementations are usually modelled in terms of a process calculus. Besides being simple, choreographic programming is interesting because it typically includes strong theoretical guarantees, most notably deadlock-freedom and an operational correspondence between choreographies and the (models of the) generated distributed implementations.

Not all choreographies can be compiled (or _projected_) to a distributed implementation, due to a problem known as "knowledge of choice" [5]. Consider the following choreography for a simple purchase scenario (this example also anticipates some of our syntax).

```
buyer.offer \(\longrightarrow\) seller.x;
If seller.acceptable(x) Then seller.product \(\longrightarrow\) buyer.y; End
Else End
```

Listing 1: An unprojectable choreography.

This choreography reads: a buyer communicates their offer for the purchase of a product to a seller, who stores the offer in their local variable x; the seller then checks whether the offer is acceptable, and in the affirmative case sends the product to buyer. This choreography cannot be projected to a behaviourally-equivalent implementation, because buyer has to behave differently in the two branches of the conditional. However, this conditional is evaluated by seller, and buyer has no way of discerning which branch gets chosen. Choreographies are typically made projectable by adding selections, i.e., communications of constants called _selection labels_.1 A projectable version of Listing 1 looks as follows.
Footnote 1: Selections are essentially the choreographic version of branch selections in session types, or the additive connectives in linear logic [3, 16].

```
buyer.offer \(\longrightarrow\) seller.x;
If seller.acceptable(x) Then seller \(\longrightarrow\) buyer[left]; seller.product \(\longrightarrow\) buyer.y; End
Else seller \(\longrightarrow\) buyer[right]; End
```

Listing 2: A projectable choreography.

This choreography differs from the previous one by the presence of a selection in each branch of the conditional. Specifically, if seller chooses the Then branch, they now communicate the label left to buyer. Otherwise, if the Else branch is chosen, the label right is communicated instead. The key idea is that now the implementation generated for buyer can read the label received from seller and know which branch of the conditional should be executed. Since labels are constants, compilation can statically verify that buyer receives different labels for the different branches, and therefore has "knowledge of choice". Projection can be smart about knowledge of choice, allowing selections to be kept to a minimum [2]. A process only needs to know which branch of a conditional has been chosen if its behaviour depends on that choice; if the process has to perform the same actions in both branches of a conditional, then this knowledge is irrelevant to it. Knowledge of choice can also be propagated: if a process q knows of a choice performed by another process p, then either process can forward this information to any other process that needs it.

Amendment. Previous work investigated how unprojectable choreographies can be automatically transformed into projectable ones. Such a transformation is called _amendment_ [8, 20] or repair [1, 13]. For example, applying the amendment procedure from [8] to the choreography in Listing 1 returns the choreography in Listing 2 (up to minor differences in notation). Amendment is interesting for (at least) two reasons. On a practical level, it can suggest valid selection strategies to developers to make their choreographies executable, or even apply them automatically, so that developers do not have to worry about knowledge of choice. On a theoretical level, it allows porting completeness properties of the set of all choreographies to the set of projectable choreographies. An example of the latter occurs in the study of _Core Choreographies_ (CC), a minimalistic theory of choreographic programming [8], where we show that the set of projectable choreographies in CC is Turing-complete in two steps. First, we show that CC is Turing-complete, ignoring the question of projectability (the choreographies constructed in the proof are clearly not projectable); then, we define an amendment procedure and prove an operational correspondence between choreographies and their amendments. As a consequence, the subset of projectable choreographies is also Turing-complete. A similar argument, using the operational correspondence result between projectable choreographies and their implementations, shows that the process calculus used (_Stateful Processes_, or SP) is Turing-complete.

The problem. Our original objective was to formalise amendment and its properties from [8] in the Coq theorem prover, building upon our previous formalisation of CC [10] and its accompanying notion of projection [9]. That formalisation uses a variation of CC based on the theory from [25], which we found more amenable to formalisation.
Unfortunately, after formalising the definition of amendment, our attempt to prove its operational correspondence result failed. An inspection of the state of the failed proof quickly led us to a counterexample. The incorrectness of the original statement jeopardises the subsequent developments that rely on it, in particular Turing completeness of the set of projectable choreographies and of SP. These results were instrumental in substantiating the claim that CC is a "good" minimalistic model for choreographic programming. This finding pointed us towards a more ambitious goal: reformulate the operational correspondence for amendment such that it is correct, and still powerful enough to obtain the aforementioned consequences.

Contribution. To the best of our knowledge, this is the first time that choreography amendment has been formalised. We state and prove a relaxed version of the operational correspondence between choreographies and their amendments in the Coq theorem prover, thus increasing confidence in its correctness. We discuss how working with an interactive theorem prover was instrumental to identifying counterexamples that guided us towards this new, correct formulation that considers all corner cases. We then use our result to formalise the proofs of Turing completeness of projectable choreographies and SP from [8], which were not included in [10].

Structure of the paper. We present the relevant background on CC and its formalisation in Section 2. Section 3 presents the definition of amendment, its formalisation, and discusses and corrects the operational correspondence result from [8]. Section 4 shows that the revised semantic property is still strong enough to derive the Turing completeness results in that work. We discuss related work in Section 5 and conclude in Section 6. Our exposition assumes some familiarity with interactive theorem proving. We include some Coq code in the article, but the work is intended to be accessible to non-Coq experts.

## 2 Background

We summarise the latest version of the Coq formalisation of CC [11]. For simplicity, we omit two ingredients that are immaterial for our work: the fact that the language is parameterised on a signature, and the fact that communications have annotations (these are meant to include information relevant for future implementations in actual programming languages). This allows us to omit some subterms that play no role in the development of amendment. In our presentation, we use Coq notation with some simplifications for enhanced readability: choreography and process terms are written overloading dots (this is not allowed by the Coq notation mechanism), and inductive definitions and inference rules are given with the usual mathematical notation.

### Core Choreographies

We start by giving an overview of Core Choreographies (CC) together with its formalisation in Coq [10].

Syntax. The syntax of CC is given by the following grammar.
\begin{tabular}{l|l|l} **Type** & **Variable** & **Description** \\ \hline Choreography & C & Choreographies \\ Pid & p, q, r, s & Process names (identifiers) \\ list Pid & ps & List of process names \\ Var & x, y, z & Variable names \\ Val & v & Values \\ Expr & e & Expressions (evaluate to values) \\ BExpr & b & Boolean expressions (evaluate to Booleans) \\ Label & l & Labels (left and right) \\ RecVar & X & Procedure names (or recursive variables) \\ DefSet & D & Sets of procedure definitions in CC \\ State & s & Maps from variables to values \\ Configuration & c & Choreographic programs equipped with states \\ TransitionLabel & t & Transition labels \\ list TransitionLabel & tl & Lists of transition labels \\ Behaviour & B & Behaviours \\ option Behaviour & mB & option monad for behaviours \\ Network & N & Networks \\ DefSetB & D & Sets of procedure definitions in SP \\ Program & P & Choreographies/networks with procedure definitions \\ \end{tabular}

**Table 1** Summary of types in the original Coq formalisation [10, 9].

```
C ::= \(\eta\); C | If p.b Then C1 Else C2 | Call X | RT_Call X ps C | End
\(\eta\) ::= p.e \(\longrightarrow\) q.x | p \(\longrightarrow\) q[l]
```

A choreography C can be either: a communication \(\eta\) followed by a continuation (\(\eta\); C); a conditional If p.b Then C1 Else C2, where the process p evaluates the boolean expression b to choose between the branches C1 and C2; a procedure call Call X, where X is the name of the procedure being invoked; a runtime term RT_Call X ps C;2 or the terminated choreography End. A communication \(\eta\) can be: a value communication p.e \(\longrightarrow\) q.x, read "process p evaluates expression e locally and sends the result to process q, which stores it locally in x"; or a selection p \(\longrightarrow\) q[l], where the label l can be either left or right, read "p sends label l to q".

Footnote 2: Runtime terms are needed for technical reasons in the definition of the semantics of choreographies [10]. These aspects are irrelevant for the present development.

Choreographies are formalised in Coq as an inductive type called Choreography. Table 1 summarises the Coq types used in this paper and our conventions for ranging over their elements.

Executing a choreography requires knowing the definitions of the choreographies associated to the procedures that can be invoked, as well as the processes involved in those procedures. A set of procedure definitions is defined as a mapping from procedure names to pairs of process names and choreographies.

```
Definition DefSet := RecVar -> (list Pid) * Choreography.
```

A _choreographic program_ is then a pair consisting of a set of procedure definitions and a choreography (which represents the "main" or "running" choreography).

```
Definition Program := DefSet * Choreography.
```

We write Procedures P and Main P for the two components of P. The set of all processes used by a program P is defined as CCP_pn P. It is standard practice to assume some well-formedness conditions about choreographies, e.g., that no process communicates with itself. Choreographic programs have additional well-formedness conditions that must hold for all procedures that can be reached at runtime.
This notion is not decidable in general, but it becomes so in the practical case of programs that only use a finite number of procedures. We return to this aspect at the end of Section 3.2, where it becomes relevant. The choreographies in Listings 1 and 2 are well-formed.

Semantics. The intuitive system assumptions in CC are that: processes run independently of each other (concurrently) and possess local stores (associating their variables to values); communications are synchronous; and the network is reliable (messages are not lost nor duplicated, and they are delivered in the right order between any two processes). These assumptions are imported from process calculi, where they are quite standard. Since processes run concurrently, it is possible to express choreographies with concurrent behaviour. Consider the following simplification of the factory example in [25].

```
o.order \(\longrightarrow\) p.x; o'.order' \(\longrightarrow\) p'.y; End
```

Listing 3: Parallel orders.

In Listing 3, two processes o and o' place their respective orders with two different providers p and p'. Since all processes are distinct and there is no causal dependency between the two communications, the two communications can in principle be executed in any order. This gives rise to a notion of out-of-order execution for choreographies.

The semantics of choreographies in [10] is given as a labelled transition system on configurations, which consist of a program and a (memory) state. States associate to each process a map from variable names to values, which defines the memory of that process.

```
Definition State := Pid -> Var -> Value.
```

States come with some notation: s [==] s' says that s and s' are extensionally equal, and s[p,x => v] is the state obtained from updating s with the mapping of (p,x) to v. With these concepts in place, we can show some representative transition rules for choreographic configurations in Figure 1.3 Transitions have the form (D,C,s) --[t]--> (D,C',s'), where t is a transition label that allows for observing what happened in the transition.

Footnote 3: In the actual formalisation, the transition relation was defined in two layers for technical reasons. This technicality is immaterial for our development, since our results follow from the rules shown here.

Rule CC_Com deals with the execution of a value communication from a process p to a process q: if the expression e at p can be evaluated to a value v (first condition, which uses the auxiliary function eval), then the communication term is consumed and the state of the receiver is updated such that its receiving variable x is now mapped to value v. The transition label TL_Com p v q denotes that p has communicated the value v to q, modelling what would be visible on a network. Rule CC_Sel is similar, but does not alter the state of the receiver (the role of selections will be clearer when we explain the language for modelling implementations of choreographies). The transition label TL_Sel p q l registers the communication of label l from p to q. Rule CC_Then deals with the case in which a process p can evaluate the guard b of a conditional to true (using the auxiliary function beval), proceeding to the then-branch of the conditional. The transition label TL_Tau p denotes that process p has executed an internal action (\(\tau\) is the standard symbol for such actions in process calculi).
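As a reading aid, here is a minimal Python sketch, our illustration rather than the paper's Coq development, of the effect of rules CC_Com and CC_Then on a toy choreography representation; it ignores the delay rules discussed next, and all names in it are ours.

```
# Illustrative only: a toy interpreter for the CC_Com and conditional rules.
from dataclasses import dataclass
from typing import Callable, Any

@dataclass
class Com:   # p.e --> q.x ; cont
    p: str; e: Callable[[dict], Any]; q: str; x: str; cont: Any

@dataclass
class Cond:  # If p.b Then c1 Else c2
    p: str; b: Callable[[dict], bool]; c1: Any; c2: Any

END = "End"

def step(chor, state):
    """One transition: returns (transition label, new choreography, new state)."""
    if isinstance(chor, Com):                       # rule CC_Com
        v = chor.e(state[chor.p])                   # eval e in p's local store
        state = {**state, chor.q: {**state[chor.q], chor.x: v}}
        return (("TL_Com", chor.p, v, chor.q), chor.cont, state)
    if isinstance(chor, Cond):                      # rules CC_Then / CC_Else
        branch = chor.c1 if chor.b(state[chor.p]) else chor.c2
        return (("TL_Tau", chor.p), branch, state)
    return None                                     # End: no transitions

# buyer.offer --> seller.x; End, starting from a simple two-process state
c = Com("buyer", lambda st: st["offer"], "seller", "x", END)
print(step(c, {"buyer": {"offer": 42}, "seller": {}}))
```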
Rule CC_Delay_Eta deals with out-of-order execution of communications, formalising the reasoning anticipated in Example 2. Specifically, the continuation of an interaction \(\eta\) is allowed to perform a transition (without affecting \(\eta\)) as long as the transition does not involve any of the processes in \(\eta\). The latter condition is checked by the first premise of the rule, disjoint_eta_rl \(\eta\) t, which checks that the processes mentioned in \(\eta\) are distinct from those mentioned by the transition label t. Rule CC_Delay_Cond applies the same reasoning to the out-of-order execution of conditionals. The reflexive and transitive closure of the transition relation is written --[tl]-->*, where tl is a list of transition labels.

**Example 3**.: For any D and s such that order evaluates to v at o and order' evaluates to v' at o', according to eval, the choreography in Listing 3 can execute its two communications in either order: rule CC_Com derives a first transition labelled TL_Com o v p, while rules CC_Delay_Eta and CC_Com together derive a first transition labelled TL_Com o' v' p'.

### Stateful Processes

The process calculus SP models systems of communicating processes, where the code of each process is given separately and communication is achieved when processes perform compatible I/O actions.

Syntax. The code of a process is written as a behaviour (B), following the grammar below.

```
B ::= p!e; B | p?x; B | p+l; B | p & mB1 // mB2 | If b Then B1 Else B2 | Call X | End
mB ::= None | Some B
```

These terms are the local counterparts to the choreographic terms of CC. The first two productions deal with value communication. Specifically, a send action p!e; B sends the result of evaluating e to the process p and then continues as B. Dually, a receive action p?x; B receives a value from p, stores it in x, and then continues as B. Selections are implemented by the primitives p+l; B and p & mB1 // mB2. The former sends the label l to the process p and continues as B. The latter is a branching term, where mB1 and mB2 are the behaviours that the process will execute upon receiving left or right, respectively. To cover the case where a process does not offer a behaviour for a specific label, mB1 and mB2 have type option Behaviour. Conditionals (If b Then B1 Else B2), procedure calls (Call X), and the terminated behaviour (End) are standard.

Processes are intended to run together in networks. These are formalised as maps from processes to behaviours.

```
Definition Network := Pid -> Behaviour.
```

Networks come with some convenient notation for their construction: p[B] is the network that maps p to B and all other processes to End; and N | N' is the composition of N and N'. In particular, (N | N') p returns N p if this is different from End, and N' p otherwise.4

Footnote 4: This asymmetry does not matter for our results, since we never compose networks that define nonterminated behaviours for the same processes.

**Example 4**.: The following network implements the choreography in Listing 2.

```
buyer[seller!offer; seller & Some(seller?y; End) // Some End] |
seller[buyer?x; If acceptable(x) Then buyer+left; buyer!product; End Else buyer+right; End]
```

For the semantics of networks, we need two additional ingredients. The network N \(\backslash\) p is obtained from N by redefining p's behaviour as End (p is "removed" from N). The relation N (==) N' holds if the networks N and N' are extensionally equal. As in CC, processes in a network can invoke procedures defined in a separate set.

```
Definition DefSetB := RecVar -> Behaviour.
```

A Program in SP consists of a set of procedure definitions and a network.

```
Definition Program := DefSetB * Network.
```
We use D to range over elements of DefSetB and P to range over elements of Program, as for choreographies (the difference will be clear from the context).

Semantics. The semantics of SP is also given as a labelled transition system on configurations that consist of a program and a memory state, as in CC. A selection of the transition rules defining this semantics is displayed in Figure 2.

Figure 2: Semantics of network configurations (selected rules).

Rule SP_Com matches a send action at a process p with a compatible receive action at another process q (conditions N p = q!e; B and N q = p?x; B'). The resulting network N' is obtained from N by replacing the behaviours of these processes with their continuations (N \(\backslash\) p \(\backslash\) q | p[B] | q[B']). The update to the state is handled as in CC. Rule SP_LSel and its dual SP_RSel model, respectively, the selection of the left and right branches offered by a branching term, by inspecting the label sent by the sender. Rule SP_Then captures the case in which a conditional enters its then-branch.

### Endpoint Projection (EPP)

Choreographies are compiled to networks by a procedure called behaviour projection. This procedure is a partial function, and since all functions in Coq are total, it was formalised as the following inductive relation.

```
bproj : DefSet -> Choreography -> Pid -> Behaviour -> Prop
```

Term bproj D C p B, written [D,C | p] == B, reads "the projection of C on p in the context of the set of procedure definitions D is B".5

Footnote 5: The parameter D is used for projecting procedure calls, which is irrelevant for our development.

Intuitively, behaviour projection is computed by going through the choreography; for each choreographic term, projection constructs the local action that the input process should perform to implement it. The rules defining bproj that are relevant for this work are those that deal with selections and conditionals. These are shown in Figure 3. A label selection p \(\longrightarrow\) q[l] is projected as either: (i) the sending of label l to q for process p (rule bproj_Pick); (ii) the appropriate branching term that receives l from p for process q, where only the branch for l offers a behaviour (rule bproj_Left and the dual rule bproj_Right); or (iii) no action for any other process (rule bproj_Sel). Similarly, a conditional in a choreography is projected to a conditional for the process that evaluates the guard (rule bproj_Cond). However, projecting conditionals becomes complex when considering the other processes, because this requires dealing with the problem of knowledge of choice discussed in Section 1. This case is handled by rule bproj_Cond3, which sets the result of projection to be the "merging" of the projections of the two branches, written B1 [V] B2 == B, if this is defined. Intuitively, merging attempts to build a behaviour B from two behaviours B1 and B2 that have similar structures, but may differ in the labels that they accept in branching terms. For all terms but branchings, merging requires term equality and then proceeds homomorphically in subterms. This is exemplified by the rules merge_End, merge_Sel, and merge_Cond in Figure 4. The interesting part regards the merging of branching terms, which has a rule for every possible combination. Figure 4 shows two representative cases.
If two branching terms have branches for different labels, then we obtain a branching term where the two branches are combined, as exemplified by rule merge_Branching_SNNS. If two branching terms have overlapping branches, then we try to merge them, as exemplified by rule merge_Branching_SSSS.6

Footnote 6: Due to space constraints, the names of these rules have been abbreviated in Figure 4.

Figure 4: Definition of the merge relation (selected rules).

As we remarked, merging (seen as a partial function) can be undefined: for example, End and p+l; End cannot be merged. This gives rise to the notion of _projectability_ anticipated in Section 1: a choreography C is projectable on a process p in the context of a set of procedure definitions D if bproj is defined for those parameters.

```
Definition projectable_B D C p := exists B, [D,C | p] == B.
```

This is generalised by projectable_C D C ps, which states that C is projectable for all processes in the list ps. For a choreographic program P to be projectable, written projectable_P P, we require that Main P be projectable for all processes in CCP_pn P and that all procedures be projectable for the processes that they use.

Figure 3: Selected rules for behaviour projection.

With projectability in place, Endpoint Projection (EPP) is defined as a function that maps a projectable choreographic program to a process program in SP.

```
Definition epp P : projectable_P P -> Program.
```

The second argument of epp is a proof of projectable_P P, but it is shown that the result does not depend on this term. The behaviours of buyer and seller in Example 4 are the respective projections for these two processes of the choreography in Listing 2.

### Turing completeness

The authors of [10] formalise that CC is Turing-complete, in the sense that all of Kleene's partial recursive functions [19] can be implemented as a choreography, for a suitable notion of implementation. The proof is interesting because it considers CC instantiated with very restricted computational capabilities at processes: values are natural numbers; expressions can only be the constant zero, a variable, or the successor of a variable; and Boolean expressions can only check if the two variables at a process contain the same value. Kleene's partial recursive functions are then implemented concurrently, by making processes communicate according to appropriate patterns.

A choreographic program P implements f : PRFunction m (representing a partial recursive function \(f:\mathbb{N}^{m}\rightarrow\mathbb{N}\)) with input processes ps\({}_{1},\ldots,\texttt{ps}_{m}\) and output process q iff: for any state s where ps\({}_{1},\ldots,\texttt{ps}_{m}\) contain the values n\({}_{1},\ldots,\texttt{n}_{m}\) in their variable x, (i) if \(f(\texttt{n}_{1},\ldots,\texttt{n}_{m})=\texttt{n}\), then all executions of P from s terminate, and do so in a state where q stores n in its variable x; and (ii) if \(f(\texttt{n}_{1},\ldots,\texttt{n}_{m})\) is undefined, then execution of P from s never terminates.7 This is captured by the Coq term implements P m f ps q, where ps is the vector ps\({}_{1},\ldots,\texttt{ps}_{m}\).

Footnote 7: This is a straightforward adaptation of the definition of function implementation by a Turing machine [28].

The proof of Turing completeness encodes partial recursive functions to choreographies that are not always projectable, since they contain no selections but some processes behave differently in conditionals.

## 3 Amendment

Several works have studied how unprojectable choreographies can be automatically amended to obtain projectable versions [1, 8, 20].
In particular, [8] developed an amendment procedure based on merging. The idea is that, whenever a choreography contains a conditional, amendment adds selections, in both branches, from the process evaluating the guard to any processes whose behaviour projection is undefined. Intuitively, this makes the output choreography projectable. Let C be the choreography

```
p.e \(\longrightarrow\) q.x; If r.b Then r.e' \(\longrightarrow\) p.y; End Else End
```

Amendment is claimed to have the following properties.

[Amendment Lemma [8], rephrased] For every choreography C:

1. The amendment of C is well-formed.
2. The amendment of C is projectable.
3. If DA, A, and A' are obtained by amending all procedures in D as well as C and C', then (D,C,s) --[tl]-->* (D,C',s') iff (DA,A,s) --[tl']-->* (DA,A',s') for some tl'.

In point one, well-formedness refers to a set of syntactic conditions that exclude ill-written choreographies, e.g., self-communications (interactions where a process communicates with itself) [8]. Points one and two are simple to prove by induction on the structure of the choreography. Point three, unfortunately, is wrong. When attempting to formalise this result, we failed, and the state of the proof led us to the following counterexample. Given a suitable state, the choreography C defined above can make a transition to C' defined as

```
p.e \(\longrightarrow\) q.x; r.e' \(\longrightarrow\) p.y; End
```

by rules CC_Delay_Eta and CC_Then. However, C's amendment A can move to

```
p.e \(\longrightarrow\) q.x; r \(\longrightarrow\) p[left]; r.e' \(\longrightarrow\) p.y; End
```

by the same rules, but this is neither the amendment of C', nor can it reach it, since the offending selection term is blocked by the initial communication. In hindsight, this is not so surprising: amendment introduces causal dependencies that were not present in the source choreography. However, this intuition was completely missed by both authors and reviewers of the original publications discussing amendment [7, 8]. Therefore, amending a choreography can remove some execution paths. In the rest of this section, we show how to define amendment formally in Coq, and formulate a correct variation of Lemma 7.

### Definition

We decompose the definition of amendment in three functions: one for identifying the processes that need to be informed of the outcome of a specific conditional; one for prepending a list of selections to a choreography; and one that recursively amends a whole choreography by using the former two. This division simplifies not only the definition, but also the structure of proofs about amendment, since they can be modularised.

To identify the processes that require knowledge of choice, we define a function up_list (up is short for "unprojectable processes"). This function recursively goes through a list ps of processes and checks for each process in the list whether the choreography If p.b Then C1 Else C2 can be projected on that process (function projectable_B_dec does precisely this test). If this is not the case, then the process is added to the result. (Since projectability is relative to a set of procedure definitions, this also needs to be given as an argument, D.)

```
Fixpoint up_list D p b ps C1 C2 : list Pid :=
  match ps with
  | nil => nil
  | r :: ps' => let ps'' := up_list D p b ps' C1 C2 in
      if (r =? p) then ps''
      else if projectable_B_dec D (If p.b Then C1 Else C2) r
           then ps''
           else (r :: ps'')
  end.
```

Note that p, as the evaluator of the conditional, does not need to be informed of the outcome.
This justifies the check r =? p, whose inclusion also avoids introducing self-communications and simplifies subsequent proofs.

The second ingredient is straightforward: given a process p, a selection label l, and a choreography C, it recursively adds selections of l from p to each element of a list ps.

```
Fixpoint add_sels p l ps C : Choreography :=
  match ps with
  | nil => C
  | r :: ps' => p \(\longrightarrow\) r[l]; add_sels p l ps' C
  end.
```

We can now define amendment following the informal procedure described in [8]. Given a list of processes ps, we go through a choreography C; whenever we meet a conditional on a process p, we compute the list of processes from ps with an undefined projection and prepend the branches of the conditional with appropriate selections. (We show only the most interesting cases.)

```
Fixpoint amend D ps C :=
  match C with
  | eta; C' => eta; (amend D ps C')
  | If p.b Then C1 Else C2 =>
      let l := up_list D p b ps (amend D ps C1) (amend D ps C2) in
      If p.b Then (add_sels p left l (amend D ps C1))
             Else (add_sels p right l (amend D ps C2))
  ...
  end.
```

Amendment is generalised to sets of procedure definitions in the obvious way.

```
Definition amend_D D ps : DefSet := fun X => (fst (D X), amend D ps (snd (D X))).
```

To amend a program P, the parameter ps of the previous functions is instantiated with the set of processes used in P.

```
Definition amend_P P := (amend_D (Procedures P) (CCP_pn P), amend (Procedures P) (CCP_pn P) (Main P)).
```

This formal definition corresponds to the informal one given in [8]. In particular, all our examples are formalised in Coq.

Consider the following choreography.

```
If p.b Then p.e \(\longrightarrow\) q.x; q.e' \(\longrightarrow\) r.x; End Else q.e'' \(\longrightarrow\) r.x; End
```

Here, p decides if (i) it will communicate a value to q that can be used in the computation of a later message from q to r (so q acts as a sort of proxy) or (ii) q should just compute the value that it will communicate to r by itself. Amendment is smart enough to notice that while q requires a selection from p, r does not, since it behaves in the same way in both branches (receive from q on x). Therefore, amending the choreography returns the following.

```
If p.b Then p \(\longrightarrow\) q[left]; p.e \(\longrightarrow\) q.x; q.e' \(\longrightarrow\) r.x; End Else p \(\longrightarrow\) q[right]; q.e'' \(\longrightarrow\) r.x; End
```

### Syntactic Properties

We now discuss the key properties of amendment. Amendment preserves well-formedness of choreographies (Choreography_WF) and choreographic programs (Program_WF). This follows from the fact that add_sels preserves all syntactic properties of well-formedness, using induction.

```
Lemma amend_Choreography_WF : Choreography_WF C -> Choreography_WF (amend D ps C).

Lemma amend_Program_WF : Program_WF (D,C) -> Program_WF (amend_D D ps, amend D ps C).
```

(For simplicity, we omit universal quantifiers at the beginning of lemmas.) Likewise, it is straightforward to prove that amending for some processes guarantees that the output choreography is projectable on all those processes.

```
Lemma amend_projectable_C : projectable_C (amend_D D ps) (amend D ps C) ps.
```

We do not generalise this result to choreographic programs: it is not straightforward to do, and our later development does not need it. The issue we encounter is related to a problem discussed in [10, 9]: computing the set of processes and procedures that are used by a choreography can require an infinite number of steps, and is therefore not definable as a function in Coq. (A simple example is a program with an infinite set of procedure definitions where each procedure X\({}_{i}\) invokes the next procedure X\({}_{i+1}\).) The function CCP_pn used in the definition of amend_P does return the set of processes involved in a program P, but it does not check that P does not define unused procedures. If this is the case, these procedures may use processes not in CCP_pn P, and therefore they may be unprojectable for these processes.
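To see the three functions above fit together, here is a rough Python transliteration over a toy tuple-based AST; it is our sketch only, and projectable_on is a caller-supplied stand-in for the decidable check projectable_B_dec.

```
# A rough Python transliteration of up_list/add_sels/amend on a toy AST;
# this is a schematic of the recursion, not the Coq development.

def up_list(p, ps, cond, projectable_on):
    """Processes in ps (other than p) on which `cond` is not projectable."""
    return [r for r in ps if r != p and not projectable_on(cond, r)]

def add_sels(p, label, rs, chor):
    """Prepend a selection p -> r[label] for every r in rs."""
    for r in reversed(rs):
        chor = ("sel", p, r, label, chor)
    return chor

def amend(ps, chor, projectable_on):
    kind = chor[0] if isinstance(chor, tuple) else chor
    if kind in ("com", "sel"):                       # eta; C'
        return chor[:-1] + (amend(ps, chor[-1], projectable_on),)
    if kind == "if":                                 # If p.b Then C1 Else C2
        _, p, b, c1, c2 = chor
        a1 = amend(ps, c1, projectable_on)
        a2 = amend(ps, c2, projectable_on)
        need = up_list(p, ps, ("if", p, b, a1, a2), projectable_on)
        return ("if", p, b, add_sels(p, "left", need, a1),
                            add_sels(p, "right", need, a2))
    return chor                                      # Call X / End

# Demo: pretend only r can project the conditional, so q gets a selection.
demo = ("if", "p", "b", ("com", "p", "e", "q", "x", "End"), "End")
print(amend(["p", "q", "r"], demo, lambda cond, r: r == "r"))
```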
Rather than stating a result with complex side-conditions as hypotheses, we prove projectability of particular programs by applying amend_projectable_C to Main P and to the bodies of all procedure definitions. The development in the next section uses this strategy.

### Semantic Properties

We now discuss how the formulation of the semantic relation between a choreography and its amendment needs to be changed. The counterexample shown earlier suggests allowing both choreographies to perform additional transitions in order to unblock and remove lingering selections introduced by amendment. (In our example, this would be the communication from p to q.) The correspondence would then look as follows, where the dotted lines correspond to existentially quantified terms, and the list of transition labels tl can be obtained from tl' by removing some selections.

Our attempt to prove this result showed that it holds for all cases but one: when the transition t is obtained by applying rule CC_Delay_Cond. We show a minimal counterexample. Consider the choreography

```
If p.b Then (q.e \(\longrightarrow\) r.x; q.e' \(\longrightarrow\) p.x; End) Else (q.e \(\longrightarrow\) r.x; End)
```

and its amendment

```
If p.b Then (p \(\longrightarrow\) q[left]; q.e \(\longrightarrow\) r.x; q.e' \(\longrightarrow\) p.x; End) Else (p \(\longrightarrow\) q[right]; q.e \(\longrightarrow\) r.x; End).
```

The original choreography can execute the communication between q and r, reaching

```
If p.b Then (q.e' \(\longrightarrow\) p.x; End) Else End
```

but its amendment needs to run the conditional and a selection before it can execute the same communication. There are two ways to solve this problem: changing the definition of amendment, or refining the correspondence result further. We opted for the second route, for two reasons: first, we get to keep the original definition given on paper in [8]; second, making amendment clever enough to recognise this kind of situation requires a non-local analysis of the choreography (i.e., looking at the structure of the branches of conditionals instead of simply checking for projectability of the term). In our example, such an analysis could detect that the additional selections from p to q could be added only after the communication from q to r, solving the issue.

Therefore, our final correspondence result requires that the amendment of a choreography be allowed to perform additional transitions _before_ it matches the transition performed by the original choreography. Since a transition may invoke rule CC_Delay_Cond more than once, this means that the orders of the transitions performed by the original choreography and its amendment can be arbitrary permutations of each other (ignoring the extra selections). The correspondence result we prove looks as follows, where t :: tl can be obtained from tl' by removing some selections and permuting labels.

To formalise this in Coq, we introduce a relation sel_exp ("selection expansion") between lists of transition labels.

```
Inductive sel_exp : list TransitionLabel -> list TransitionLabel -> Prop :=
| se_base tl tl' : Permutation tl tl' -> sel_exp tl tl'
| se_extra p q l tl tl' tl'' : sel_exp tl tl' ->
    Permutation (TL_Sel p q l :: tl') tl'' -> sel_exp tl tl''.
```

We can now prove a correct version of the correspondence between choreographies and their amendments. There are four results in total: the one depicted above and its generalisation to the case where t is replaced with a list of transition labels; and the two dual results where the amendment of a choreography moves first. We show the two more general statements.
```
Lemma amend_complete_many : Program_WF (D,C) ->
  (D,C,s) --[tl]-->* (D,C',s') ->
  exists tl' tl'' C'' s'', sel_exp (tl ++ tl') tl''
    /\ (D,C',s') --[tl']-->* (D,C'',s'')
    /\ (amend_D D ps, amend D ps C, s) --[tl'']-->*
       (amend_D D ps, amend D ps C'', s'').

Lemma amend_sound_many : Program_WF (D,C) ->
  (amend_D D ps, amend D ps C, s) --[tl]-->* (amend_D D ps, A, s') ->
  exists tl' tl'' C'' s'', (amend_D D ps, A, s') --[tl']-->*
       (amend_D D ps, amend D ps C'', s'')
    /\ (D,C,s) --[tl'']-->* (D,C'',s'') /\ sel_exp tl'' (tl ++ tl').
```

The challenging part of the work in this section was understanding what the correct formulation of these results should be. Once we reached this formulation, proofs were relatively straightforward inductions on the given transitions (10–15 lines of Coq code per case). The formalisation of the amendment lemma consists of 6 definitions, 50 lemmas, and 4 examples, with a total of roughly 1050 lines of Coq code.

## 4 Implications of Amendment

In the previous section, we had to weaken the original statement for the semantic correspondence guaranteed by amendment that was given in [8]. Since the original statement was used in the proofs of Turing completeness for projectable core choreographies and SP, it is natural to investigate whether our new formulation still yields these results.

For uniformity, we start by reformulating the Turing completeness result for core choreographies from [10], where process names are identified with natural numbers.

```
Theorem CC_Turing_Complete : forall n (f : PRFunction n),
  exists P, Program_WF P /\ implements P f (vec_1_to_n) 0.
```

The theorem states that, for any partial recursive function f, there exists a well-formed choreographic program P that implements f with input processes 1,..., n and output process 0. The proof is a straightforward combination of results already presented in [10]. Combining this result with our lemmas about amendment yields that the fragment of projectable core choreographies is also Turing-complete.

### Now It Compiles!

```
Lemma projCC_Turing_Complete : forall n (f : PRFunction n),
  exists P, Program_WF P /\ projectable_P P /\ implements P f (vec_1_to_n) 0.
```

The proof is split into several steps. The most interesting sublemma is the one establishing that amending a choreography that implements a function yields a choreography that implements the same function. This is formulated as a general result about amendment.

```
Lemma amend_implements : Program_WF P ->
  implements P f ps q -> implements (amend_P P) f ps q.
```

The proof uses the fact that terminated choreographies cannot execute further to show that the list of additional transitions added to the original choreography by the amendment lemma (tl in the last diagram) must be empty. The remaining lemmas for projCC_Turing_Complete deal with projectability of the amended choreography, as discussed in the previous section, and are simple to prove. Since amended choreographies are projectable, we can further apply the EPP theorem from [9] to show that SP is also Turing-complete.

```
Theorem SP_Turing_Complete : forall n (f : PRFunction n),
  exists P, SP_implements P f (vec_1_to_n) 0.
```

The definition of SP_implements is a straightforward adaptation of the definition of implements for choreographies. The proof of SP_Turing_Complete follows a similar strategy to the one for projCC_Turing_Complete: we prove a sublemma epp_implements stating that the EPP of a choreography that implements a function f is a process program that implements f. The formalisation of this section consists of 2 definitions and 11 lemmas, totalling about 250 lines of Coq code. The conciseness of this development substantiates our previous comment on not providing a complex lemma for projectability of programs, at the end of Section 3.2.
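Under the definition above, tl'' expands tl exactly when tl'' is a permutation of tl plus finitely many extra TL_Sel labels. The following throwaway Python check, our illustration with transition labels encoded as tuples, captures that reading.

```
# A throwaway check of the reading above: tl2 expands tl iff tl2 is a
# permutation of tl plus extra selection labels. Our illustration only.
from collections import Counter

def is_sel(t):
    return t[0] == "TL_Sel"      # labels as tuples, e.g. ("TL_Sel", p, q, l)

def sel_exp(tl, tl2):
    missing = Counter(tl) - Counter(tl2)
    extra = Counter(tl2) - Counter(tl)
    return not missing and all(is_sel(t) for t in extra)

print(sel_exp([("TL_Com", "p", 1, "q")],
              [("TL_Sel", "p", "q", "left"), ("TL_Com", "p", 1, "q")]))  # True
```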
## 5 Related Work To the best of our knowledge, our work is the first formalisation of amendment, its properties, and its intended consequences. The work nearest to ours is the original presentation of the amendment procedure that inspired us [8]. As we discussed, the behavioural correspondence for amendment that the authors state is wrong. We developed a correct statement and managed to update and formalise the proofs of Turing completeness for CC and SP accordingly. Our formalisation of the behavioural correspondence also clarifies what semantic property amendment actually guarantees, which might be important for future work and practical applications of amendment. Amendment or similar procedures have been investigated also for other choreographic languages [1, 20]. In all these works, the general idea is to make choreographies projectable by adding communications as needed. However, the differences between the underlying languages make the procedures very different from ours, which is based on merging. Merging was first introduced in [2]. Our work is based on the most recent version of the formalisation of CC, SP, and EPP [11], which was originally introduced in [9, 10]. We did not need to modify this formalisation in order to use it for our development, which shows that it reached a sufficient level of maturity for being used as a library to reason about choreographies. Other formalisations of choreographies include: Kalas, a choreographic programming language that targets CakeML [27]; the choreographic DSL Zooid, a Coq library for verifying that message passing code respects a given multiparty session type (these are abstract choreographies without computation) [6]; and multiparty GV, a formalised functional language with a similar goal to Zooid [17]. ## 6 Conclusion We have presented the first formalisation of an amendment procedure for choreographies. Our work is based on a previous formalisation of CC and its accompanying notion of EPP, which we used as a library. We found this formalisation to be modular and complete enough to support the separate development presented here. In the same spirit of generality and reusability, our formalisation does not add any assumptions about CC that were not present in the library. Our development is an illustration of how theorem provers can assist in research: interacting with Coq guided us to (i) discovering that the semantic property of amendment found in the background literature for this work is wrong, and (ii) a correct formulation that is still powerful enough for its intended use in previous work. The formalisation of amendment is amenable to extraction, and therefore our work offers a basis for a certified transformer from arbitrary choreographies in CC to projectable ones. In the future, we plan on studying how this transformer can be integrated into existing frameworks for choreographic programming. Our notion of amendment is intrinsically related to how EPP is defined for CC. In the literature, there are choreographic languages with a more permissive notion of knowledge of choice, e.g., where replicated processes intended to be used as services are allowed to be involved in only one branch of a conditional [2, 4]. It would be interesting to study how amendment can be adapted to these settings.
2309.10722
LEA*: An A* Variant Algorithm with Improved Edge Efficiency for Robot Motion Planning
In this work, we introduce a new graph search algorithm, lazy edge-based A* (LEA*), for robot motion planning. By using an edge queue and exploiting the idea of lazy search, LEA* is optimally vertex efficient, similar to A*, and has improved edge efficiency compared to A*. LEA* is simple and easy to implement, with minimal modification to A*, resulting in a very small overhead compared to previous lazy search algorithms. We also explore the effect of inflated heuristics, which results in the weighted LEA* (wLEA*). We show that the edge efficiency of wLEA* becomes close to that of LazySP and thus is near-optimal. We test LEA* and wLEA* on 2D planning problems and on the planning of a 7-DOF manipulator. We perform a thorough comparison with previous algorithms by considering sparse, medium, and cluttered random worlds, and small, medium, and large graph sizes. Our results show that LEA* and wLEA* are the fastest algorithms at finding the plan compared to previous algorithms.
Dongliang Zheng, Panagiotis Tsiotras
2023-09-19T16:04:09Z
http://arxiv.org/abs/2309.10722v1
# LEA*: An A* Variant Algorithm with Improved Edge Efficiency for Robot Motion Planning

###### Abstract

In this work, we introduce a new graph search algorithm, lazy edge-based A* (LEA*), for robot motion planning. By using an edge queue and exploiting the idea of lazy search, LEA* is optimally vertex efficient, similar to A*, and has improved edge efficiency compared to A*. LEA* is simple and easy to implement, with minimal modification to A*, resulting in a very small overhead compared to previous lazy search algorithms. We also explore the effect of inflated heuristics, which results in the weighted LEA* (wLEA*). We show that the edge efficiency of wLEA* becomes close to that of LazySP and thus is near-optimal. We test LEA* and wLEA* on 2D planning problems and on the planning of a 7-DOF manipulator. We perform a thorough comparison with previous algorithms by considering sparse, medium, and cluttered random worlds, and small, medium, and large graph sizes. Our results show that LEA* and wLEA* are the fastest algorithms at finding the plan compared to previous algorithms.

## I Introduction

We consider the shortest path problem on a graph, with path planning for robotics applications in mind. A graph \(G\) is given by a vertex set \(V\), an edge set \(E\), and an edge cost/weight set \(W\). Given two query vertices \(v_{i},v_{j}\in V\), the shortest path problem is to find the minimum cost path \(\tau^{*}\) connecting \(v_{i}\) and \(v_{j}\) through the graph \(G\), if such a path exists. For robot path planning, sampling-based methods [1, 2, 3] are popular for making the problem tractable. The planning space is abstracted as a graph, where vertices represent robot configurations and edges represent movements of the robot. Then, graph search algorithms are employed to find a path for the robot.

Numerous algorithms have been developed to solve the shortest path problem. For example, A* [4] is a popular algorithm that is vertex optimal. That is, any other algorithm that finds the same shortest path will expand at least as many vertices when using the same heuristic. The procedure of computing the cost of an edge is called _edge evaluation_. One practical issue with robot path planning is that the edge costs \(W\) may be unknown when starting the graph search. This is the case when the environment is unknown, and we do not know whether the edges are in collision with obstacles or not. Then, edge evaluations are performed online, as part of the graph search algorithm, to compute the edge costs (e.g., to perform collision checking). Even if the environment is known before starting the graph search, evaluating all edges beforehand is unnecessary, since only some of the edges will probably be visited when searching for the solution. Thus, performing edge evaluation online saves time compared to evaluating all edges before the graph search.

In many robotic motion planning problems, edge evaluation, including performing collision checking, is the major computational bottleneck [5]. Lazy search algorithms, which aim to reduce the number of edge evaluations, have been developed. Two notable lazy search algorithms are LWA* [6] and LazySP [7]. Lazy search algorithms use a heuristic for the edge cost, which is easy to compute and provides a lower bound on the true edge cost. The edge heuristic is used to guide the graph search so that edge evaluation is done only when necessary. LazySP has also been proved to be optimal with respect to minimizing edge evaluations [8].
Intuitively, lazy search algorithms reduce the overall planning time when edge evaluations dominate it, by reducing the number of edge evaluations. However, lazy search algorithms reduce edge evaluations at the expense of extra graph operations (vertex expansions, computing the current best plan), which may introduce substantial computational overhead compared to A*. The ultimate goal is to achieve optimal edge efficiency while, at the same time, introducing minimal computational overhead. In this paper, we propose the LEA* algorithm, which is shown to have the same vertex efficiency and edge efficiency as LWA*, but with a lower computational cost and memory cost. Through simulation studies, we also show that the edge efficiency of weighted LEA* (wLEA*) is close to the optimum achieved by LazySP, without the extra graph operations. The contributions of this paper are summarized as follows:

* We propose the LEA* algorithm for the shortest path planning problem. LEA* uses an edge queue and imposes minimal modifications on A*, which results in a small overhead compared with previous lazy search algorithms.
* We prove the completeness, optimality, and optimal vertex efficiency of LEA*.
* We show that the edge efficiency of wLEA* with an inflated heuristic is similar to that of LazySP; thus, wLEA* has near-optimal edge efficiency.
* We test LEA* on various randomly generated environments and perform a thorough comparison with previous algorithms, demonstrating the efficiency of LEA*.

## II Related Works

Graphs offer a powerful abstraction tool for robot path and motion planning. Graphs can be divided into explicit graphs and implicit graphs. When combined with sampling-based methods, algorithms such as PRM [1], PRM* [9], and BIT* [3] solve planning problems in very high-dimensional spaces. They construct a graph of robot configurations and find the shortest path by exploiting this graph. These algorithms use explicit graphs. On the other hand, implicit graphs are defined implicitly using state lattices or motion primitives [10, 11, 12]. By constructing motion primitives offline, they are beneficial in dealing with kinematic constraints, non-holonomic constraints, and differential constraints. Planning problems for ground vehicles and micro aerial vehicles are studied in [11] and [12], respectively. The graph search algorithms studied in this paper can be used with both explicit graphs and implicit graphs.

The A* search algorithm is a popular search algorithm that is vertex optimal [4]. Recent studies show that A* is not edge optimal [7]. By employing a lazy approach, the number of edge evaluations can be reduced [2, 6, 13]. For instance, LWA* [6] uses a one-step lookahead to postpone edge evaluation. It allows duplicate vertices in the vertex queue: a valid vertex with an estimated cost that is popped from the vertex queue is inserted into the queue again after edge evaluation. LazySP [7] uses an infinite-step lookahead and is shown to be edge optimal (evaluating the minimum number of edges). As the number of lookahead steps increases, the number of edge evaluations required to find the shortest path decreases, while the additional graph operations increase. Therefore, LRA* [8] interpolates between LWA* and LazySP and uses a constant lookahead in the interval \([1,\infty]\). A modified version of LWA* is also presented in [7] that uses both a vertex queue and an edge queue. Compared to LWA*, our proposed LEA* algorithm shows that the vertex queue is unnecessary.
By maintaining only an edge queue, LEA* achieves the same vertex efficiency and edge efficiency as LWA*. Thus, LEA* has a lower computational cost and possibly a lower memory usage. The advantage is more evident with large graphs, as shown in the evaluation section. The weighted LEA* (wLEA*) has similar edge efficiency to LazySP, without the extra graph operations needed by LazySP. In [14], the generalized lazy search algorithm (GLS) is proposed, which unifies LWA*, LazySP, and LRA*. It also uses prior information about the planning environment to accelerate the search. A lazy incremental algorithm dealing with dynamically changing graphs is also introduced in [15] by combining the idea of lazy search with lifelong planning [16].

## III Problem formulation

We consider the problem of finding the shortest path between a starting vertex \(v_{s}\) and a goal vertex \(v_{g}\) on a graph \(G=(V,E)\), where \(V\) is the vertex set and \(E\) is the edge set of the graph. Each edge \(e\) has a cost \(w(e)\), and the edge cost set is denoted as \(W\). We consider the case when the edge cost set \(W\) is unknown at the beginning of the graph search. This setting has practical advantages for robot path/motion planning problems. Evaluating all edges before the graph search is unnecessary, since most likely only part of the edges will be visited during the graph search to find the shortest path. Path planning for robots involves two main steps: constructing the graph and graph search. Since edge evaluation (i.e., checking whether the robot collides with the environment when moving along an edge) is expensive, evaluating only the edges that are visited during the search reduces the total planning time. Another advantage of this problem setting is that the graph \(G\) is environment-independent. Since we do not perform collision checking when building the graph, \(G\) can be used with different environments with different obstacles.

We assume that an admissible and consistent heuristic of the edge cost is available, which is easier to compute than the true cost. Specifically, we consider the special case where the estimated edge cost \(\hat{w}\) is equal to the true cost \(w\) without considering obstacles, and \[w(e)=\begin{cases}\hat{w}(e),&\text{if $e$ is not in collision},\\ \infty,&\text{if $e$ is in collision},\end{cases} \tag{1}\] where \(\hat{w}(e)\) is the length of the edge assuming it is collision-free.

## IV The LEA* Algorithm

In this section, we provide a detailed description of the proposed lazy edge-based A* (LEA*) algorithm. LEA* uses the idea of lazy search, and its tree expansion is ordered using an edge queue. We design LEA* to impose minimal changes on the original A* algorithm and to have as little computational overhead as possible. Similar to A*, we define the _cost-to-come_ and the heuristic _cost-to-go_ of a vertex \(v\) as \(g(v)\) and \(h(v)\), which are the actual cost from \(v_{s}\) to \(v\) given the current search tree and the estimated cost from \(v\) to \(v_{g}\), respectively. Here, \(h(\cdot)\) is an admissible and consistent heuristic. The total estimated cost of the path passing through \(v\) is given by \[f(v)=g(v)+h(v). \tag{2}\] An edge is given by \(e=(\underline{v},\overline{v})\), where \(\underline{v}\) is the source vertex and \(\overline{v}\) is the target vertex. We define the _total estimated cost of edge \(e\)_ as \[f(e)=g(\underline{v})+\hat{w}(e)+h(\overline{v}), \tag{3}\] which is the total estimated cost of the path from \(v_{s}\) to \(v_{g}\) passing through \(e\).
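To make the cost model concrete, the following is a minimal Python sketch of the lazy edge-cost model (1) and the key computations (2) and (3). Vertices are treated as coordinate tuples, and `segment_collides` is an assumed, user-supplied collision checker; these names are ours and do not come from the paper's implementation.
```python
import math

def w_hat(u, v):
    """Estimated edge cost: Euclidean length, ignoring obstacles (cheap)."""
    return math.dist(u, v)

def true_cost(u, v, segment_collides):
    """True edge cost per (1): infinite if the edge is in collision,
    otherwise its length. `segment_collides` is an assumed collision checker."""
    return math.inf if segment_collides(u, v) else w_hat(u, v)

def f_edge(g, u, v, h):
    """Total estimated cost of edge (u, v), eq. (3); uses only the lazy
    estimate w_hat, so ordering the edge queue triggers no collision checks."""
    return g[u] + w_hat(u, v) + h(v)
```
The point of this split is that \(\hat{w}\) is cheap and is the only quantity needed to order the edge queue; the expensive `true_cost` is deferred until an edge is actually popped.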
The LEA* algorithm is given in Algorithm 1. In Line 1, the edges from the starting vertex \(v_{s}\) to its successors \(v_{i}\) are added to the edge queue \(Q_{E}\). The edges in \(Q_{E}\) are ordered by their \(f\)-values given by (3). In Line 3, the best edge \(e=(\underline{v},\overline{v})\), which has the smallest _key_, is popped from \(Q_{E}\). Here _key_ is the \(f\)-value of \((\underline{v},\overline{v})\) and is used in the termination condition in Line 4. If the cost-to-come of the goal vertex, \(g(v_{g})\), is less than or equal to _key_, we have found the shortest path. The true edge cost \(w\) is computed in Line 6. If \((\underline{v},\overline{v})\) is in collision, \(w=\infty\). Otherwise, \(w\) is the distance between \(\underline{v}\) and \(\overline{v}\). Provided that \(w<\infty\), the new cost-to-come of \(\overline{v}\), \(g_{\text{new}}\), is computed in Line 8. If \(g_{\text{new}}\) is better than the current cost-to-come \(g(\overline{v})\), \(g(\overline{v})\) is updated. The parent of \(\overline{v}\) is also updated (Line 11). Finally, the edges from \(\overline{v}\) to its successors \(v_{i}\) are added to the edge queue \(Q_{E}\) in Line 12. After the algorithm exits, we can retrieve the shortest path from \(v_{s}\) to \(v_{g}\) using the parent information.

A simple illustration of the LEA* and A* algorithms is given in Figure 1. Figures 1(a)-(c) are the steps of LEA*, and Figures 1(d)-(f) are the steps of A*. At the beginning of LEA*, four edges are added to the edge queue, as shown in Figure 1(a). Note that these edges are not evaluated; instead, they are lazily evaluated using the estimated cost \(\hat{w}\), and \(\hat{w}\) is used to compute their \(f\)-values according to (3). The best edge \(e_{4}\) is selected for edge evaluation. In Figure 1(b), \(e_{4}\) is evaluated and the next edges are added to the queue. Then, the best edge \(e_{7}\) in the current queue is selected for evaluation. For A*, all outgoing edges are evaluated when expanding a vertex. In this example, by using lazy evaluation and an edge queue, LEA* only evaluated two edges to find the solution, while A* evaluated eight edges.

Fig. 1: Illustration of the steps of LEA* ((a)-(c)) and A* ((d)-(f)). The set below each figure is the current edge queue or vertex queue. Dashed red lines are edges that are lazily evaluated. Solid red lines are evaluated edges. In (a), four edges are added to the edge queue, and \(e_{4}\) is selected for evaluation. In (b), \(e_{4}\) is evaluated, the next edges are added to the edge queue, and \(e_{7}\) is selected. By using lazy evaluation and an edge queue, LEA* only evaluated two edges while A* evaluated eight edges.

```
1   Q_V <- {v_s};
2   while Q_V != {} do
3       v_c <- Q_V.Pop();
4       if v_c = v_g then
5           return
6       foreach v_i in Succ(v_c) do
7           w <- Evaluate((v_c, v_i));
8           if w < inf then
9               g_new <- g(v_c) + w;
10              if g_new < g(v_i) then
11                  g(v_i) <- g_new;
12                  v_i.parent <- v_c;
13                  Q_V <- Q_V + {v_i};
```
**Algorithm 2** A*
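As a complement to the pseudocode, here is a compact, runnable Python sketch of the LEA* loop described above (Algorithm 1). It is an illustrative re-implementation, not the authors' code (their implementation is linked in Section VI); `succ`, `h`, `w_hat`, and `evaluate` are assumed inputs matching the notation of Section IV.
```python
import heapq
import itertools
import math

def lea_star(succ, v_s, v_g, h, w_hat, evaluate):
    """Sketch of Algorithm 1 (LEA*). Vertices must be hashable; succ maps
    each vertex to its successors, h is an admissible and consistent
    cost-to-go heuristic, w_hat is the lazy edge estimate, and evaluate
    returns the true edge cost (math.inf on collision)."""
    g = {v_s: 0.0}
    parent = {v_s: None}
    tie = itertools.count()                  # heap tie-breaker
    Q = []                                   # edge queue ordered by f(e), eq. (3)
    for v in succ[v_s]:                      # Line 1
        heapq.heappush(Q, (g[v_s] + w_hat(v_s, v) + h(v), next(tie), v_s, v))
    while Q:
        key, _, u, v = heapq.heappop(Q)      # Line 3: best (smallest key) edge
        if g.get(v_g, math.inf) <= key:      # Line 4: shortest path found
            break
        w = evaluate(u, v)                   # Line 6: the only collision check
        if w == math.inf:
            continue                         # edge in collision, discard
        g_new = g[u] + w                     # Line 8
        if g_new < g.get(v, math.inf):       # Line 9
            g[v] = g_new                     # Line 10
            parent[v] = u                    # Line 11
            for x in succ[v]:                # Line 12: push successors lazily
                heapq.heappush(Q, (g[v] + w_hat(v, x) + h(x), next(tie), v, x))
    if v_g not in parent:
        return None                          # no solution exists
    path, v = [], v_g                        # recover path via parents
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]
```
Note how lazy evaluation is confined to a single line: an edge's true cost is computed only when it is popped as the current best edge, exactly as in Figure 1.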
## V Algorithmic Analysis

In this section, we analyze the completeness, optimality, and vertex efficiency of LEA*.

### _Completeness of LEA*_

**Theorem 1**: _If at least one solution exists for the graph search problem, LEA* will return a solution. Otherwise, it will report that no solution exists._

We first show that the algorithm terminates with \(Q_{E}=\emptyset\) and \(g(v_{g})=\infty\) when no solution exists. All edges have positive edge costs. The new edge \((\overline{v},v_{i})\) is added to \(Q_{E}\) in Line 12, Algorithm 1, only when we find a better path to \(\overline{v}\) (Line 9). Since the cost-to-come of every vertex is decreasing and lower bounded, Line 12 will only run a finite number of iterations. Thus, \(Q_{E}=\emptyset\) after executing Line 3 a finite number of iterations. Therefore, LEA* will terminate with \(Q_{E}=\emptyset\) and \(g(v_{g})=\infty\) if no solution exists. When the problem has a solution, LEA* terminates when \(g(v_{g})\leq\mathit{key}=\min_{e\in Q_{E}}f(e)\) or \(Q_{E}=\emptyset\). If the algorithm terminated with \(g(v_{g})\leq\min_{e\in Q_{E}}f(e)\), we have \(g(v_{g})<\infty\), which implies a solution has been found. Let \((e_{0},e_{1},\ldots,e_{n-1})\) (equivalently, \((v_{s},v_{1},v_{2},\ldots,v_{g})\)) be a solution path. Initially, \(e_{0}\in Q_{E}\). After evaluating \(e_{0}\), \(e_{1}\) is added to \(Q_{E}\). Similarly, after evaluating \(e_{1}\), \(e_{2}\) is added to \(Q_{E}\). If the algorithm terminated with \(Q_{E}=\emptyset\), it must have evaluated \(e_{0},e_{1},\ldots,e_{n-1}\). Thus, the algorithm has found this solution, and the returned solution is at least as good as \((e_{0},e_{1},\ldots,e_{n-1})\).

### _Optimality of LEA*_

**Theorem 2**: _If at least one solution exists for the graph search problem, LEA* finds the minimum cost solution._

Let \(\tau^{*}=(v_{s},v_{1}^{*},v_{2}^{*},\ldots,v_{n-1}^{*},v_{g})\) (equivalently \((e_{0}^{*},e_{1}^{*},\ldots,e_{n-2}^{*},e_{n-1}^{*})\)) be an optimal solution path with path cost \(c^{*}\). Let \(\tau=(v_{s},v_{1},v_{2},\ldots,v_{n-1},v_{g})\) (equivalently \((e_{0},e_{1},\ldots,e_{n-2},e_{n-1})\)) be the solution path returned by LEA*. The cost of \(\tau\) is \(c\). We need to show that \(c\) is equal to \(c^{*}\) to prove optimality. To reach a contradiction, let us assume \(c^{*}<c\). Note that \(\max\{f(e_{0}^{*}),f(e_{1}^{*}),\ldots,f(e_{n-2}^{*}),f(e_{n-1}^{*})\}\leq c^{*}\). Also, \(g(v_{g})=c\leq\min_{e\in Q_{E}}f(e)\) holds when LEA* terminates. At the beginning of LEA*, \(e_{0}^{*}\in Q_{E}\). Therefore, \(e_{0}^{*}\) must have been evaluated before \(c\leq\min_{e\in Q_{E}}f(e)\) is true. After evaluating \(e_{0}^{*}\), \(v_{1}^{*}\) is added to the expansion tree, and \(e_{1}^{*}\) is added to \(Q_{E}\). By repeating this analysis, \(e_{1}^{*},\ldots,e_{n-2}^{*},e_{n-1}^{*}\) must all have been evaluated before \(c\leq\min_{e\in Q_{E}}f(e)\) is true. After evaluating \(e_{1}^{*},\ldots,e_{n-1}^{*}\), we have found a path to \(v_{g}\) and \(f(v_{g})\leq c^{*}\). Since \(f(v_{g})\) is nonincreasing, the algorithm will never return a path with cost \(c>c^{*}\). Therefore \(c=c^{*}\).
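For comparison, a sketch of Algorithm 2 in the same style makes these claims easy to test empirically. Again, this is an illustrative rendering, not the paper's implementation.
```python
import heapq
import itertools
import math

def a_star(succ, v_s, v_g, h, evaluate):
    """Sketch of Algorithm 2 (A*). On each vertex expansion, every outgoing
    edge is evaluated; duplicate queue entries are allowed, as in the
    pseudocode, and may cause harmless re-expansions."""
    g = {v_s: 0.0}
    parent = {v_s: None}
    tie = itertools.count()
    Q = [(h(v_s), next(tie), v_s)]           # vertex queue keyed by f(v), eq. (2)
    while Q:
        _, _, u = heapq.heappop(Q)           # Line 3
        if u == v_g:                         # Line 4
            return g[v_g], parent
        for v in succ[u]:                    # Lines 6-13
            w = evaluate(u, v)               # every outgoing edge is evaluated
            if w < math.inf and g[u] + w < g.get(v, math.inf):
                g[v] = g[u] + w
                parent[v] = u
                heapq.heappush(Q, (g[v] + h(v), next(tie), v))
    return math.inf, parent                  # no path exists
```
Wrapping `evaluate` with a call counter and running both sketches on the same graph and query gives a direct empirical view of the analysis in this section: the two searches return the same cost, LEA* expands a subset of the vertices A* expands, and it evaluates only a subset of each expanded vertex's outgoing edges.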
### _Optimal Vertex Efficiency_

**Theorem 3**: _LEA* has the same vertex efficiency as A*. Furthermore, the expanded vertex set of LEA* is a subset of the expanded vertex set of A*._

For LEA*, we call evaluating any outgoing edge of \(v\) expanding \(v\). A* expands \(v\) by evaluating all outgoing edges of \(v\); vertex \(v\) is expanded in LEA* if a subset of its outgoing edges is evaluated. Let \(\tau^{*}=(e_{0}^{*},e_{1}^{*},\ldots,e_{n-2}^{*},e_{n-1}^{*})\) be the path found by A* and LEA*, and let \(c^{*}\) be the cost of \(\tau^{*}\). Let \(T=(V,E)\) and \(T^{e}=(V^{e},E^{e})\) be the expansion trees grown by A* and LEA*, respectively. Let \(W^{v}\) and \(W^{e}\) be the sets of vertices expanded by A* and LEA*, respectively.

By showing \(W^{e}\subseteq W^{v}\), we prove the optimally efficient search of LEA*. Note that if \(W^{e}\subseteq W^{v}\), we have \(V^{e}\subseteq V\). Now assume \(W^{e}\not\subseteq W^{v}\). Then, there exists \(v\) such that \(v\notin W^{v}\), \(v\in W^{e}\), \(v\in V^{e}\) and \(v\in V\). To show that this is true, we start with \(v_{0}\in W^{e}\) and \(v_{0}\notin W^{v}\). Then, \(v_{0}\in V^{e}\) (a vertex must be in the tree for it to be expanded). If \(v_{0}\notin V\), we find \(v_{1}\), which is the parent of \(v_{0}\) in \(T^{e}\). Then, we have \(v_{1}\in V^{e}\), \(v_{1}\in W^{e}\), and \(v_{1}\notin W^{v}\). If \(v_{1}\notin V\), we repeat this process by finding its parent vertex in \(V^{e}\), and one of these parents \(v_{i}\) must satisfy \(v_{i}\in V\), since \(V\) and \(V^{e}\) share the same root vertex.

From \(v\notin W^{v}\), we have \(c^{*}\leq f(v)=g(v)+h(v)\). From \(v\in W^{e}\), we have that there exists \(v_{i}\in\mathsf{Succ}(v)\) such that \(f((v,v_{i}))=g(v)+\hat{w}((v,v_{i}))+h(v_{i})\leq c^{*}\). Note that \(f(v)\leq f((v,v_{i}))\). Then, we have \[c^{*}\leq f(v)\leq f((v,v_{i}))\leq c^{*}, \tag{4}\] which only holds when \(f(v)=f((v,v_{i}))=c^{*}\). Note that \[\max\{f(e_{0}^{*}),f(e_{1}^{*}),\ldots,f(e_{n-2}^{*}),f(e_{n-1}^{*})\}\leq c^{*}. \tag{5}\] Using (5), in order for \((v,v_{i})\) to be evaluated by LEA*, there must exist \(e_{i}^{*}=(\underline{v},\overline{v})\in\tau^{*}\) such that \(f(e_{i}^{*})=c^{*}\) and \(g(v)<g(\underline{v})\). Otherwise, all \(e_{i}^{*}\in\tau^{*}\) have a higher priority than \((v,v_{i})\), and \((v,v_{i})\) will not be evaluated. Using \(f(v)=c^{*}\) and \(g(v)<g(\underline{v})\leq g(v_{g})\), \(v\) has a higher priority than \(v_{g}\). Therefore, \(v\) must be expanded by A* before \(v_{g}\) is selected from the vertex queue. Therefore, \(v\in W^{v}\), contradicting \(v\notin W^{v}\).

## VI Experimental Results

In this section, we compare LEA* with A*, LWA*, LazySP, and LRA*. Our code is available online1. The lookahead parameter for LRA* is set to 4 in our implementations. Two problems are studied: a planning problem in 2D environments and planning for a 7-DOF manipulator.

Footnote 1: [https://github.com/dongliangCH/LEAstar](https://github.com/dongliangCH/LEAstar)

### _2D Planning Problem_

The 2D environment is filled with randomly generated obstacles. An example of the planning environment is given in Figure 2. The 2D environment is divided into sparse, medium, and cluttered environments, each with 8, 16, and 24 obstacles, respectively. Each obstacle is a rectangle, and its width and height are uniformly sampled from an interval. The location of each obstacle is also uniformly sampled inside the boundary of the environment.

Fig. 2: 2D planning problem. Obstacles are sampled randomly. The graph is constructed without considering the presence of obstacles.

We also consider different graph sizes, from small graphs to large graphs. The graph size is indicated by the number of vertices \(N\) in the graph. Specifically, \(N=200\), \(N=1,000\), \(N=5,000\), \(N=10,000\), and \(N=20,000\) are considered. Following [9], the connecting radius of the graph is a function of \(N\). The graph construction step follows the lazy PRM algorithm [2]. No collision check is performed when sampling vertices or adding edges. As discussed in Section III, this saves planning time, since evaluating all edges is unnecessary. In addition, after the graph is constructed, the same graph may be used with different environments.
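The setup just described can be sketched in a few lines of Python. The obstacle dimensions and the connection-radius rule below are illustrative placeholders rather than the paper's exact values (the paper follows the \(N\)-dependent radius of [9]); in keeping with the lazy PRM construction of [2], no collision checks are performed while building the graph.
```python
import math
import random

def sample_world(n_obs, size=1.0, dims=(0.05, 0.25)):
    """n_obs axis-aligned rectangular obstacles with uniformly sampled
    sizes and locations; the sampling intervals are illustrative only."""
    world = []
    for _ in range(n_obs):
        w, h = random.uniform(*dims), random.uniform(*dims)
        x, y = random.uniform(0, size - w), random.uniform(0, size - h)
        world.append((x, y, w, h))
    return world

def build_graph(n, size=1.0, gamma=1.5):
    """r-disk graph over n uniformly sampled vertices, built with no
    collision checks, so the same graph can be reused across environments.
    The shrinking radius below mimics (but is not) the exact rule of [9]."""
    verts = [(random.uniform(0, size), random.uniform(0, size))
             for _ in range(n)]
    r = gamma * math.sqrt(math.log(n) / n)   # assumed connection radius
    succ = {v: [] for v in verts}
    for i, u in enumerate(verts):            # O(n^2) pairing, for clarity
        for v in verts[i + 1:]:
            if math.dist(u, v) <= r:
                succ[u].append(v)
                succ[v].append(u)
    return succ
```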
We sample 10 environments for each environment category (sparse, medium, cluttered). For each environment and graph combination, 50 start-goal vertex pairs are sampled as queries of the shortest path problem. The total number of planning problems for the 2D environment is therefore \(7,500=3\times 5\times 10\times 50\). The results are summarized in Tables I and II. The shortest paths returned by all five algorithms have the same path length in every tested case.

The planning time results are given in Table I. LEA* uses the least amount of time to find the same solution compared to the other algorithms. LWA* and LEA* have similar performance. The improvement of LEA* over LWA* is small but consistent, and it becomes clearer with larger graphs, since LWA* needs to maintain both a vertex queue and an edge queue. Bounded suboptimal solutions can easily be obtained by inflating the cost-to-go heuristic \(h(\cdot)\) with a factor \(\epsilon\). In our implementation, the inner loops of LazySP and LRA* use A* for the lazy search. The values \(\epsilon=\{1,1.5,2,2.5\}\) were tested. All algorithms find the optimal path when \(\epsilon=1\). The results for \(\epsilon=2\) are also given. As we can see, the planning time decreased for all algorithms. The trend of the planning time of LEA* and the solution path length is given in Figure 3(a).

The number of evaluated edges is given in Table II. Since LazySP is edge optimal, it evaluates the minimum number of edges. However, as it requires extra graph operations such as vertex expansions and finding the current best path, its overall planning time is greater than that of LEA*. Using an inflated heuristic \(\epsilon=2\), the average number of edge evaluations decreases for all algorithms. The gap between LazySP and LEA* also becomes smaller. The trend of edge evaluations with increasing inflation factor is given in Figure 3(b).

Fig. 3: (a) The trend of the planning time of LEA* and solution path length with different \(\epsilon\). The average planning time decreases \(\sim\) 85% while the path length only increases \(\sim\) 5%. (b) The number of edge evaluations of LazySP and LEA* for different \(\epsilon\). The detailed results for \(\epsilon=1\) and \(\epsilon=2\) are given in Table II. The gap between LazySP and LEA* becomes smaller as \(\epsilon\) increases.

### _Planning for a 7-DOF Manipulator_

We consider the planning problem of a KUKA robot on a tabletop. The algorithms are implemented using the Pybullet simulator [17]. The planning environment is shown in Figure 4. Cubic obstacles are randomly generated on the tabletop. The environment is divided into sparse, medium, and cluttered environments with 4, 8, and 12 obstacles, respectively. We also set a wall and a ceiling to limit the operation space of the manipulator. The width, depth, and height of each cubic obstacle are uniformly sampled from their respective intervals. The location of each cubic obstacle is also sampled on the tabletop.

Fig. 4: Manipulator tabletop example.

We consider small, medium, and large graphs with \(N=1,000\), \(N=5,000\), and \(N=10,000\), respectively. We sample ten environments for each environment category and 50 start-goal queries for each environment and graph combination. Therefore, the total number of planning problems is \(4,500\). The results are summarized in Tables III and IV.

The planning time results are given in Table III. LEA* uses the least amount of time to find the same solution compared to the other algorithms. Bounded suboptimal solutions are obtained by using an inflation factor; the values \(\epsilon=\{1,1.5,2,2.5\}\) were tested. The planning time decreases significantly for all algorithms. The trend of the planning time of LEA* and the solution path length is given in Figure 5(a). The number of evaluated edges is given in Table IV. The results are consistent with the 2D planning problem: LazySP evaluates the minimum number of edges and is edge optimal. The trend of edge evaluations with increasing inflation factor is given in Figure 5(b). The gap between LazySP and LEA* becomes smaller as \(\epsilon\) increases.

Fig. 5: Manipulator example. (a) The trend of the planning time of LEA* and solution path length with different \(\epsilon\). (b) The number of edge evaluations of LazySP and LEA* for different \(\epsilon\).
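In terms of the earlier LEA* sketch, the weighted variant used throughout this section requires changing only the edge key: the cost-to-go term of (3) is inflated by the factor \(\epsilon\geq 1\). A minimal sketch:
```python
def f_edge_inflated(g, u, v, h, w_hat, eps=2.0):
    """wLEA* edge key: eq. (3) with the cost-to-go heuristic inflated by
    eps >= 1. The returned path cost is at most eps times the optimum,
    while the number of edge evaluations drops, as in Figs. 3(b) and 5(b)."""
    return g[u] + w_hat(u, v) + eps * h(v)
```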
## VII Conclusion

In this work, we introduce the LEA* algorithm for robot path planning. LEA* is complete and optimal, with optimal vertex efficiency and improved edge efficiency. LEA* imposes only a few modifications on A*, making it easy to implement. We show that the number of evaluated edges of weighted LEA* becomes close to the optimal value, while avoiding the computational overhead of LazySP and LRA*. We benchmark our algorithm against previous algorithms on various randomized environments. Our results show that LEA* and weighted LEA* are the fastest algorithms to find the plan in all tested examples. Future work includes using LEA* and motion primitives for motion planning of robots with nonholonomic and differential constraints.

\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c}
\hline \hline
\multirow{2}{*}{Avg. \# edge evals.} & \multicolumn{3}{c|}{N = 1000} & \multicolumn{3}{c|}{N = 5000} & \multicolumn{3}{c|}{N = 10000} & \multicolumn{3}{c}{N = 20000} \\
\cline{2-13}
 & o = 8 & o = 18 & o = 28 & o = 8 & o = 18 & o = 28 & o = 8 & o = 18 & o = 28 & o = 8 & o = 18 & o = 28 \\
\hline
A* (\(\epsilon=1\)) & 900.3 & 1008.5 & 1045 & 4049.3 & 4265.1 & 4913.8 & 7718.2 & 8279.2 & 9980.1 & 14434 & 16138 & 18666 \\
LWA* (\(\epsilon=1\)) & 49.39 & 64.35 & 74.89 & 164.7 & 196.58 & 252.27 & 294.32 & 318.44 & 410.43 & 501.89 & 588.78 & 720.81 \\
LEA* (\(\epsilon=1\)) & 49.39 & 64.35 & 74.89 & 164.7 & 196.58 & 252.27 & 294.32 & 318.44 & 410.43 & 501.89 & 588.78 & 720.81 \\
LazySP (\(\epsilon=1\)) & **15.87** & **26.05** & **33.53** & **34.02** & **57.97** & **82.06** & **49.75** & **80.26** & **114.23** & **76.98** & **118.26** & **174.78** \\
LRA* (\(\epsilon=1\)) & 23.60 & 33.84 & 41.94 & 75.39 & 98.56 & 132.27 & 133 & 156.06 & 208.88 & 227.69 & 281.40 & 359.58 \\
\hline
A* (\(\epsilon=2\)) & 201.6 & 251.7 & 282.08 & 485.2 & 590.95 & 981.50 & 711.76 & 865.41 & 2510.3 & 1934.5 & 1781.1 & 3010.2 \\
LWA* (\(\epsilon=2\)) & 13.33 & 21.84 & 28.59 & 24.69 & 41.06 & 73.69 & 35.44 & 51.87 & 130.57 & 85.87 & 94.73 & 173.74 \\
LEA* (\(\epsilon=2\)) & 13.33 & 21.84 & 28.59 & 24.69 & 41.06 & 73.69 & 35.44 & 51.87 & 130.57 & 85.87 & 94.73 & 173.74 \\
LazySP (\(\epsilon=2\)) & **12.79** & **19.08** & **23.81** & **23.74** & **34.59** & **48.15** & **33.16** & **43.62** & **63.42** & **46.71** & **59.02** & **86.78** \\
LRA* (\(\epsilon=2\)) & 12.82 & 19.20 & 24.03 & 23.76 & 34.89 & 50.07 & 33.27 & 44.09 & 68.01 & 50.12 & 60.88 & 94.36 \\
\hline \hline
\end{tabular}
\end{table}
TABLE II: Number of edge evaluations of the 2D planning problem
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c}
\hline \hline
\multirow{2}{*}{Avg. time (s)} & \multicolumn{3}{c|}{N = 1000} & \multicolumn{3}{c|}{N = 5000} & \multicolumn{3}{c|}{N = 10000} & \multicolumn{3}{c}{N = 20000} \\
\cline{2-13}
 & o = 8 & o = 18 & o = 28 & o = 8 & o = 18 & o = 28 & o = 8 & o = 18 & o = 28 & o = 8 & o = 18 & o = 28 \\
\hline
A* (\(\epsilon=1\)) & 0.0871 & 0.0998 & 0.1036 & 0.2270 & 0.2397 & 0.2761 & 0.3500 & 0.3913 & 0.4653 & 0.5358 & 0.5952 & 0.6846 \\
LWA* (\(\epsilon=1\)) & 0.0118 & 0.0150 & 0.0167 & 0.0424 & 0.0495 & 0.0618 & 0.0781 & 0.0895 & 0.1152 & 0.1436 & 0.1676 & 0.2061 \\
LEA* (\(\epsilon=1\)) & **0.0109** & **0.0137** & **0.0152** & **0.0378** & **0.0428** & **0.0530** & **0.0682** & **0.0771** & **0.0965** & **0.1204** & **0.1403** & **0.1671** \\
LazySP (\(\epsilon=1\)) & 0.0236 & 0.0690 & 0.1104 & 0.2030 & 0.6452 & 1.2149 & 0.5167 & 1.4903 & 3.0549 & 1.7829 & 3.7850 & 5.1193 \\
LRA* (\(\epsilon=1\)) & 0.0393 & 0.0799 & 0.1182 & 0.4536 & 0.8561 & 1.5409 & 1.4263 & 2.3998 & 3.4414 & 4.444 & 5.6916 & 7.3189 \\
\hline
A* (\(\epsilon=2\)) & 0.0198 & 0.0246 & 0.0269 & 0.0278 & 0.0329 & 0.0542 & 0.0329 & 0.0409 & 0.1199 & 0.0739 & 0.0654 & 0.1078 \\
LWA* (\(\epsilon=2\)) & 0.0031 & **0.0040** & 0.0053 & **0.0050** & 0.0079 & 0.0142 & 0.0081 & 0.0099 & 0.0353 & 0.0239 & 0.0222 & 0.0404 \\
LEA* (\(\epsilon=2\)) & **0.0028** & **0.0040** & **0.0409** & 0.0051 & **0.0070** & **0.0122** & **0.0071** & **0.0093** & **0.0264** & **0.0182** & **0.0179** & **0.0322** \\
LazySP (\(\epsilon=2\)) & 0.0063 & 0.0175 & 0.0238 & 0.0218 & 0.0622 & 0.1771 & 0.0438 & 0.1101 & 0.4731 & 0.2760 & 0.3312 & 0.8808 \\
LRA* (\(\epsilon=2\)) & 0.0070 & 0.0157 & 0.0227 & 0.0235 & 0.0573 & 0.1935 & 0.0470 & 0.0995 & 0.6607 & 0.5203 & 0.4129 & 1.0473 \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: Planning time (s) of the 2D planning problem

TABLE III: Planning time of the 7-DOF planning problem (\(N\in\{1{,}000,\,5{,}000,\,10{,}000\}\); \(o\in\{4,8,12\}\))

TABLE IV: Number of edge evaluations of the 7-DOF planning problem
2309.09273
Asymptotic Analysis of the Downlink in Cooperative Massive MIMO Systems
We consider the downlink of a cooperative cellular communications system, where several base-stations around each mobile cooperate and perform zero-forcing to reduce the received interference at the mobile. We derive closed-form expressions for the asymptotic performance of the network as the number of antennas per base station grows large. These expressions capture the trade-off between various system parameters, and characterize the joint effect of noise and interference (where either noise or interference is asymptotically dominant and where both are asymptotically relevant). The asymptotic results are verified using Monte Carlo simulations, which indicate that they are useful even when the number of antennas per base station is only moderately large. Additionally, we show that when the number of antennas per base station grows large, power allocation can be optimized locally at each base station. We hence present a power allocation algorithm that achieves near optimal performance while significantly reducing the coordination overhead between base stations. The presented analysis is significantly more challenging than the uplink analysis, due to the dependence between beamforming vectors of nearby base stations. This statistical dependence is handled by introducing novel bounds on marked shot-noise point processes with dependent marks, which are also useful in other contexts.
Itsik Bergel, Siddhartan Govindasamy
2023-09-17T13:42:23Z
http://arxiv.org/abs/2309.09273v2
# Asymptotic Analysis of the Downlink in Cooperative Massive MIMO Systems

###### Abstract

We consider the downlink of a cooperative cellular communications system, where several base-stations around each mobile cooperate and perform zero-forcing to reduce the received interference at the mobile. We derive closed-form expressions for the asymptotic performance of the network as the number of antennas per base station grows large. These expressions capture the trade-off between various system parameters, and characterize the joint effect of noise and interference (where either noise or interference is asymptotically dominant and where both are asymptotically relevant). The asymptotic results are verified using Monte Carlo simulations, which indicate that they are useful even when the number of antennas per base station is only moderately large. Additionally, we show that when the number of antennas per base station grows large, power allocation can be optimized locally at each base station. We hence present a power allocation algorithm that achieves near optimal performance while significantly reducing the coordination overhead between base stations. The presented analysis is significantly more challenging than the uplink analysis, due to the dependence between beamforming vectors of nearby base stations. This statistical dependence is handled by introducing novel bounds on marked shot-noise point processes with dependent marks, which are also useful in other contexts.

## I Introduction

Multiple-input-multiple-output (MIMO) systems with large numbers of antennas, also known as massive MIMO, have received tremendous attention in the research community (e.g., [1, 2, 3, 4, 5, 6, 7]). Early works on massive MIMO systems (e.g., [1]) used maximal-ratio transmission (MRT) for the downlink or maximal-ratio combining (MRC) for the uplink, to ensure that signals add constructively. Interference mitigation was accomplished by the approximate (and asymptotically exact) orthogonality of long channel vectors with random, independent fading coefficients. While MRT/MRC are optimal when the number of antennas at the base stations (BSs) goes to infinity, for finite (and even large) numbers of antennas, significantly higher spectral efficiencies are achievable using other linear processing techniques. On the uplink, minimum-mean-square-error (MMSE) processing is the optimal linear processing scheme for receive beamforming [8, 9, 10]. In contrast, in the downlink the precoding decisions of each BS affect the mobiles of all BSs, and optimal processing can only be achieved by numerical optimization.

Since no optimal processing scheme is known for the downlink, several heuristic beamforming approaches have been considered for interference mitigation. Most of these are variations of zero-forcing precoding, where transmit beamformers are designed to produce (near) zero interference at certain mobiles (e.g., [11, 12, 13, 14, 15]). In [10], regularized zero-forcing was found to perform as well as MRT, but with an order of magnitude fewer antennas. Additionally, [13] showed significantly larger spectral efficiencies with zero forcing compared to MRT, even when the number of antennas is \(\sim 200\). Hence, although MRT is asymptotically optimal on the downlink, zero-forcing precoding is attractive even when the number of antennas per BS is quite large. In high-density networks, signals from other base stations may cause significant interference. Thus, we focus on BS cooperation, which promises significant performance improvement.
In particular, we focus on user-centric cooperation, where all BSs up to a certain distance from each mobile protect that mobile from interference. This network structure is strongly related to cell-free systems (e.g., [8, 16, 17]), which have received a lot of attention in the literature; thus, our results are also relevant to some cell-free scenarios. Cell-free systems may differ from our work in three main aspects: they typically use access points with a small number of antennas; they typically use joint processing of all BSs; and they cooperate both for interference mitigation and for signal transmission (joint signal transmission is not considered herein, as its contribution is smaller and it complicates the analysis).

So far, all analysis of the downlink in cooperative massive MIMO networks has been done either for specific BS locations or through simulation (e.g., [8, 10]). As such, obtaining generalizable conclusions using these approaches is challenging and requires extensive numerical simulations. In the approach taken in this work, on the other hand, we explicitly model how the BSs and mobiles are positioned in space, which enables us to obtain insights on system performance, such as the impact of BS density and mobile distribution on the spectral efficiency. Such an approach can enable system designers to trade off various parameters, such as BS and mobile densities, number of antennas, number of cooperating base stations, and spectral efficiency. These approaches have been observed to "lead to remarkably precise characterizations" [18]. Further, the analysis of cellular networks that explicitly considers the BS and mobile distribution in space is quite challenging, even for single-antenna systems [19]. Hence, an analysis of cooperative cellular networks with multiple antennas, where the spatial distribution of BSs and mobiles is explicitly considered, is both useful and challenging.

Here, we derive closed-form expressions for the asymptotic behavior of the Signal-to-Interference-plus-Noise Ratio (SINR), which give insights on the network performance and allow system optimization. The analysis explicitly addresses the spatial distribution of BSs and mobiles. We also introduce a novel approach for asymptotic analysis that captures the effect of both noise and interference. This approach is important because asymptotic expressions are often used to generate performance approximations for finite systems. Finite systems are affected by both interference and noise, while traditional asymptotic analysis reaches either a noise-limited regime or an interference-limited regime. Additionally, our results indicate that with a large number of antennas, the effect of each BS on its neighbors can be captured through the network's average transmitted power per BS. This finding allows us to present a simple and efficient power allocation algorithm for large networks, which involves only a minor coupling between the local processing done at each base station.

To facilitate the analysis, this paper considers a relatively simple channel model with perfect channel knowledge and independent fading. Yet, it is important to note that practical channels are often characterized both by fading correlations and by estimation errors. For example, taking advantage of the fading correlation was shown to reduce the channel estimation load and allow lower feedback rates in [20, 21]. Furthermore, [22] showed that the throughput in cell-free MIMO is asymptotically unbounded in a correlated fading model that satisfies specific conditions.
Yet, such models are significantly more complicated and, so far, have not yielded closed-form expressions for the achievable data rates. In contrast, in this work we provide expressions for the spectral efficiency that can be evaluated directly, without having to resort to simulations of a large number of network realizations. Note that channel estimation for large networks is an active research area with significant recent progress (e.g., [23, 24], which use Bayesian and machine-learning approaches). As channel estimation is constantly improving, it is important to also understand the achievable system performance if channel estimation errors become negligible.

We emphasize that the downlink scenario is especially challenging for analysis due to the statistical dependence between the actions of the BSs, which are coupled through the locations of nearby mobiles. This form of dependence is different from the correlation between the fading coefficients considered in works such as [16, 25]. Thus, this work is the first to present asymptotic results for the performance of the downlink in a cooperative network. Note that several works have presented large-network analysis but without closed-form expressions (e.g., [8, 17]). Such works still require Monte-Carlo simulations to obtain insight into system performance.

Analysis of spatially distributed systems with statistical dependence induced through spatial locations is usually very complicated. Thus, this work was only enabled through the derivation of novel bounds on the second joint moments of shot noises driven by marked Poisson point processes (PPPs), where the marks are allowed to be statistically dependent1. These novel bounds are especially useful for scenarios with weak dependence between the marks, where the bounds become tight. In this work, we use these bounds to derive tight upper and lower bounds on the joint moments of the interference, and hence to prove the limited effect of the statistical dependence between the transmitting base stations.

Footnote 1: An introduction to shot noise processes and marked PPPs is given, for example, in [26] and [27].

To emphasize the importance of our findings, we also present numerical results based on Monte-Carlo simulations. These results show agreement with the theoretical predictions. Moreover, they show that the theoretical predictions are valuable even when the numbers of antennas per BS are only moderately large. In summary, the main contributions of this work are:

* Closed-form expressions for the asymptotic performance of the downlink of cooperative cellular networks.
* A novel approach for asymptotic analysis of the downlink of cooperative cellular networks that captures the effect of both noise and interference.
* A proof that the asymptotic effect of each BS on its nearby mobiles depends only on the network's average transmitted power per BS. We use this finding to derive a novel, low-complexity power allocation scheme for large networks.
* Novel bounds on the second joint moments of shot noises driven by marked Poisson point processes (PPPs), where the marks are allowed to be statistically dependent.

## II Problem Statement

### _Network and channel model_

We consider the downlink in a cellular network. The network transmission is done through BSs, which are distributed according to a homogeneous Poisson point process (HPPP), \(\Phi_{b}\), with density \(\lambda_{\rm b}\) BSs per unit area.
Each BS has \(L\) antennas and serves \(M\) active mobiles, which are located at a distance of at most \(R\) from the BS. Hence, the area density of mobiles is \(\lambda=\lambda_{\rm b}M\). Each mobile has one antenna. Without loss of generality, we analyze the performance of a _typical_ mobile located at the origin. The typical mobile is labeled mobile-0, and is served by BS-0. BSs and mobiles are indexed separately, and we denote by \(r_{i,k}\) the distance between mobile \(i\) and BS \(k\). Let the BS that serves mobile \(i\) be \(b_{i}\) (and hence \(r_{i,b_{i}}\leq R\)). Also, let \(\mathcal{M}_{k}=\{i:b_{i}=k\}\) be the set of mobiles served by BS \(k\) (with \(|\mathcal{M}_{k}|=M\)). Note that, given the existence of the typical mobile at the origin, its serving BS must be within radius \(R\) of the origin (i.e., \(r_{0,0}\leq R\)). All other BSs still form an HPPP with a density of \(\lambda_{\rm b}\).

Let \(v_{i}\) be the symbol transmitted by BS \(b_{i}\) to mobile-\(i\) (with \(E[v_{i}]=0\) and \(E[|v_{i}|^{2}]=1\)) using a normalized beamforming vector \(\mathbf{w}_{i}\) (with \(\|\mathbf{w}_{i}\|=1\)). The specific form of \(\mathbf{w}_{i}\) is defined in Subsection II-B. For a specific symbol, the sampled received signal at mobile-\(j\) is \[u_{j}=a_{b_{j}}\varphi_{j,b_{j}}\mathbf{h}_{j,b_{j}}^{\dagger}\mathbf{w}_{j}v_{j}+\sum_{i\neq j}a_{b_{i}}\varphi_{i,b_{i}}\mathbf{h}_{j,b_{i}}^{\dagger}\mathbf{w}_{i}v_{i}+n_{j}\] where \(\mathbf{h}_{j,k}\) is the (complex conjugate of the) channel between BS \(k\) and mobile \(j\), and \(n_{j}\sim\mathcal{CN}(0,\sigma^{2})\) is the additive Gaussian noise. \(\mathcal{CN}(0,\sigma^{2})\) refers to a circularly symmetric, zero mean, complex Gaussian random variable with variance \(\sigma^{2}\). The random variables \(a_{k}\in\{0,1\}\) determine whether or not BS \(k\) is active, as detailed in Subsection II-B. \(\varphi_{i,b_{i}}^{2}\) is the power allocated by BS \(b_{i}\) to mobile \(i\). We enforce a maximum transmit power constraint on each BS, such that for each \(k\): \(\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\leq P\). The channel gain vector is given by \(\mathbf{h}_{i,k}=\mathbf{g}_{i,k}r_{i,k}^{-\frac{\alpha}{2}}\), where \(\alpha>2\) is the path loss exponent and \(\mathbf{g}_{i,k}\) is the fading vector between mobile \(i\) and BS \(k\), which has i.i.d. \(\mathcal{CN}(0,1)\) entries. Thus, the SINR for mobile \(i\) is: \[\eta_{i}=\frac{\left|a_{b_{i}}\varphi_{i,b_{i}}\mathbf{h}_{i,b_{i}}^{\dagger}\mathbf{w}_{i}\right|^{2}}{\sum_{j\neq i}\left|a_{b_{j}}\varphi_{j,b_{j}}\mathbf{h}_{i,b_{j}}^{\dagger}\mathbf{w}_{j}\right|^{2}+\sigma^{2}}. \tag{1}\] Assuming Gaussian codebooks are used to serve all mobiles, with long enough blocks and system bandwidth \(B\), the achievable rate for mobile \(i\), denoted by \(R_{i}\), is \[R_{i}=B\log_{2}(1+\eta_{i})\,. \tag{2}\]

### _Partial Zero Forcing_

The network establishes partial cooperation, where user data is not shared between BSs, but each BS obtains the channel state information (CSI) of all neighboring mobiles. This CSI is used by each BS to perform zero-forcing on all mobiles within a radius \(D\) of the BS.2 Following our comment above, we take the approach of [28] and focus on characterizing the spectral efficiencies that can be achieved if accurate channel estimation is possible.

Footnote 2: In some studies, all cooperating BSs also share the user data and jointly transmit it to the user.
Such schemes require a higher level of cooperation between the BSs and hence are not considered herein. We note that the analysis of such schemes can be done using the same approach (and has the same complexity) but requires a longer analysis.

As each BS has \(L\) antennas, a proper choice of the precoding weight vector can null the signal at up to \(L-1\) mobiles. If the number of mobiles in the disk of radius \(D\) around a given BS exceeds \(L\), then the BS cannot perform proper transmission and ZF. Moreover, if the distance to a served mobile is greater than \(D\), the BS is capable of serving it only if the number of users in the disk of radius \(D\) does not exceed \(L-1\). In all cases, no BS interferes with a mobile at a distance of less than \(D\). If a BS cannot serve a mobile without creating such interference, it has to adopt an alternative transmission strategy. As the choice of active users is beyond the scope of this paper, we consider here a simple scheme, in which a BS transmits only if it can serve all of its users without violating the interference constraint. Thus, a BS does not transmit at all if it has more than \(L\) users within a radius \(D\), or if it has \(L\) users within this radius and at least one served mobile outside of it. Hence \[a_{k}=\begin{cases}1\text{ if }(|\mathcal{D}_{k}|<L)\text{ or }(|\mathcal{D}_{k}\cup\mathcal{M}_{k}|\leq L),\\ 0\text{ otherwise},\end{cases} \tag{3}\] where \(\mathcal{D}_{k}=\{i:r_{i,k}<D\}\) is the set of mobiles in the radius-\(D\) disk around BS \(k\). We will ultimately show that the probability of no transmission (\(a_{k}=0\)) becomes negligible in the regime of interest, where \(L\) is large.

While we assume that the CSI at BS \(k\) includes the channel vectors \(\mathbf{h}_{i,k}\) \(\forall i\in\mathcal{D}_{k}\cup\mathcal{M}_{k}\), for analysis purposes it is more convenient to use only the fading vectors, \(\mathbf{g}_{i,k}\). Note that the resulting ZF precoder is identical whether the BSs use \(\mathbf{g}_{i,k}\) or \(\mathbf{h}_{i,k}\) to construct the precoders. Thus, for served user \(i\) (with \(b_{i}=k\)), BS \(k\) constructs the interference channel matrix \(\mathbf{G}_{\tilde{i},k}\in\mathbb{C}^{L\times|\mathcal{D}_{k}\setminus\{i\}|}\), whose columns are the vectors \(\mathbf{g}_{\ell,k}\) for all \(\ell\in\mathcal{D}_{k}\setminus\{i\}\). Then, the precoding weight vector used by BS \(k\) for user \(i\) is given by: \[\mathbf{w}_{i,k}=\frac{\mathbf{Q}_{i,k}\mathbf{g}_{i,k}}{\|\mathbf{Q}_{i,k}\mathbf{g}_{i,k}\|} \tag{4}\] where \(\mathbf{Q}_{i,k}=\mathbf{I}-\mathbf{G}_{\tilde{i},k}(\mathbf{G}_{\tilde{i},k}^{H}\mathbf{G}_{\tilde{i},k})^{-1}\mathbf{G}_{\tilde{i},k}^{H}\) is a projection matrix onto the orthogonal complement of the column space of \(\mathbf{G}_{\tilde{i},k}\).
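The construction in (4) is easy to verify numerically. The following is a minimal sketch (with names of our choosing, not from any reference implementation); the pseudo-inverse stands in for the inverse in \(\mathbf{Q}_{i,k}\) for numerical robustness.
```python
import numpy as np

def zf_beamformer(g_i, G_int):
    """Zero-forcing precoder of eq. (4).
    g_i:   (L,) complex fading vector of the served mobile.
    G_int: (L, K) matrix whose columns are the fading vectors of the K
           mobiles to be protected; K <= L - 1 is required for ZF."""
    L = g_i.shape[0]
    if G_int.shape[1] == 0:
        w = g_i                                   # nothing to null: MRT beam
    else:
        # Q = I - G (G^H G)^{-1} G^H projects onto the orthogonal
        # complement of the columns of G (pinv used for robustness).
        Gh = G_int.conj().T
        Q = np.eye(L) - G_int @ np.linalg.pinv(Gh @ G_int) @ Gh
        w = Q @ g_i
    return w / np.linalg.norm(w)                  # normalization, ||w|| = 1
```
The nulling can be checked directly: `G_int.conj().T @ w` is numerically zero, so the beam creates no interference at the protected mobiles, while the projection retains as much of the served mobile's channel as possible.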
## III Asymptotic behavior of the SINR

In this section, we characterize the asymptotic limit of the typical mobile's SINR as the number of antennas increases. The analysis herein is enabled by the following novel bounds on shot noise functions over marked PPPs. The uniqueness of these bounds is that they hold even if the marks are statistically dependent. The bounds are presented as a theorem because they are very general and can be applied to various scenarios.

**Theorem 1** (Bounds on the second joint moment of shot noise functions): _Let \(\Phi\) be a marked PPP over \(\mathbb{R}^{2}\) with density \(\lambda(r,\theta)\), where each point \((r_{i},\theta_{i})\) is associated with two random marks \(p_{i,1}\) and \(p_{i,2}\). Also, let \(I_{1}\) and \(I_{2}\) be shot noise functions of the form \(I_{k}=\sum_{i}f_{k}(r_{i})\cdot p_{i,k}\) for continuous functions \(f_{k}(\cdot)\). If there exist three continuous functions \(q_{0}(r),q_{1}(r),q_{2}(r)\) such that_ \[E\left[\left.p_{i,1}p_{j,2}\right|r_{i},r_{j}\right]\leq(\geq)\begin{cases}q_{1}(r_{i})q_{2}(r_{j})&i\neq j\\ q_{0}(r_{i})&i=j\end{cases} \tag{5}\] _is satisfied for any two points with indices \(i\), \(j\), then the second joint moment of the shot noises is upper (lower) bounded by:_ \[E\left[I_{1}I_{2}\right]\leq(\geq)\prod_{k=1}^{2}\left(2\pi\int_{0}^{\infty}f_{k}(r)q_{k}(r)\bar{\lambda}(r)r\,dr\right)+2\pi\int_{0}^{\infty}f_{1}(r)f_{2}(r)q_{0}(r)\bar{\lambda}(r)r\,dr \tag{6}\] _where \(\bar{\lambda}(r)=\frac{1}{2\pi}\int_{0}^{2\pi}\lambda(r,\theta)d\theta\)._

_Proof: See Appendix A._

Before we present our main result, we need to discuss the behavior of the ZF radius, \(D\), and of the noise variance \(\sigma^{2}\). As described below, the SINR limit is less interesting if these quantities are constant; hence we allow both to be functions of \(L\). For the ZF radius, we set \(D=s\cdot L^{\beta}\) (which includes constant \(D\) as a special case). We will show in Appendix B-A that if \(\beta>0.5\), or if \(\beta=0.5\) and \(s\geq 1/\sqrt{\pi\lambda_{b}M}\), then asymptotically each BS will try to null more users than it has antennas. Thus, the network will have a zero asymptotic rate. Hence, the rest of the manuscript focuses on \(\beta<0.5\), or \(\beta=0.5\) and \(s^{2}\pi\lambda_{b}M<1\).

Typical network analysis assumes that the noise variance does not depend on the number of antennas. Yet, such networks are asymptotically (as the number of antennas grows large) noise limited. Thus, asymptotic analysis with constant noise variance cannot capture the trade-off between noise and interference. In this work, we allow the noise to change as \(\sigma^{2}=\mu L^{-\zeta}\). Such a dependence allows us to derive asymptotic expressions that characterize the interference-limited regime, the noise-limited regime, and the joint effect of noise and interference. In Section VII we demonstrate that the resulting asymptotic expressions capture the joint effect of interference and noise even for finite networks where the number of antennas is only moderately large.

Denote by \(\bar{P}\) the average transmit power of the BSs in the network. In the following, we analyze systems in which the BSs may apply any power allocation, as long as the correlation between the total powers of different BSs decays with their distance. Practically, this holds for any reasonable power allocation scheme. We mathematically require that for any BS: \[\bar{P}(r_{0,k})\triangleq E\left[\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}|a_{k}=1,r_{0,k}\right]=\bar{P}+\xi_{0,k} \tag{7}\] and for any two BSs, denoting their distance by \(\tilde{r}_{k,\ell}\): \[E\left[\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\sum_{j\in\mathcal{M}_{\ell}}\varphi_{j,\ell}^{2}\Big{|}\Phi_{b},a_{k}=a_{\ell}=1\right]=\bar{P}^{2}+\tilde{\xi}_{k,\ell}\] where \(|\xi_{0,k}|\leq\delta r_{0,k}^{-\gamma}\) and \(|\tilde{\xi}_{k,\ell}|\leq\min\{P^{2},\delta\tilde{r}_{k,\ell}^{-\gamma}\}\) with \(\gamma>0\).
With these assumptions, the next theorem describes the asymptotic limit of \(\eta_{0}\), the SINR of the typical user:

**Theorem 2**: _If \(\zeta<\beta(\alpha-2)\), then_ \[\lim_{L\rightarrow\infty}L^{-1-\zeta}\eta_{0}=(1-\tilde{s}^{2}\pi\lambda_{b}M)\frac{\varphi_{0,0}^{2}r_{0,0}^{-\alpha}}{\mu} \tag{8}\] _where \(\tilde{s}=s\) if \(\beta=0.5\) and zero if \(\beta<0.5\). If \(\zeta\geq\beta(\alpha-2)\), then_ \[\lim_{L\rightarrow\infty}L^{-1-\beta(\alpha-2)}\eta_{0}=\frac{(1-\tilde{s}^{2}\pi\lambda_{b}M)\varphi_{0,0}^{2}r_{0,0}^{-\alpha}}{s^{2-\alpha}\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2}+\tilde{\mu}} \tag{9}\] _where \(\tilde{\mu}=\mu\) if \(\zeta=\beta(\alpha-2)\) and zero otherwise. Both limits hold in probability._

_Proof:_ We show the convergence of appropriately scaled interference and signal powers at the typical mobile, and then combine them to complete the proof. Define the interference power at the typical mobile as \[I_{0}=\sum_{i=1}^{\infty}\Big{|}a_{b_{i}}\varphi_{i,b_{i}}\mathbf{h}_{0,b_{i}}^{\dagger}\mathbf{w}_{i}\Big{|}^{2} \tag{10}\] The convergence of \(I_{0}\), with appropriate normalization, is given by the following lemma:

**Lemma 1**: _For any \(s>0\) with \(0<\beta<0.5\), or \(0<s<1\) with \(\beta=0.5\), we have in mean-square error (MSE):_ \[\lim_{D\rightarrow\infty}D^{\alpha-2}I_{0}=\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2}, \tag{11}\] _and \(\text{var}\left\{D^{\alpha-2}I_{0}\right\}=O(D^{4-2\alpha})\)._

_Proof:_ See Appendix B-A.

Next, we define the signal power at the typical mobile: \[S_{0}=\left|\varphi_{0,0}\,a_{0}\,r_{0,0}^{-\alpha/2}\,\mathbf{g}_{0,0}^{\dagger}\,\frac{\mathbf{Q}_{0,0}\mathbf{g}_{0,0}}{\|\mathbf{Q}_{0,0}\mathbf{g}_{0,0}\|}\right|^{2} \tag{12}\] The convergence of \(S_{0}\) is given by:

**Lemma 2**: _The normalized signal power converges in probability as follows:_ \[\lim_{L\to\infty}\frac{S_{0}}{L-s^{2}L^{2\beta}\pi\lambda_{b}M}=\varphi_{0,0}^{2}r_{0,0}^{-\alpha} \tag{13}\]

_Proof: See Appendix B-B._

Combining (11) with the noise scaling, \(\sigma^{2}=\mu L^{-\zeta}\), and the ZF radius scaling, \(D=s\cdot L^{\beta}\), we see that if \(\zeta<\beta(\alpha-2)\), the noise plus interference satisfies: \[\lim_{L\to\infty}L^{\zeta}(I_{0}+\sigma^{2})=\mu \tag{14}\] while if \(\zeta\geq\beta(\alpha-2)\), the noise plus interference satisfies: \[\lim_{L\to\infty}L^{\beta(\alpha-2)}(I_{0}+\sigma^{2})=s^{2-\alpha}\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2}+\tilde{\mu} \tag{15}\] in MSE (and hence in probability), where \(\tilde{\mu}=\mu\) if \(\zeta=\beta(\alpha-2)\) and zero otherwise. Combining (14), (15) and (13) proves the convergence of (8) and (9) in probability.

The asymptotic expressions of Theorem 2 are very useful and give important insights into the network behavior. They capture the effect of both noise and interference, and allow the evaluation of the optimal ZF radius. It is important to note the different scenarios described by the values of \(\zeta\) and \(\beta\) in the theorem. Recall that \(\zeta\) determines the scaling of the noise with the number of antennas. In the first scenario, if \(\zeta<\beta(\alpha-2)\), the noise decreases too slowly and the network is noise-dominated (this also includes the case of \(\zeta=0\)). Thus, the performance in this case, given by (8), is not affected by the interference. In the second scenario, if \(\zeta>\beta(\alpha-2)\), the noise decreases too fast and the network is interference-dominated (\(\tilde{\mu}=0\) in (9)).
Thus, the most interesting scenario, where we see the effect of both noise and interference, is when \(\zeta=\beta(\alpha-2)\). As for the scaling of the ZF radius, \(\beta\), we already showed that if \(\beta>0.5\) then the ZF radius grows too fast and the network cannot handle the needed nulls. The theorem shows that if \(\beta<0.5\), the ZF radius growth is slow enough that it does not burden the network at all (\(\tilde{s}=0\)). Thus, optimal performance, which balances the effect of noise and interference, is achieved when \(\beta=0.5\) and \(\tilde{s}=s\). Further discussion on this optimal scenario, and on the achievable performance with a finite number of antennas, is given in the numerical results section, below. Note that in the most interesting case, where \(\beta=0.5\) (later shown to be optimal) and \(\zeta=\alpha/2-1\), the limit simplifies to: \[\lim_{L\to\infty}L^{-\alpha/2}\eta_{0}=\frac{(1-s^{2}\pi\lambda_{b}M)\varphi_ {0,0}^{2}r_{0,0}^{-\alpha}}{s^{2-\alpha}\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2 }+\mu}. \tag{16}\] This limiting expression for the SINR is particularly interesting as it captures the effect of both the interference and the noise, and is hence useful for analyzing the SINR of practical systems with moderately large numbers of antennas (for which interference is still significant). The theorem also shows that the asymptotic SINR does not depend on the power allocation scheme, but only on the average power per BS (\(\bar{P}\)). This insight significantly simplifies the analysis of networks with different power allocation schemes. Further, it allows a simple derivation of an asymptotically optimal power allocation scheme for massive MIMO networks, as shown in Section VI. ## IV Optimization of the ZF radius Using Theorem 2 we can find the zero-forcing radius, \(D\), that maximizes the throughput, i.e., we can find the optimal values of \(\beta\) and \(s\). If \(\zeta<\beta(\alpha-2)\), the network is asymptotically noise limited. In this case, the ZF has no effect and the optimization trivially results in \(D=0\) (or, in terms of (8), \(\tilde{s}=0\)). Hence, from here on, we focus only on the case that \(\zeta\geq\beta(\alpha-2)\). Inspecting (9), the SINR \(\eta_{0}\) scales as \(L^{1+\beta(\alpha-2)}\). Recalling that \(\alpha>2\), the SINR is maximized when \(\beta\) takes its maximal value, that is \(\beta=0.5\). For \(\beta=0.5\), (16) shows that the SINR scales as \(L^{\alpha/2}\). To optimize the SINR, we need to maximize the RHS of (16) with respect to \(s\). Evaluating the derivative of the RHS of (16) with respect to \(s\) and equating it to zero, we conclude that the optimal value, \(s^{*}\), must satisfy: \[\frac{\alpha}{\alpha-2}\pi\lambda_{b}s^{*2}+\frac{\mu}{\bar{P}}s^{*\alpha}= \frac{1}{M}. \tag{17}\] While this equation has no closed-form solution in general, it can easily be solved numerically. For further insight, we can consider the no-noise case (\(\mu=0\)), which leads to: \[s^{*}=\sqrt{\frac{\alpha-2}{\alpha\pi\lambda_{b}M}}. \tag{18}\] Thus, for large enough \(L\), the best nulling radius is \[D^{*}\approx\sqrt{\frac{\alpha-2}{\alpha\pi\lambda_{b}M}L}. \tag{19}\] Since for large \(L\) and \(D\) the average number of mobiles in a disk of radius \(D\) is approximately \(D^{2}\pi\lambda_{b}M\), on average each BS protects approximately \((1-2/\alpha)L\) mobiles from interference.
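As an illustration of how (17) can be solved in practice, the following sketch finds \(s^{*}\) by one-dimensional root finding (the left-hand side of (17) is increasing in \(s\)) and compares it with the closed form (18) for the noiseless case. The numerical values are illustrative placeholders, not the paper's simulation settings:

```python
# Solving (17) for the optimal ZF scaling s*; parameters are placeholders.
import numpy as np
from scipy.optimize import brentq

alpha, lam_b, M = 3.0, 30.0, 3   # path-loss exponent, BS density, mobiles per BS
P_bar, mu = 1.0, 0.1             # average BS power and scaled noise level

def eq17(s):
    # LHS of (17) minus 1/M; increasing in s, so it has a unique positive root
    return alpha / (alpha - 2) * np.pi * lam_b * s**2 + (mu / P_bar) * s**alpha - 1.0 / M

# feasibility requires s^2 * pi * lam_b * M < 1, which bounds the bracket
s_star = brentq(eq17, 1e-9, 1.0 / np.sqrt(np.pi * lam_b * M))
s_no_noise = np.sqrt((alpha - 2) / (alpha * np.pi * lam_b * M))  # closed form (18)
print(s_star, s_no_noise)  # s* sits slightly below the noiseless optimum
```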
## V Throughput The characterization of the asymptotic SINR of each mobile in Theorem 2 is the main theoretic contribution of this manuscript. To better understand the implication of these results, it is beneficial to consider the total throughput of each BS in the network. For simplicity, this section focuses only on the more interesting case where \(\beta=0.5\) and \(\zeta=\beta(\alpha-2)=\alpha/2-1\). The spectral efficiency of link \(i\), \(R_{i}\), is given by (2). The following corollary uses (16) to characterize the asymptotic rate for user \(i\) (recall that mobile 0 is a typical mobile). It is important to note here that while Theorem 2 shows convergence in probability of the normalized SINR, Corollary 1 shows convergence in MSE of the achievable rate. We need this stronger form of convergence in order to make assertions on the mean rate in the subsequent discussion. **Corollary 1**: \(R_{i}\) _converges in MSE to:_ \[\tilde{R}_{i}=\log_{2}\Big{(}1+L^{\alpha/2}\frac{(1-s^{2}\pi\lambda_{b}M) \varphi_{i,b_{i}}^{2}r_{i,b_{i}}^{-\alpha}}{s^{2-\alpha}\frac{2\pi\bar{P} \lambda_{b}}{\alpha-2}+\mu}\Big{)}. \tag{20}\] _That is, \(\lim_{L\to\infty}E\Big{[}\big{|}R_{i}-\tilde{R}_{i}\big{|}^{2}\Big{]}=0\)._ Proof: (20) holds in probability by applying the continuous mapping theorem, in a manner similar to the proof of Theorem 2 in [9]. Proving convergence in MSE is more challenging; the proof is given in Appendix C. To further characterize the throughput, we need to limit the discussion and assume that the mobile locations admit a probability distribution (while so far we have made no assumption on the mobiles except for their maximal distance from their BS). Additionally, as the network is ergodic, we can characterize the network throughput by the throughput of the characteristic BS (BS 0). Without loss of generality, we assume that BS \(0\) serves mobiles \(0,\ldots,M-1\). Thus, the average throughput per BS is: \[R_{\text{av}}=E\left[\sum_{m=0}^{M-1}\log_{2}(1+\eta_{m})\right]. \tag{21}\] To make this derivation applicable also for finite SNRs, we define an asymptotic approximation of the average throughput per BS: \[R_{\text{as}}=E\left[\sum_{m=0}^{M-1}\tilde{R}_{m}\right]. \tag{22}\] We note that Corollary 1 guarantees convergence in MSE, hence \(\lim_{L\to\infty}\left(R_{\text{av}}-R_{\text{as}}\right)=0\). Thus, for a large enough number of antennas, \(L\), we can claim that \(R_{\text{av}}\approx R_{\text{as}}\). Fig. 1 demonstrates the accuracy of the prediction in (22) for two different mobile distributions (mobiles on the cell edge, and mobiles uniformly randomly distributed within a cell). Markers show the simulated results while solid lines show the theoretical result based on (22).3 The cell radius is \(R=0.15\) km, \(\alpha=3\), and the transmit power is set such that the SNR for a mobile at the cell edge is 10 dB from a single antenna. Fig. 1: Spectral efficiency vs. number of antennas, \(L\), for cell-edge and uniformly distributed mobiles. Footnote 3: Closed-form expressions for the throughput in these scenarios can be easily derived, but are omitted here due to space constraints. ## VI Power allocation In this section, we consider the power allocation to each mobile. We wish to maximize \[R_{\text{as}}=\lambda_{b}E\left[\sum_{m=0}^{M-1}\log_{2}\left(1+g\varphi_{0,m} ^{2}r_{0,m}^{-\alpha}\right)\right], \tag{23}\] subject to a peak power constraint \(\sum_{m=0}^{M-1}\varphi_{0,m}^{2}\leq P\), where \(g=L^{\alpha/2}\frac{1-s^{2}\pi\lambda_{b}M}{s^{2-\alpha}\frac{2\pi\bar{P} \lambda_{b}}{\alpha-2}+\mu}\).
That is, we search for \(M\) functions \(\varphi_{0,0}^{2}=f_{0}(\{r_{0,M}\}),\ldots,\varphi_{0,M-1}^{2}=f_{M-1}(\{r_{0,M}\})\) that maximize (23) subject to the peak power constraint \(0\leq\sum_{m=0}^{M-1}f_{m}(\{r_{0,M}\})\leq P\), where \(\{r_{0,M}\}\) is shorthand for \(r_{0,0},\ldots,r_{0,M-1}\). Note that \(g\) depends on the power allocation through the average transmission power \(\bar{P}=E\left[\sum_{m=0}^{M-1}\varphi_{0,m}^{2}\right]\). This is a calculus of variations problem (or an optimal control problem). Theorem 3 shows that it can be solved using a variation of the well-known water-filling algorithm. **Theorem 3**: _The maximum rate of (23), subject to the peak power constraint, is given by:_ \[\max_{0<\bar{P}\leq P}\breve{R}(\bar{P}) \tag{24}\] _where \(\breve{R}(\bar{P})\) is the rate obtained using the power allocation functions \(f_{m}(\{r_{0,M}\})\) evaluated by:_ 1. _For a given_ \(\breve{\lambda}\)_, define_ \[\breve{f}_{m}(\breve{\lambda},\{r_{0,M}\})=\max\left\{0,\breve{\lambda}- \frac{1}{gr_{0,m}^{-\alpha}}\right\}\] (25) _and_ \[\hat{P}(\breve{\lambda})=E\left[\min\left\{P,\sum_{m=0}^{M-1}\breve{f}_{m}( \breve{\lambda},\{r_{0,M}\})\right\}\right]\] (26) 2. _Choose_ \(\breve{\lambda}\) _such that_ \(\hat{P}(\breve{\lambda})=\bar{P}\)_._ 3. _If_ \(\sum_{m=0}^{M-1}\breve{f}_{m}(\breve{\lambda},\{r_{0,M}\})\leq P\)__ * _Set_ \(f_{m}(\{r_{0,M}\})=\breve{f}_{m}(\breve{\lambda},\{r_{0,M}\})\)__ _else_ * Set \[f_{m}(\{r_{0,M}\})=\max\left\{0,\breve{\gamma}-\frac{1}{gr_{0,m}^{-\alpha}}\right\}\] (27) where \(\breve{\gamma}\) is chosen such that \(\sum_{m=0}^{M-1}f_{m}(\{r_{0,M}\})=P\). Furthermore, the optimization in (24) is convex, and the average and peak powers are monotonic non-decreasing functions of the water levels \(\breve{\lambda}\) and \(\breve{\gamma}\), respectively. Thus, the optimization problem can be solved easily, using one-dimensional convex optimization over a power allocation scheme with two water levels: a water level for the average BS power, and a different water level for each BS that is constrained to the maximal peak power. Note that \(\breve{R}(\bar{P})\) is the maximal average BS throughput subject to an average power constraint \(\bar{P}\). Proof: We start by constraining the optimization in (23) by the average BS power, \(\bar{P}\). Since \(g\) depends on \(\bar{P}\), the additional constraint significantly simplifies the problem (as \(g\) becomes constant). The resulting optimization problem for a single BS given \(\bar{P}\) is: \[\breve{R}(\bar{P})=\max_{f_{0},\ldots,f_{M-1}}E\left[\sum_{m=0}^{M-1}\log_{2} \left(1+gf_{m}r_{0,m}^{-\alpha}\right)\right] \tag{28}\] subject to: \[\sum_{m=0}^{M-1}f_{m}\leq P \tag{29}\] \[E\left[\sum_{m=0}^{M-1}f_{m}\right]=\bar{P} \tag{30}\] where we have suppressed the dependence on \(r_{0,0},\ldots,r_{0,M-1}\) to simplify notation. This is a convex maximization problem.
The relevant variant of the Karush-Kuhn-Tucker (KKT) conditions (see for example Theorem 9.4 in [29]) states that there exist a scalar \(\lambda\) and a function \(\gamma=\gamma(r_{0,0},\ldots,r_{0,M-1})\geq 0\) such that the optimal solution must maximize \[E\left[\sum_{m=0}^{M-1}\log_{2}\left(1+gf_{m}r_{0,m}^{-\alpha} \right)\right]\] \[+E\left[\gamma\sum_{m=0}^{M-1}f_{m}\right]+\lambda E\left[\sum_{ m=0}^{M-1}f_{m}\right] \tag{31}\] with respect to \(f_{m}\) and must satisfy \[E\left[\gamma\left(\sum_{m=0}^{M-1}f_{m}-P\right)\right]=0 \tag{32}\] and \(E\left[\sum_{m=0}^{M-1}f_{m}\right]=\bar{P}\). As we can choose \(f_{m}\) for each set of distances, the maximization of (31) can be written as a set of maximizations: \[\max_{\phi_{0},\ldots,\phi_{M-1}} \sum_{m=0}^{M-1}\log_{2}\left(1+g\phi_{m}r_{0,m}^{-\alpha}\right)\] \[+\gamma\left(\sum_{m=0}^{M-1}\phi_{m}-P\right)+\lambda\sum_{m=0}^ {M-1}\phi_{m} \tag{33}\] for each realization of the mobile distances. Note that (32) indicates that \(\gamma\left(\sum_{m=0}^{M-1}\phi_{m}-P\right)=0\) for any mobile distances (up to a set with zero probability). As long as we have a solution with \(\sum_{m=0}^{M-1}\phi_{m}<P\), we set \(\gamma=0\) and say that the constraint is not active. For these realizations we take the derivative of (33) with respect to \(\phi_{\ell}\) and equate it to zero: \[0=\frac{gr_{0,\ell}^{-\alpha}\log_{2}e}{1+g\phi_{\ell}r_{0,\ell}^{-\alpha}}+\lambda. \tag{34}\] Simplifying, and defining the water level \(\breve{\lambda}=-\frac{\log_{2}e}{\lambda}\), leads to (25). When (25) leads to a BS power that exceeds \(P\), we have \(\gamma>0\). In this case, (32) states that the BS transmits at its maximal power, i.e., \(\sum_{m=0}^{M-1}\phi_{m}=P\). This results in the simpler optimization \[\max_{\phi_{0},\ldots,\phi_{M-1}:\;\sum_{m=0}^{M-1}\phi_{m}=P}\sum_{m=0}^{M-1}\log _{2}\left(1+g\phi_{m}r_{0,m}^{-\alpha}\right) \tag{35}\] which results in: \[\phi_{\ell}=-\frac{\log_{2}e}{\gamma}-\frac{1}{gr_{0,\ell}^{-\alpha}}. \tag{36}\] The secondary water level (associated with the peak power constraint for a specific realization) is then defined as \(\breve{\gamma}=-\frac{\log_{2}e}{\gamma}\), and leads to (27). The values of the water levels are determined to satisfy the average and peak power constraints (recalling that we have a scalar water level, \(\breve{\lambda}\), for the average power constraint, and a separate water level, \(\breve{\gamma}=\breve{\gamma}(r_{0,0},\ldots,r_{0,M-1})\), for each BS in which the peak power constraint is active). The monotonicity of the average and peak powers in their respective water levels is obvious from the problem structure.
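To make the two-water-level structure concrete, here is a minimal sketch of steps 1 and 3 of Theorem 3 for a single realization of the in-cell distances, taking the outer water level \(\breve{\lambda}\) (fixed in step 2 by the average-power condition (26)) as given. The function name and the numerical values are ours, for illustration only:

```python
# Two-level water filling for one BS realization (steps 1 and 3 of Theorem 3).
import numpy as np

def bs_power_allocation(r, g, alpha, lam_breve, P):
    """r: in-cell distances r_{0,m}; g: the gain constant of Section VI."""
    inv_gain = r**alpha / g                     # equals 1 / (g r^{-alpha})
    f = np.maximum(0.0, lam_breve - inv_gain)   # tentative allocation (25)
    if f.sum() <= P:
        return f                                # peak constraint inactive
    # Peak constraint active: find the secondary water level of (27) by
    # bisection, so that the powers sum exactly to the peak power P.
    lo, hi = 0.0, lam_breve
    for _ in range(100):
        gam = 0.5 * (lo + hi)
        if np.maximum(0.0, gam - inv_gain).sum() > P:
            hi = gam
        else:
            lo = gam
    return np.maximum(0.0, lo - inv_gain)

# Illustrative usage: three mobiles, outer water level 2.0, peak power 1.0.
print(bs_power_allocation(np.array([0.05, 0.10, 0.14]), g=50.0,
                          alpha=3.0, lam_breve=2.0, P=1.0))
```

In the second branch the returned powers sum to \(P\) up to bisection tolerance; the remaining searches over \(\breve{\lambda}\) and \(\bar{P}\) are one-dimensional, as the theorem notes.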
## VII Numerical results In this section, we present numerical results that demonstrate our main assertions on the convergence of the normalized SINR and spectral efficiency, and show that localized power allocation (Theorem 3) is sufficient even when the number of antennas per base station is only moderately large. For all simulations, the base stations and mobiles were placed on a square area with the edges wrapped around to reduce edge effects. ### _Normalized SINR_ To demonstrate the convergence of the signal and interference powers, first consider the normalized SINR of the mobiles in the network, where the SINR is normalized by \(L^{\frac{\alpha}{2}}\), the transmit power, and the path loss between each mobile and its serving base station. Thus, the normalized SINR for mobile \(i\) is defined as \[\underline{\eta}_{i}=\frac{\eta_{i}}{L^{\frac{\alpha}{2}}\varphi_{i,b_{i}}^{2}r_{i,b_{i}}^{-\alpha}}. \tag{37}\] Recall that the asymptotic analysis in the previous sections was made with an effective noise power that was changing with the number of antennas \(L\). This is in contrast to the practical scenario where the noise power, \(\sigma^{2}\), is fixed. To demonstrate this difference, Fig. 2 presents the convergence of the interference power for the case of fixed \(\mu\), while Fig. 3 presents this convergence for the case of fixed noise power (\(\sigma^{2}\)). For the sake of practicality, all other simulations (i.e., except those depicted in Fig. 2) used the latter approach with fixed noise power. Strictly speaking, the asymptotic predictions of the previous section hold only for constant \(\mu\) (i.e., when the noise variance changes with the number of antennas). In such a case, from (16): \[\lim_{L\rightarrow\infty}\underline{\eta}_{i}=\frac{1-s^{2}\pi\lambda_{b}M}{ s^{2-\alpha}\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2}+\mu}. \tag{38}\] For the case of the normalized SINR, we simulated networks of \(1000\) base stations with density 30 BS/\(km^{2}\), path loss exponent \(\alpha=4\), \(M=3\) active mobiles per base station, and equal power allocated to each mobile. Each configuration was simulated 10 times to generate sufficient data for the histograms. Fig. 2 depicts histograms of the normalized SINRs, (37), for a constant \(\mu\) (and hence, noise power that decreases with \(L\)). The value of \(\mu\) is such that the SNR at the cell edge is 6 dB for \(L=25\) (i.e., \(25^{\alpha/2-1}PR^{-\alpha}/\mu\) equals 6 dB). The figure shows that even for \(L=25\), almost all the simulated normalized SINRs are within \(\pm 6\) dB of the asymptotic prediction given by the RHS of (38), which is indicated by the vertical line. As the number of antennas increases, the normalized SINRs concentrate around the predicted value. For \(L=200\) antennas per base station, almost all the simulated normalized SINRs are within \(\pm 2\) dB of the predicted value. These results demonstrate that the normalized SINR does indeed converge to the asymptotic prediction. To derive theoretic predictions for the case of fixed noise power, we consider a finite number of antennas, and set \(\mu=\sigma^{2}L^{\frac{\alpha}{2}-1}\). Hence, \(\mu\) becomes a function of \(L\) instead of being a constant. Yet, our asymptotic results show that for every \(\mu\) there exists a large enough \(L\) for which: \[\underline{\eta}_{i}\approx\frac{1-s^{2}\pi\lambda_{b}M}{s^{2-\alpha}\frac{2 \pi\bar{P}\lambda_{b}}{\alpha-2}+\sigma^{2}L^{\frac{\alpha}{2}-1}}. \tag{39}\] Simulations herein show that when \(L\) is large enough, this approximation is very accurate, and gives good predictions even for the case of fixed noise power. Fig. 3 depicts the histograms of the normalized SINRs for a fixed noise power, \(\sigma^{2}\). The value of \(\sigma^{2}\) is set such that the SNR at the cell edge was 6 dB (i.e., \(PR^{-\alpha}/\sigma^{2}\) equals 6 dB). The theoretical predictions from (39) are shown as vertical lines. Note that in these simulations \(\mu\) increases with \(L\), and hence the asymptotic normalized SINR decreases with \(L\). The convergence behavior is quite similar to the one depicted in Fig. 2, with almost all of the simulated SINRs within \(\pm 2\) dB of the theoretical prediction for \(L=200\). These simulations thus demonstrate that the asymptotic predictions are useful when the number of antennas is large, even when the noise power is constant.
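To illustrate how (39) is used, the following sketch evaluates the predicted normalized SINR (in dB) for a few antenna counts, using the parameter values quoted above where given. The text does not specify \(s\), the cell radius, or \(\bar{P}\) for these histograms, so the values below for those quantities are assumptions on our part (we take \(s\) from the noiseless optimum (18)):

```python
# Evaluating the fixed-noise prediction (39); s, R, and P_bar are assumed.
import numpy as np

alpha, lam_b, M, P = 4.0, 30.0, 3, 1.0        # per the histogram simulations
R = 0.15                                       # assumed cell radius in km
sigma2 = P * R**(-alpha) / 10**0.6             # cell-edge SNR of 6 dB
s = np.sqrt((alpha - 2) / (alpha * np.pi * lam_b * M))  # assumed: (18)
P_bar = P                                      # assumed: full average power

for L in (25, 50, 100, 200):
    denom = (s**(2 - alpha) * 2 * np.pi * P_bar * lam_b / (alpha - 2)
             + sigma2 * L**(alpha / 2 - 1))
    eta_norm = (1 - s**2 * np.pi * lam_b * M) / denom
    print(L, round(10 * np.log10(eta_norm), 1), "dB")  # decreases with L, as noted
```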
Fig. 2: Histogram of the normalized SINR for fixed \(\mu\). The vertical line shows the normalized SINR from (38). Fig. 3: Histogram of the normalized SINR for fixed \(\sigma^{2}\). The vertical lines show the normalized SINR from (39). ### _Power Control_ As noted in Section VI, the asymptotic results of this paper can be used to find the power allocation that maximizes the spectral efficiency. Further, most of the power allocation can be conducted on a per-base-station basis, resulting in a significant reduction in system complexity. To test the performance of the power allocation of Theorem 3, we compare it with two other policies, namely constant power allocation, where the transmit power for all mobiles is equal, and a power allocation policy based on the weighted-minimum-mean-square-error (WMMSE) sum-spectral-efficiency-maximizing algorithm from [30]. This latter algorithm is independent of the specific beamforming algorithm, and is applied here with the partial-zero-forcing approach. Note that the complexity of the latter algorithm is very high for large networks. Fig. 4 shows the sum spectral efficiency per base station vs. the number of antennas, \(L\), for the three power allocation policies considered. We used a base station density of 60 BS/\(km^{2}\), \(\alpha=3\), \(M=3\) active mobiles per cell, a cell radius of \(0.15\) km, and a constant noise power such that the cell-edge SNR (\(PR^{-\alpha}/\sigma^{2}\)) is 25 dB. Fig. 4: Sum spectral efficiency per base station vs. \(L\), for \(M=3\) and different power allocation approaches. \(\Delta_{o,e}\) and \(\Delta_{o,a}\) denote the percentage difference between the optimized and equal power allocations, and the optimized and asymptotic power allocations, respectively. Note that the WMMSE approach has the highest sum spectral efficiency. At \(L=10\), the WMMSE power allocation achieves a throughput that is 29.4% higher than the throughput of the constant power allocation, but only 7.4% higher compared to the suggested power allocation (based on Theorem 3). Thus, our novel power allocation is useful even for quite a small number of antennas, leading to throughput that is just a few percent below optimal, while requiring much less computation. For larger numbers of antennas, this difference diminishes. E.g., at \(L=40\) antennas per BS, the discrepancy between the WMMSE and our approach reduces to 0.8%. For reference, Fig. 4 also depicts the sum spectral efficiency with non-cooperative zero forcing, where each BS performs zero forcing only to users within its cell. In particular, the figure shows the performance with optimal WMMSE power control and with equal power allocation. Note that the non-cooperative zero-forcing schemes perform considerably worse than the cooperative schemes as the number of antennas grows large. For a smaller number of antennas, the non-cooperative zero-forcing system with WMMSE power control is competitive with the cooperative scheme. This shows that optimal power allocation is especially important in non-cooperating networks. Yet, it is no match for BS cooperation with a large number of antennas. These results indicate that the simple power allocation policy of Theorem 3 results in a sum spectral efficiency that approaches that of a much more computationally intensive power allocation policy, which requires the power allocations to be optimized across all base stations. While our scheme was derived using asymptotic limits, it is shown to be efficient even when the number of antennas per BS is only moderately large (e.g., 10 or more). Finally, we observe from Figure 4 that the power allocation policy of Theorem 3 does result in a substantial spectral efficiency increase over an equal power allocation policy for a small number of antennas.
This gain comes at only a small increase in complexity (as most computations can be performed locally, using equations that only depend on system parameters and in-cell path losses). ## VIII Summary and Conclusions We have developed a method to analyze the SINR and spectral efficiency on the downlink of cooperative cellular networks with multi-antenna base stations. The base stations perform partial zero forcing, nulling the interference to mobiles within a distance \(D\) of them. This system is complicated to analyze due to the dependence between the beamforming vectors used by nearby base stations, since base stations also null users of other cells. Such dependence between nodes in spatial point processes is known to be challenging to analyze. To handle this complexity, we derived novel bounds on marked Poisson point processes with dependent marks. Combining these bounds with an asymptotic analysis, we found a simple closed-form expression for an appropriately normalized version of the SINR in the limit as the number of antennas per base station grows large. This result is further used to derive asymptotic expressions for the spectral efficiency on the downlink. Further, these simple asymptotic expressions for the spectral efficiency are used to derive a simple power allocation scheme which is optimal in the sense of maximizing the mean spectral efficiency of a BS, if the asymptotic expressions are taken as accurate. This simple power allocation algorithm mostly requires base stations to perform power allocation locally, with minimal coordination between base stations. The findings of this work are validated by Monte Carlo simulations, which show that the asymptotic expressions are useful even when the number of antennas per base station is moderately large. The asymptotic expressions we derived are simple and reveal the dependence of the spectral efficiency on tangible system parameters, such as the number of antennas per base station, the base station density, the cell size, and the mobile distribution. Therefore, the results of this work are useful for system design and system optimization, and also allow a better understanding of the network. ## Appendix A Proof of Theorem 1 Consider a finite (large) radius \(R\), and define \(I_{k}(R)=\sum_{i}f_{k}(r_{i})p_{i,k}1_{[r_{i}<R]}\). Also, consider a division of the area into rings of width \(\Delta=R/N\) and define an indicator function \(u_{n}(r)=1_{[(n-1)\Delta\leq r<n\Delta]}\). Let \(q_{k}[n]=\sup_{(n-1)\Delta\leq r<n\Delta}q_{k}(r)\) (or \(q_{k}[n]=\inf_{(n-1)\Delta\leq r<n\Delta}q_{k}(r)\) for the lower bound), and similarly, \(f_{k}[n]=\sup_{(n-1)\Delta\leq r<n\Delta}f_{k}(r)\) (\(f_{k}[n]=\inf_{(n-1)\Delta\leq r<n\Delta}f_{k}(r)\)).
We thus have: \[E\left[I_{1}(R)I_{2}(R)\right] \tag{40}\] \[=\sum_{n_{1}=1}^{N}\sum_{n_{2}=1}^{N}E\Big{[}\sum_{i,j}f_{1}(r_{ i})p_{i,1}u_{n_{1}}(r_{i})f_{2}(r_{j})p_{j,2}u_{n_{2}}(r_{j})\Big{]}\] Next, we apply the bounds in (5), which give: \[\sum_{i,j}E\Big{[}f_{1}(r_{i})p_{i,1}u_{n_{1}}(r_{i})f_{2}(r_{j}) p_{j,2}u_{n_{2}}(r_{j})|r_{i},r_{j}\Big{]} \tag{41}\] \[\leq(\geq)\sum_{i}E\Big{[}f_{1}(r_{i})f_{2}(r_{i})q_{0}(r_{i})u_{n_{1}}(r_{i})u_{n_{2}}(r_{i})\Big{|}r_{i}\Big{]}\] \[+\!\!\sum_{i\neq j}\!\!E\Big{[}f_{1}(r_{i})f_{2}(r_{j})u_{n_{1}}( r_{i})u_{n_{2}}(r_{j})q_{1}(r_{i})q_{2}(r_{j})|r_{i},r_{j}\Big{]}\] Applying the law of total expectation and using the quantized bounds \(q_{k}(r)\leq(\geq)q_{k}[n]\) and \(f_{k}(r)\leq(\geq)f_{k}[n]\) for \((n-1)\Delta\leq r<n\Delta\) results in: \[E\left[I_{1}(R)I_{2}(R)\right] \leq(\geq)\] \[\sum_{n_{1}=1}^{N}f_{1}[n_{1}]f_{2}[n_{1}]q_{0}[n_{1}]E\Big{[} \sum_{i}u_{n_{1}}(r_{i})\Big{]}\] \[+\sum_{n_{1}=1}^{N}\sum_{n_{2}=1}^{N}f_{1}[n_{1}]f_{2}[n_{2}]q_{ 1}[n_{1}]q_{2}[n_{2}]\] \[\cdot E\Big{[}\sum_{i\neq j}u_{n_{1}}(r_{i})u_{n_{2}}(r_{j})\Big{]} \tag{42}\] Since \(\sum_{i}u_{n}(r_{i})\) is a Poisson random variable with parameter \(\lambda_{n}^{N}=\int_{(n-1)\Delta}^{n\Delta}\int_{0}^{2\pi}r\lambda(r,\theta)d\theta dr\), we have \(E\big{[}\sum_{i}u_{n_{1}}(r_{i})\big{]}=\lambda_{n_{1}}^{N}\) and \(E\big{[}\sum_{i\neq j}u_{n_{1}}(r_{i})u_{n_{2}}(r_{j})\big{]}=E\big{[}\sum_{i,j}u_{n_{1}}(r_{i})u_{n_{2}}(r_{j})\big{]}-E\big{[}\sum_{i}u_{n_{1}}(r_{i})u_{n_{2}}(r_{i})\big{]}=\lambda_{n_{1}}^{N}\lambda_{n_{2}}^{N}\). Substituting into (42) we have: \[E\left[I_{1}(R)I_{2}(R)\right]\leq(\geq)\] \[\sum_{n_{1}=1}^{N}\sum_{n_{2}=1}^{N}f_{1}[n_{1}]q_{1}[n_{1}]f_{2 }[n_{2}]q_{2}[n_{2}]\lambda_{n_{1}}^{N}\lambda_{n_{2}}^{N}\] \[+\sum_{n_{1}=1}^{N}f_{1}[n_{1}]f_{2}[n_{1}]q_{0}[n_{1}]\lambda_{n _{1}}^{N} \tag{43}\] To conclude, we note that for any continuous function \(g(r)\) and any family of sample sequences \(r_{n,N}\) satisfying \((n-1)\frac{R}{N}\leq r_{n,N}<n\frac{R}{N}\), we have \[\lim_{N\to\infty}\sum_{n=1}^{N}g(r_{n,N})\lambda_{n}^{N} =\int_{0}^{R}\int_{0}^{2\pi}g(r)r\lambda(r,\theta)d\theta dr\] \[=2\pi\int_{0}^{R}g(r)r\bar{\lambda}(r)dr. \tag{44}\] Thus, \[E\left[I_{1}(R)I_{2}(R)\right] \leq(\geq)\prod_{k=1}^{2}\left(2\pi\int_{0}^{R}f_{k}(r)q_{k}(r)r \bar{\lambda}(r)dr\right)\] \[+2\pi\int_{0}^{R}f_{1}(r)f_{2}(r)q_{0}(r)r\bar{\lambda}(r)dr.\] Taking the limit as \(R\to\infty\) completes the proof. ## Appendix B Proofs of Lemmas ### _Proof of Lemma 1_ Since for large enough \(L\) we have \(D>R\), without loss of generality we only need to consider the \(D>R\) case. The interference power at the typical mobile is given by \[I_{0}=\sum_{k=1}^{\infty}a_{k}\sum_{i\in\mathcal{M}_{k}}\left| \varphi_{i,k}\mathbf{h}_{0,k}^{\dagger}\mathbf{w}_{i}\right|^{2}=\sum_{k=1}^{ \infty}1_{\{r_{0,k}>D\}}r_{0,k}^{-\alpha}\tilde{a}_{k}.\] where \[\tilde{a}_{k}=a_{k}\sum_{i\in\mathcal{M}_{k}}\left|\varphi_{i,k}r_{0,k}^{\alpha/2} \mathbf{h}_{0,k}^{\dagger}\mathbf{w}_{i}\right|^{2}=a_{k}\sum_{i\in\mathcal{M} _{k}}\left|\varphi_{i,k}\zeta_{i,k}\right|^{2}.\] is the effective transmitted power of BS \(k\) to the typical mobile, and \(\zeta_{i,k}=\mathbf{g}_{0,k}^{\dagger}\mathbf{w}_{i}\). Since BSs that are further than \(D\) from the origin do not consider the typical mobile in their beamforming design, their precoding vectors \(\mathbf{w}_{i}\) and \(\mathbf{g}_{0,k}\) are independent.
Recalling also that \(\mathbf{g}_{0,k}\sim\mathcal{CN}(0,\mathbf{I})\) and \(\|\mathbf{w}_{i}\|=1\), we have that the \(\zeta_{i,k}\sim\mathcal{CN}(0,1)\) are i.i.d. random variables for all BSs outside of a disk of radius \(D\) from the origin. For \(r_{0,k}\leq D\), \(\tilde{a}_{k}\) is always multiplied by zero. Thus, we can assume that \(\zeta_{i,k}\sim\mathcal{CN}(0,1)\) and i.i.d. for any \(k>0\). To show convergence of the normalized interference in the mean-square sense, we first find the limit of the expectation and then show that its variance goes to zero. #### IV-B1 Expected Interference From Prop. 2.4 in [31]: \[E[I_{0}]=2\pi\int_{D}^{\infty}r^{1-\alpha}E[\tilde{a}_{k}|r_{0,k}=r]\lambda_{b}dr. \tag{45}\] To bound the expectation, we need upper and lower bounds (for \(k>0\) and \(r_{0,k}>D\)) on \[E[\tilde{a}_{k}|r_{0,k}=r]=\Pr(a_{k}=1|r_{0,k}=r)\cdot\bar{P}(r) \tag{46}\] where we used the fact that \(a_{k}\in\{0,1\}\) and \(\bar{P}(r)\) was defined in (7). We also define: \[\bar{a}(r)=\Pr(a_{k}=1|r_{0,k}=r)=E\left[a_{k}|r_{0,k}=r\right]. \tag{47}\] Recall that \(a_{k}=1\) if the number of mobiles at a distance less than \(D\) from BS \(k\) is at most \(L\). As we have \(M\) mobiles around each BS, \(a_{k}=1\) if (but not only if) there are at most \(L/M\) base stations closer than distance \(D+R\) to BS \(k\). Since we condition on an active mobile being at the origin, there must be one base station at a distance of at most \(R\) from the origin. This BS should be treated separately from the others. As a worst-case assumption, we will derive a bound by assuming that BS \(0\) is at a distance of less than \(D+R\) from BS \(k\). All other BSs are from an HPPP with intensity \(\lambda_{b}\). Thus, the number of BSs at a distance of at most \(D+R\) from BS \(k\) is a Poisson random variable with parameter \(\pi\lambda_{b}(D+R)^{2}\), and the probability that it is below \(\lfloor L/M\rfloor\) is \[Q(\lfloor L/M\rfloor,\pi\lambda_{b}(D+R)^{2})=e^{-\pi\lambda_{b}(D+R)^{2}}\sum_{i=0}^{\lfloor L/M\rfloor-1}\frac{(\pi\lambda_{b}(D+R)^{2})^{i}}{i!} \tag{48}\] where \(Q(a,z)=\Gamma(a,z)/\Gamma(a)\) is the regularized upper incomplete gamma function. Thus \[\bar{a}(r)\geq Q\left(\lfloor L/M\rfloor\,,\pi\lambda_{b}(D+R)^{2}\right) \tag{49}\] Similarly, \(a_{k}=1\) only if (but not necessarily if) there are less than \(L/M\) base stations closer than \(D-R\) to BS \(k\). For this bound we assume that BS \(0\) is far from BS \(k\). Using again the regularized upper incomplete gamma function to bound the probability that the number of BSs within a distance \(D-R\) from BS \(k\) is smaller than \(L/M\), we get: \[\bar{a}(r)\leq Q\left(\left\lfloor\frac{L}{M}\right\rfloor+1,\pi\lambda_{b}(D -R)^{2}\right). \tag{50}\]
As we assume that \(\bar{P}(r)=\bar{P}+\xi(r)\) where \(|\xi(r)|\leq\delta r^{-\gamma}\) with \(\gamma>0\), using (45) we have \[E[I_{0}]\geq 2\pi\int_{D}^{\infty}r^{1-\alpha}Q\left(\left\lfloor\frac{L}{M} \right\rfloor,\pi\lambda_{b}(D+R)^{2}\right)\cdot(\bar{P}+\xi(r))\lambda_{b}dr\] \[\geq 2\pi\lambda_{b}Q\left(\left\lfloor\frac{L}{M}\right\rfloor,\pi \lambda_{b}(D+R)^{2}\right)\frac{D^{2-\alpha}}{\alpha-2}\cdot\left(\bar{P}-\delta D^{-\gamma}\frac{\alpha-2}{\alpha+ \gamma-2}\right) \tag{51}\] and on the other hand, using (50), we get: \[E[I_{0}]\leq 2\pi\int_{D}^{\infty}r^{1-\alpha}Q\left(\left\lfloor\frac{L}{M} \right\rfloor+1,\pi\lambda_{b}(D-R)^{2}\right)\cdot(\bar{P}+\xi(r))\lambda_{b}dr\] \[\leq 2\pi\lambda_{b}Q\left(\left\lfloor\frac{L}{M}\right\rfloor+1, \pi\lambda_{b}(D-R)^{2}\right)\frac{D^{2-\alpha}}{\alpha-2}\cdot\left(\bar{P}+\delta D^{-\gamma}\frac{\alpha-2}{\alpha+ \gamma-2}\right). \tag{52}\] Note that in both (51) and (52) the second term in the parentheses \(\to 0\) as \(D\to\infty\). We next consider the behavior of these expectations in the limit when \(L\to\infty\) and \(D=s\cdot L^{\beta}\). Lemma 2 of [32] shows that \(\lim_{L\to\infty}Q(L,qL)\) converges to \(1\) if \(0\leq q<1\) and converges to \(0\) if \(q\geq 1\). Additionally, since \(Q(L,x)\) monotonically decreases with \(x\), this is easily extended to show that \(\lim_{L\to\infty}Q(L,qL^{2\beta})\) converges to \(1\) if \(\beta<0.5\) and converges to \(0\) if \(\beta>0.5\). Hence, for any \(s\) and \(\beta\) combination satisfying the assumptions of the lemma, the bounds in (49) and (50) become \[\lim_{L\to\infty}\bar{a}(r)=1. \tag{53}\] Substituting into (51) and (52), we conclude that \[\lim_{L\to\infty}E[D^{\alpha-2}I_{0}]=\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2}. \tag{54}\] #### IV-B2 Second Moment of the Interference Next, to evaluate the variance of the interference, \(I_{0}\), we bound the second moment of \(I_{0}\) using Theorem 1. We need to bound \(E[\tilde{a}_{k}\tilde{a}_{\ell}|r_{0,k},r_{0,\ell}]\) for \(k>0\) and \(\ell>0\). Recalling that \(a_{k}\in\{0,1\}\), we start with: \[E[\tilde{a}_{k}\tilde{a}_{\ell}|r_{0,k},r_{0,\ell}]\leq E[\tilde{a}_{k}\tilde{a}_ {\ell}|r_{0,k},r_{0,\ell},a_{k}=a_{\ell}=1]\] For the case that \(k\neq\ell\), we use the fact that the \(\zeta_{i,k}\) are i.i.d. \(\mathcal{CN}(0,1)\) random variables. Thus \[E[\tilde{a}_{k}\tilde{a}_{\ell}|r_{0,k},r_{0,\ell}] \tag{55}\] \[\leq E\left[\left.\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\sum_ {j\in\mathcal{M}_{\ell}}\varphi_{j,\ell}^{2}\right|r_{0,k},r_{0,\ell},a_{k}=a_{ \ell}=1\right].\] This expectation is bounded by the following lemma.
**Lemma 3**: _The correlation between the transmitted powers of two base stations, given their distances from the origin, is upper bounded by_ \[E\left[\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\sum_{j\in \mathcal{M}_{\ell}}\varphi_{j,\ell}^{2}\Big{|}r_{0,k},r_{0,\ell},a_{k}=a_{ \ell}=1\right]\] \[\leq(\bar{P}+cr_{0,k}^{-\tilde{\gamma}/4})(\bar{P}+cr_{0,\ell}^{- \tilde{\gamma}/4}) \tag{56}\] _where \(\tilde{\gamma}=\min\{\gamma,0.5\}\) and \(c^{2}=\delta+(1-\frac{\pi^{2}}{12})^{-1/2}P^{2}/\pi\)._ _Proof:_ See Appendix B-C. In the case that \(k=\ell\), using \(E[|\zeta_{i,k}|^{2}]=1\) and \(E[|\zeta_{i,k}|^{4}]=2\), \[E[\tilde{a}_{k}^{2}|r_{0,k}]\leq E\bigg{[}\Big{(}\sum_{i\in\mathcal{ M}_{k}}\varphi_{i,k}^{2}\left|\zeta_{i,k}\right|^{2}\Big{)}^{2}\Big{|}r_{0,k}\bigg{]} \tag{57}\] \[=E\Big{[}\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{4}\left|\zeta_{ i,k}\right|^{4}\] \[\quad+\sum_{i\in\mathcal{M}_{k}}\sum_{j\in\mathcal{M}_{k},j\neq i} \varphi_{i,k}^{2}\varphi_{j,k}^{2}\big{|}\zeta_{i,k}\big{|}^{2}\left|\zeta_{j, k}\right|^{2}\left|r_{0,k}\right|\] \[=E\left[\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{4}+\sum_{i\in \mathcal{M}_{k}}\sum_{j\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\varphi_{j,k}^{2} \right|r_{0,k}\right]\] The power constraint, \(\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\leq P\), also implies \(\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{4}\leq P^{2}\), and hence we have: \[E[\tilde{a}_{k}^{2}|r_{0,k}]\leq 2P^{2}. \tag{58}\] Using Theorem 1 with \(f_{1}(r)=f_{2}(r)=r^{-\alpha}\cdot 1_{\{r>D\}}\), \(p_{1,k}=p_{2,k}=\tilde{a}_{k}\), \(q_{0}(r)=2P^{2}\), and \(q_{1}(r)=q_{2}(r)=\bar{P}+cr^{-\tilde{\gamma}/4}\), yields \[E\left[I_{0}^{2}\right]\leq \left(2\pi\lambda_{b}\int_{D}^{\infty}(\bar{P}+cr^{-\tilde{\gamma }/4})r^{-\alpha+1}dr\right)^{2}\] \[+4\pi P^{2}\lambda_{b}\int_{D}^{\infty}r^{-2\alpha+1}dr \tag{59}\] Solving the integrals and using also (51), we have \[\text{var}\left\{I_{0}\right\}=E[I_{0}^{2}]-E^{2}[I_{0}] \tag{60}\] \[\leq\left(\frac{2\pi\bar{P}\lambda_{b}D^{2-\alpha}}{\alpha-2} \right)^{2}\left(1+\frac{c(\alpha-2)D^{-\tilde{\gamma}/4}}{\bar{P}(\alpha+ \tilde{\gamma}/4-2)}\right)^{2}\] \[+\frac{4\pi P^{2}\lambda_{b}D^{2-2\alpha}}{2\alpha-2}-Q^{2}\left( \lfloor\frac{L}{M}\rfloor,\pi\lambda_{b}(D+R)^{2}\right)\] \[\cdot\left(\frac{2\pi\lambda_{b}D^{2-\alpha}}{\alpha-2}\right)^{2 }\left(\bar{P}-\delta D^{-\gamma}\frac{\alpha-2}{\alpha+\gamma-2}\right)^{2}.\] With the nulling radius \(D\) such that \(s>0\) with \(0<\beta<0.5\), or \(0<s<1\) with \(\beta=0.5\), the limit in (53), and the fact that \(\alpha>2\), we get for any \(\gamma>0\) that \(\text{var}\left\{D^{\alpha-2}I_{0}\right\}\) scales as \(O(D^{4-2\alpha})\). So \(\lim_{D\to\infty}\text{var}\left\{D^{\alpha-2}I_{0}\right\}=0\), which combined with (54) completes the proof. ### _Proof of Lemma 2_ \(\mathbf{Q}_{0,0}\) is a projection matrix, and thus is idempotent; it is also Hermitian. Hence, \(\|\mathbf{Q}_{0,0}\mathbf{g}_{0,0}\|=\sqrt{\mathbf{g}_{0,0}^{\dagger}\mathbf{ Q}_{0,0}\mathbf{g}_{0,0}}\). Substituting in (12) and using also \(a_{0}\in\{0,1\}\) gives \[S_{0}=\varphi_{0,0}^{2}a_{0}r_{0,0}^{-\alpha}\mathbf{g}_{0,0}^{ \dagger}\,\mathbf{Q}_{0,0}\mathbf{g}_{0,0} \tag{61}\] Note that given the number of zero-forced mobiles, \(\sum_{j}1_{r_{j,0}<D}\), the quantity \(2\cdot\mathbf{g}_{0,0}^{\dagger}\mathbf{Q}_{0,0}\mathbf{g}_{0,0}\) is a \(\chi^{2}\) random variable with \(2(L-\sum_{j}1_{r_{j,0}<D})\) degrees of freedom. Hence, \(\mathbf{g}_{0,0}^{\dagger}\mathbf{Q}_{0,0}\mathbf{g}_{0,0}\) is the sum of \((L-\sum_{j}1_{r_{j,0}<D})\) unit-mean exponential random variables.
Recalling that \(D=sL^{\beta}\), with \(0<\beta\leq 0.5\), from the strong law of large numbers we have with probability 1 that: \[\lim_{L\to\infty}\frac{\mathbf{g}_{0,0}^{\dagger}\mathbf{Q}_{0,0} \mathbf{g}_{0,0}}{L-\sum_{j}1_{r_{j,0}<D}}=1. \tag{62}\] As the BSs form an HPPP, and there are \(M\) mobiles per base station, with probability 1, \[\lim_{D\to\infty}\frac{1}{D^{2}}\sum_{j}1_{r_{j,0}<D}=\pi\lambda_{b}M \tag{63}\] Combining with (62): \[\lim_{L\to\infty}\frac{\mathbf{g}_{0,0}^{\dagger}\mathbf{Q}_{0,0} \mathbf{g}_{0,0}}{L-s^{2}L^{2\beta}\pi\lambda_{b}M}=1 \tag{64}\] Substituting into (61), and recalling that \(a_{0}\to 1\) as \(L\to\infty\) in MSE, completes the proof. ### _Proof of Lemma 3_ For any two BSs, we assumed that \[C_{k,\ell}(\Phi_{b}) \triangleq E\left[\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\sum_{j \in\mathcal{M}_{\ell}}\varphi_{j,\ell}^{2}\Big{|}\Phi_{b},a_{k}=a_{\ell}=1\right]\] \[=\bar{P}^{2}+\xi_{k,\ell} \tag{65}\] with \(|\xi_{k,\ell}|\leq\min\{P^{2},\delta\tilde{r}_{k,\ell}^{-\gamma}\}\). Removing the condition on all BS locations and conditioning only on \(r_{0,k}\), \(r_{0,\ell}\): \[E \left[\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\sum_{j\in \mathcal{M}_{\ell}}\varphi_{j,\ell}^{2}\Big{|}r_{0,k},r_{0,\ell},a_{k}=a_{\ell}=1\right]\] \[=E\left[C_{k,\ell}(\Phi_{b})\,\Big{|}\,r_{0,k},r_{0,\ell},a_{k}=a_{\ell}=1\right]\] \[\leq\bar{P}^{2}+E\left[|\xi_{k,\ell}|\Big{|}r_{0,k},r_{0,\ell} \right]. \tag{66}\] Let \(q=(r_{0,k}r_{0,\ell})^{1/4}\). We have: \[E \left[|\xi_{k,\ell}|\Big{|}r_{0,k},r_{0,\ell}\right]\] \[=\Pr(\tilde{r}_{k,\ell}\leq q)E\left[|\xi_{k,\ell}|\Big{|}r_{0,k},r_ {0,\ell},\tilde{r}_{k,\ell}\leq q\right]\] \[+\Pr(\tilde{r}_{k,\ell}>q)E\left[|\xi_{k,\ell}|\Big{|}r_{0,k},r_{0, \ell},\tilde{r}_{k,\ell}>q\right]\] \[\leq\Pr(\tilde{r}_{k,\ell}\leq q)P^{2}+\delta q^{-\gamma}. \tag{67}\] Let \(\theta\) be the angle between the BSs as seen by the typical mobile; using the law of cosines: \[\tilde{r}_{k,\ell}^{2}=r_{0,k}^{2}+r_{0,\ell}^{2}-2r_{0,k}r_{0,\ell}\cos\theta. \tag{68}\] Using (68), the probability that the distance between the two BSs is smaller than \(q\) is \[\Pr(\tilde{r}_{k,\ell}\leq q) =\Pr\left(\cos\theta\geq\frac{r_{0,k}^{2}+r_{0,\ell}^{2}-\sqrt{r_{0, k}r_{0,\ell}}}{2r_{0,k}r_{0,\ell}}\right)\] \[\leq\Pr\left(1-\cos\theta\leq\frac{\sqrt{r_{0,k}r_{0,\ell}}}{2r_{0,k}r_{0, \ell}}\right). \tag{69}\] Using also the inequality \(1-\cos\theta\geq\frac{\theta^{2}}{2}-\frac{\theta^{4}}{24}\geq(1-\frac{\pi^{2}}{12} )\frac{\theta^{2}}{2}\) for \(|\theta|\leq\pi\), we have \[\Pr(\tilde{r}_{k,\ell}\leq q)\leq\Pr\left((1-\frac{\pi^{2}}{12}) \frac{\theta^{2}}{2}\leq\frac{\sqrt{r_{0,k}r_{0,\ell}}}{2r_{0,k}r_{0,\ell}}\right) \tag{70}\] \[\qquad=\Pr\left(|\theta|\leq(1-\frac{\pi^{2}}{12})^{-1/2}(r_{0,k} r_{0,\ell})^{-1/4}\right).\] Finally, since the network is homogeneous, \(\theta\) is uniformly distributed over \([0,2\pi)\). Thus \[\Pr(\tilde{r}_{k,\ell}\leq q)\leq\frac{1}{\pi}\bigg{(}1-\frac{\pi^{2}}{12} \bigg{)}^{-1/2}\left(r_{0,k}r_{0,\ell}\right)^{-1/4}. \tag{71}\] Substituting in (67) and then in (66), we have: \[E\left[\sum_{i\in\mathcal{M}_{k}}\varphi_{i,k}^{2}\sum_{j\in \mathcal{M}_{\ell}}\varphi_{j,\ell}^{2}\Big{|}r_{0,k},r_{0,\ell},a_{k}=a_{ \ell}=1\right]\] \[\qquad\leq\bar{P}^{2}+c^{2}\cdot(r_{0,k}r_{0,\ell})^{-\tilde{\gamma}/4}\] \[\qquad\leq(\bar{P}+cr_{0,k}^{-\tilde{\gamma}/4})(\bar{P}+cr_{0,\ell}^{ -\tilde{\gamma}/4}).\] where we used the lemma definitions for \(c^{2}\) and \(\tilde{\gamma}\).
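The only non-probabilistic ingredient in the proof above is the elementary cosine bound, which is easy to verify numerically (a pure sanity check on our part, not part of the paper):

```python
# Check that 1 - cos(t) >= t^2/2 - t^4/24 >= (1 - pi^2/12) * t^2/2 on [-pi, pi].
import numpy as np

t = np.linspace(-np.pi, np.pi, 200001)
assert np.all(1 - np.cos(t) >= t**2 / 2 - t**4 / 24 - 1e-12)
assert np.all(t**2 / 2 - t**4 / 24 >= (1 - np.pi**2 / 12) * t**2 / 2 - 1e-12)
print("both inequalities hold on [-pi, pi]")
```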
## Appendix C Proof of Mean-Square Convergence of the Throughput Let \(\tilde{\eta}_{0}=\varphi_{0,0}^{2}r_{0,0}^{-\alpha}\mathbf{g}_{0,0}^{\dagger }\mathbf{g}_{0,0}/(\mu L^{1-\alpha/2})\), \(H=\lim_{L\to\infty}L^{-\alpha/2}\eta_{0}=\frac{(1-\tilde{s}^{2}\pi\lambda_{b} M)\varphi_{0,0}^{2}r_{0,0}^{-\alpha}}{s^{2-\alpha}\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2}+\mu}\) and \(\hat{H}=\lim_{L\to\infty}L^{-\alpha/2}\tilde{\eta}_{0}=\varphi_{0,0}^{2}r_{0, 0}^{-\alpha}/\mu\). As \(\mathbf{g}_{0,0}^{\dagger}\mathbf{g}_{0,0}\) is a \(\chi^{2}\) random variable with \(2L\) degrees of freedom, \(L^{-\alpha/2}\tilde{\eta}_{0}\) has a bounded third moment. Noting that \(0\leq\eta_{0}\leq\tilde{\eta}_{0}\), both \(L^{-\alpha/2}\tilde{\eta}_{0}\) and \(L^{-\alpha/2}\eta_{0}\) are uniformly integrable random variables [33] and hence \(\lim_{L\to\infty}E\left[\left|L^{-\alpha/2}\eta_{0}-H\right|^{2}\right]=0\). Let \(\mathcal{Y}\) be \(1\) if \(\eta_{0}\geq L^{\alpha/2}H/4\), and \(0\) otherwise, and consider \(\mathcal{E}=E[|\log_{2}(1+\eta_{0})-\log_{2}(1+L^{\alpha/2}H)|^{2}]\). We need to prove that \(\lim_{L\to\infty}\mathcal{E}=0\). We have: \[\mathcal{E} = E[|\log_{2}(1+\eta_{0})-\log_{2}(1+L^{\alpha/2}H)|^{2}\cdot \mathcal{Y}]\] \[\quad+E[|\log_{2}(1+\eta_{0})-\log_{2}(1+L^{\alpha/2}H)|^{2}\cdot \bar{\mathcal{Y}}]\] \[\leq E\left[\left|\log_{2}\left(\frac{\eta_{0}}{L^{\alpha/2}H}\right) \right|^{2}\mathcal{Y}\right]+E\left[\log_{2}^{2}(1+L^{\alpha/2}H)\bar{ \mathcal{Y}}\right] \tag{72}\] The first term converges to zero because \(\eta_{0}/L^{\alpha/2}H\to 1\) in MSE and \(|\log x|\) is continuous with bounded slope for \(x>1/4\). The second term is bounded by \(\log_{2}^{2}(1+L^{\alpha/2}H)\cdot\Pr\left\{\mathcal{Y}=0\right\}\), where the first factor scales as \(\log_{2}^{2}(L)\) and the second factor can be bounded by \[\Pr\left\{\mathcal{Y}=0\right\}\leq\Pr\left\{a_{0}<1\right\} \tag{73}\] \[+\Pr\left\{\frac{\mathbf{g}_{0,0}^{\dagger}\mathbf{Q}_{0,0} \mathbf{g}_{0,0}}{L}<\frac{1-\tilde{s}^{2}\pi\lambda_{b}M}{2}\right\}\] \[+\Pr\left\{\frac{\sum_{i=1}^{\infty}\left|a_{b_{i}}\varphi_{i,b_{i }}\mathbf{h}_{0,b_{i}}^{\dagger}\mathbf{w}_{i}\right|^{2}+\sigma^{2}}{2L^{1-\alpha/2 }}>s^{2-\alpha}\frac{2\pi\bar{P}\lambda_{b}}{\alpha-2}+\mu\right\}\] Using the results derived above, it can be shown that all three terms in (73) decay to zero, where the slowest term (the third) decays as \(O(L^{2-\alpha})\). This decay is much faster than the \(\log_{2}^{2}L\) growth of the first factor. Thus (72) converges to zero, which completes the proof.
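The \(\chi^{2}\) concentration invoked in Lemma 2 and in the appendix above is also easy to illustrate numerically: projecting an i.i.d. complex Gaussian vector onto the orthogonal complement of \(K\) independent random directions leaves a squared norm that concentrates around \(L-K\). A small illustration (the sizes below are arbitrary):

```python
# g† Q g concentrates around L - K when Q projects out K random directions.
import numpy as np

rng = np.random.default_rng(1)
L, K, trials = 256, 40, 500
ratios = []
for _ in range(trials):
    g = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    A = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)
    Qg = g - A @ np.linalg.lstsq(A, g, rcond=None)[0]  # remove the span of A's columns
    ratios.append(np.vdot(Qg, Qg).real / (L - K))
print(np.mean(ratios), np.std(ratios))  # mean ~ 1, spread shrinking with L - K
```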
2306.17810
A Massive Scale Semantic Similarity Dataset of Historical English
A diversity of tasks use language models trained on semantic similarity data. While there are a variety of datasets that capture semantic similarity, they are either constructed from modern web data or are relatively small datasets created in the past decade by human annotators. This study utilizes a novel source, newly digitized articles from off-copyright, local U.S. newspapers, to assemble a massive-scale semantic similarity dataset spanning 70 years from 1920 to 1989 and containing nearly 400M positive semantic similarity pairs. Historically, around half of articles in U.S. local newspapers came from newswires like the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles and their headlines by exploiting document layouts and language understanding. We then use deep neural methods to detect which articles are from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs. The resulting publicly available HEADLINES dataset is significantly larger than most existing semantic similarity datasets and covers a much longer span of time. It will facilitate the application of contrastively trained semantic similarity models to a variety of tasks, including the study of semantic change across space and time.
Emily Silcock, Melissa Dell
2023-06-30T17:16:04Z
http://arxiv.org/abs/2306.17810v2
# A Massive Scale Semantic Similarity Dataset of Historical English ###### Abstract A diversity of tasks use language models trained on semantic similarity data. While there are a variety of datasets that capture semantic similarity, they are either constructed from modern web data or are relatively small datasets created in the past decade by human annotators. This study utilizes a novel source, newly digitized articles from off-copyright, local U.S. newspapers, to assemble a massive-scale semantic similarity dataset spanning 70 years from 1920 to 1989 and containing nearly 400M positive semantic similarity pairs. Historically, around half of articles in U.S. local newspapers came from newswires like the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles and their headlines by exploiting document layouts and language understanding. We then use deep neural methods to detect which articles are from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs. The resulting publicly available HEADLINES dataset is significantly larger than most existing semantic similarity datasets and covers a much longer span of time. It will facilitate the application of contrastively trained semantic similarity models to a variety of tasks, including the study of semantic change across space and time. ## 1 Introduction Transformer language models contrastively trained on large-scale semantic similarity datasets are integral to a variety of applications in natural language processing (NLP). Contrastive training is often motivated by the anisotropic geometry of pre-trained transformer models like BERT [5], which complicates working with their hidden representations. Representations of low frequency words are pushed outwards on the hypersphere, the sparsity of low frequency words violates convexity, and the distance between embeddings is correlated with lexical similarity. This leads to poor alignment between semantically similar texts and poor performance when individual term representations are pooled to create a representation for longer texts [22]. Contrastive training reduces anisotropy [26]. A variety of semantic similarity datasets have been used for contrastive training [19]. Many of these datasets are relatively small, and the bulk of the larger datasets are created from recent web texts; _e.g._, positive pairs are drawn from the texts in an online comment thread or from questions marked as duplicates in a forum. To provide a semantic similarity dataset that spans a much longer length of time and a vast diversity of topics, this study develops HEADLINES (**H**istorical **E**normous-scale **A**bstractive **D**up**LI**cate **NE**ws **S**ummaries), a massive dataset containing nearly 400 million high quality semantic similarity pairs drawn from 70 years of off-copyright U.S. newspapers. Historically, around half of content in the many thousands of local newspapers across the U.S. was taken from centralized sources such as the Associated Press wire [9]. Local newspapers reprinted wire articles but wrote their own headlines, which form abstractive summaries of the articles. Headlines written by different papers to describe the same wire article form positive semantic similarity pairs.
To construct HEADLINES, we digitize front pages of off-copyright local newspapers, localizing and OCRing individual content regions like headlines and articles. The headlines, bylines, and article texts that form full articles span multiple bounding boxes - often arranged with complex layouts - and we associate them using a model that combines layout information and language understanding [15]. Then, we use neural methods from [24] to accurately predict which articles come from the same underlying source, in the presence of noise and abridgement. HEADLINES allows us to leverage the collective writings of many thousands of local editors across the U.S., spanning much of the 20th century, to create a massive, high-quality semantic similarity dataset. HEADLINES captures semantic similarity with minimal noise, as positive pairs summarize the same underlying texts. This study is organized as follows. Section 2 describes HEADLINES, and Section 3 relates it to existing datasets. Section 4 describes and evaluates the methods used for dataset construction, Section 5 benchmarks the dataset, and Section 6 discusses limitations and intended usage. ## 2 Dataset Description HEADLINES contains 393,635,650 positive headline pairs from off-copyright newspapers. Figure 1 plots the distribution of content by state. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Decade & Headline & Cluster & Positive & Word & Words Per & Line & Lines Per & Character \\ & Count & Count & Pair Count & Count & Headline & Count & Headline & Error Rate \\ \hline 1920s & 4,889,942 & 1,032,108 & 28,928,226 & 68,486,589 & 14.0 & 18,893,014 & 3.9 & 4.3\% \\ 1930s & 5,519,472 & 1,126,566 & 37,529,084 & 75,210,423 & 13.6 & 21,905,153 & 4.0 & 3.7\% \\ 1940s & 6,026,940 & 1,005,342 & 62,397,004 & 61,629,003 & 10.2 & 19,538,729 & 3.2 & 2.4\% \\ 1950s & 7,530,810 & 1,192,858 & 100,272,386 & 61,127,313 & 8.1 & 20,823,786 & 2.8 & 2.3\% \\ 1960s & 6,533,071 & 926,819 & 108,415,279 & 46,640,311 & 7.1 & 16,408,148 & 2.5 & 3.7\% \\ 1970s & 3,664,201 & 585,782 & 52,951,097 & 24,472,831 & 6.7 & 7,829,510 & 2.1 & 3.2\% \\ 1980s & 703,025 & 107,507 & 28,772,722 & 5,161,537 & 7.3 & 1,502,893 & 2.1 & 1.5\% \\ **Total** & **34,867,488** & **6,039,982** & **393,635,650** & **342,728,007** & **9.8** & **106,991,233** & **3.1** & \\ \hline \hline \end{tabular} \end{table} Table 1: Descriptive statistics of HEADLINES. Figure 1: Geographic variation in source of headlines. The supplementary materials summarize copyright law for works first published in the United States. The newspapers in HEADLINES are off-copyright because they were published without a copyright notice or did not renew their copyright, required formalities at the time. Far from being an oversight, it was rare historically to copyright news, outside the nation's most widely circulated papers. The headlines in our dataset were written by editors at these local papers, and hence are in the public domain and anyone can legally use or reference them without permission. It is possible that a newspaper not itself under copyright could reproduce copyrighted content from some third party - the most prevalent example of this is comics - but this does not pose a problem for HEADLINES, since the dataset is built around the locally written headlines that describe the same wire articles.
It is also worth noting that a detailed search of U.S. copyright catalogs by [20] did not turn up a single instance of a wire service copyrighting their articles. (Even if they had, however, it would not pose a problem for headlines, since they were written locally.) Figure 2 shows examples of semantic similarity pairs. We quantify variation in HEADLINES across years, using a measure reminiscent of Earth Mover distance. This measure computes how much each text in a query dataset (e.g., 1920 headlines) would have to change (in embedding space) to have the same representation as the closest text in a key dataset (e.g., 1930 headlines). Specifically, we first take a random sample of 10,000 texts per year. For year \(j\), we embed texts \(t_{1j}...t_{10,000j}\) using all-mpnet-base-v2. We choose MPNet because it has been shown to perform well across a variety of embedding datasets and tasks [19]. For each of these \(t_{ij}\), we compute the most similar embedding in year \(k\), measured by cosine similarity. This gives us a vector of similarity measures \(s_{1jk}...s_{10,000jk}\), that for each text in year \(j\) measure proximity to the most similar text in year \(k\). We average these similarities to calculate \(SIM_{jk}\).1 Figure 3, which plots the \(SIM_{jk}\), shows that similarity increases with temporal proximity. The dark square towards the upper left is World War 2, during which newspapers coverage was more homogeneous due to the centrality of the war. Footnote 1: We end in 1977, as we have very few texts after this date. Figure 3: Average similarity between different years of HEADLINES. Figure 2: Semantic similarity examples, showing article image crops and OCR’ed headlines. HEADLINES is useful for training and evaluating models that aim to capture abstractive similarity, whether using the embeddings for tasks like clustering, nearest neighbor retrieval, or semantic search [19]. Because it contains chronological content over a long span of time, it can be used to evaluate dynamic language models for processing continuously evolving content [1, 16], as well as how large language models can be adapted to process historical content [18, 4, 17]. Likewise, it can be used to train or evaluate models that predict the region or year a text was written [21]. In addition, it is useful for training models and developing benchmarks for a number of downstream tasks, such as topic classification of vast historical and archival documents, which have traditionally been classified by hand. This is an extremely labor-intensive process, and as a result many historical archives and news collections remain largely unclassified. Similarly, it could facilitate creating a large scale dataset to measure term-level semantic change, complementing existing smaller-scale SemEval tasks. HEADLINES has a Creative Commons CC-BY license, to encourage widespread use, and is available on Huggingface.2 Footnote 2: [https://huggingface.co/datasets/dell-research-harvard/headlines-semantic-similarity](https://huggingface.co/datasets/dell-research-harvard/headlines-semantic-similarity) ## 3 Existing Semantic Similarity Datasets There is a dense literature on semantic similarity, with datasets covering diverse types of textual similarity and varying greatly in size. The focus of HEADLINES on semantic similarity in historical texts sets it apart from other widely used datasets. It also dwarfs the size of most existing datasets, aggregating the collective work of 20th century newspapers editors, from towns across the U.S. 
Its paired headlines summarize the same text, rather than being related by other forms of similarity frequently captured by datasets, such as being in the same conversation thread or answering a corresponding question. One related class of semantic similarity datasets consists of duplicate questions from web platforms, _e.g._, questions tagged by users as duplicates from WikiAnswers (77.4 million positive pairs) [6], duplicate Stack Exchange questions (around 304,000 duplicate title pairs) [2], and duplicate Quora questions (around 400,000 duplicate pairs) [13].3 Alternatively, MS COCO [3] used Amazon's Mechanical Turk to collect five captions for each image in the dataset, resulting in around 828,000 positive caption pairs. In Flickr [27], 317,695 positive semantic similarity pairs describe around 32,000 underlying images. Like HEADLINES, positive pairs in these datasets refer to the same underlying content, but they describe an image rather than providing an abstractive summary of a longer text. In future work, HEADLINES could be expanded to include caption pairs describing the same underlying photo wire image, as local papers frequently wrote their own captions. Footnote 3: Dataset sizes are drawn, when applicable, from a table documenting the training of Sentence BERT [22]. Online comment threads have also been used to train semantic similarity models. For example, the massive-scale Reddit Comments [12] draws positive semantic similarity pairs from Reddit conversation threads between 2016 and 2018, providing 726.5 million positive pairs. Semantic similarity between comments in an online thread reflects conversational similarity, to the extent the thread stays on topic, rather than abstractive similarity. Likewise, question-answer and natural-language inference datasets are widely used for semantic similarity training. While other datasets exploit abstractive summaries - _e.g._, Semantic Scholar (S2ORC) has been used to create semantic similarity pairs of the titles and abstracts of papers that cite each other - to our knowledge there are no large-scale datasets with abstractive summaries of the same underlying texts. A wide variety of text embedding datasets have been combined into the Massive Text Embedding Benchmark (MTEB) [19], which evaluates 8 embedding tasks on 58 datasets covering 112 languages. We measure the similarity between HEADLINES and the English datasets in MTEB, using the Earth Mover-style distance described in Section 2. As above, we first take a random sample of (up to) 10,000 texts from each decade of HEADLINES, as well as each of the English datasets in MTEB (if the dataset contains fewer than 10K texts, we use the full dataset and limit the comparison dataset to the same number of randomly selected texts). For dataset \(j\), we embed texts \(t_{1j}...t_{10,000j}\). For each of these \(t_{ij}\), we compute the most similar embedding in dataset \(k\), averaging these across all texts in \(j\) to compute \(SIM_{jk}\). \(SIM_{jk}\) need not be symmetric. Suppose dataset \(j\) is highly homogeneous, whereas dataset \(k\) is heterogeneous. \(SIM_{jk}\) may be high, because the similar embeddings in homogeneous dataset \(j\) are close to a subset of embeddings in dataset \(k\). On the other hand, \(SIM_{kj}\) may be low, because most texts in dataset \(k\) are dissimilar from texts in homogeneous dataset \(j\). Figure 4 shows the entire similarity matrix between HEADLINES and the English datasets in MTEB; rows are the query dataset and columns are the key.
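To make the comparison procedure concrete, here is a minimal sketch of the \(SIM_{jk}\) computation, using the sentence-transformers implementation of all-mpnet-base-v2 as in Section 2 (the function name is ours):

```python
# SIM_{jk}: average over query texts of the max cosine similarity to key texts.
from sentence_transformers import SentenceTransformer

def sim_jk(texts_j, texts_k,
           model_name="sentence-transformers/all-mpnet-base-v2"):
    model = SentenceTransformer(model_name)
    E_j = model.encode(texts_j, normalize_embeddings=True)  # unit-norm rows
    E_k = model.encode(texts_k, normalize_embeddings=True)
    cos = E_j @ E_k.T                 # cosine similarity, since rows are normalized
    return float(cos.max(axis=1).mean())

# Note the asymmetry discussed above: sim_jk(A, B) and sim_jk(B, A) generally differ.
```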
The style of the figure was adapted from [19]. The average similarity between HEADLINES and MTEB datasets is 31.4, whereas the average similarity between MTEB datasets and HEADLINES is 33.5, supporting our supposition that HEADLINES is diverse on average relative to other benchmarks. This average max similarity shows that there is ample information in HEADLINES not contained in existing datasets. Table 2 shows examples where the nearest text to a headline in an MTEB dataset is highly similar, versus examples of average similarity. In some cases, highly similar texts in fact have a different meaning, but the limited context in the MTEB datasets makes this difficult to capture. \begin{table} \begin{tabular}{l l l} \hline \hline **Headline** & **Highly similar texts** & **Similarity** \\ “Inflation Cuts are Questoined” & **Reddit**: “Today FOMC Resumed Meeting Inflation & 0.55 \\ & getting out of hand... Maybe... Maybe Not” & \\ “Bear Bites Off Arm of Child” & **StackExchange**: “How to handle animal AI biting & 0.46 \\ & and holding onto the character.. & \\ “British Cruiser Reported Sunk” & **Twitter**: “That’s Ift, Britain is Sunk” & 0.61 \\ “Will Free Press Dance to Government Tune” & **Twitter**: “Donald Trump v. a free press” & 0.60 \\ “Partitioning Plan unsatisfactory, Re Declares” & **Ubuntu Questions**: “Partitioning Issues” & 0.51 \\ \hline **Headline** & **Average similarity texts** & **Similarity** \\ “Reds Knot Strong Tie” & **ArXiv**: “Knots and Polytopes” & 0.34 \\ “SHOWERS PROMISED TO END HEAT WAVE Two Deaths From Heat Over Weekend” & **StackOverflow**: “how to annotate heatmap with & 0.27 \\ “Salary Boost Due For Some On Labor Day” & **Twitter**: “Glassdoor will now tell you if your & 0.31 \\ “40 Old Ladies Now In Senate Says Rogers” & **Quora**: “Is 19 young?” & 0.28 \\ \hline \hline \end{tabular} \end{table} Table 2: This table shows similarities between example texts. Figure 4: Average similarity between HEADLINES (by decade) and the English datasets in the Massive Text Embedding Benchmark. The similarity measure is described in the text. ## 4 Dataset Construction and Evaluation ### Digitization We digitized front pages from off-copyright newspapers spanning 1920-1989. We recognize layouts using Mask R-CNN [11] and OCR the texts. We transcribed the headlines using Tesseract. The digitization was performed using Azure F-Series CPU nodes. We evaluate this OCR on a hand-annotated sample of 300 headlines per decade. Table 1 reports the character error rate, defined as the Levenshtein distance between the transcribed text and the ground truth, normalized by the length of the ground truth. As expected, OCR quality improves over time, as there is less damage from aging and fewer unusual fonts. ### Article Association Newspaper articles have complex and irregular layouts that can span multiple columns (Figure 5). We associate the (potentially multiple) headline bounding boxes with the (potentially multiple) article bounding boxes and byline boxes that comprise a single article using a combination of layout information and language understanding. A rule-based approach using the document layouts gets many of the associations correct, but misses some difficult cases where article bounding boxes are arranged in complex layouts. Language understanding can be used to associate such articles but must be robust to noise, from errors in layout detection (_e.g._ from cropping part of a content bounding box or adding part of the line below) and from OCR character recognition errors. 
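For reference, the character error rate used in Table 1 above is simple to compute. A minimal sketch, with our own function names, implementing the Levenshtein-based definition just given:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the standard two-row dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def char_error_rate(transcription: str, ground_truth: str) -> float:
    """Levenshtein distance normalized by the length of the ground truth."""
    return levenshtein(transcription, ground_truth) / len(ground_truth)
```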
Hand annotating a sufficiently large training dataset would have been infeasibly costly. Instead, we devise a set of rules that - while recall is relatively low - have precision above 0.99, as measured on an evaluation set of 3,803 labeled bounding boxes. The algorithm exploits the positioning of article bounding boxes relative to headline boxes (as in Figure 6), first grouping an article bounding box with a headline bounding box if the rules are met and then associating all article bounding boxes grouped with the same headline together. Since precision is above 0.99, the rule generates nearly perfect silver-quality training data. To train a language model to predict whether article box B follows box A, we embed the first and last 64 tokens of the texts in boxes B and A, respectively, with a RoBERTa base model [15]. The pair is positive when B follows A. The training set includes 12,769 positive associated pairs, with training details described in the supplementary materials. At inference time, we first associate texts using the rule-based approach, described in Figure 6, which has extremely high precision. To improve recall, we then apply the RoBERTa cross-encoder to remaining article boxes that could plausibly be associated, given their coordinates. Texts cannot be followed by a text that appears to the left, as layouts always proceed from left to right, so these combinations are not considered. We evaluate this method on a hand-labeled dataset of 214 scans. Full details of this dataset are given in the appendix. Table 3 evaluates recall, precision and F1 for associated articles. The F1 of 93.7 is high, and precision is extremely high. Errors typically occur when there is an error in the layout analysis or when contents are very similar, _e.g._, grouping multiple obituaries into a single article. \begin{table} \begin{tabular}{c c c c} \hline \hline & (1) & (2) & (3) \\ & F1 & Recall & Precision \\ \cline{2-4} Full Article Association & 93.7 & 88.3 & 99.7 \\ \hline \hline \end{tabular} \end{table} Table 3: This table evaluates the full article association model. Figure 5: Articles that are misassociated with rule-based or image-based methods. Figure 6: Illustration of article association pipeline ### Detecting Reproduced Content Accurately detecting reproduced content can be challenging, as articles were often heavily abridged by local papers to fit within their space constraints and errors in OCR or article association can add significant noise. Table 4 shows examples of reproduced articles. We use the model developed by [24], who show that a contrastively trained neural MPNet bi-encoder - combined with single linkage clustering of article representations - accurately and cheaply detects reproduced content. This bi-encoder is contrastively trained on a hand-labeled dataset (detailed in the appendix) to create similar representations of articles from the same wire source and dissimilar representations of articles from different underlying sources, using S-BERT's online contrastive loss [10] implementation. We run clustering on the article embeddings by year over all years in our sample. In post-processing, we use a simple set of rules exploiting the dates of articles within clusters to remove content like weather forecasts and legal notices, which are highly formulaic and sometimes cluster together when they contain very similar content (_e.g._ a similar 5-day forecast) but did not actually come from the same underlying source. 
We remove all headline pairs that are below a Levenshtein edit distance, normalized by the min length in the pair, of 0.1, to remove pairs that are exact duplicates up to OCR noise. Training and inference were performed on an A6000 GPU card. More details are provided in the supplementary materials. To evaluate how well the model detects reproduced content, we use a labeled sample of all front page articles appearing in the newspaper corpus for three days in the 1930s and 1970s, taken from [24]. This sample consists of 54,996 positive reproduced article pairs and 100,914,159 negative pairs. The large-scale labeled evaluation dataset was generated using the above pipeline, so the evaluation is inclusive of any errors that result from upstream layout detection, OCR, or article association errors. The neural bi-encoder methods achieve a high adjusted rand index (ARI) of 91.5, compared to 73.7 for an optimal local sensitive hashing specification, chosen on the validation set. This shows that our neural methods substantially outperform commonly used sparse methods for detecting reproduced content. The neural bi-encoder is slightly outperformed by adding a re-ranking step that uses a neural cross-encoder on the best bi-encoder matches (ARI of 93.7). We do not implement this method because the cross-encoder doesn't scale well. In contrast, the bi-encoder pipeline can be scaled to 10 million articles on a single GPU in a matter of hours, using a FAISS [14] backend. An error analysis is provided in [24]. Errors typically consist of articles about the same story from different wire services (_e.g._ the Associated Press and the United Press) or updates to a story as new events unfolded. Both types of errors will plausibly still lead to informative semantic similarity pairs. ## 5 Benchmarking We benchmark HEADLINES using a variety of different language models and the MTEB clustering task. This task embeds texts using different base language models and then uses \(k\) - the number of clusters in the ground truth data - for k-means clustering. Following MTEB, we score the model using the v-measure [23]. We should note that real-world problems are often framed as clustering tasks - rather than as classification tasks - because \(k\) is unknown. By using \(k\) from the ground truth, it makes the task easier. Nevertheless, we examine this task to allow for comparison with the rest of the literature. Figure 7 plots the results of this benchmarking exercise. MTEB benchmarks clustering on Arxiv, Bioarxiv, Medarxiv, Reddit, StackExchange, and Twenty Newsgroups. Texts are labeled with their classification (e.g., fields like ComputerVision for the Arxiv datasets; the subreddit for Reddit). The best average v-score across these datasets, from MPNet, is 43.69. The best average v-score across decades for HEADLINES, from ST5-XXL, is around 78. This difference is likely to reflect, at least in part, that our cluster labels are less noisy, since texts in the same cluster summarize the same content. \begin{table} \begin{tabular}{c|c|c} \hline \hline & **Neural** & **Non-Neural** \\ \hline **Most scalable** & Bi-encoder (91.5) & LSH (73.7) \\ **Less scalable** & Re-ranking (**93.7**) & \(N\)-gram overlap (75.0) \\ \hline \hline \end{tabular} \end{table} Table 5: The numbers in parentheses are the Adjusted Rand Index for four different models - a bi-encoder, a “re-ranking” strategy that combines a bi- and cross-encoder, locally sensitive hashing (LSH), and \(N\)-gram overlap. 
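Both evaluation metrics used in this section are available in scikit-learn. A minimal sketch (illustrative variable names, not the authors' pipeline) of the MTEB-style clustering evaluation described above, with \(k\) taken from the ground truth:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, v_measure_score

def clustering_v_measure(embeddings: np.ndarray, labels: list) -> float:
    """Cluster with k set to the number of ground-truth clusters, then
    score the predicted assignment against the labels with the v-measure."""
    k = len(set(labels))
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    return v_measure_score(labels, pred)

# For reproduced-content detection (Table 5), predicted cluster IDs are
# instead compared with adjusted_rand_score(true_ids, predicted_ids).
```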
Hyperparameters were chosen on the NEWS-COPY validation set, and all models were evaluated on the NEWS-COPY test set. Figure 7: This figure benchmarks HEADLINES on the MTEB clustering task. The x-axis shows the year that the sample was taken from and the y-axis gives the v-measure. In contrast, titles of Reddit posts in the same subreddit may be only loosely linked to each other, and many could be within the domain of another subreddit cluster. While a user happened to post in one subreddit, another user could have reasonably made the same titled post in a different subreddit. The under-identification of the clustering tasks for some texts in the MTEB datasets is suggested by the very low v-scores across state-of-the-art language models. Overall, this suggests the high quality of clusters in HEADLINES relative to many web text datasets. Yet there is still ample scope for improvement to the state-of-the-art model. ## 6 Limitations and Recommended Usage HEADLINES contains some transcription errors. For working with historical texts, these are more a feature than a bug, as most historical texts are transcribed and also contain various OCR errors. Training a model on transcribed texts likely makes it more robust to transcription errors at inference time. However, researchers requiring completely clean texts should seek another corpus. HEADLINES contains historical language that reflects the semantics and cultural biases of many thousands of local newspaper editors. This is a distinguishing feature of HEADLINES that is core to many potential applications. We do not attempt to filter texts with antiquated terms or that may be considered offensive, as this would invalidate the use of the dataset for studying semantic change and historical contexts. At the same time, this makes HEADLINES less suited for tasks that require texts that fully conform to current cultural standards or semantic norms. For these reasons, we recommend against the use of HEADLINES for training generative models. Rather, with nearly 400M positive semantic similarity pairs spanning much of the 20th century, it can plausibly play an important role in facilitating the application of large language models to historical texts. ## Supplementary Materials ### Methods to Associate Articles Figure 6 in the main text illustrates the full article association procedure. First, we use a rule-based algorithm to associate article bounding boxes that are under the same headline, as these are part of the same article with extremely high probability. Algorithm 1 gives pseudocode for this method. We set the parameters as \(P_{S}=100\), \(P_{T}=20\), \(P_{B}=50\). For training data, where we want article pairs that are not only part of the same article, but also where they appear in the given order, we further narrow down the pairs. Specifically, we use only those pairs which are horizontally next to each other, and which have no other bounding boxes below them, as for these pairs, we can guarantee that the pair of bounding boxes follows directly after one another (whereas for other article bounding boxes that share a headline, there may be a third bounding box in between). Algorithm 2 shows pseudocode for this procedure, for which we used \(P_{C}=5\); it is further illustrated in panel A of Figure 6 in the main text. For hard negatives, we used article boxes under the same headline in reverse reading order (right to left). 
For standard negatives, we took pairs of articles on the same page, where B was above and to the left of A, as articles do not read from right to left. One twelfth of our training data were positive pairs, another twelfth were hard negative pairs and the remainder were standard negative pairs. This outperformed a more balanced training sample. We use this dataset to finetune a cross-encoder using a RoBERTa base model [15]. We used a Bayesian search algorithm [7] to find optimal hyperparameters on one tenth of our training data (limited compute prevented us from running this search with the full dataset), which led to a learning rate of 1.7e-5, with a batch size of 64 and 29.57% warm up. We trained for 26 epochs with an AdamW optimizer, and optimized a binary cross-entropy loss. We evaluate these methods on a hand-labeled dataset of 214 scans, randomly selected from 1968 and 1955. These scans were labeled by a highly-trained undergraduate research assistant. Summary statistics of this dataset are given in Table S-1 and evaluation results are given in the main text. ### Methods to Detect Reproduced Content To detect reproduced content, we use the contrastively trained bi-encoder model developed by [24], which is trained to learn similar representations for reproduced articles and dissimilar representations for non-reproduced articles. This model is based on an S-BERT MPNET model [22, 25] and is fine-tuned on a hand-labeled dataset of articles from the same underlying wire source, using S-BERT's online contrastive loss [10] implementation, with a 0.2 margin and cosine similarity as the distance metric. The learning rate is 2e-5 with 100% warm up and a batch size of 32. It uses an AdamW optimizer, and the model is trained for 16 epochs. This bi-encoder is trained and evaluated on a hand-labeled dataset, which is detailed in S-2. The results of this evaluation are given in the main text. To create clusters from the bi-encoder embeddings, we use highly scalable single-linkage clustering, with a cosine similarity threshold of 0.94. We build a graph using articles as nodes, and add edges if the cosine similarity is above this threshold. As edge weights we use the negative exponential of the difference in dates (in days) between the two articles. We then apply Leiden community detection to the graph to control false positive edges that can otherwise merge disparate groups of articles. We further remove clusters that have over 50 articles and contain articles with more than five different dates. We also remove clusters that contain over 50 articles, when the number of articles is more than double the number of unique newspapers from which these articles are sourced. This removes clusters of content that are correctly clustered in the sense of being based on the same underlying source, but are not useful for the HEADLINES dataset. For example, an advertisement (misclassified as an article due to an article-like appearance) might be repeated by the same newspaper on multiple different dates and would be removed by these rules, or weather forecasts can be very near duplicates across space and time, forming large clusters. ### Dataset Information #### s-4.1 Dataset URL HEADLINES can be found at [https://huggingface.co/datasets/dell-research-harvard/headlines-semantic-similarity](https://huggingface.co/datasets/dell-research-harvard/headlines-semantic-similarity). 
This dataset has structured metadata following schema.org, and is readily discoverable.4 Footnote 4: See [https://search.google.com/test/rich-results/result?id=_HKjxIv-LaF_8EIAarsM_g](https://search.google.com/test/rich-results/result?id=_HKjxIv-LaF_8EIAarsM_g) for full metadata. #### s-4.2 Doi The DOI for this dataset is: 10.57967/hf/0751. #### s-4.3 License HEADLINES has a Creative Commons CC-BY license. #### s-4.4 Dataset usage The dataset is hosted on huggingface, in json format. Each year in the dataset is divided into a distinct file (eg. 1952_headlines.json). The data is presented in the form of clusters, rather than pairs to eliminate duplication of text data and minimize the storage size of the datasets. An example from HEADLINES looks like: { "headline": "FRENCH AND BRITISH BATTLESHIPS IN MEXICAN WATERS", "group_id": 4 "date": "May-14-1920", "state": "kansas", } The data fields are: - headline: headline text. - date: the date of publication of the newspaper article, as a string in the form mmm-DD-YYYY. - state: state of the newspaper that published the headline. - group_id: a number that is shared with all other headlines for the same article. This number is unique across all year files. The whole dataset can be easily downloaded using the datasets library: from datasets import load_dataset dataset_dict = load_dataset("dell-research-harvard/headlines-semantic-similarity") Specific files can be downloaded by specifying them: from datasets import load_dataset load_dataset( "dell-research-harvard/headlines-semantic-similarity", data_files=["1929_headlines.json", "1989_headlines.json"] ) #### s-4.5 Author statement We bear all responsibility in case of violation of rights. ### Maintenance Plan We have chosen to host HEADLINES on huggingface as this ensures long-term access and preservation of the dataset. #### S-4.7 Dataset documentation and intended uses We follow the datasheets for datasets template [8]. #### S-4.7.1 Motivation For what purpose was the dataset created?Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. _Transformer language models contrastively trained on large-scale semantic similarity datasets are integral to a variety of applications in natural language processing (NLP). A variety of semantic similarity datasets have been used for this purpose, with positive text pairs related to each other in some way. Many of these datasets are relatively small, and the bulk of the larger datasets are created from recent web texts; e.g. positives are drawn from the texts in an online comment thread or duplicate questions in a forum. Relative to existing datasets, HEADLINES is very large, covering a vast array of topics. This makes it useful generally speaking for semantic similarity pre-training. It also covers a long period of time, making it a rich training data source for the study of historical texts and semantic change. It captures semantic similarity directly, as the positive pairs summarize the same underlying texts._ Who created this dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?HEADLINES was created by Melissa Dell and Emily Silcock, at Harvard University. Who funded the creation of the dataset?If there is an associated grant, please provide the name of the grantor and the grant name and number. 
_The creation of the dataset was funded by the Harvard Data Science Initiative, Harvard Catalyst, and compute credits provided by Microsoft Azure to the Harvard Data Science Initiative._ Any other comments?_ _None._ #### S-4.7.2 Composition What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. HEADLINES comprises instances of newspaper headlines and relationships between them. Specifically, each headline includes information on the text of the headline, the date of publication, and the state it was published in. Headlines have relationships between them if they are semantic similarity pairs, that is, if they two different headlines for the same newspaper article. How many instances are there in total (of each type, if appropriate)?HEADLINES contains 34,867,488 different headlines and 396,001,930 positive relationships between headlines. Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). _Many local newspapers were not preserved, and newspapers with the widest circulation tended to renew their copyrights, so cannot be included._ What data does each instance consist of? "Raw" data (e.g., unprocessed text or images) or features?In either case, please provide a description. _Each data instance consists of raw data. Specifically, an example from HEADLINES is:_ { "headline": "FRENCH AND BRITISH BATTLESHIPS IN MEXICAN WATERS", "group_id": 4 "date": "May-14-1920", "state": "kansas", } _The data fields are:_ - headline_: headline text._ - date_: the date of publication of the newspaper article, as a string in the form mmm-DD-YYYY._ - state_: state of the newspaper that published the headline._ - group_id_: a number that is shared with all other headlines for the same article. This number is unique across all year files._ Is there a label or target associated with each instance?If so, please provide a description. _Each instance contains a_ group_id _as mentioned directly above. This is a number that is shared by all other instances that are positive semantic similarity pairs._ Is any information missing from individual instances?If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. _In some cases, the state of publication is missing, due to incomplete metadata._ Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)?If so, please describe how these relationships are made explicit. _Relationships between instances are made explicit in the_ group_id _variable, as detailed above._ Are there recommended data splits (e.g., training, development/validation, testing)?If so, please provide a description of these splits, explaining the rationale behind them. 
_There are no recommended splits._ Are there any errors, sources of noise, or redundancies in the dataset?If so, please provide a description. _The data is sourced from OCR'd text of historical newspapers. Therefore some of the headline texts contain OCR errors._ Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a future user? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate. _The data is self-contained._ Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals non-public communications)?If so, please provide a description. _The dataset does not contain information that might be viewed as confidential._ Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?If so, please describe why. _The headlines in the dataset reflect diverse attitudes and values from the period in which they were written, 1920-1989, and contain content that may be considered offensive for a variety of reasons._ Does the dataset relate to people?If not, you may skip the remaining questions in this section. _Many news articles are about people._ Does the dataset identify any subpopulations (e.g., by age, gender)?If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. _The dataset does not specifically identify any subpopulations._ Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?If so, please describe how. _If an individual appeared in the news during this period, then headline text may contain their name, age, and information about their actions._ Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?If so, please provide a description. _All information that it contains is already publicly available in the newspapers used to create the headline pairs._ Any other comments?_ _None._ S-4.7.3 Collection Process How was the data associated with each instance acquired?Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. _To create HEADLINES, we digitized front pages from off-copyright newspapers spanning 1920-1989. 
Historically, around half of articles in U.S. local newspapers came from newswires like the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles and their headlines by exploiting document layouts and language understanding. We then use deep neural methods to detect which articles are from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs._ What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?How were these mechanisms or procedures validated? _These methods are described in detail in the main text and supplementary materials of this paper._ If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? The dataset was not sampled from a larger set. Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? We used student annotators to create the validation sets for associating bounding boxes, and the training and validation sets for clustering duplicated articles. They were paid S15 per hour, a rate set by a Harvard economics department program providing research assistantships for undergraduates. Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)?If not, please describe the timeframe in which the data associated with the instances was created. _The headlines were written between 1920 and 1989. Semantic similarity pairs were computed in 2023._ Were any ethical review processes conducted (e.g., by an institutional review board)?If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation. _No, this dataset uses entirely public information and hence does not fall under the domain of Harvard's institutional review board._ Does the dataset relate to people?If not, you may skip the remaining questions in this section. _Historical newspapers contain a variety of information about people._ Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?_ _The data were obtained from off-copyright historical newspapers._ Were the individuals in question notified about the data collection?If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself. _Individuals were not notified; the data came from publicly available newspapers._ Did the individuals in question consent to the collection and use of their data?If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented. 
_The dataset was created from publicly available historical newspapers._ If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate). _Not applicable._ Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. _No._ Any other comments?_ _None._ #### s-4.7.4 Preprocessing/cleaning/labeling **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** If so, please provide a description. If not, you may skip the remainder of the questions in this section. _See the description in the main text._ **Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?** If so, please provide a link or other access point to the "raw" data. _No._ **Is the software used to preprocess/clean/label the instances available?** If so, please provide a link or other access point. _No specific software was used to clean the instances._ **Any other comments?** _None._ #### s-4.7.5 Uses **Has the dataset been used for any tasks already?** If so, please provide a description. _No._ **Is there a repository that links to any or all papers or systems that use the dataset?** If so, please provide a link or other access point. _No._ **What (other) tasks could the dataset be used for?** _The dataset can be used for training models for semantic similarity, studying language change over time and studying difference in language across space._ **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? _The dataset contains historical news headlines, which will reflect current affairs and events of the time period in which they were created, 1920-1989, as well as the biases of this period._ **Are there tasks for which the dataset should not be used?** If so, please provide a description. _It is intended for training semantic similarity models and studying semantic variation across space and time._ **Any other comments?** _None_ #### s-4.7.6 Distribution **Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?** If so, please provide a description. _Yes. The dataset is available for public use._ How will the dataset will be distributed (e.g., tarball on website, API, GitHub)Does the dataset have a digital object identifier (DOI)? _The dataset is hosted on huggingface. Its DOI is 10.57967/hf/0751._ When will the dataset be distributed? 
_The dataset was distributed on 7th June 2023._ Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. _The dataset is distributed under a Creative Commons CC-BY license. The terms of this license can be viewed at [https://creativecommons.org/licenses/by/2.0/_](https://creativecommons.org/licenses/by/2.0/_) Have any third parties imposed IP-based or other restrictions on the data associated with the instances?If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions. _There are no third party IP-based or other restrictions on the data._ Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. _No export controls or other regulatory restrictions apply to the dataset or to individual instances._ Any other comments?_ _None._ ### S-4.7.7 Maintenance Who will be supporting/hosting/maintaining the dataset?_ _The dataset is hosted on huggingface._ How can the owner/curator/manager of the dataset be contacted (e.g., email address)?_ _The recommended method of contact is using the huggingface 'community' capacity. Additionally, Melissa Dell can be contacted at [email protected]._ Is there an erratum?If so, please provide a link or other access point. _There is no erratum._ Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? _We have no plans to update the dataset. If we do, we will notify users via the huggingface Dataset Card._ If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?If so, please describe these limits and explain how they will be enforced. _There are no applicable limits on the retention of data._ Will older versions of the dataset continue to be supported/hosted/maintained?If so, please describe how. If not, please describe how its obsolescence will be communicated to users. _We have no plans to update the dataset. If we do, older versions of the dataset will not continue to be hosted. We will notify users via the huggingface Dataset Card._ If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description. _Others can contribute to the dataset using the huggingface 'community' capacity. This allows for anyone to ask questions, make comments and submit pull requests. We will validate these pull requests. A record of public contributions will be maintained on huggingface, allowing communication to other users._ Any other comments?_ _None._
2310.00452
Cosmological singularities in non-canonical models of dark energy
The pursuit of unraveling the true essence of dark energy has become an immensely captivating endeavor in modern cosmology. Alongside the conventional cosmological constant approach, a diverse range of ideas has been proposed, encompassing scalar field-based models and various modified gravity approaches. A particularly intriguing notion involves exploring scalar field dark energy models within quantum gravitationally motivated cosmologies, with non-canonical theories standing out as a prominent candidate in this context. Hence, in this work, we investigate three widely recognized non-canonical scalar field dark energy models: phantom, quintom, and DBI dark energy models. By employing the Goriely-Hyde procedure, we demonstrate the presence of singularities in both finite and infinite time within these frameworks, and that these singularities can manifest regardless of the system's initial conditions. Moreover, we further establish how cosmological singularities of types I-IV can arise in all of these models. The work goes to show that non-canonical regimes for dark energy can allow for most of the prominent cosmological singularities for a variety of models.
Oem Trivedi, Simran Kaur Saggu, Pankaj S. Joshi
2023-09-30T18:09:41Z
http://arxiv.org/abs/2310.00452v2
# Cosmological singularities in non-canonical models of dark energy ###### Abstract The pursuit of unraveling the true essence of dark energy has become an immensely captivating endeavor in modern cosmology. Alongside the conventional cosmological constant approach, a diverse range of ideas has been proposed, encompassing scalar field-based models and various modified gravity approaches. A particularly intriguing notion involves exploring scalar field dark energy models within quantum gravitationally motivated cosmologies, with non-canonical theories standing out as a prominent candidate in this context. Hence, in this work, we investigate three widely recognized non-canonical scalar field dark energy models: phantom, quintom, and DBI dark energy models. By employing the Goriely-Hyde procedure, we demonstrate the presence of singularities in both finite and infinite time within these frameworks, and that these singularities can manifest regardless of the system's initial conditions. Moreover, we further establish how cosmological singularities of types I-IV can arise in all of these models. The work goes to show that non-canonical regimes for dark energy can allow for most of the prominent cosmological singularities for a variety of models. ## 1 Introduction Observations of the late-time acceleration of the Universe took the cosmological community by surprise [1]. Since then, significant efforts have been made to explain this expansion, including standard approaches like the cosmological constant [1, 2, 3, 4, 5], as well as more exotic scenarios like modified gravity theories [6, 7, 8], and recent proposals for direct detection of dark energy [9]. One fascinating approach to understanding dark energy is quintessence, where a scalar field drives the late-time cosmic acceleration of the universe [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. Quintessence is particularly interesting as it represents the simplest scalar field dark energy scenario that does not suffer from issues like ghosts or Laplacian instabilities. In quintessence models, a slowly varying scalar field with a potential \(V(\phi)\) leads to the acceleration of the universe, similar to the mechanism of slow-roll inflation. However, in this case, contributions from non-relativistic matter, such as baryons and dark matter, cannot be ignored. It is worth noting that simple models of quintessence have been shown to be in conflict with the current \(H_{0}\) tension [22, 23, 24], suggesting that simple quintessence models may perform worse than \(\Lambda\)-CDM models in light of the current \(H_{0}\) data [25]. This leads one to consider other more exotic possibilities for scalar field dark energy models, and one such possibility is to consider models with non-canonical Lagrangians. Non-canonical models of scalar fields present some non-trivial characteristics, as they are subject to theoretical issues that do not arise in canonical cases. These issues include the presence of ghosts and non-physical solutions. Nevertheless, the specific form of these contributions can be motivated by high-energy phenomenology, establishing a connection between cosmological observations and high-energy physics. As a result, non-canonical models have garnered significant attention in the literature on dark energy and modified gravity [26, 27, 28, 29]. Three very interesting models of dark energy with non-canonical scalar fields are the phantom, quintom, and Dirac-Born-Infeld (DBI) models. 
Phantom models of dark energy refer to models with the equation of state parameter \(w<-1\), which is known as the phantom regime [17, 30]. A dark energy model able to produce an EoS with values below -1 has generated considerable interest from a phenomenological point of view, and it is also not completely ruled out by observations [31]. One can write the action of a phantom field minimally coupled to gravity as \[S=\int d^{4}x\sqrt{-g}\left[\frac{1}{2}(\nabla\phi)^{2}-V(\phi)\right] \tag{1}\] where \(\partial\phi^{2}=g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\) and \(V(\phi)\) is a self-interacting potential. The Lagrangian under consideration exhibits phantom behavior, where its equation of state (EoS) can go below -1. In contrast, quintessence models describe dark energy with an EoS greater than -1. Consequently, it is not possible to cross the phantom barrier using a single canonical or phantom scalar field alone. However, an intriguing model that allows for such a crossing was proposed by Feng et al. [32]. This particular dark energy scenario features an EoS larger than -1 in the past and less than -1 in the present, consistent with current observations. While this can be accomplished with more general non-canonical scalar fields, the simplest model entails a quintom Lagrangian consisting of two scalar fields. The action for such quintom fields minimally coupled to gravity can be written as \[S=\int d^{4}x\sqrt{-g}\left[-\frac{1}{2}(\nabla\phi)^{2}+\frac{1}{2}\partial\sigma^{2}-V(\phi,\sigma)\right] \tag{2}\] where \(\phi\) is a canonical scalar field, \(\sigma\) is a non-canonical phantom field, and \(V(\phi,\sigma)\) is a general potential for both fields. Finally, another interesting class of non-canonical scalar field models is the DBI model. The simplest form of the DBI Lagrangian for a scalar field theory can be given by \[\mathcal{L}_{t}=V(\phi)\sqrt{1+\partial\phi^{2}} \tag{3}\] This form of the DBI Lagrangian was used to predict tachyons in low energy EFT descriptions of string theory [33, 34], and since then tachyonic scalar fields have also been studied in the context of cosmology [35, 36]. The general action for a minimally coupled DBI field, however, is an extension of (3) given by [37] \[S=\int d^{4}x\sqrt{-g}\left[\frac{1}{f(\phi)}\left(\sqrt{1+2f(\phi)X}-1\right)-V(\phi)\right] \tag{4}\] where \(X=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\), and \(f(\phi)\) and \(V(\phi)\) are arbitrary functions of \(\phi\). These models have been extensively studied in the context of cosmology and have even been extended further [38, 39]. In recent times, a substantial body of literature has emerged that focuses on investigating the different types of cosmological singularities that may arise in the current and far future of the Universe [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]. But often it is very difficult to classify and study the cosmological singularities which may occur in extremely non-conventional cosmologies motivated by quantum gravitational/phenomenological considerations (for example, see the classification of singularities in asymptotically safe cosmology [63]), and often it may not even be possible to do so in an orthodox fashion. Hence it becomes essential to look for non-conventional ways to find cosmological singularities in exotic cosmologies, and in this regard, a particular dynamical systems method can be of huge help. 
From a dynamical standpoint, one of the most intriguing aspects of studying various dynamical systems lies in understanding their singularity structure, which becomes particularly relevant when these systems describe physically significant phenomena. While numerous approaches have been proposed to explore the singularity structure of autonomous dynamical systems, one particularly interesting method is the Goriely-Hyde procedure [64]. As cosmology presents a multitude of captivating dynamical systems [38], the investigation of singularity structure in such systems has gained considerable attention, with the Goriely-Hyde method proving particularly useful for cosmological explorations [65, 66, 67, 68, 69, 70]. This method has previously been applied to study finite and non-finite time singularities in certain classes of quintessence models as well [71, 72, 73]. However, a comprehensive study of cosmological singularities in non-canonical scalar field models of dark energy using this approach is still lacking, and thus, we aim to address this gap in our work. In Section II, we provide a concise overview of the Goriely-Hyde method, after which we apply the complete Goriely-Hyde procedure to phantom, quintom, and DBI models of dark energy, as discussed previously. We demonstrate how singularities in these non-canonical models can exhibit diverse characteristics and occur at both finite and infinite times in Section III. Subsequently, in Section IV, we consider two well-motivated ansätze for the Hubble parameter and classify which types of cosmological singularities (Types I-IV) can arise within these regimes. Finally, we conclude our work in Section V. ## 2 Goriely-Hyde Procedure The Goriely-Hyde method [64] provides an elegant approach to determining the presence of finite-time singularities in dynamical systems. The procedure can be outlined as follows: * We begin by considering a dynamical system described by \(n\) differential equations of the form: \[\dot{x}_{i}=f_{i}(x),\] (5) where \(i=1,2,...,n\), and the overdot represents differentiation with respect to time \(t\), which in the case of quintessence models can be better represented by the number of e-foldings \(N\). We identify the parts of the equation \(f_{i}\) that become significant as the system approaches the singularity. These significant parts are referred to as "dominant parts" [64]. Each dominant part constitutes a mathematically consistent truncation of the system, denoted as \(\hat{f}_{i}\). The system can then be written as: \[\dot{x}_{i}=\hat{f}_{i}(x).\] (6) * Without loss of generality, the variables \(x_{i}\) near the singularity can be expressed as: \[x_{i}=a_{i}\tau^{p_{i}},\] (7) where \(\tau=t-t_{c}\), and \(t_{c}\) is an integration constant. Substituting equation (7) into equation (6) and equating the exponents, we can determine the values of \(p_{i}\) for different \(i\), which form the vector \(\mathbf{p}=(p_{1},p_{2},...,p_{n})\). Similarly, we calculate the values of \(a_{i}\) to form the vector \(\vec{a}=(a_{1},a_{2},...,a_{n})\). It is important to note that if \(\vec{a}\) contains only real entries, it corresponds to finite-time singularities. Conversely, if \(\vec{a}\) contains at least one complex entry, it may lead to non-finite-time singularities. Each set \((a_{i},p_{i})\) is known as a dominant balance of the system. 
* Next, we calculate the Kovalevskaya matrix given by: \[R=\begin{pmatrix}\frac{\partial f_{1}}{\partial x_{1}}&\frac{\partial f_{1}}{\partial x_{2}}&\cdots&\frac{\partial f_{1}}{\partial x_{n}}\\ \frac{\partial f_{2}}{\partial x_{1}}&\frac{\partial f_{2}}{\partial x_{2}}&\cdots&\frac{\partial f_{2}}{\partial x_{n}}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial f_{n}}{\partial x_{1}}&\frac{\partial f_{n}}{\partial x_{2}}&\cdots&\frac{\partial f_{n}}{\partial x_{n}}\end{pmatrix}-\begin{pmatrix}p_{1}&0&\cdots&0\\ 0&p_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&p_{n}\end{pmatrix}.\] (8) After obtaining the Kovalevskaya matrix, we evaluate it for different dominant balances and determine the eigenvalues. If the eigenvalues are of the form \((-1,r_{2},r_{3},...,r_{n})\), with \(r_{2},r_{3},...>0\), then the singularity is considered general and will occur regardless of the initial conditions of the system. Conversely, if any of the eigenvalues \(r_{2},r_{3},...\) are negative, the singularity is considered local and will only occur for certain sets of initial conditions. ## 3 Singularity Analysis ### Phantom Dark Energy Assuming a flat FLRW metric for the cosmology and using the Lagrangian (1), we can write the cosmological equations in the presence of a matter fluid given 1 by \(p_{m}=w_{m}\rho_{m}\) (\(0\leq w_{m}\leq\frac{1}{3}\)) as [38] Footnote 1: Throughout this paper we are working in a usual flat FLRW background i.e. the universe that we consider is the usual isotropic and homogeneous one. 
Future explorations in this direction can indeed make one to consider, for example, how these singularities would occur in an anisotropic universe with let’s say the Bianchi universe ( Bianchi type I for simplicity ) or even a Kantowski-Sachs cosmology in the case that one wants to take into account more exotic phenomenological features. But that is for now beyond the scope of this work as we would like to explore the singularities in non-canonical theories in rather simple settings first. \[3H^{2}=\kappa^{2}\left(\rho_{m}-\frac{\dot{\phi}^{2}}{2}+V(\phi)\right) \tag{11}\] \[2\dot{H}+3H^{2}=\kappa^{2}\left(-w\rho+\frac{1}{2}\dot{\phi}^{2}+V(\phi)\right) \tag{12}\] While the Klein-Gordon equation has its usual form \[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=0 \tag{13}\] where the overdots denote differentiation with respect to the coordinate time and the prime denotes differentiation with respect to \(\phi\). The EOS of the field is given by \[w_{\phi}=\frac{\frac{1}{2}\dot{\phi}^{2}+V(\phi)}{\frac{1}{2}\dot{\phi}^{2}-V( \phi)} \tag{14}\] with the energy density of the scalar field given by \[\rho_{\phi}=-\frac{1}{2}\dot{\phi}^{2}+V(\phi) \tag{15}\] One sees immediately that assuming a positive scalar field potential when the kinetic energy equals the potential energy the EoS (14) diverges. This can be taken as a first warning that the theoretical pathologies mentioned above can yield non-physical behavior 2. In order to kick start our analysis, we define the variables 3 to be Footnote 3: Note that it is not necessary that the variables will be defined in this same way for all cosmological paradigms, as we shall see later in the paper too. In fact, one can use different variables for the same paradigm too if required or wished for. See, for example, [76, 77, 78] for extended discussions on the same \[x=\frac{\kappa\dot{\phi}}{\sqrt{6}H}\qquad y=\frac{\kappa\sqrt{V}}{\sqrt{3}H} \qquad\lambda=-\frac{V^{\prime}}{V} \tag{16}\] Using this one can write the cosmological equations (11-13) as \[x^{\prime}=\frac{1}{2}\left(3(w-1)x^{3}-3x\bigg{[}y^{2}+1+w(y^{2}-1)\bigg{]}- \sqrt{6}\lambda y^{2}\right) \tag{17}\] \[y^{\prime}=-\frac{1}{2}y\bigg{[}-3(w-1)x^{2}+3(w+1)(y^{2}-1)+\sqrt{6}\lambda x \bigg{]} \tag{18}\] \[\lambda^{\prime}=-\sqrt{6}(\Gamma-1)x\lambda^{2} \tag{19}\] where \[\Gamma=\frac{VV^{\prime\prime}}{{V^{\prime}}^{2}}\] and the primes on the variables denote differentiation with respect to \(\eta=\log a\). It is important to note that (17-19) do not form an autonomous system unless \(\Gamma\) can be written as a function of \(\lambda\), in which case it forms a 3D autonomous system. While it is interesting to study this system for a variety of potentials, we shall focus here on the case of exponential potential only. In this case, \(\lambda\) takes the form of a constant and (17-18) form a 2D autonomous system and hence fit to be dealt with the Goriely-Hyde procedure. This particular form of potential is also arguably the most studied form for this model [79, 80] and thus is also indeed very popular. Now that we have decided on our system (17-18), we can start our analysis. 
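Before working through the truncations, we note that each step of the procedure can be checked symbolically. The sketch below is our own illustration, not part of the original analysis: it solves for a dominant balance of one representative truncation of (17)-(18) (the one appearing in (24)-(25) below) and constructs the corresponding Kovalevskaya matrix, whose eigenvalues are then inspected against the \((-1,r_{2},\ldots)\) criterion of Section 2.

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
lam = sp.symbols('lambda', positive=True)
a_x, a_y = sp.symbols('a_x a_y')
p = -1  # exponent fixed by matching powers of tau on both sides

# ansatz x = a_x * tau**p, y = a_y * tau**p near the singularity
x = a_x * tau**p
y = a_y * tau**p

# truncated vector field: x' = -sqrt(6)*lam*y**2/2, y' = -sqrt(6)*lam*x*y/2
f1 = -sp.sqrt(6) * lam * y**2 / 2
f2 = -sp.sqrt(6) * lam * x * y / 2

# dominant-balance conditions: d(ansatz)/dtau = f_hat; tau**2 clears the powers
eqs = [sp.expand((sp.diff(x, tau) - f1) * tau**2),
       sp.expand((sp.diff(y, tau) - f2) * tau**2)]
balances = [b for b in sp.solve(eqs, [a_x, a_y], dict=True) if b[a_y] != 0]

# Kovalevskaya matrix R = D f_hat(a) - diag(p, p) at each nontrivial balance
X, Y = sp.symbols('x y')
F = sp.Matrix([-sp.sqrt(6) * lam * Y**2 / 2, -sp.sqrt(6) * lam * X * Y / 2])
J = F.jacobian([X, Y])
for b in balances:
    R = J.subs({X: b[a_x], Y: b[a_y]}) - sp.diag(p, p)
    print(b, sp.simplify(R).eigenvals())
```

The same pipeline applies verbatim to the other truncations considered below, as well as to the quintom system of the next subsection.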
The first truncation that we consider is given by \[\hat{f}=\begin{pmatrix}-\left(\frac{3}{2}wxy^{2}\right)\\ \frac{1}{2}y\left(3(1-w)x^{2}\right)\end{pmatrix} \tag{20}\] Using the ansatz for the Goriely-Hyde method described in section II, we get the exponent values as \(\mathbf{p}=(-1/2,-1/2)\) and using this we get \[\begin{split} a_{1}&=\left(\frac{i}{\sqrt{3(1-w)}}, \frac{1}{\sqrt{3w}}\right)\\ \\ a_{2}&=\left(\frac{i}{\sqrt{3(1-w)}},\frac{-1}{\sqrt{3 w}}\right)\\ \\ a_{3}&=\left(\frac{-i}{\sqrt{3(1-w)}},\frac{1}{\sqrt{3 w}}\right)\\ \\ a_{4}&=\left(\frac{-i}{\sqrt{3(1-w)}},\frac{-1}{\sqrt {3w}}\right)\end{split} \tag{21}\] As all \(\mathbf{\hat{a}}\) have complex entries 4, only non-finite time singularities will be possible with regard to this truncation. The Kovalevskaya matrix now takes the form Footnote 4: At this point we would like to highlight that complex entries in \(\mathbf{\hat{a}}\), such as those observed in (21), are completely consistent with the fact that the system we have considered in (17-18) consists of the variables x and y, which are real and positive. As mentioned in section II, complex entries for various \(\mathbf{a}\) suggest that the singularities will be non-finite time in nature, and hence these quantities taking up complex values is consistent with the analysis as shown in [64]. A similar case has been seen for various cosmological systems (for example, see [71, 72]) \[R=\left(\begin{array}{cc}\frac{1}{2}-\frac{3wy^{2}}{2}&-3wxy\\ 3(1-w)xy&\frac{3}{2}(1-w)x^{2}+\frac{1}{2}\end{array}\right) \tag{22}\] After plugging the dominant balances (21) into the Kovalevskaya matrix, we find its eigenvalues to be \[r=(-1,-1) \tag{23}\] Hence the singularities in this case will only be local singularities, which will only form for a limited set of initial conditions. Furthermore, as the dominant balances (21) have complex entries, this truncation also tells us that the singularities could happen at non-finite times while being very dependent on the initial conditions for the system variables (17-18). Note that we have not set any constraint on the value of \(\lambda\), and so it does not matter what value \(\lambda\) takes in this case. The second truncation that we consider is given by \[\hat{f}=\begin{pmatrix}-\frac{1}{2}\left(\sqrt{6}\lambda y^{2}\right)\\ -\frac{1}{2}\left(\sqrt{6}\lambda xy\right)\end{pmatrix} \tag{24}\] Using the ansatz for the Goriely-Hyde method described in section II, we get the exponent values as \({\bf p}=(-1,-1)\) and using this we get \[\begin{array}{c}a_{1}=\left(\frac{\sqrt{2}}{\sqrt{3}\lambda},\frac{\sqrt{2}}{ \sqrt{3}\lambda}\right)\\ \\ a_{2}=\left(\frac{\sqrt{2}}{\sqrt{3}\lambda},-\frac{\sqrt{2}}{\sqrt{3}\lambda} \right)\end{array} \tag{25}\] Here we immediately note that both \(a_{1}\) and \(a_{2}\) have only real entries, which means that the singularities occurring in this scenario can take place in finite time, something which we could not see in the previous truncation that we considered. Moving forward, we can write the Kovalevskaya matrix in this case to be \[R=\left(\begin{array}{cc}1&-\sqrt{6}\lambda y\\ -\sqrt{\frac{3}{2}}\lambda y&1-\sqrt{\frac{3}{2}}\lambda x\end{array}\right) \tag{26}\] The eigenvalues of this matrix, after plugging in the dominant balances (25), are found to be \[r=(-1,1) \tag{27}\] This tells us that this truncation allows for the system to have general singularities too, singularities which will take place irrespective of the initial conditions for the system variables. 
Together with the fact that the dominant balances (25) have real entries, this leads us to conclude that the phantom scenario described by (17-18) will have cosmological singularities occurring at both finite and non-finite times, regardless of the initial conditions. The exact physical classification of these singularities will be carried out in section IV. In passing, it is also worth mentioning that phantom dark energy models seem to provide quite optimistic prospects for possible solutions of prevalent cosmic tensions today, in particular the Hubble tension [81, 82, 83, 84, 85, 86, 87, 88, 89, 90]. In general, it is seen that phantom-like behaviour of the effective equation of state of dark energy can more naturally accommodate the higher values of \(H_{0}\) preferred by recent local measurements, while satisfying the CMB constraints as well. Hence there is a renewed sense of realism attached to phantom cosmological scenarios given tensions like this, and they may very well hold the key to alleviating the \(H_{0}\) issue once and for all. ### Quintom Dark Energy We now turn our attention toward the interacting quintom model described by (2). In this case, we consider the potential to be of the form \[V(\phi,\sigma)=V_{0}\,e^{-\kappa(\lambda_{\phi}\phi+\lambda_{\sigma}\sigma)} \tag{28}\] where \(\lambda_{\phi}\) and \(\lambda_{\sigma}\) are constants. We consider this potential form because, again, the exponential potential has been the most studied type in this regard and it has displayed a variety of cosmologically interesting behaviors 5 (for example, [91] obtained tracking, phantom, and quintessence solutions for this potential). The cosmological equations for this potential, given the Lagrangian (2), are Footnote 5: Another reason to consider the coupled potential is that it is mathematically simpler to analyze dynamically than the uncoupled one, because it only requires one expansion normalized variable for the potential energy rather than two [38], as we shall see later in this section. \[3H^{2}=\kappa^{2}\Bigg{[}\rho_{m}+\frac{1}{2}\dot{\phi}^{2}-\frac{1}{2}\dot{ \sigma}^{2}+V(\phi,\sigma)\Bigg{]} \tag{29}\] \[2\dot{H}+3H^{2}=-\kappa^{2}\Bigg{[}w_{m}\rho_{m}+\frac{1}{2}\dot{\phi}^{2}- \frac{1}{2}\dot{\sigma}^{2}-V(\phi,\sigma)\Bigg{]} \tag{30}\] while the Klein-Gordon equations for the two fields are given by \[\ddot{\phi}+3H\dot{\phi}+\frac{\partial V}{\partial\phi}=0 \tag{31}\] \[\ddot{\sigma}+3H\dot{\sigma}-\frac{\partial V}{\partial\sigma}=0 \tag{32}\] The expansion normalized variables for this system can be written as \[x_{\phi}=\frac{\kappa\dot{\phi}}{\sqrt{6}H}\qquad x_{\sigma}=\frac{\kappa \dot{\sigma}}{\sqrt{6}H}\qquad y=\frac{\kappa\sqrt{V}}{\sqrt{3}H} \tag{33}\] Using these variables, one can write the equations (29-32) in the form of a 3D autonomous system as \[x^{\prime}_{\phi}=\frac{1}{2}\Bigg{[}3x_{\phi}\bigg{[}(w-1)x^{2}_{\sigma}-(w+ 1)y^{2}+w-1\bigg{]}-3(w-1)x^{3}_{\phi}+\sqrt{6}\lambda_{\phi}y^{2}\Bigg{]} \tag{34}\] \[x^{\prime}_{\sigma}=\frac{1}{2}\Bigg{[}3(w-1)x^{3}_{\sigma}-3x_{\sigma}\bigg{[} (w-1)x^{2}_{\phi}+(w+1)y^{2}-w+1\bigg{]}-\sqrt{6}y^{2}\lambda_{\sigma}\Bigg{]} \tag{35}\] \[y^{\prime}=-\frac{1}{2}y\Bigg{[}3(w-1)(x^{2}_{\phi}-x^{2}_{\sigma})+3(w+1)(y^ {2}-1)+\sqrt{6}\lambda_{\sigma}x_{\sigma}+\sqrt{6}\lambda_{\phi}x_{\phi} \Bigg{]} \tag{36}\] The equations (34-36) form a 3D autonomous system and are thus fit to be studied with the Goriely-Hyde procedure.
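Since (34-36) are autonomous for the coupled exponential potential, representative trajectories can also be followed numerically. The snippet below is a hedged illustration only; the parameter values \(w=0\), \(\lambda_{\phi}=\lambda_{\sigma}=1\) and the initial conditions are our arbitrary choices, not values used in any quoted result.

```python
# Illustrative numerical integration of the quintom system (34)-(36)
# in e-folds eta = ln a.
import numpy as np
from scipy.integrate import solve_ivp

w, lp, ls = 0.0, 1.0, 1.0   # matter EoS and (assumed) slopes lambda_phi, lambda_sigma

def rhs(eta, u):
    xp, xs, y = u
    dxp = 0.5 * (3 * xp * ((w - 1) * xs**2 - (w + 1) * y**2 + w - 1)
                 - 3 * (w - 1) * xp**3 + np.sqrt(6) * lp * y**2)
    dxs = 0.5 * (3 * (w - 1) * xs**3
                 - 3 * xs * ((w - 1) * xp**2 + (w + 1) * y**2 - w + 1)
                 - np.sqrt(6) * ls * y**2)
    dy = -0.5 * y * (3 * (w - 1) * (xp**2 - xs**2) + 3 * (w + 1) * (y**2 - 1)
                     + np.sqrt(6) * ls * xs + np.sqrt(6) * lp * xp)
    return [dxp, dxs, dy]

sol = solve_ivp(rhs, (0.0, 20.0), [0.01, 0.01, 0.9], dense_output=True)
print(sol.y[:, -1])   # late-time values of (x_phi, x_sigma, y)
```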
The first truncation that we consider is given by \[\hat{f}=\begin{pmatrix}-\frac{1}{2}\left(3(w-1)x^{3}_{\phi}\right)\\ -\frac{1}{2}\left(3(w+1)x_{\sigma}y^{2}\right)\\ \frac{3}{2}(w-1)x^{2}_{\sigma}y\end{pmatrix} \tag{37}\] Using the ansatz for the Goriely-Hyde method described in section II, we get the exponent values as \({\bf p}=(-1/2,-1/2,-1/2)\) and using this we get \[\begin{split} a_{1}&=\left(\frac{1}{\sqrt{3(w-1)}},\frac{i}{\sqrt{3(w-1)}},\frac{1}{\sqrt{3(w+1)}}\right)\\ a_{2}&=\left(-\frac{1}{\sqrt{3(w-1)}},\frac{i}{\sqrt{3(w-1)}},\frac{1}{\sqrt{3(w+1)}}\right)\\ a_{3}&=\left(\frac{1}{\sqrt{3(w-1)}},-\frac{i}{\sqrt{3(w-1)}},\frac{1}{\sqrt{3(w+1)}}\right)\\ a_{4}&=\left(\frac{1}{\sqrt{3(w-1)}},\frac{i}{\sqrt{3(w-1)}},-\frac{1}{\sqrt{3(w+1)}}\right)\\ a_{5}&=\left(\frac{1}{\sqrt{3(w-1)}},-\frac{i}{\sqrt{3(w-1)}},-\frac{1}{\sqrt{3(w+1)}}\right)\\ a_{6}&=-\left(\frac{1}{\sqrt{3(w-1)}},-\frac{i}{\sqrt{3(w-1)}},\frac{1}{\sqrt{3(w+1)}}\right)\\ a_{7}&=-\left(\frac{1}{\sqrt{3(w-1)}},\frac{i}{\sqrt{3(w-1)}},-\frac{1}{\sqrt{3(w+1)}}\right)\\ a_{8}&=\left(-\frac{1}{\sqrt{3(w-1)}},-\frac{i}{\sqrt{3(w-1)}},-\frac{1}{\sqrt{3(w+1)}}\right)\end{split} \tag{38}\] As all \(\bf{\hat{a}}\) have complex entries 6, only non-finite-time singularities will be possible with regard to this truncation. The Kovalevskaya matrix in this case takes the form Footnote 6: At this point we would like to highlight that the complex entries of the dominant balance in (38), and those observed in the previous section for the phantom model in (21), are completely consistent with the fact that the systems we have considered for the quintom and phantom models, (34-36) and (17-18) respectively, consist of expansion normalized variables which are real (x, y for the phantom and \(x_{\phi}\), \(x_{\sigma}\) and y for the quintom model). As mentioned in section II, complex entries for the various \(\bf{\hat{a}}\) suggest that the singularities will be of a non-finite-time nature, and hence these quantities taking complex values is consistent with the analysis shown in [64]. A similar situation has been observed for various cosmological systems (for example, see [71, 72]) \[R=\left(\begin{array}{ccc}\frac{1}{2}-\frac{9}{2}(w-1)x_{\phi}^{2}&0&0\\ 0&\frac{1}{2}-\frac{3}{2}(w+1)y^{2}&-3(w+1)x_{\sigma}y\\ 0&3(w-1)x_{\sigma}y&\frac{3}{2}(w-1)x_{\sigma}^{2}+\frac{1}{2}\end{array}\right) \tag{39}\] Using the dominant balances (38) for this system, we find the eigenvalues of this Kovalevskaya matrix to be \[r=(-1,-1,1) \tag{40}\] We see here that one of the eigenvalues besides \(-1\) is also negative, and so, as far as this truncation is concerned, one can only have local singularities for this cosmology. Coupling this with the form of the dominant balances (38), one sees that the Goriely-Hyde procedure tells us that singularities can form at non-finite times for a particular set of initial conditions of the variables.
The second truncation that we consider is given by \[\hat{f}=\begin{pmatrix}-\frac{1}{2}\left(3(w+1)x_{\phi}y^{2}\right)\\ -\frac{1}{2}\left(3(w-1)x_{\phi}^{2}x_{\sigma}\right)\\ -\frac{1}{2}\left(\sqrt{6}\lambda x_{\sigma}y\right)\end{pmatrix} \tag{41}\] where we write \(\lambda\equiv\lambda_{\sigma}\) for brevity. Using the ansatz for the Goriely-Hyde method described in section II, we get the exponent values as \(\mathbf{p}=(-1/2,-1,-1/2)\) and using this we get \[\begin{array}{c}a_{1}=\left(\frac{\sqrt{2}}{\sqrt{3(w-1)}},\frac{1}{\sqrt{6} \lambda},\frac{1}{\sqrt{3(w+1)}}\right)\\ \\ a_{2}=\left(-\frac{\sqrt{2}}{\sqrt{3(w-1)}},\frac{1}{\sqrt{6}\lambda},\frac{1 }{\sqrt{3(w+1)}}\right)\end{array} \tag{42}\] We note here that both \(a_{1}\) and \(a_{2}\) have only real entries, which means that the singularities occurring in this scenario can take place in finite time, something we could not see in the previous truncation. Moving forward, we can write the Kovalevskaya matrix in this case as \[R=\left(\begin{array}{ccc}\frac{1}{2}-\frac{3}{2}(w+1)y^{2}&0&-3(w+1)x_{\phi }y\\ -3(w-1)x_{\phi}x_{\sigma}&1-\frac{3}{2}(w-1)x_{\phi}^{2}&0\\ 0&-\sqrt{\frac{3}{2}}\lambda y&\frac{1}{2}-\sqrt{\frac{3}{2}}\lambda x_{ \sigma}\end{array}\right) \tag{43}\] Using the dominant balances (42) for this system, we can find the eigenvalues of this Kovalevskaya matrix to be \[r=\left(-1,\frac{1+i\sqrt{3}}{2},\frac{1-i\sqrt{3}}{2}\right) \tag{44}\] We see here that none of the eigenvalues besides \(-1\) has a negative real part, and so this truncation tells us that general singularities, not dependent on the initial conditions, are also possible in this scenario. Coupled with the conclusions from the dominant balances, we see that the singularities in the case of the quintom models described by (34-36) can take place in finite time and be independent of the initial conditions. Note, very interestingly, that these singularities occur irrespective of whether the dark energy equation of state is in the phantom (\(w<-1\)) or quintessence (\(w>-1\)) regime, and these results hold irrespective of the values of \(\lambda_{\phi}\) and \(\lambda_{\sigma}\), hence encapsulating a wide array of popular quintom models for cosmology 7 Footnote 7: In principle one can construct further interesting truncations for the quintom system, like \[\hat{f}=\left(\begin{array}{c}\frac{3}{2}(w-1)x_{\phi}x_{\sigma}^{2}\\ \sqrt{\frac{3}{2}}\left(-\left(\lambda_{\sigma}y^{2}\right)\right)\\ \sqrt{\frac{3}{2}}\lambda_{\phi}x_{\phi}(-y)\end{array}\right)\] But we choose not to pursue the analysis of more truncations, as we have already demonstrated all the properties of the singularities that can be extracted using the Goriely-Hyde method with only two truncations. The reader is, however, welcome to analyze more truncations if they wish.. The physical classification of these singularities will take place in section IV. ### DBI Dark Energy We would now like to consider the models corresponding to the action (4), but before that we would like to briefly discuss some subtleties of this form of DBI model. From the perspective of string theory, \(V(\phi)\) is a potential that arises from quantum interactions between a D3-brane associated with \(\phi\) and other D-branes. Although a quadratic potential was considered in the initial proposal of the generalized DBI model [37], discussions on the exact form of the potential are still ongoing.
\(f(\phi)\), on the other hand, is the inverse of the D3-brane tension and contains geometrical information about the throat in the compact internal space, with some proposals considering it to be a constant or even of the form \(f(\phi)=\alpha\phi^{-4}\) (\(\alpha\) being a constant), which is the case for an AdS throat. In the context of cosmology, it is normally taken to be reasonable to consider both \(f(\phi)\) and \(V(\phi)\) as non-negative [38], and that is what we also consider here. Another important parameter with regard to DBI models is the Lorentz factor, given by \[\gamma=\frac{1}{\sqrt{1-f(\phi)\dot{\phi}^{2}}} \tag{45}\] When the speed of the brane motion, \(|\dot{\phi}|\), approaches the limit \(f^{-1/2}\), the Lorentz factor can grow without bound and one starts to see the true effects of the DBI model. In the opposite limit, when the brane motion is negligible compared to \(f^{-1/2}\), \(\gamma\to 1\) and one recovers the usual theory of a canonical scalar field. The energy and pressure densities for the DBI field can be written using (4) as \[\rho_{\phi}=2X{\cal L}_{,X}=\frac{\gamma-1}{f(\phi)}+V(\phi) \tag{46}\] \[p_{\phi}=\frac{\gamma-1}{\gamma f(\phi)}-V(\phi) \tag{47}\] where \({\cal L}_{,X}\) denotes the derivative of the Lagrangian with respect to the kinetic term X. From this one can write the speed of sound for the DBI model as \[c_{s}^{2}=\frac{\partial p_{\phi}/\partial X}{\partial\rho_{\phi}/\partial X}= \frac{1}{\gamma^{2}}=1-f(\phi)\dot{\phi}^{2} \tag{48}\] One thus sees that the speed of sound and the Lorentz factor are directly related to each other in DBI models. For a flat FLRW universe with scale factor \(a(t)\), \(X=\dot{\phi}^{2}/2\) for a homogeneous field \(\phi\), and one can then write the energy and pressure densities of such a field as [92] \[\rho_{\phi}=\frac{\gamma^{2}}{\gamma+1}\dot{\phi}^{2}+V \tag{49}\] \[p_{\phi}=\frac{\gamma^{2}}{\gamma+1}\dot{\phi}^{2}-V \tag{50}\] We can now write the Friedmann equation for this scenario as \[\frac{3H^{2}}{\kappa^{2}}=\rho_{m}+\frac{\gamma^{2}}{\gamma+1}\dot{\phi}^{2}+V \tag{51}\] The Klein-Gordon equation also gets significantly modified and takes the form \[\ddot{\phi}+\frac{3H}{\gamma^{2}}\dot{\phi}+\frac{1}{\gamma^{3}}\frac{dV}{d \phi}+\frac{1}{2f}\frac{df}{d\phi}\frac{(\gamma+2)(\gamma-1)}{(\gamma+1)\gamma }\dot{\phi}^{2}=0 \tag{52}\] The continuity equation for the matter field still has the usual form \[\dot{\rho}_{m}+3H\rho_{m}(1+w_{m})=0 \tag{53}\] We now introduce the following expansion normalized variables [38, 92], \[x=\frac{\gamma\kappa\dot{\phi}}{\sqrt{3(\gamma+1)}\,H}\qquad y=\frac{\kappa\sqrt {V}}{\sqrt{3}H}\qquad\overline{\gamma}=\frac{1}{\gamma} \tag{54}\] Before proceeding further, we would like to specify that in our analysis we consider DBI models with a constant \(\gamma\) (or, equivalently, a constant sound speed), and this is motivated by two reasons. Firstly, we would like to base this analysis on rather simple models of DBI cosmology and would like to understand the extent to which cosmological singularities are allowed to occur in such models.
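The relation between the Lorentz factor and the sound speed in Eqs. (45) and (48) is easy to check numerically; the sample values of \(f(\phi)\dot{\phi}^{2}\) below are ours and purely illustrative:

```python
# Quick numerical check of the DBI relations (45) and (48):
# gamma = 1/sqrt(1 - f*phidot^2) and c_s = 1/gamma.
import numpy as np

f_phidot2 = np.array([0.0, 0.5, 0.9, 0.99])   # sample values of f(phi)*phidot^2 < 1
gamma = 1.0 / np.sqrt(1.0 - f_phidot2)
c_s = np.sqrt(1.0 - f_phidot2)                # equals 1/gamma, Eq. (48)

print(np.allclose(c_s, 1.0 / gamma))          # True: c_s is the inverse Lorentz factor
for g, c in zip(gamma, c_s):
    print(f"gamma = {g:6.3f}  c_s = {c:5.3f}")
```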
When one considers models with \(\gamma\) depending on \(\phi\), one runs into quite a lot of complexity; as we currently just want to gauge the status quo of cosmological singularities in these models, for now we would like to first examine the arguably simpler DBI models (and in any case, models with constant \(\gamma\) have not been short of attention in recent years by any means [92]). Recently, even structure formation has been studied comprehensively with constant \(\gamma\) models [93]. A second reason is more subtle and concerns the very reason DBI models came to the fore so quickly: non-Gaussianity. Although we primarily consider the very late universe in this work, from the phenomenological viewpoint one of the main reasons why this model attracted strong attention is its sizable equilateral type of primordial non-Gaussianity. In particular, there are certain cases of constant \(\gamma\) DBI models where one can hope to observe large yet detectable levels of non-Gaussianity in the early universe [94, 92, 95, 96]. So while constant \(\gamma\) models are interesting for us simply because they are a simpler class of DBI models, they could also have deep implications for very interesting quantum gravitational results in particular epochs of the universe. With that clarified, we can now write the cosmological equations (51-53) using the expansion normalized variables as \[x^{\prime}=\frac{1}{2}\sqrt{3\overline{\gamma}(\overline{\gamma}+1)}\lambda y ^{2}+\frac{3}{2}x\Bigg{[}(1+\overline{\gamma})(x^{2}-1)+(1+w)(1-x^{2}-y^{2}) \Bigg{]} \tag{55}\] \[y^{\prime}=-\frac{1}{2}\sqrt{3\overline{\gamma}(1+\overline{\gamma})}\lambda xy +\frac{3}{2}y\Bigg{[}(1+\overline{\gamma})x^{2}+(1+w)(1-x^{2}-y^{2})\Bigg{]} \tag{56}\] where \(\lambda=-\frac{1}{\kappa V}\frac{dV}{d\phi}\) is again considered to be a constant, while we also consider another quantity, \(\mu=-\frac{1}{\kappa V}\frac{df}{d\phi}\), to be a constant as well. There is an important new feature that emerges in the DBI case and is not present in the case of the canonical scalar field: in the latter case only an exponential potential can truly lead to an autonomous system but, as was shown in [92], this fact does not apply to the DBI field, and there can be non-exponential potentials which still lead to a constant \(\lambda\). Here, however, we assume an exponential form for the potential. We can now start with the singularity analysis, with our first truncation being given by \[\hat{f}=\begin{pmatrix}\frac{3}{2}(\overline{\gamma}+1)x^{3}\\ -\frac{1}{2}\left(3(w+1)y^{3}\right)\end{pmatrix} \tag{57}\] Using the ansatz for the Goriely-Hyde method described in section II, we get the exponent values as \(\mathbf{p}=(-1/2,-1/2)\) and using this we get \[a_{1}=\Bigg{(}\frac{i}{\sqrt{3(\overline{\gamma}+1)}},\frac{1}{ \sqrt{3(w+1)}}\Bigg{)} \tag{58}\] \[a_{2}=\Bigg{(}-\frac{i}{\sqrt{3(\overline{\gamma}+1)}},\frac{1}{\sqrt{3(w+1)} }\Bigg{)}\] The Kovalevskaya matrix in this case is given by \[R=\left(\begin{array}{cc}\frac{9}{2}(\overline{\gamma}+1)x^{2}+\frac{1}{2}& 0\\ 0&\frac{1}{2}-\frac{9}{2}(w+1)y^{2}\end{array}\right) \tag{59}\] Using the dominant balance (58), we find that the eigenvalues of this matrix are given by \[r=(-1,-1) \tag{60}\] Hence the singularities in this case will only be local singularities, which will only form for a limited set of initial conditions.
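Because the Kovalevskaya matrix (59) is diagonal for this truncation, the exponents can be read off almost by hand; the short sympy check below (our own sketch) evaluates (59) on the balance (58):

```python
# Cross-check of the DBI truncation (57): evaluate the diagonal
# Kovalevskaya matrix (59) on the dominant balance (58).
import sympy as sp

gb, w = sp.symbols('gammabar w', positive=True)   # gammabar = 1/gamma
x, y = sp.symbols('x y')

f = sp.Matrix([sp.Rational(3, 2) * (gb + 1) * x**3,
               -sp.Rational(3, 2) * (w + 1) * y**3])
R = f.jacobian([x, y]) - sp.diag(sp.Rational(-1, 2), sp.Rational(-1, 2))

bal = {x: sp.I / sp.sqrt(3 * (gb + 1)), y: 1 / sp.sqrt(3 * (w + 1))}
print(sp.simplify(R.subs(bal)).eigenvals())       # {-1: 2}: local singularities only
```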
Furthermore, considering that the dominant balance (58) for this truncation has complex entries, one can further predict that the system admits singularities at non-finite times. The second truncation that we consider is given by \[\hat{f}=\begin{pmatrix}\frac{1}{2}\sqrt{3\overline{\gamma}(\overline{\gamma}+1 )}\lambda y^{2}\\ \frac{1}{2}\sqrt{3\overline{\gamma}(\overline{\gamma}+1)}(-\lambda)(xy) \end{pmatrix} \tag{61}\] Using the ansatz for the Goriely-Hyde method described in section II, we get the exponent values as \(\mathbf{p}=(-1,-1)\) and using this we get \[a_{1}=\left(\frac{2}{\sqrt{3\overline{\gamma}(\overline{\gamma}+1)}\lambda}, \frac{2i}{\sqrt{3\overline{\gamma}(\overline{\gamma}+1)}\lambda}\right) \tag{62}\] \[a_{2}=\left(\frac{2}{\sqrt{3\overline{\gamma}(\overline{\gamma}+1)}\lambda},- \frac{2i}{\sqrt{3\overline{\gamma}(\overline{\gamma}+1)}\lambda}\right)\] Furthermore, the Kovalevskaya matrix for this truncation can be written as \[R=\left(\begin{array}{cc}1&\sqrt{3}\sqrt{\overline{\gamma}(\overline{\gamma }+1)}\lambda y\\ -\frac{1}{2}\sqrt{3}\sqrt{\overline{\gamma}(\overline{\gamma}+1)}\lambda y&1- \frac{1}{2}\sqrt{3}\sqrt{\overline{\gamma}(\overline{\gamma}+1)}\lambda x \end{array}\right) \tag{63}\] Using the balance (62), we get the eigenvalues of this matrix to be \[r=(-1,2) \tag{64}\] These eigenvalues directly tell us that the system allows for general singularities, which are independent of the initial conditions. Coupled with the balance (62), one can conclude that, according to the Goriely-Hyde method, this class of DBI models will have singularities in any case, irrespective of the initial conditions, and that those singularities could occur at non-finite times. ## 4 Physical Classification of the Singularities Until now, we have discussed the singularity structure of these dark energy scenarios from a dynamical perspective. However, from a physical standpoint it is insufficient to merely acknowledge the existence of singularities in these systems. Thus, it becomes necessary to appropriately classify the potential types of singularities that could occur. The various types of physical singularities for cosmology at a specific time \(t=t_{s}\), where \(t_{s}\) represents the occurrence of the singularity, can be classified as follows [97, 42]: * Type I ("Big Rip"): In this case, the scale factor \(a\), effective energy density \(\rho_{\rm eff}\), and effective pressure density \(p_{\rm eff}\) diverge. * Type II ("Sudden/Quiescent singularity"): In this case, \(p_{\rm eff}\) diverges, as do the derivatives of the scale factor from the second derivative onwards. * Type III ("Big Freeze"): In this case, the derivatives of the scale factor diverge from the first derivative onwards. * Type IV ("Generalized sudden singularities"): In this case, the derivatives of the scale factor diverge from a derivative higher than the second. Among these classifications, Type I singularities are considered strong singularities since they have the ability to distort finite objects, while singularities of Type II, Type III, and Type IV are regarded as weak singularities, as they cannot be perceived as either the beginning or the end of the universe. Although there are other minor types of singularities, such as Type V singularities or "w" singularities, we will focus solely on Type I to Type IV singularities here.
The most general form of the Hubble parameter for investigating singularities within the aforementioned classified types is expressed as [72]: \[H(t)=f_{1}(t)+f_{2}(t)(t-t_{s})^{\alpha} \tag{65}\] Here, \(f_{1}(t)\) and \(f_{2}(t)\) are assumed to be nonzero regular functions at the time of the singularity, and similar conditions apply to their derivatives up to the second order. Additionally, \(\alpha\) is a real number. It is not mandatory for the Hubble parameter (65) to be a solution to the field equations; however, we will consider this case and explore the implications of this assumption on the singularity structure based on our dynamical analysis. First, we observe that none of the expansion normalized variables defined in (16), (33) and (54) can ever become singular for any finite value of the cosmic time. The singularities that can occur considering the Hubble parameter as defined in (65) are as follows: * For \(\alpha<-1\), a big rip singularity occurs. * For \(-1<\alpha<0\), a Type III singularity occurs. * For \(0<\alpha<1\), a Type II singularity occurs. * For \(\alpha>1\), a Type IV singularity occurs. Another ansatz useful for classifying singularities was introduced in [46], whereby the scale factor is written as \[a(t)=g(t)(t-t_{s})^{\alpha}+f(t) \tag{66}\] where \(g(t)\) and \(f(t)\), together with all their higher-order derivatives with respect to the cosmic time, are smooth functions of the cosmic time. For this ansatz, according to the values of the exponent \(\alpha\), one can have the following singularities: * For \(\alpha<0\), a Type I singularity occurs. * For \(0<\alpha<1\), a Type III singularity develops. * For \(1<\alpha<2\), a Type II singularity occurs. * For \(\alpha>2\), a Type IV singularity occurs. Again, it is not mandatory that the scale factor (66) be a solution to the field equations, but we would like to consider it, together with (65), in order to get a well-motivated feel for the types of cosmological singularities we can encounter in the various models we have discussed so far. To proceed further, we need to express the expansion normalized variables that we defined for the phantom, quintom, and DBI models in terms of the Hubble parameter alone. To do this, we need to express the potential and the derivative of the field parameter(s) in each case in terms of the Hubble parameter, as these are the quantities on which the expansion normalized variables really depend in all the models. For the phantom case, given by the cosmological equations (11-12), one arrives at \[\dot{\phi}^{2}=\frac{2\dot{H}}{\kappa^{2}}+\rho_{m}(1+w_{m}) \tag{67}\] \[V(\phi)=\frac{\dot{H}+3H^{2}}{\kappa^{2}}-\frac{\rho_{m}}{2}(1-w_{m}) \tag{68}\] For the case of the quintom model, one can write the interaction potential as \[V(\phi,\sigma)=\frac{\dot{H}+3H^{2}}{\kappa^{2}}+\frac{\rho_{m}}{2}(w_{m}-1) \tag{69}\] While writing separate expressions for \(\sigma\) and \(\phi\) in the quintom case only in terms of the Hubble parameter and its derivatives can prove cumbersome, we can however write \[\dot{\sigma}^{2}-\dot{\phi}^{2}=\rho_{m}(w_{m}+1)+\frac{2\dot{H}}{\kappa^{2}} \tag{70}\] If the quantity \(\dot{\sigma}^{2}-\dot{\phi}^{2}\) diverges, then at least one of \(\dot{\sigma}\) and \(\dot{\phi}\) must diverge as well, and hence this expression does the trick for us in this case.
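The classification rules for the Hubble-parameter ansatz (65) listed above can be encoded in a few lines; the helper below is a hypothetical convenience of ours (boundary values of \(\alpha\) are left out, as in the list):

```python
# Map the exponent alpha of H(t) = f1(t) + f2(t)*(t - t_s)**alpha
# to the singularity type, following the classification listed above.
def singularity_type_hubble(alpha: float) -> str:
    if alpha < -1:
        return "Type I (Big Rip)"
    if -1 < alpha < 0:
        return "Type III (Big Freeze)"
    if 0 < alpha < 1:
        return "Type II (Sudden)"
    if alpha > 1:
        return "Type IV (Generalized sudden)"
    return "boundary case (alpha in {-1, 0, 1}): not covered by the classification"

for a in (-2.0, -0.5, 0.5, 2.0):
    print(a, "->", singularity_type_hubble(a))
```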
For the case of the DBI field, one can write \[\dot{\phi}^{2}=\frac{\gamma+1}{\gamma-1}\Bigg{[}\rho_{m}(w_{m}-1)-\frac{2\dot {H}}{\kappa^{2}}\Bigg{]} \tag{71}\] \[V(\phi)=\frac{3H^{2}}{\kappa^{2}}-\rho_{m}+\frac{\gamma^{2}}{\gamma-1}\left( \frac{2\dot{H}}{\kappa^{2}}-\rho_{m}(w_{m}-1)\right) \tag{72}\] Using these expressions, one can express the expansion normalized variables for each system in terms of the Hubble parameter, as discussed before; as the resulting expressions become very long, we do not reproduce them here and instead summarize our findings as follows. Table 1 summarizes the formation of the various singularities in the regimes we have considered. We see that all types of cosmological singularities are allowed by the ansatz (65) in all the models we have considered, except for the DBI model in its canonical limit \(\gamma\to 1\), in which case all of these singularities are removed (but the model becomes canonical again). The ansatz (66) provides a different picture: with this ansatz one does not get any singularities in the phantom model and only Type IV singularities in the quintom regime, while the status quo of the DBI models remains unchanged, in that they can still have all the singularities as long as the canonical limit is not taken. One is led to conclude from this that non-canonical regimes seem to allow for a larger class of cosmological singularities than what has been seen through similar procedures for their canonical counterparts [71, 72, 73]. Note that these results hold independently of the equation of state of the background matter fluid \(w_{m}\), i.e. for any configuration of the matter background. Note also that in recent years some ideas have been discussed in the literature about ways in which the weaker cosmological singularities may be delayed or made milder; for example, conformal anomalies have been seen to be helpful in this regard [40, 98]. But these anomaly-driven effects also have some nuances attached to them: the quantum effective action in such cases (like the one considered here) is usually non-local and higher-derivative, which can lead to more serious issues near the time of the singularities, and conformal anomaly effects are not always helpful for removing singularities [63]. Another way of removing the weaker singularities of the cosmological type is to introduce modified gravity effects, in particular f(R) effects [43]. While such approaches to dealing with these singularities are certainly very appealing and could bear fruit, our work here is focused on identifying singularities, and a discussion of singularity-removal methods in this scenario would be suitable for a whole new work altogether. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Ansatz & \multicolumn{3}{c|}{Singularities} \\ \cline{2-4} & Phantom & Quintom & DBI (\(\gamma\neq 1\)) \\ \hline \(H(t)=f_{1}(t)+f_{2}(t)(t-t_{s})^{\alpha}\) & Type I, II, III, IV & Type I, II, III, IV & Type I, II, III, IV \\ \hline \(a(t)=g(t)(t-t_{s})^{\alpha}+f(t)\) & None & Type IV & Type I, II, III, IV (except in the canonical limit) \\ \hline \end{tabular} \end{table} Table 1: Summary of singularities for the different ansatze ## 5 Concluding Remarks In this paper, we have considered three very well-known non-canonical scalar field dark energy models: the phantom, quintom, and DBI dark energy models.
These three models have received immense attention from a phenomenological point of view in recent years, and hence we found it interesting to probe cosmological singularities in these diverse non-canonical scenarios. For this endeavor, we used an approach pioneered in recent years by the works of Odintsov, applying the Goriely-Hyde procedure to the dynamical systems by which the cosmological equations of these three models can be described. This allowed us to make predictions about whether singularities in these scenarios would be strongly dependent on initial physical conditions and whether they could happen at finite or non-finite times. After this, we employed two well-motivated ansatze for the Hubble parameter and the scale factor to reach the conclusion that one can have all of the Type I to Type IV singularities in such cosmologies in certain cases. This work goes to show that non-canonical regimes for dark energy can allow for most of the prominent cosmological singularities for a variety of models. We find it very intriguing that, in the models we have studied, the dark energy or matter fields with negative pressures are not able to remove the space-time singularities. This happens because the overall gravitational focusing of the energy density can dominate over the negative pressures. An interesting point to note is that if all the singularities in the space-time are weak, then the space-time could be geodesically complete. In that case there is no genuine singularity in the space-time, even though occasional blow-ups of some physical quantities may show up. On the other hand, when a singularity is strong, there will be genuine geodesic incompleteness; in the cases studied here, one sees that at least one strong singularity can occur (the big rip, or Type I, singularity). Overall, the ideas related to non-canonical models and similar phenomenologically inspired cosmological scenarios, introduced in the literature for various reasons, are of a rather novel and uncertain nature. We know very little about their properties, and examining the singularities they allow could be one way of asking more questions and learning more about them. ## Acknowledgments The authors would like to thank Sergei Odintsov and Alexander Timoshkin for very helpful discussions on the work, and Maxim Khlopov for discussions on cosmological singularities in general. We would also like to thank the anonymous referees of the paper for their insightful comments, which have considerably improved its depth.
2309.05449
First exploration of the runaway greenhouse transition with a GCM
Even if their detection is for now challenging, observation of small terrestrial planets will be easier in a near future thanks to continuous improvements of detection and characterisation instruments. In this quest, climate modeling is a key step to understand their characteristics, atmospheric composition and possible history. If a surface water reservoir is present on such a terrestrial planet, an increase in insolation may lead to a dramatic positive feedback induced by water evaporation: the runaway greenhouse. The resulting rise of global surface temperature leads to the evaporation of the entire water reservoir, separating two very different population of planets: 1) temperate planets with a surface water ocean and 2) hot planets with a puffed atmosphere dominated by water vapor. In this work we use a 3D General Circulation Model (GCM), the Generic-PCM, to study the runaway greenhouse transition, linking temperate and post-runaway states. Our simulations are made of two steps. First, assuming initially a liquid surface ocean, an evaporation phase which enriches the atmosphere in water vapor. Second, when the ocean is considered entirely evaporated, a dry transition phase for which the surface temperature increases dramatically. Finally, it converges on a hot and stable post-runaway state. By describing in detail the evolution of the climate during these two steps, we show a rapid transition of the cloud coverage and of the wind circulation from the troposphere to the stratosphere. By comparing our result to previous studies using 1D models, we discuss the effect of intrinsically 3D processes such as the global dynamics and the clouds, keys to understand the runaway greenhouse. We also explore the potential reversibility of the runaway greenhouse, limited by its radiative unbalance.
G. Chaverot, E. Bolmont, M. Turbet
2023-09-11T13:41:48Z
http://arxiv.org/abs/2309.05449v1
# First exploration of the runaway greenhouse transition with a GCM ###### Abstract Even if their detection is for now challenging, observation of small terrestrial planets will become easier in the near future thanks to continuous improvements of detection and characterisation instruments. In this quest, climate modeling is a key step to understand their characteristics, atmospheric composition and possible history. If a surface water reservoir is present on such a terrestrial planet, an increase in insolation may lead to a dramatic positive feedback induced by water evaporation: the runaway greenhouse. The resulting rise of the global surface temperature leads to the evaporation of the entire water reservoir, separating two very different populations of planets: 1) temperate planets with a surface water ocean and 2) hot planets with a puffed atmosphere dominated by water vapor. Therefore, the understanding of the runaway greenhouse is pivotal to assess the different evolutions of Venus and the Earth, as well as of every similar terrestrial exoplanet. In this work we use a 3D General Circulation Model (GCM), the Generic-PCM, to study the runaway greenhouse transition, linking temperate and post-runaway states. Our simulations consist of two steps. First, assuming initially a liquid surface ocean, an evaporation phase which enriches the atmosphere in water vapor. Second, when the ocean is considered entirely evaporated, a dry transition phase during which the surface temperature increases dramatically. Finally, the climate converges on a hot and stable post-runaway state. By describing in detail the evolution of the climate during these two steps, we show a rapid transition of the cloud coverage and of the wind circulation from the troposphere to the stratosphere. By comparing our results to previous studies using 1D models, we discuss the effect of intrinsically 3D processes such as the global dynamics and the clouds, which are key to understanding the runaway greenhouse. We also explore the potential reversibility of the runaway greenhouse, limited by its radiative imbalance. ## 1 Introduction Continuous improvement of telescopes, instruments and techniques allows us to detect and characterize a growing number of exoplanets. The new generation of instruments will enable the characterization of the atmospheres of small, rocky and temperate planets, which is now within reach thanks to the James Webb Space Telescope (JWST) (e.g. Morley et al., 2017; Lustig-Yaeger et al., 2019; Fauchez et al., 2019; Wunderlich et al., 2019). The first results of the JWST will likely open a new era in the characterisation of such targets. Future astronomical ground-based instruments in development, such as RISTRETTO@VLT (Lovis et al., 2022) or ANDES@E-ELT (Marconi et al., 2022), will allow us to go beyond and study Earth-like planets. The possibility of detecting objects similar to our planet brings up the ancestral question of life (and "habitability") in the Universe. Astrobiology is the field of science that aims to answer this question, while exoplanetology is now a big piece of that gigantic puzzle. In particular, an essential condition for the emergence and the maintenance of life as we know it is the presence of liquid water. Therefore, modeling the potential climate of the planets we will characterize is essential to determine the conditions under which water can exist in liquid phase at their surface or in the atmosphere as clouds.
This modeling effort, coupled with advanced spectroscopy techniques, will allow us to make progress in the search for life in the Universe. For an Earth-like planet with a given amount of water, there is a range of distances to the host star for which this water can be liquid at the surface. Too close, the water evaporates; too far, it freezes. This ideal zone is the Habitable Zone (HZ) (Kasting et al., 1993; Kopparapu et al., 2013). This concept is extremely useful in the exoplanet community to statistically study large samples of targets from the perspective of habitability and the search for life. The HZ is a function of the brightness of the star, the size of the planet and the composition of its atmosphere (Kopparapu et al., 2013, 2014, 2017). If the planet is close enough to the star, the temperature rises and the oceans start to evaporate, which may add enough vapor to the bottom part of the atmosphere to completely absorb the thermal emission, because of the strong greenhouse effect of water vapor. Once the thermal emission reaches the Simpson-Nakajima limit (Simpson, 1929; Nakajima et al., 1992), the planet is not able to cool down anymore and the positive feedback of the runaway greenhouse arises (Komabayasi, 1967; Ingersoll, 1969; Kasting, 1988; Goldblatt & Watson, 2012). This leads to the evaporation of the entire surface ocean and a dramatic increase of the global surface temperature of several hundred kelvins. This climatically unstable transition separates two populations of planets: temperate planets and hot post-runaway planets (e.g. Hamano et al., 2013; Turbet et al., 2021). This is one of the scenarios proposed to explain the difference between the Earth and early Venus (Way et al., 2016). Temperate planets are similar to the Earth, with surface condensed water and a thin atmosphere, while hot planets have an extended water-dominated atmosphere with a high surface temperature, up to a few thousand kelvins (Turbet et al., 2019). The inner edge of the HZ is determined in part by the runaway greenhouse process. The key question of the existence of surface liquid water on terrestrial planets is thus inherently connected to the runaway greenhouse, with strong implications for our understanding of the possible evolution of exoplanets. As shown in Turbet et al. (2020), this process may also strongly influence the mass-radius relationships of terrestrial planets by clearly separating temperate planets hosting a water ocean from hot steam planets. Given the importance of this process, many papers have studied it using simple 1D climate models. But as shown by some 3D General Circulation Model (GCM) studies (e.g. Leconte et al., 2013; Kopparapu et al., 2017), the approximations assumed in 1D models can lead to a wrong estimation of the climate. For example, Hadley cells on Earth-like planets tend to reduce the relative humidity around the tropics, allowing a larger Outgoing Longwave Radiation (OLR) than the one estimated by 1D models (Leconte et al., 2013). Moreover, specific cloud patterns may appear for slowly rotating planets (Yang et al., 2013) or for post-runaway states (Turbet et al., 2021), which by definition cannot be accounted for in a 1D model. Even if 1D models are useful in many cases because of their adaptability and fast computation time, more complex 3D models should be preferred to capture intrinsically 3D physical processes, such as clouds and global dynamics. Several studies explored the runaway greenhouse effect on Earth-like planets with different GCMs (e.g.
Leconte et al., 2013; Yang et al., 2013; Wolf & Toon, 2014, 2015; Popp et al., 2016; Kopparapu et al., 2017) in order to determine under which insolation this positive feedback arises. Unfortunately, highly unstable climates and high water vapor contents may cause numerical instabilities or reveal model limitations. As explained by Kopparapu et al. (2016) and Kane et al. (2018), existing studies usually assume a runaway greenhouse when the simulation fails due to a high water content and large differences between the OLR and the Incident Stellar Radiation (ISR). Some other studies found moist stable states (e.g. Popp et al., 2016), for which the planet can be in a stable regime with a wet troposphere and a global temperature lower than 350 K. This prevents any dramatic increase of the global temperature leading to a post-runaway state, thus questioning the definition of the inner edge of the HZ as well as the implications for habitability. The originality of our work is to push the study beyond the tipping point and to describe the evolution of the climate across the runaway greenhouse transition in order to link temperate and post-runaway states. We also explore the reversibility of the runaway greenhouse and we discuss the possible existence of an intermediate moist greenhouse state. We describe in detail inherently 3D processes in order to have a complete and accurate overview of the evolution of the atmosphere, mainly through its enrichment in water vapor and the consequences of the modification of the cloud coverage. In Sect. 2 we describe the method followed to explore the runaway greenhouse transition, as well as the GCM we use and the numerical setups considered. We present our results in Sect. 3 and we discuss the limits and the prospects of this work in Sect. 4. ## 2 Modeling the climate ### Modeling the runaway greenhouse Our method is similar to the one used by previous works (Yang et al., 2013; Leconte et al., 2013; Wolf & Toon, 2014, 2015; Popp et al., 2016; Kopparapu et al., 2017), which studied the onset of the runaway greenhouse with GCMs. First, we model a temperate planet similar to the present-day Earth, then we increase the insolation (ISR) step by step to find the tipping point at which the well-known positive feedback arises. We keep the shape of the stellar spectrum unchanged when we modify the insolation. The originality of this work is that we model the climate beyond the onset of the runaway greenhouse, that is, for higher temperatures and water vapor mixing ratios. When the insolation is high enough to initiate a runaway greenhouse, we fix it to let the ocean evaporate, in order to study the evolution of the climate during this critical transition phase. The insolation and the global surface temperature at which this threshold arises depend on the configuration of the simulation (e.g. presence of continents, CO\({}_{2}\) content). Our aim here is not to find the exact value of the insolation corresponding to the tipping point, but to study the differences in the climate patterns. It could be interesting to find the exact tipping point, as done by Leconte et al. (2013) or Kopparapu et al. (2017) for example, for various atmospheric compositions, but this is not the aim of this study. We perform simulations following this method for different atmospheric configurations: with/without Earth's continents, with/without CO\({}_{2}\) and for different N\({}_{2}\) pressures (see Table 1).
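The runaway criterion sketched above can be illustrated with a deliberately simple, zero-dimensional energy balance. The following Python toy model is ours, not the GCM; the value of `OLR_MAX` is an assumed round number of the order of the Simpson-Nakajima limit, and the whole construction only shows why no equilibrium temperature exists once the absorbed flux exceeds the asymptotic OLR:

```python
# Toy 0D energy balance: a saturating OLR(T) mimicking the Simpson-Nakajima limit.
import numpy as np

SIGMA = 5.670e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]
OLR_MAX = 282.0       # assumed asymptotic OLR [W m^-2]

def toy_olr(T):
    """Toy thermal emission: blackbody at low T, saturating at OLR_MAX."""
    return OLR_MAX * (1.0 - np.exp(-SIGMA * T**4 / OLR_MAX))

for asr in (240.0, 275.0, 300.0):                 # absorbed stellar radiation [W m^-2]
    if asr >= OLR_MAX:
        print(f"ASR = {asr:5.1f} W/m2 -> no equilibrium: runaway greenhouse")
        continue
    T = np.linspace(150.0, 600.0, 200001)
    T_eq = T[np.argmin(np.abs(toy_olr(T) - asr))]  # where emission balances absorption
    print(f"ASR = {asr:5.1f} W/m2 -> stable equilibrium near T = {T_eq:.0f} K")
```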
In this article, our reference simulation is the waterworld including 1 bar of N\({}_{2}\), without CO\({}_{2}\) (W1), taken as a representative example of the evolution of the climate during the runaway transition. The other simulations presented in Table 1 are discussed in the context of a sensitivity study. The key process allowing a runaway greenhouse is the evaporation of the surface ocean. To model the ocean, we use a two-layer slab ocean without heat transport, coupled to the atmosphere (see Sect. 2.2). Evaporating the totality of the Earth's oceans would induce a vapor surface pressure of 273 bar. Owing to the large heat capacity of the water vapor, any temperature modification would be extremely slow in this situation, making such a simulation unachievable from a numerical point of view. Moreover, the timescales required to evaporate enough water to reach this pressure are also extremely long. To circumvent that difficulty, we choose to stop the evaporation process when a given quantity of water is evaporated (see the evaporation phase described in Sect. 3.1). For numerical reasons, we test only two different evaporated water vapor pressures: 1 and 1.5 bar, which correspond respectively to 10 and 15 m of global equivalent layer (GEL), that is, the depth of the equivalent liquid ocean covering the entire planet. Then we remove the ocean from the simulation (i.e. we turn off the surface evaporation and we consistently adjust the surface albedo and the thermal inertia) and we set up a new simulation using the final state of the evaporation phase as the initial state (this is the dry transition phase described in Sect. 3.2). This method allows us to study the impact of the size of the water reservoir on the obtained post-runaway state. Based on the conclusions of Chaverot et al. (2022) on the radiative transfer of non-dilute water atmospheres, we use opacity data taking into account the transition of the broadening species from nitrogen-dominated to water-dominated. ### Numerical setup The Generic-PCM (G-PCM) (e.g. Wordsworth et al., 2011; Leconte et al., 2013; Turbet et al., 2016, 2018; Fauchez et al., 2019; Turbet et al., 2021), previously known as the LMD Generic GCM, has been adapted to model water-rich atmospheres (Leconte et al., 2013). This makes it particularly appropriate to study the runaway greenhouse. The horizontal resolution is 64\(\times\)48 (longitude \(\times\) latitude) with 30 vertical levels from 0.1, 1 or 10 bar (depending on the simulation setup, see Table 1) to 10 Pa. We consider the Earth's obliquity, and all the simulations are initialized assuming summer in the northern hemisphere. The dynamical time step is equal to 90 seconds. The physics (evaporation, convection, etc.) is computed every 15 minutes, while the radiative transfer is computed every hour. The radiative transfer, assuming a Sun-like star, is done using a correlated-k table from Chaverot et al. (2022) for the mixture H\({}_{2}\)O\(+\)N\({}_{2}\) and from Leconte et al. (2013) for the mixture including 376 ppm of CO\({}_{2}\). The absorption line lists are taken from HITRAN 2012 (Rothman et al., 2013) and HITRAN 2016 (Gordon et al., 2017) respectively. We also include the N\({}_{2}\)\(-\)N\({}_{2}\) collision-induced absorption (CIA) from the HITRAN CIA database (Karman et al., 2019). The H\({}_{2}\)O-H\({}_{2}\)O and H\({}_{2}\)O-air continua are from the version 3.3 of the MT_CKD database (Mlawer et al., 2012).
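As a quick consistency check of the water amounts quoted above, the conversion between an evaporated vapor surface pressure and its global equivalent layer is simple hydrostatic bookkeeping; the short sketch below (our own arithmetic, not part of the model) recovers the 10 and 15 m figures and gives the equivalent depth of the full 273 bar ocean:

```python
# Convert a fully evaporated surface pressure P to a GEL depth P/(rho_w * g).
RHO_W = 1000.0   # liquid water density [kg m^-3]
G = 9.81         # surface gravity [m s^-2]

for p_bar in (1.0, 1.5, 273.0):
    gel = p_bar * 1.0e5 / (RHO_W * G)   # [m]
    print(f"{p_bar:6.1f} bar of H2O vapor  ->  GEL = {gel:7.1f} m")
```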
In our simulation, water is a variable gas which may condense or evaporate from the surface ocean but also in the atmosphere, and the relative humidity is computed consistently. The subgrid-scale dynamical processes (turbulent mixing and convection) as well as the moist convection scheme are described in Leconte et al. (2013). The number of cloud condensation nuclei (CCNs) per unit mass of moist air is fixed: \(4\times 10^{6}\) kg\({}^{-1}\) for liquid water clouds and \(2\times 10^{4}\) kg\({}^{-1}\) for ice clouds, while their radius is variable (see Leconte et al., 2013 for more details). The model also includes a two-layer slab ocean without heat transport (Codron, 2012; Charnay et al., 2013). The thermal inertia is equal to 18,000 J.s\({}^{-1/2}\).m\({}^{-2}\).K\({}^{-1}\) for oceans and 2,000 J.s\({}^{-1/2}\).m\({}^{-2}\).K\({}^{-1}\) for continents. The albedo of the ocean is 0.07 and varies for continents up to 0.35, following Leconte et al. (2013). The heat capacity (\(c_{p}\)) and the mean molecular weight are consistently initialized for every setup, but they stay constant during the simulation, which means that we do not account for the modifications induced by the addition of water vapor. ## 3 The runaway greenhouse transition In this section we present evolutions of the climate from temperate stable states to stable post-runaway states, following the method described in Sect. 2. First, we describe the onset of the runaway greenhouse (Sect. 3.1), here named _evaporation phase_ in reference to the enrichment of the atmosphere in water vapor. Second, we describe the _dry transition phase_ (Sect. 3.2), for which the ocean is considered entirely evaporated. ### Onset of the runaway greenhouse #### 3.1.1 Evolution of the OLR during the onset We performed simulations for the different configurations presented in Table 1 in order to quantify the influence of continents and of strong infrared (IR) absorbers like CO\({}_{2}\) on the onset of the runaway greenhouse. Figure 1 provides an overview of the different simulations done. The bottom panel is the OLR as a function of the mean surface temperature, while the top panel indicates the different values of the insolation considered for every setup. In Fig. 1, stable states, in other words simulations for which an equilibrium is possible, are indicated using black dots. The surface temperature of stable states is a function of the insolation; we therefore find different stable states by increasing the insolation step by step (colored dots on the top panel of Fig. 1), as described in Sect. 2.1. When the insolation is high enough, a tipping point is reached and the atmosphere enters an unstable state (colored arrows on the top panel of Fig. 1). Due to a large amount of water vapor, the bottom part of the atmosphere becomes optically thick to the thermal emission and the OLR is limited. Therefore, there is no longer equality between the OLR and the Absorbed Stellar Radiation (ASR) (see Fig. 2), and the global temperature increases continuously due to this radiative imbalance (solid colored lines on the bottom panel of Fig. 1). For every configuration presented in Fig. 1, the relation between the OLR and the mean surface temperature for stable states is roughly linear, which is consistent with previous studies (e.g. Leconte et al., 2013; Wolf & Toon, 2015). For a given insolation, continents tend to reduce the global surface temperature by a few kelvins because of their high albedo (higher than that of the deep ocean), as shown on the top panel of Fig.
1 (W1 vs E1 and WCO2 vs ECO2). For atmospheres with 1 bar of nitrogen (W1, E1, ECO2 and WCO2), the evolution of the OLR during the runaway transition (i.e. when the surface temperature reaches \(\sim\)340 K) is not sensitive to the presence of continents and/or the presence of low concentrations of CO\({}_{2}\) for a given total surface pressure: every simulation converges roughly onto the same OLR curve. This is because, when the surface temperatures are high enough, background gases become negligible compared to the radiative effect of high vapor pressures (Chaverot et al., 2022). \begin{table} \begin{tabular}{l c c c c c c} \hline \multicolumn{7}{c}{**Planetary parameters**} \\ \hline \multicolumn{7}{l}{Stellar spectrum: Sun; Orbital period: 365 days; Rotation period: 24 hours; Obliquity: 23\({}^{\circ}\); Planetary radius: 6400 km; Gravity: 9.81 m.s\({}^{-2}\)} \\ \hline **Simulation** & W01 & W1 & W10 & WCO2 & E1 & ECO2 \\ \hline Topography & waterworld & waterworld & waterworld & waterworld & Earth’s continents & Earth’s continents \\ Surface albedo & 0.07 & 0.07 & 0.07 & 0.07 & Earth’s albedo & Earth’s albedo \\ H\({}_{2}\)O & variable & variable & variable & variable & variable & variable \\ N\({}_{2}\) & 0.1 bar & 1 bar & 10 bar & 1 bar & 1 bar & 1 bar \\ CO\({}_{2}\) & – & – & – & 376 ppm & – & 376 ppm \\ \hline \end{tabular} \end{table} Table 1: Parameters of the simulations Moreover, the radiatively active part of the atmosphere is decoupled from the surface conditions; therefore the surface albedo (i.e. continents) no longer influences the radiative balance. For higher background gas pressures (W10 in Fig. 1) the onset of the runaway greenhouse arises at a higher temperature and for a higher insolation (450 W.m\({}^{-2}\)). For this extreme case with 10 bar of nitrogen, the temperature profile is closer to a purely dry adiabatic profile, which is "less steep" than a wet adiabatic profile. Consequently, for a given global surface temperature the atmosphere is colder, thus drier. Therefore, even if there is less water in the atmosphere (vapor and clouds), the averaged OLR is lower, as shown by Fig. 1. This effect of high nitrogen pressures on the steepness of the temperature profile is also discussed in Chaverot et al. (2022), but they do not find smaller OLR values for high nitrogen pressures. This is probably linked to the pseudo-adiabatic assumption fixing the water vapor partial pressure as a function of the temperature. This assumption, usually made in 1D models, fixes the relative humidity to unity and underestimates the OLR (Ishiwatari et al., 2002; Leconte et al., 2013). On the other hand, lower background gas pressures (0.1 bar of nitrogen, see W01 in Fig. 1) lead to a runaway greenhouse transition at a lower temperature, OLR value and insolation (350 W.m\({}^{-2}\)). A lower OLR bump for a low nitrogen pressure is consistent with the results obtained by Chaverot et al. (2022) with a 1D model. The strong pressure broadening of H\({}_{2}\)O-H\({}_{2}\)O collisions makes the absorption more efficient than in the diluted case assuming 1 bar of nitrogen. The temperature profile is also closer to a wet adiabat, which warms the stratosphere (compared to the dry adiabat case) and triggers the onset of the runaway greenhouse.
Finally, adding 376 ppm of CO\({}_{2}\) shifts the onset of the runaway greenhouse to lower insolation (375 W.m\({}^{-2}\) for ECO2 and WCO2 instead of 400 W.m\({}^{-2}\) for E1 and W1) because of the strong greenhouse power of CO\({}_{2}\). This drop is discussed using simpler 1D models as well (e.g. Nakajima et al., 1992; Goldblatt & Watson, 2012; Goldblatt et al., 2013; Koll & Cronin, 2019; Chaverot et al., 2022). For a given insolation, the global surface temperature is also about 20 K higher in the presence of CO\({}_{2}\) (top panel of Fig. 1), which is consistent with the greenhouse power of this gas. During the runaway greenhouse, the insolation is kept unchanged to let the system evolve. The OLR reaches a maximum then decreases strongly. This is due to a combination of radiative effects and a quick modification of the cloud pattern. The spectroscopic processes are described using a 1D radiative-convective model in Chaverot et al. (2022), and the modification of the cloud coverage is described in Sect. 3.1.3. In Fig. 1, the colored envelope delimiting the one-sigma variability of the OLR increases largely during the first steps of the runaway greenhouse. The continuous evaporation characteristic of this process densifies the cloud coverage. For medium temperatures (between 320 K and 360 K), the global dynamics induces variable patterns in this cloud coverage, thus strong variations of the spatial distribution of the OLR. When the global surface temperature is high enough to maintain a large amount of water vapor in the atmosphere (beyond 360 K), the cloud coverage becomes more homogeneous, reducing the thermal emission windows, thus the OLR, as well as its variability. #### 3.1.2 Radiative imbalance To study the evolution of the cloud coverage and its radiative effects, we compare in Fig. 2 the OLR and the ASR to their clear-sky equivalents (csOLR and csASR). To compute csOLR and csASR, we run a simulation including clouds but we neglect their radiative effect to estimate the emitted and absorbed fluxes. On the same figure, the red curve corresponds to OLR values obtained using the 1D cloud-free version of the Generic PCM 1 described in Chaverot et al. (2022), with an atmosphere including H\({}_{2}\)O and 1 bar of N\({}_{2}\) and a relative humidity value fixed to unity. The aim is to compare 1D and 3D models sharing the same radiative transfer calculation. Footnote 1: named 1D reverse version of LMD Generic in Chaverot et al. (2022) At low and high temperatures, respectively when the atmosphere is nitrogen- or water-dominated, csOLR values from 1D and 3D simulations tend to converge. As there are no clouds in the 1D simulation, the only remaining difference is the absence of dynamics in the 1D model. Therefore, at low temperature the atmosphere is very thin and the csOLR is not much affected by the global dynamics. In other words, it is close to the Planck function. On the other hand, at high temperature, the water vapor is well distributed around the planet and the csOLR is not affected by the dynamics. For intermediate temperatures, and especially for the onset of the runaway greenhouse when the evaporation begins, the distribution of the vapor is not homogeneous and the 1D version of the Generic PCM largely underestimates the global OLR. Figure 1: The bottom panel gives the value of the OLR as a function of the global surface temperature for each simulation setup described in Table 1. Colored dots are the stable states corresponding to various insolations, while the full curves are the unstable states of the runaway greenhouse. The top panel indicates the insolation values of each stable state. The colored arrows represent the insolation at which the runaway greenhouse arises for each setup. The OLR and temperature values are averaged over 2 years (for both the stable states and the runaway greenhouse) and the colored fills are the 1-sigma uncertainties due to variability in the OLR calculation.
As explained in Ishiwatari et al. (2002) and Leconte et al. (2013), this underestimation of the OLR by 1D models assuming a saturated atmosphere is due to the downward branches of the Hadley cells, which reduce the relative humidity around the tropics, creating radiative windows by reducing the quantity of water vapor. To circumvent this issue, 1D models with variable humidity should be preferred (e.g. Ramirez et al., 2014). In Fig. 2, the green curve, which is from Goldblatt et al. (2013), is obtained using a cloud-free 1D model assuming a saturation mixing ratio of less than 5%. This allows an accurate fit of the csOLR from GCM simulations for temperate states. Even if some differences remain for the runaway greenhouse, the shape of the csOLR, and particularly the bump, is quite well represented. The major difference between the results from 1D and 3D simulations is thus induced by the evolution of the cloud coverage. The differences remaining between Goldblatt et al. (2013) (green curve) and the csOLR could come from dynamics, from the different opacity databases used, or from a mix of the two. Indeed, Goldblatt et al. (2013) use HITEMP while we use HITRAN in this work. The difference between the OLR and csOLR curves in Fig. 2 is only due to the radiative effect of the clouds. While this difference is rather constant for stable states, it grows during the runaway greenhouse. This is due to the strong evaporation, which allows a densification and a drift of the cloud coverage toward the upper layers of the atmosphere (see Sect. 3.1.3), increasing their radiative effect (see also Fig. 11 showing the usual cloud forcing metrics). This is visible through the sharp increase of the Bond albedo and the drop of the ASR (pink solid line in Fig. 2). Nevertheless, as the OLR decreases as well, the effective flux (\(S_{\rm eff}\)=ASR/OLR) is greater than unity and the atmosphere warms up. The energy imbalance is constant beyond 390 K (\(\approx\)12 W.m\({}^{-2}\)) and never exceeds \(\approx\)25 W.m\({}^{-2}\), while it reaches \(\approx\)150 W.m\({}^{-2}\) in Kopparapu et al. (2017) for a global surface temperature equal to 350 K around a 3700 K star. We found that this energy imbalance may reach higher values for higher insolations (e.g. \(\approx\)80 W.m\({}^{-2}\) for ISR=475 W.m\({}^{-2}\) instead of ISR=400 W.m\({}^{-2}\) for the reference simulation W1). During the runaway greenhouse, the difference between the csOLR and the csASR increases, highlighting the increase of the water vapor content in the atmosphere, and thus of its absorption. This means that the OLR drops due to a combined absorption induced by the clouds and by an efficient horizontal homogenization of the increasing content of water vapor in the atmosphere. As the csASR is roughly constant during the runaway greenhouse, the drop of the ASR is mainly due to the increase of the Bond albedo induced by the evolution of the cloud coverage.
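For readers who want to reproduce the bookkeeping of this section from published global-mean fluxes, the quantities discussed above reduce to a few differences and ratios. The helper below is our own sketch; the numbers in the example call are illustrative, chosen only to be of the same order as the \(\approx\)12 W.m\({}^{-2}\) imbalance quoted above:

```python
# Diagnostics from global-mean fluxes: cloud radiative effects and S_eff = ASR/OLR.
def flux_diagnostics(olr, cs_olr, asr, cs_asr):
    return {
        "CRE_LW": cs_olr - olr,     # longwave cloud radiative effect [W m^-2]
        "CRE_SW": asr - cs_asr,     # shortwave cloud radiative effect [W m^-2]
        "S_eff": asr / olr,         # > 1 means the planet keeps warming
        "imbalance": asr - olr,     # net heating rate of the climate system
    }

# Illustrative values only, not taken from the figures
print(flux_diagnostics(olr=270.0, cs_olr=295.0, asr=282.0, cs_asr=320.0))
```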
#### 3.1.3 Evolution of the cloud coverage

As explained in Sect. 3.1.2, during the runaway greenhouse the thickness of the cloud coverage grows (due to a strong evaporation) and the altitude of the cloud deck drifts toward the upper layers of the atmosphere. This effect is visible through the evolution of the mean vertical profile of the cloud coverage as a function of the temperature given in Fig. 3. The increase of surface pressure through the evaporation of the ocean is also visible in the same figure. The same evolution of the clouds has been observed by Wolf & Toon (2015) using CAM4. The water cloud content is constant in the bottom part of the atmosphere, while the top-of-atmosphere layers are enriched from \(\approx\)10\({}^{-7}\) to \(\approx\)10\({}^{-1}\) kg.m\({}^{-3}\). In the same way, the stratosphere becomes wetter as the temperature increases. At 400 K the vapor profile is quasi-constant and tends toward 1 kg.kg\({}^{-1}\) (see Appendix A for more details). This tendency is consistent with the results obtained by Turbet et al. (2021), who studied the runaway greenhouse with hot and water-dominated initial conditions. As the quantity of clouds at the top of the atmosphere increases, the Bond albedo increases (from 0.2 to 0.45, see Fig. 12), inducing a drop of the ASR as discussed in Sect. 3.1.2.

Figure 3: Map of the evolution of the mean vertical profile of the total water cloud (liquid + ice) content as a function of the global surface temperature during the evaporation phase. The simulation setup is the waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1). The profiles are averaged over one year.

Figure 2: Thermal emission and absorbed flux as a function of the mean surface temperature for a waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1). Blue and purple solid lines are the OLR and ASR respectively, while blue and purple dashed lines are their clear-sky equivalents (csOLR and csASR). Black dots are the stable states and the blue solid line is the runaway greenhouse for an insolation equal to 400 W.m\({}^{-2}\). For comparison, the red line is the OLR computed with the 1D reverse version of the Generic PCM, which uses the same radiative transfer module and assumes a fully saturated atmosphere (named 1D reverse version of LMD Generic in Chaverot et al., 2022). The green line is the OLR computed using a 1D reverse model assuming a subsaturated atmosphere (saturation mixing ratio less than 5%) from Goldblatt et al. (2013). The flux and temperature values are averaged over 2 years and the colored areas are the 1-sigma uncertainties due to variability in the flux calculation.

The day-to-night distribution of the clouds is not homogeneous during the runaway greenhouse, as shown on the latitude-longitude maps averaged over a year and centered at the substellar point presented in Fig. 4. The pattern observed at 400 K (right panel of the figure) is similar to that observed and discussed in Turbet et al. (2021) for hot, steam-dominated post-runaway atmospheres, except that here a liquid surface ocean is still present, and thus the albedo on the day side is not strictly zero. The ocean allows cloud formation on the day-side, even if the clouds are mainly located on the night-side. The cloudy night-side provides an efficient greenhouse effect, contributing to the warming of the atmosphere. While non-existent for temperate stable states, the contrast of this specific pattern amplifies during the evaporation of the ocean. In this work, we assume a rapidly rotating Earth-like planet, which induces a uniform cloud coverage for temperate stable states.
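The diagnostic behind Fig. 3 is a time and horizontal average of the 4D cloud-condensate field. A minimal NumPy sketch, with illustrative array conventions rather than the actual Generic-PCM variable names, could look like:

```python
import numpy as np

def mean_cloud_profile(q_cloud, lat):
    """Global-mean vertical profile of a (time, lev, lat, lon) cloud-condensate
    field in kg.m^-3 -- the quantity mapped against surface temperature in Fig. 3.
    Assumes a regular latitude grid so that cos(lat) is a valid area weight."""
    w = np.cos(np.deg2rad(lat))
    zonal = q_cloud.mean(axis=(0, 3))            # time and longitude average -> (lev, lat)
    return (zonal * w).sum(axis=1) / w.sum()     # area-weighted latitude average -> (lev,)
```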
Previous studies (Yang et al., 2013; Kopparapu et al., 2017) have shown that slow-rotating planets preferentially form clouds at the sub-stellar point for temperate stable states, preventing an intense warming and delaying the runaway greenhouse thanks to a high albedo on the day-side. These two different scenarios are discussed in Turbet et al. (2021). The evaporation of the clouds at the sub-stellar point during the onset of the runaway greenhouse, described in detail in this section, was also observed by Kopparapu et al. (2017) for planets orbiting various stellar types.

#### 3.1.4 Atmospheric circulation

The global circulation of the atmosphere changes during the runaway greenhouse. The usual Hadley cells are visible in Fig. 5 for temperate states (bottom left panel), with an upward branch at the equator and downward branches at the tropics. This terrestrial circulation disappears during the runaway greenhouse, while a new circulation arises at the top of the atmosphere (bottom right). As the water vapor pressure increases in the bottom part of the atmosphere, the opacity increases and less and less stellar flux reaches the surface. Because of this, the temperature is more homogeneously distributed around the planet, which weakens the surface wind jets. On the other hand, as the absorption in the upper layers rises, the radiative heating strongly increases, inducing strong meridional jets at the top of the atmosphere. This stratospheric circulation, characteristic of the runaway greenhouse and decoupled from the surface conditions, is visible in every simulation we performed. The same circulation was found by Turbet et al. (2021) for similar conditions and by Fujii et al. (2017), using ROCKE-3D, for water-dominated tidally locked planets. As described in Sect. 2, we use a slab ocean without heat transport. For simulations without continents we observe a hemispheric difference in the annual average temperature; more precisely, the northern hemisphere is warmer. This hemispheric difference can also be seen in Fig. 4, where the cloud coverage is slightly larger in the two left panels. This induces an atmospheric circulation around the equator along the North-South direction that is visible in the first two panels of Fig. 5. In our simulations, the atmosphere is initially free of water vapor and the surface temperature is homogeneous. Due to the non-zero obliquity of the planet (see Table 1), heating and intense evaporation happen alternately in the two hemispheres. For the waterworld setup, the evaporation of the ocean is continuous over time, unlike the Earth-continent setup, for which the evaporation rate over time is perturbed by the presence of continents during the rotation of the planet. Consequently, for a given insolation, the atmosphere of the waterworld hosts more water vapor. In addition to this, the non-symmetric evaporation due to the obliquity of the planet induces a difference in opacity between the two hemispheres. This creates a North-South forcing of the heat transport, and one Hadley cell overrides the other. The downward branch of the Hadley cell reduces the relative humidity, as described in Leconte et al. (2013), which locally allows a larger thermal emission, thus preventing the evaporation. This induces a feedback which maintains this specific circulation. This modification of the Hadley cells does not depend on the season and seems stable over the entire year.
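The overturning cells of Fig. 5 are conventionally visualized through the zonal-mean meridional mass stream function. The following is a sketch of that standard diagnostic (with assumed array shapes, not the actual post-processing code of this work):

```python
import numpy as np

def stream_function(v_zm, p, lat, a=6.371e6, g=9.81):
    """Meridional mass stream function (kg.s^-1) from the zonal-mean meridional
    wind v_zm with shape (lev, lat); p in Pa, ordered from model top to surface."""
    coslat = np.cos(np.deg2rad(lat))
    # trapezoidal pressure integral of [v] from the model top downward
    layers = 0.5 * (v_zm[1:] + v_zm[:-1]) * np.diff(p)[:, None]
    integral = np.concatenate([np.zeros((1, lat.size)), np.cumsum(layers, axis=0)])
    return 2 * np.pi * a * coslat[None, :] / g * integral
```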
When the runaway greenhouse occurs, as the physics is decoupled from the surface, this circulation weakens and converges on a unique stratospheric circulation pattern (last two panels in Fig. 5; see also Fujii et al., 2017). This non-symmetric temperate circulation seems stable in the simulation W1, as the zonally averaged temperature is constant over time, as well as the water vapor pressure. However, as the numerical integration time is limited by our resources, a very long equilibration time (longer than a hundred years) should not be excluded. Even if this state seems stable, it depends on the initial conditions of the simulation. The north-to-south hemispheric circulation shown in Fig. 5 (top left panel) corresponds to a simulation initialized assuming summer in the northern hemisphere. By initializing the simulation assuming summer in the southern hemisphere, the obtained circulation is reversed, meaning that the dominant wind is from south to north. For simulations with continents (E1 and ECO2), the evaporation rate and the cloud distribution are less homogeneous along the longitude than for their waterworld equivalents (W1, WCO2). This disturbs the feedback described above, thus allowing a more symmetric circulation around the equator that is similar to the Hadley cells we see on Earth. Moreover, a symmetric circulation is observed for a planet without obliquity as well as for low insolations (i.e. global surface temperatures below 280 K). This confirms that the modified circulation happens preferentially on a waterworld, and that it is induced by combining obliquity and high evaporation rates due to an insolation higher than that of the Earth. Nevertheless, the sensitivity of this particular circulation needs to be investigated in depth to understand the contribution of each process to it, as well as its stability criteria. Moreover, the oceanic circulation could largely affect it.

Figure 4: Latitude-longitude map of the cloud (liquid + ice) distribution during the evaporation phase. The maps were calculated in the heliocentric frame, that is, keeping the substellar point at 0\({}^{\circ}\) longitude and 0\({}^{\circ}\) latitude, and using an annual average. The simulation setup is the waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1).

#### 3.1.5 Reversibility of the runaway greenhouse

In each simulation presented previously, we study the evolution of the atmosphere during the warming phase of the runaway greenhouse transition. We performed a set of simulations, presented in Fig. 6, in which we start from a planet in runaway and reduce the insolation it receives in order to condense the vapor back into the surface ocean. First, a simulation following the method described in Sect. 2 is performed in order to partially evaporate the ocean (blue curve in Fig. 6; see W1 in Table 1). This induces an increase of the global surface temperature, symbolized by the blue arrow in Fig. 6. Then we select a runaway greenhouse state with a global temperature equal to 369 K as the initial state for a second set of simulations (shown as a black star in Fig. 6). For this second step, we decrease the insolation compared to the one required to trigger a runaway greenhouse transition (ISR=400 W.m\({}^{-2}\)) in order to find the insolation required to condense the vapor back into the surface ocean (i.e. to cool down the planet).
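Operationally, this amounts to a downward search over insolation values, restarting the GCM from the saved runaway state each time. A schematic sketch (where `run_gcm` is a hypothetical stand-in for a full GCM integration, not an interface of the Generic-PCM) is:

```python
def find_condensation_insolation(run_gcm, runaway_state, isr_values=(375, 350, 325, 300)):
    """Step the insolation down from the runaway value until the planet cools,
    mirroring the procedure of Sect. 3.1.5. `run_gcm` returns the global-mean
    surface temperature trend (K per year) of the restarted simulation."""
    for isr in sorted(isr_values, reverse=True):
        trend = run_gcm(initial_state=runaway_state, insolation=isr)
        if trend < 0:      # cooling: the vapor condenses back into the surface ocean
            return isr     # highest tested insolation that reverses the runaway
    return None            # none of the tested insolations is low enough
```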
The bottom panel of Fig. 6 presents the obtained OLR curves for different insolations (yellow: ISR=300 W.m\({}^{-2}\), orange: ISR=325 W.m\({}^{-2}\), brown: ISR=350 W.m\({}^{-2}\), red: ISR=375 W.m\({}^{-2}\)). Colored arrows on the two panels represent the tendency of the global surface temperature evolution for each corresponding insolation (cooling or warming). Because of the non-zero difference between the OLR and the ASR during the runaway (see Fig. 2), a small decrease of the insolation is not enough to condense back the water vapor and cool the planet. While ISR=400 W.m\({}^{-2}\) is required to initiate the runaway greenhouse for the W1 setup, ISR=375 W.m\({}^{-2}\) is not low enough to condense the water vapor back onto the surface starting from a global surface temperature of 369 K. An insolation equal to 350 W.m\({}^{-2}\) is able to cool down the planet, converging on a temperate stable state. For every cooling simulation, we recover the same stable states as those found by increasing the insolation step-by-step (see Sect. 2.1). We show in Fig. 6 that during this cooling phase, the OLR increases following the same pathway as in the warming W1 simulation. This is also true for the evolution of the albedo and the cloud coverage. This resilience of the runaway greenhouse is visible in the top panel of Fig. 6 through the hysteresis loop on the insolation.

### 3.2 Runaway greenhouse dry transition

As explained in Sect. 2.1, to study the transition of the runaway, we arbitrarily remove the surface ocean when the surface water vapor pressure reaches some threshold values (1 and 1.5 bar). In the evaporation phase, we evaporate up to 2.6 bar of water, but we were not able to model the dry transition phase for vapor pressures higher than 1.5 bar because of numerical instabilities. High vapor pressures induce a huge radiative imbalance leading to numerical failures similar to those discussed by Kopparapu et al. (2017). We did two simulations for two different vapor surface pressures (1 bar and 1.5 bar, corresponding to 10 and 15 m of GEL) in order to explore the impact of the water reservoir on the obtained post-runaway states. We used the corresponding runaway greenhouse states as initial conditions, then we removed the surface ocean from the simulation and let the system evolve, keeping the atmospheric composition unchanged, as well as the insolation.

Figure 5: Evolution of the wind circulation during the runaway greenhouse. The color map represents the zonal wind (positive values represent eastward winds) and the arrows the vertical and meridional winds averaged over 1 year. The first row is the evolution of the winds for the waterworld setup (W1) and the bottom row is considering the Earth's continents (E1), both without CO\({}_{2}\). The first panel of each row is the warmest stable state obtained for the two considered setups at I=375 W.m\({}^{-2}\). The others show the evolution during the runaway greenhouse for different temperatures. The wind speeds corresponding to the length of the arrows vary with temperature.

As there is no humidity source anymore, the temperature profile of the atmosphere changes. More precisely, there is a transition from a previously wet to a dry adiabatic profile in the bottom part of the atmosphere (as seen in Boukrouche et al. 2021 with a 1D model), as visible in Fig. 7; we therefore choose to name this second simulation phase the 'dry transition'. As the pressure/temperature profile of a dry adiabat is steeper than a wet one, the global surface temperature rises quickly.
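The wet-to-dry switch can be illustrated with textbook profiles: a dry adiabat follows the Poisson relation, while the moist branch is tied to the Clausius-Clapeyron saturation curve. A rough sketch with standard constants, not the GCM's actual thermodynamics, is:

```python
import numpy as np

R_N2, CP_N2 = 296.8, 1040.0    # specific gas constant and heat capacity of N2 (SI)
L_V, R_V = 2.26e6, 461.5       # latent heat and specific gas constant of water vapor

def dry_adiabat(p, p_surf, t_surf):
    """T(p) along a dry adiabat for an N2-dominated column (Poisson relation)."""
    return t_surf * (p / p_surf) ** (R_N2 / CP_N2)

def p_sat(T):
    """Saturation vapor pressure from an integrated Clausius-Clapeyron relation,
    referenced to 373 K and 1 bar."""
    return 1.0e5 * np.exp(-L_V / R_V * (1.0 / T - 1.0 / 373.0))

# Ascending from the surface, the column follows the (steeper) dry adiabat until
# the vapor saturates, then bends onto the moist branch -- the transition marked
# by the arrows in Fig. 7.
```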
During the evaporation phase, the large thermal inertia of the ocean produces a cooling in the bottom part of the atmosphere, preventing a quick increase of the surface temperature. In the dry transition, such a forcing does not exist, allowing a strong and fast increase of the global surface temperature. For the evaporation phase (W1 simulation, ISR=400 W.m\({}^{-2}\)), more than a hundred years are needed to increase the temperature by about 80 K (i.e. evaporating 1.5 bar of water), while for the 'dry' phase, only a few tens of years are needed to increase the temperature by \(\approx\)400 K. Figure 8 shows the evolution of the total water cloud content (liquid + ice) during the dry transition. We see a re-evaporation of the bottom part of the cloud layer. This contributes to enriching the upper part of the atmosphere in water vapor. Moreover, due to the high temperatures at the surface, the water vapor migrates towards the upper layers and the vapor mass mixing ratio profile tends toward a constant value equal to 0.5 kg.kg\({}^{-1}\) (see Appendix A for more details). We also notice a slight increase of the cloud coverage at the very top of the atmosphere.

Figure 8: Map of the evolution of the mean vertical profile of the total water cloud (liquid + ice) content as a function of the global surface temperature during the dry transition phase. The simulation setup is the waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1) assuming 1 bar of water vapor. The profiles are averaged over one year.

Figure 6: The bottom panel shows the OLR as a function of the mean surface temperature. The top panel shows the insolation of each simulation. The blue curve is the warming simulation presented in Fig. 1 with an insolation equal to 400 W.m\({}^{-2}\). Black dots are stable states assuming a 2-year average. The other colored curves (yellow, orange, brown and red) are the evolution of the OLR for lower insolations, assuming an initial temperature equal to 369 K (the black star). Arrows represent the tendency of the evolution of the global surface temperature (warming or cooling) for the different insolations. Colored crosses represent the stable states found for the cooling simulations (ISR=300 W.m\({}^{-2}\), 325 W.m\({}^{-2}\) and 350 W.m\({}^{-2}\) respectively). The top panel highlights the hysteresis loop induced by the cloud feedback during the runaway greenhouse. However, the bottom panel shows that there is no dependence of the OLR on the insolation for a given surface temperature. The simulation setup is a waterworld with 1 bar of nitrogen without CO\({}_{2}\).

Figure 7: Temperature profiles for different surface temperatures during the dry transition with 1 bar of water vapor. Arrows indicate the transition between a dry adiabat in the lower part of the atmosphere (high pressures) and a wet one in the upper part (low pressures) for the coldest (blue) and hottest (red) cases. The dry region is below the transition while the moist one is above (Abe & Matsui 1988; Kasting 1988). The simulation setup is the waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1).

Figure 10 gives the evolution of the OLR as a function of the global mean surface temperature. The evaporation phase (below 420 K) corresponds to the simulation W1 presented in Fig. 1. The blue dashed and dashed-dotted lines are respectively the dry transitions when 1 bar and 1.5 bar of water vapor are evaporated. The red curve is taken from Turbet et al. (2021), assuming 10 bar of vapor, 1 bar of nitrogen and no CO\({}_{2}\).
Black dots are the different stable states obtained in each case, and for different insolations. The blue and red curves are computed using different methods: the blue curves are obtained by increasing the insolation from temperate stable states (warming simulation), while for the red curve the insolation is reduced step-by-step to find the tipping point for which condensation occurs (cooling simulation). We show that the size of the water reservoir determines the equilibrium temperature of the post-runaway state (black dots beyond 800 K in Fig. 10), as suggested by 1D models (Turbet et al., 2019; Boukrouche et al., 2021; Turbet et al., 2021). Dotted lines are reference OLR curves computed using different 1D models and for different atmospheric compositions. Cyan, orange and green dotted lines correspond to a pure water atmosphere (Goldblatt et al., 2013), the pre-industrial Earth (Kopparapu et al., 2013), and an atmosphere containing 1 bar of nitrogen without CO\({}_{2}\) (Chaverot et al., 2022) respectively. All these models consider a fully saturated atmosphere (see Sect. 3.1.2 for more details about the differences between 3D and 1D saturated/sub-saturated models). The reason for which we consider small amounts of water (1 and 1.5 bar) comes from the fact that the addition of vapor in the atmosphere increases its thermal inertia, thus decreasing the evaporation rate. Due to numerical constraints we were not able to evaporate more than 2.6 bar of water vapor in our simulations. Moreover, we were not able to model the dry transition phase itself for vapor pressures beyond 1.5 bar because of numerical instabilities. Indeed, for such pressures the radiative imbalance skyrockets, inducing numerical failures similar to those described by Kopparapu et al. (2017) for the onset of the runaway greenhouse. There is no strong evolution of the wind pattern during the dry transition. The day-to-night cloud dichotomy discussed in Sect. 3.1.3 remains during this phase, as shown by Fig. 9, and becomes even more contrasted. It is accentuated by the re-evaporation of the clouds to converge on the exact same pattern as described in Turbet et al. (2021), with a purely cloud-free day-side. The OLR values obtained in this work are also consistent with those obtained for different insolations in Turbet et al. (2021). Moreover, even with a quantity as high as 10 bar of water vapor in the atmosphere, they obtain global circulation, temperature and cloud distributions strongly similar to those presented in this section. Therefore, we can reasonably expect the same climate pattern and evolution for post-runaway states with higher water pressures (a few tens of bar). Figure 11 shows the evolution of the cloud forcing during both phases of the runaway greenhouse. For temperate habitable states, the cooling effect of the clouds in the shortwave (blue curve) is more efficient than the warming due to their greenhouse effect in the longwave (red curve). During the evaporation phase, as the cloud deck densifies, the Bond albedo increases (Fig. 12) and the cooling effect of the clouds is even more efficient (blue curve in Fig. 11).

Figure 10: Evolution of the OLR as a function of the mean surface temperature from temperate stable states to post-runaway stable states. For the blue curves, the simulation setup is the waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1). The red curve is taken from Turbet et al. (2021), assuming 10 bar of water vapor, 1 bar of nitrogen and no CO\({}_{2}\). The insolation of the evaporation phase and the dry transition is equal to 400 W.m\({}^{-2}\). The vertical grey lines are the temperatures for which 1 bar and 1.5 bar of water vapor are evaporated. Stable post-runaway states from Turbet et al. (2021) assume different insolations. The OLR values are averaged over 2 years, which explains the gap between the evaporation and the dry transition phases. Cyan, orange and green dotted lines are OLR curves from reference works using 1D models and assuming 273 bar of water (pure water from Goldblatt et al., 2013, Earth's case from Kopparapu et al., 2013 and with 1 bar of N\({}_{2}\) from Chaverot et al., 2022 respectively).

Figure 9: Latitude-longitude map of the cloud (liquid + ice) distribution during the dry transition phase with 1 bar of vapor. The maps were calculated in the heliocentric frame, that is, keeping the substellar point at 0\({}^{\circ}\) longitude and 0\({}^{\circ}\) latitude, and using an annual average. The simulation setup is the waterworld assuming 1 bar of nitrogen without CO\({}_{2}\).
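The cloud forcing metrics of Fig. 11 follow directly from the all-sky and clear-sky fluxes introduced in Sect. 3.1.2; a minimal sketch of these standard definitions is:

```python
def cloud_forcing(olr, cs_olr, asr, cs_asr):
    """Longwave, shortwave and net cloud radiative forcing (W.m^-2) from all-sky
    and clear-sky fluxes. Positive values warm the planet, negative values cool it."""
    lw = cs_olr - olr       # greenhouse effect of clouds (trapped thermal emission, > 0)
    sw = asr - cs_asr       # albedo effect of clouds (reflected stellar flux, < 0)
    return lw, sw, lw + sw  # net effect: cooling if < 0, warming if > 0
```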
However, when the surface ocean is considered entirely evaporated (vertical grey dotted lines at 383 K and 393 K, corresponding respectively to 1 bar and 1.5 bar), after a strong increase of the global temperature the net radiative effect of the clouds switches from cooling to warming (green curve) around 700 K. We show that during the dry transition phase, as the Bond albedo decreases (see Fig. 12 and the blue curve in Fig. 11), the greenhouse effect of the clouds becomes the dominant process and the net effect of the clouds tends to warm the planet (Fig. 11). In other words, the thick layer of clouds at the top of the atmosphere acts as a shield that retains the thermal emission of the planet. The cloud forcing of the obtained post-runaway states is very similar to the one from Turbet et al. (2021), even if they consider 10 times more water vapor. This suggests that 1 bar of vapor is enough to stabilize the climate in a post-runaway configuration, with a thick cloud deck at the top of the atmosphere, preferentially on the night side of the planet. The temperature for which the net effect of the clouds switches from cooling to warming is concomitant with the increase of the OLR and depends on the water vapor content: more vapor and clouds require a higher temperature to evaporate and modify the cloud albedo. The 50 K gap between the evaporation phase and the dry transition in Figs. 10, 11 and 12 is due to the very fast evolution of the system during the first stages of the simulation.

## 4 Discussion

### 4.1 Literature comparison

We show in Sect. 3.1.2 that the radiative imbalance is constant throughout the evaporation phase. Similarly to Leconte et al. (2013), we do not find any moist stable states, while Wolf & Toon (2015), using CAM4, and Popp et al. (2016), using ECHAM6, do. Multiple discussions in the literature about this still open question (e.g. Kopparapu et al., 2017; Yang et al., 2019) have pointed out that the specific humidity of the stratosphere is a key factor to explain this difference. Indeed, the GCM inter-comparison done by Yang et al. (2019) highlights that the Generic-PCM2 tends to be wetter than CAM4 for planets around G-type stars. At the same time, the cloud fraction and the cloud thickness are higher in CAM4, inducing a cooling of the planet. This was discussed in Kopparapu et al. (2017), showing that Wolf & Toon (2015) found higher stratospheric temperatures than Leconte et al. (2013) for the same surface temperature.
High relative humidity in the stratosphere reduces the thermal emission from the upper layers, allowing a larger difference between the OLR and the ASR. This is the commonly accepted answer to explain the existence (or not) of a moist stable state. More inter-comparisons based on the method used in Yang et al. (2019), as well as high-resolution cloud-resolving simulations (e.g. Lefevre et al. 2021), should be done to settle this question. Using a 1D climate model, Popp et al. (2015) suggest that the convection scheme could be the answer to explain a potential moist runaway greenhouse. The number of Cloud Condensation Nuclei (CCN) may also impact the cloud formation, and thus the possibility of a moist state.

Figure 11: Evolution of the cloud forcing during the runaway greenhouse for a waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1). Solid lines are the evaporation and dry transition phases, while black dots are stable states assuming a 2-year average. Negative values indicate a cooling effect of the clouds and positive ones a warming effect. In the shortwave (blue curve), clouds have a cooling effect due to their high albedo reflecting the incoming stellar flux back to space. The red curve corresponds to the warming contribution due to the greenhouse effect of the clouds in the longwave (i.e. thermal emission absorbed by the clouds). The green curve is the net effect of the clouds, including shortwave and longwave contributions. The vertical grey lines are the temperatures for which 1 bar and 1.5 bar of water vapor are evaporated. These are used as initial conditions for the dry transition phase (colored dotted lines). Colored triangles are the values obtained in Turbet et al. (2021) for the same insolation (400 W.m\({}^{-2}\)) but including 10 bar of water vapor. The flux values are averaged over 2 years, which explains the gap between the evaporation and the dry transition phases.

Figure 12: Evolution of the Bond albedo as a function of the mean surface temperature from temperate stable states to post-runaway stable states. The simulation setup is the waterworld with 1 bar of nitrogen without CO\({}_{2}\) (W1). The insolation of the evaporation phase and the dry transition is equal to 400 W.m\({}^{-2}\). The vertical grey lines are the temperatures for which 1 bar and 1.5 bar of water vapor are evaporated. The albedo values are averaged over 2 years, which explains the gap between the evaporation and the dry transition phases. Pink squares and green triangles are steady states of the Earth's case from Leconte et al. (2013) and Wolf & Toon (2015) respectively. Dotted lines are albedo curves from reference works using 1D models (1 bar of N\({}_{2}\) from Goldblatt et al., 2013 and Earth's case from Kopparapu et al., 2013).

We compare our results for the Earth case with pre-industrial CO\({}_{2}\) quantities (ECO2) with major results from the literature (Leconte et al. 2013; Wolf & Toon 2015) in Fig. 13. We do not show a direct comparison of the ASR, but we compare the Bond albedo from simulation W1 with Leconte et al. (2013) and Wolf & Toon (2015) in Fig. 12. As shown by Fig. 13, we find slightly different global surface temperature and OLR values compared to Leconte et al. (2013) and Wolf & Toon (2015), even for the same insolation. This is not surprising given the numerous differences between the models highlighted by Yang et al. (2019).
In Leconte et al. (2013), they used the Generic-PCM, as in this work, but they mimicked the ocean through a large thermal inertia of the ground, while we use a more complex slab ocean (without heat transport). This, combined with the development of the model since then, may explain the differences between our results, especially in the relation between insolation and global surface temperature. Even if Wolf & Toon (2015) found a moist stable state, the evolution of the OLR as a function of the global surface temperature is qualitatively consistent with our simulations. The red dashed line is taken from Goldblatt et al. (2013) and corresponds to a pre-industrial composition with a subsaturated atmosphere. This curve is comparable with our ECO2 case to estimate the impact of 3D processes. The constant offset of the OLR values for temperate states between 1D and 3D simulations is due to the greenhouse effect of the clouds. In the same way, the large difference of OLR during the runaway greenhouse is due to the contribution of the thick cloud coverage. As discussed in several works (e.g. Ishiwatari et al. 2002; Abe et al. 2011; Leconte et al. 2013), reducing the relative humidity of 1D models can produce results in good agreement with 3D modelling for temperate stable states. However, large differences remain for the runaway greenhouse, as the key process is the evolution of the cloud coverage, which cannot be estimated using 1D models. From a qualitative point of view, the shape of the OLR curve we show is similar to historical estimations of the thermal emission of an atmosphere composed of H\({}_{2}\)O and N\({}_{2}\) (e.g. Nakajima et al. 1992), but the quantitative values are different because of the addition of more complex physical processes (non-grey atmosphere, clouds, global dynamics).

### 4.2 Limitations and prospects

The H\({}_{2}\)O+N\({}_{2}\) correlated-k table used in this work was made by Chaverot et al. (2022) using the HITRAN database (Gordon et al. 2017). Goldblatt et al. (2013) showed that the HITEMP database is more accurate and complete than HITRAN for high temperatures, and that this impacts the estimation of the OLR and ASR (see also Kopparapu et al. 2013 and Ramirez 2014). More precisely, they highlight, using a 1D climate model, that HITRAN overestimates the albedo and thus underestimates the ASR for temperatures beyond \(\approx\)360 K. As in 3D GCM simulations the relative humidity (thus the vapor partial pressure) is lower than in 1D models (Leconte et al. 2013), we can assume no significant difference in our simulations up to \(\approx\)400 K. Therefore, this does not affect the evaporation phase much, but it may change the dry transition phase and the final equilibrium temperature by slightly modifying the Bond albedo of the planet and the heating rates. This could be investigated, but we are confident in the global evolution described in this paper and in the reliability of the general mechanisms which determine the climate pattern transition between pre- and post-runaway states. We describe general processes and tendencies, and our results are in accordance with previous works using different GCMs. In every simulation, the heat capacity (c\({}_{\rm p}\)) and the mean molecular weight are assumed to be constant. This is inaccurate when the temperature increases strongly (because of the dependence of c\({}_{\rm p}\) on the temperature) and when the dominant gas changes (from nitrogen-dominated to water-dominated during the evaporation phase, see Fig. A.1 and Chaverot et al. 2022).
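For reference, the neglected dependence on composition is straightforward to write down for an N\({}_{2}\)/H\({}_{2}\)O mixture. A sketch using typical (themselves temperature-dependent) heat capacity values, not the values adopted in the model, is:

```python
def mixture_properties(q_vap, cp_n2=1040.0, cp_h2o=1860.0, m_n2=28.0, m_h2o=18.0):
    """Mass-weighted specific heat capacity (J.kg^-1.K^-1) and mean molecular
    weight (g.mol^-1) of an N2/H2O atmosphere as a function of the water vapor
    mass mixing ratio q_vap (kg.kg^-1)."""
    cp = (1.0 - q_vap) * cp_n2 + q_vap * cp_h2o
    mu = 1.0 / ((1.0 - q_vap) / m_n2 + q_vap / m_h2o)
    return cp, mu
```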
Our version of the Generic-PCM does not include variable heat capacity and molecular weight, but ongoing developments will allow us to study the impact of variations of these quantities in the near future. For consistency reasons, we do not update the value of c\({}_{\rm p}\) when we initialize the simulation of the dry transition phase.

Figure 13: Literature comparison of the OLR and the insolation as a function of the temperature. The black dots are the stable states corresponding to various insolations, while the full curves are the unstable states of the runaway greenhouse. The top panel indicates the insolation values of each stable state. The colored arrows represent the insolation for which the runaway greenhouse arises for each setup. The OLR and temperature values are averaged over 2 years and the colored fills are the 1-sigma uncertainties due to variability in the OLR calculation. Pink squares and green triangles are steady states of the Earth's case from Leconte et al. (2013) and Wolf & Toon (2015) respectively. The red dashed curve is a reference result from Goldblatt et al. (2013) for a pre-industrial atmosphere, using a 1D model assuming a saturation mixing ratio below 5%.

In this work, we describe in detail the simulation with 1 bar of nitrogen without CO\({}_{2}\) (W1) to explore the effect of water without additional absorption sources (nitrogen is assumed to be optically thin, except for the CIA). Therefore, the minimal absorption given by the correlated-k table is for a water volume mixing ratio equal to \(10^{-6}\). When the mixing ratio is lower, for example in the stratosphere at low temperature, the water vapor absorption is given by this threshold value. This approximation may induce an error for very low water content, but it becomes negligible when water is the dominant gas, in other words during the runaway greenhouse on which this work is focused. In Sect. 3.2, we describe a transition from a wet to a dry adiabat. This holds for low partial water pressures, but as shown by Selsis et al. (2023) it breaks down for pressures higher than roughly 1 bar (depending on the considered host star); in this case, the temperature profile follows a radiative profile. Finally, as we do not search for the exact tipping point triggering the runaway greenhouse, we probably overestimate the radiative imbalance by a few W.m\({}^{-2}\). This does not change the physics described in this work, but a study focused on this point could be interesting in order to estimate the evaporation timescale of different water reservoirs.

## 5 Conclusions

We study the runaway greenhouse with the 3D Generic-PCM in order to link temperate stable states to post-runaway stable states. The positive feedback of the runaway greenhouse is triggered by the strong absorption of the water vapor and driven by a large water inventory (i.e. a surface ocean) available for evaporation. From terrestrial climate conditions, we increase the insolation step-by-step in order to find the onset of the runaway greenhouse, following the method used in previous works (Leconte et al., 2013; Wolf & Toon, 2014; Popp et al., 2016). The originality of this work is to go beyond the onset and to study the transition toward post-runaway states. We show that there are two phases:

1. An "evaporation phase", during which the water vapor content and the cloud thickness increase rapidly;
2. When the ocean is considered entirely evaporated, a "dry transition phase", characterized by a switch from a wet adiabatic profile to a dry adiabatic profile.

During this second step, the temperature increases quickly, then the climate converges onto a new stable state at high temperature: a post-runaway state. We describe the evolution of the composition of the atmosphere as well as the changes of the cloud pattern in Sect. 3.1. We show that even if the tipping point of the runaway greenhouse depends on the simulation setup (e.g. continents, CO\({}_{2}\) as a minor constituent), the evolution path during the transition does not, for a given background gas pressure. As the bottom part of the atmosphere is water dominated, it becomes optically thick and the physics is decoupled from the surface conditions. In the same way, a low content of CO\({}_{2}\) (376 ppm) does not influence the climate beyond the onset (Fig. 1). The main characteristic of the first phase of the runaway greenhouse (evaporation phase) is the evolution of the cloud coverage. It grows because of the intense evaporation and it migrates toward the upper layers (Fig. 11). This migration increases the Bond albedo, creating a drop of the ASR (Fig. 2). The day-to-night distribution of the clouds is also affected. As shown by Fig. 4, the night-side is cloudier than the day-side, which tends to increase the greenhouse effect of the clouds. This competition between the warming and cooling effects of the clouds is shown in Fig. 12. We highlight that during the entire evaporation phase, the net effect of the clouds tends to cool down the planet. However, the addition of water vapor due to evaporation creates an extra absorption able to overshoot this cooling, leading to a roughly constant radiative imbalance (Fig. 2) which continuously warms the planet. The general atmospheric circulation is strongly affected by the evaporation and the evolution of the cloud coverage. As the top part of the atmosphere becomes wetter, it absorbs more stellar flux and the heating rate rises, creating an intense global circulation (Fujii et al., 2017). As visible in Fig. 5, strong stratospheric jets appear, substituting the usual Hadley cells of the temperate Earth. We also explore the potential reversibility of the runaway greenhouse. The idea is to use a runaway state as an initial state, then to reduce the insolation in order to condense the water back into the surface ocean. We show in Fig. 6 that the radiative imbalance characteristic of the runaway greenhouse creates a hysteresis loop on the insolation, making it resilient to small variations of the incoming flux. We also show that if a re-condensation happens, the OLR increases when the surface temperature decreases, following the exact same evolution pathway as during the evaporation phase. In a second step, when a given quantity of water is evaporated (1 bar or 1.5 bar), we consider the ocean as entirely evaporated and we remove it from the simulation (see Sect. 3.2). The absence of humidity sources at the bottom of the atmosphere induces a transition from a wet to a dry adiabatic profile (Fig. 7). Water vapor migrates towards the upper layers of the atmosphere and the lower clouds evaporate (Fig. 11). We also observe a more contrasted dichotomy between a cloudy night-side and a cloud-free day-side (Fig. 9), which is consistent with the results from Turbet et al. (2021) studying post-runaway states for similar atmospheres. Because of the re-evaporation of the clouds, the albedo decreases (Fig. 12) and their net effect tends to warm the planet (Fig. 11; see also Turbet et al., 2021).
This also reduces the radiative imbalance, allowing the simulation to converge on a post-runaway state, as shown in Fig. 10. This final state strongly depends on the quantity of water available in the atmosphere, in other words on the size of the initial water reservoir. Finally, by comparing our results to historical studies using 1D models, we show that 3D processes (clouds and dynamics) strongly impact the evolution of the climate during the first stages of the runaway greenhouse as well as during the dry transition phase. This is of major importance for discussing the potential observability of this kind of planet in reflected or emitted light.

## Acknowledgements

This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. GC and EB acknowledge the financial support of the SNSF (grant numbers 200021_197176 and 200020_215760). MT thanks the Gruber Foundation for its generous support to this research. MT acknowledges support from the Tremplin 2022 program of the Faculty of Science and Engineering of Sorbonne University. The authors thank the Generic-PCM team for the collaborative development and improvement of the model. The computations were performed at the University of Geneva on the Baobab and Yggdrasil clusters. This research has made use of NASA's Astrophysics Data System. We thank the two referees, Ravi K. Kopparapu and Colin Goldblatt, for their helpful comments.

## Data availability

The GCM outputs of the stable states (temperate and post-runaway) of each setup, as well as several snapshots along the runaway transition for setup W1, are available here: [https://zenodo.org/record/8325631](https://zenodo.org/record/8325631). Other data underlying this article will be shared on request to the corresponding author. The Generic-PCM (and documentation on how to use the model) can be downloaded from the SVN repository [https://svn.lmd.jussieu.fr/Planeto/trunk/LMDZ.GENERIC/](https://svn.lmd.jussieu.fr/Planeto/trunk/LMDZ.GENERIC/) (version 2521). More information and documentation are available on [http://www-planets.lmd.jussieu.fr](http://www-planets.lmd.jussieu.fr).
2309.04757
Anisotropy-assisted thermodynamic advantage of a local-spin thermal machine
We study quantum Otto thermal machines with a two-spin working system coupled by anisotropic interaction. Depending on the choice of different parameters, the quantum Otto cycle can function as different thermal machines, including a heat engine, refrigerator, accelerator and heater. We aim to investigate how the anisotropy plays a fundamental role in the performance of the quantum Otto engine (QOE) operating in different time scales. We find that while the efficiency of the engine increases with the increase in anisotropy for the quasistatic operation, quantum internal friction and incomplete thermalization degrade the performance in a finite-time cycle. Further, we study the QOE with one of the spins, the local spin, as the working system. We show that the efficiency of such an engine can surpass the standard quantum Otto limit, along with maximum power, thanks to the anisotropy. This can be attributed to quantum interference effects. We demonstrate that the enhanced performance of a local-spin QOE originates from the same interference effects as in a measurement-based QOE for their finite-time operation.
Chayan Purkait, Suman Chand, Asoka Biswas
2023-09-09T11:05:28Z
http://arxiv.org/abs/2309.04757v1
# Anisotropy-assisted thermodynamic advantage of a local-spin thermal machine

###### Abstract

We study quantum Otto thermal machines with a two-spin working system coupled by anisotropic interaction. Depending on the choice of different parameters, the quantum Otto cycle can function as different thermal machines, including a heat engine, refrigerator, accelerator and heater. We aim to investigate how the anisotropy plays a fundamental role in the performance of the quantum Otto engine (QOE) operating in different time scales. We find that while the engine's efficiency increases with the increase in anisotropy for the quasistatic operation, quantum internal friction and incomplete thermalization degrade the performance in a finite-time cycle. Further, we study the QOE with one of the spins (the 'local' spin) as the working system. We show that the efficiency of such an engine can surpass the standard quantum Otto limit, along with maximum power, thanks to the anisotropy. This can be attributed to quantum interference effects. We demonstrate that the enhanced performance of a local-spin QOE originates from the same interference effects as in a measurement-based QOE for their finite-time operation.

## I Introduction

Recent advancements in experimental techniques have made it possible to measure and control systems at the level of a single atom and molecule. This has accelerated as the size of quantum devices shrinks rapidly. Consequently, it becomes imperative to understand the thermodynamics of quantum systems. This demand has led to the study of thermal machines (heat engines, refrigerators, heaters, accelerators) at the atomic level [1; 2]. Several studies have been done in this direction [3], and it has been shown that non-classical features, viz., quantum coherence [4; 5; 6; 7; 8; 9], quantum correlation and entanglement [10; 11; 12; 13; 14; 15; 16], and non-thermal baths [17; 18; 19; 20; 21], can be exploited to enhance the performance of quantum thermal machines (QTMs).

Finite power is required in various practical applications of quantum technologies. Operating a quantum heat engine (QHE) quasistatically leads to vanishing power generation, which may not be useful in practice. Furthermore, the finite-time operation of QTMs may exploit genuinely non-classical properties in their performance [22; 23]. In fact, various QHE models have been studied in finite time. It has been shown that the non-Markovian character of the dynamics can speed up the control of a quantum system and improve the power output of a thermal machine [24; 25]. In other studies, it is found that quantum coherence can be harnessed to increase the power of QHEs [4; 7; 9; 17; 26] and the efficiency at maximum power (EMP) [27] as well. Furthermore, the role of quantum internal friction in the work extraction and performance of QHEs has been investigated [23; 28; 29; 30; 31; 32].

Coupled spin systems play an important role as working systems for QTMs [33; 34; 35; 36; 37; 38; 39; 40]. The coupling strength between the spins can serve as an additional control parameter for the cycle [41]. Such a system offers a platform to explore several aspects of quantum information theory [42; 43]. The anisotropy in the coupling between the spins adds further flexibility. The effect of such anisotropy on entanglement [44; 45; 46; 47; 48], teleportation [49; 50; 51] and the tripartite uncertainty bound [52] has been studied. Recently, the role of anisotropy in quantum batteries has been studied [53; 54; 55].
It was shown that the maximum power output of this battery can be enhanced by maintaining the anisotropy at low values. However, there are only a few studies on the effects of anisotropy on the performance of quantum thermal machines [56; 22]. In this work, we study the performance of the QOE with a two-spin working system coupled by Heisenberg's anisotropic XY interaction. Our investigation focuses on different time limits of the cycle: first, the quasistatic operation; second, nonadiabatic unitary processes; and third, incomplete thermalization in the hot isochoric process. For anisotropic interaction between the spins, the Hamiltonian does not commute with itself at different times, which introduces a genuinely quantum feature into the finite-time operation of the cycle [57]. We investigate how anisotropy affects the engine's performance for both the quasistatic and the finite-time operation of the engine. We show that the efficiency increases with increasing anisotropy for the quasistatic operation. For the finite-time operation, we show that irreversibility increases with increasing anisotropy.

We then consider a single-spin working system which is part of the global two-spin system; we call it a local system. We study the heat engine (HE) operation with such a local spin under the effect of the other spin when the two spins are coupled, and we call this a local-spin HE. Primarily, we aim to investigate how it differs from a single-spin HE which is not under the effect of another spin. Can we get any thermodynamic advantage in such a local scenario? Several studies have been conducted on QHEs and refrigerators that function with a local system [33; 35; 37; 58; 59; 60; 61; 62]. These studies primarily focused on the quasistatic operation, and also employed Hamiltonians that commute at different times. We want to explore how the anisotropic interaction, and therefore the non-commuting nature of the Hamiltonian, affects the performance of a local-spin HE. We show that for the quasistatic operation of the HE, work extraction locally is more powerful than globally, and also that the efficiency of a local-spin HE can outperform that of a single-spin HE. We also show that in the finite-time operation, the efficiency can be enhanced beyond the quasistatic limit, and that the maximum power is associated with the enhanced efficiency.

The paper is organized as follows. We present our HE model and the implementation of the cycle in **Sec. II**. In **Sec. III**, we discuss the various limiting cases of the time of the HE operation. Further, in **Sec. IV**, we explore the HE operation using a local-spin working system. **Sec. V** discusses potential experimental implementations of our HE model. Finally, we conclude our work in **Sec. VI**.

## II Implementation of the quantum Otto cycle

### System model

We consider a system of two spins coupled by an anisotropic XY interaction (with anisotropy parameter \(0\leq\gamma\leq 1\)) of Heisenberg type in a transverse time-dependent magnetic field \(B(t)\).
The Hamiltonian that describes this system can be written as (in units of \(\hbar=1\)) [63; 22] \[\hat{H}(t)=\hat{H}_{0}(t)+\hat{H}_{I}, \tag{1}\] where \(\hat{H}_{0}=B(t)(\hat{\sigma}_{1}^{z}+\hat{\sigma}_{2}^{z})\) represents the free part, \(\hat{H}_{I}=J\left[(1+\gamma)\hat{\sigma}_{1}^{x}\hat{\sigma}_{2}^{x}+(1-\gamma)\hat{\sigma}_{1}^{y}\hat{\sigma}_{2}^{y}\right]\) represents the interaction between the two spins with coupling strength \(J\), and \(\hat{\sigma}_{i}^{x,y,z}\) are the Pauli spin operators for the \(i\)th spin (\(i\in 1,2\)). When \(\gamma=0\), the above Hamiltonian describes an isotropic XX interaction, and for \(\gamma=1\) it reduces to the Ising spin Hamiltonian. Because \([\hat{H}_{0},\hat{H}_{I}]\neq 0\) for \(\gamma\neq 0\), it renders \([\hat{H}(t_{1}),\hat{H}(t_{2})]\neq 0\), which leads to a true quantum feature in the operation of the finite-time QHE [57; 22]. The eigenvalues and the corresponding eigenvectors of the total Hamiltonian (**Eq. 1**) are given by \[\begin{array}{ll}\left|\psi_{0,3}\right\rangle=\frac{1}{\sqrt{2}}\left(\frac{B\mp k}{\sqrt{k^{2}\mp Bk}}\left|11\right\rangle+\frac{\gamma J}{\sqrt{k^{2}\mp Bk}}\left|00\right\rangle\right),&E_{0,3}=\mp 2k\\ \left|\psi_{1,2}\right\rangle=\frac{1}{\sqrt{2}}(\mp\left|10\right\rangle+\left|01\right\rangle),&E_{1,2}=\mp 2J,\end{array} \tag{2}\] where \(k=\sqrt{B^{2}+\gamma^{2}J^{2}}\).

### Bath Model

To describe the dynamics of the system under a heat bath, the Lindblad master equation in the interaction picture can be obtained as [64; 19; 22] \[\begin{array}{ll}\frac{\partial\hat{\rho}}{\partial t}=&i[\hat{\rho},\hat{H}(t)]+\sum_{i=1,2}[\Gamma(n_{i}+1)\times\\ &(\hat{X}_{i}\hat{\rho}\hat{X}_{i}^{+}-\frac{1}{2}\hat{X}_{i}^{+}\hat{X}_{i}\hat{\rho}-\frac{1}{2}\hat{\rho}\hat{X}_{i}^{+}\hat{X}_{i})\\ &+\,\Gamma n_{i}(\hat{X}_{i}^{+}\hat{\rho}\hat{X}_{i}-\frac{1}{2}\hat{X}_{i}\hat{X}_{i}^{+}\hat{\rho}-\frac{1}{2}\hat{\rho}\hat{X}_{i}\hat{X}_{i}^{+})],\end{array} \tag{3}\] where we have considered only one spin of the coupled two-spin system interacting with a heat bath at temperature \(T\), to maintain the simplicity of the master equation. The sum over \(i\) runs over the transitions of the system induced by the heat bath, and the thermal photon numbers at the transition frequencies in the bath are \(n\left(\omega_{i}\right)=\left[\exp\left(\frac{\hbar\omega_{i}}{kT}\right)-1\right]^{-1}\). Here, \(\Gamma\) is the coupling constant between the system and the bath. Similarly, we can consider that each spin interacts with the bath. The jump operators of the system, when only the first spin interacts with the heat bath through the \(\sigma^{x}\) operator, are given by [64; 22] \[X_{1,2}=\frac{1}{2}\left(\frac{B+k\mp\gamma J}{\sqrt{k^{2}+Bk}}\left|\psi_{1,2}\right\rangle\left\langle\psi_{3}\right|+\frac{B-k\pm\gamma J}{\sqrt{k^{2}-Bk}}\left|\psi_{0}\right\rangle\left\langle\psi_{2,1}\right|\right). \tag{4}\] They correspond to transitions of energy \(\hbar\omega_{1}=2k+2J\) and \(\hbar\omega_{2}=2k-2J\), respectively.

### Quantum Otto Cycle and thermodynamic quantities

In the following, we discuss the implementation of the four strokes of the quantum Otto cycle. The schematic diagram of the cycle is shown in **Fig. 1**.

Figure 1: Schematic diagram of the quantum Otto cycle on the entropy (\(S\)) versus magnetic field (\(B\)) plane when it functions as a heat engine. For the other types of thermal machines, the directions of the heat flows and work differ.
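Before detailing the strokes, the model of Eqs. (1)-(2) can be checked numerically. A minimal QuTiP sketch (with illustrative parameter values) that builds the Hamiltonian and verifies its spectrum is:

```python
import numpy as np
from qutip import sigmax, sigmay, sigmaz, qeye, tensor

B, J, gam = 1.0, 1.0, 0.5     # illustrative values (all quantities in units of J)

sx1, sx2 = tensor(sigmax(), qeye(2)), tensor(qeye(2), sigmax())
sy1, sy2 = tensor(sigmay(), qeye(2)), tensor(qeye(2), sigmay())
sz1, sz2 = tensor(sigmaz(), qeye(2)), tensor(qeye(2), sigmaz())

H0 = B * (sz1 + sz2)                                           # free part of Eq. (1)
HI = J * ((1 + gam) * sx1 * sx2 + (1 - gam) * sy1 * sy2)       # anisotropic XY coupling
H = H0 + HI

k = np.sqrt(B**2 + gam**2 * J**2)
assert np.allclose(np.sort(H.eigenenergies()),
                   sorted([-2 * k, -2 * J, 2 * J, 2 * k]))     # spectrum of Eq. (2)
print((H0 * HI - HI * H0).norm())                              # nonzero for gam != 0
```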
**Unitary expansion (\(A\to B\))**: We assume that the cycle begins with the working system in thermal equilibrium with the cold bath at temperature \(T_{L}=1/\beta_{L}\) (\(k_{B}=1\)) at point A. The corresponding thermal state of the system is \(\hat{\rho}_{A}=e^{-\beta_{L}\hat{H}_{1}}/Z_{1}\), with \(\hat{H}_{1}=\hat{H}(0)\) and \(Z_{1}=\mathrm{Tr}(e^{-\beta_{L}\hat{H}_{1}})\). In this stroke, the working medium is disconnected from the cold heat bath, and the external magnetic field is changed from \(B_{L}\) to \(B_{H}\) (\(B_{L}<B_{H}\)) following the protocol \(B(t)=B_{L}+(B_{H}-B_{L})(t/\tau)\), where \(0\leq t\leq\tau\) and \(\tau\) is the timescale for changing the magnetic field from \(B_{L}\) to \(B_{H}\) or vice versa. At point B, the state of the system can be obtained as \(\hat{\rho}_{B}=\hat{U}(\tau)\hat{\rho}_{A}\hat{U}^{\dagger}(\tau)\), where \(\hat{U}(\tau)=\mathcal{T}e^{-i\int_{0}^{\tau}dt\hat{H}^{exp}(t)}\) is the time evolution operator and \(\mathcal{T}\) indicates time-ordering. The amount of work done by the system in this process is given by \(W_{1}=\langle E_{B}\rangle-\langle E_{A}\rangle\), where \(\langle E_{A}\rangle=\mathrm{Tr}(\hat{\rho}_{A}\hat{H}_{1})\) and \(\langle E_{B}\rangle=\mathrm{Tr}(\hat{\rho}_{B}\hat{H}_{2})\) represent the internal energies of the system at A and B, and \(\hat{H}_{2}=\hat{H}(\tau)\) represents the Hamiltonian of the system at B.

**Isochoric heating (\(B\to C\))**: In this stroke, the working medium is connected with a heat bath at temperature \(T_{H}\) (\(T_{H}>T_{L}\)), and the external magnetic field remains fixed at the value \(B_{H}\), so the Hamiltonian of the system remains fixed. Therefore, there is no work exchange in this stroke. If the process is carried out for a time \(t_{h}\) and the relaxation time of the system is \(t_{relax}\), then the case \(t_{h}>>t_{relax}\) means the system is completely thermalized; otherwise, the system is incompletely thermalized in this process. At the end of this process, the state of the system, in the case of complete thermalization, can be represented by \(\hat{\rho}_{C}=e^{-\beta_{H}\hat{H}_{2}}/Z_{2}\) at temperature \(T_{H}=1/\beta_{H}\) (\(k_{B}=1\)), with \(\hat{H}_{2}=\hat{H}(\tau)\) and \(Z_{2}=\mathrm{Tr}(e^{-\beta_{H}\hat{H}_{2}})\). In the case of incomplete thermalization, the state of the system can be obtained by solving **Eq. 3**. The system absorbs some amount of heat in this process, which can be calculated as \(Q_{H}=\langle E_{C}\rangle-\langle E_{B}\rangle\), where \(\langle E_{C}\rangle=\mathrm{Tr}(\hat{\rho}_{C}\hat{H}_{2})\) is the internal energy of the system at C.

**Unitary compression (\(C\to D\))**: In this stroke, the working system is again disconnected from the hot heat bath and the external magnetic field is changed from \(B_{H}\) to \(B_{L}\) following the protocol \(B(\tau-t)\), where \(0\leq t\leq\tau\). In this process, the state of the working system changes to \(\hat{\rho}_{D}=\hat{V}(\tau)\hat{\rho}_{C}\hat{V}^{\dagger}(\tau)\), where \(\hat{V}(\tau)=\mathcal{T}e^{-i\int_{0}^{\tau}dt\hat{H}^{com}(t)}\) is the time evolution operator with \(\hat{H}^{com}(t)=\hat{H}^{exp}(\tau-t)\). The amount of work done on the system in this process can be obtained as \(W_{2}=\langle E_{D}\rangle-\langle E_{C}\rangle\), where \(\langle E_{D}\rangle=\mathrm{Tr}(\hat{\rho}_{D}\hat{H}_{1})\) represents the internal energy of the system at D.
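Numerically, the driven strokes reduce to computing the time-ordered propagator under the ramp. A sketch for the expansion stroke, reusing the operators of the previous sketch and assuming QuTiP 4-style time dependence, is:

```python
from qutip import propagator

tau, T_L = 5.0, 1.0            # illustrative driving time and cold-bath temperature
B_L, B_H = 1.0, 4.0

H1 = B_L * (sz1 + sz2) + HI    # Hamiltonian at point A
H2 = B_H * (sz1 + sz2) + HI    # Hamiltonian at point B
rho_A = (-H1 / T_L).expm()
rho_A = rho_A / rho_A.tr()     # thermal state at T_L (k_B = 1)

# linear ramp B(t) = B_L + (B_H - B_L) t / tau driving the expansion stroke
H_exp = [HI, [sz1 + sz2, lambda t, args: B_L + (B_H - B_L) * t / tau]]
U = propagator(H_exp, tau)                             # U(tau) of the text
rho_B = U * rho_A * U.dag()                            # nonequilibrium state at B
W1 = (rho_B * H2).tr().real - (rho_A * H1).tr().real   # work in the expansion stroke
```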
**Isochoric cooling (\(D\to A\))**: In this stroke, the working system is connected with a cold heat bath at temperature \(T_{L}\), and the external magnetic field remains fixed at \(B_{L}\). If the process is carried out for a time \(t_{c}\), then the case \(t_{c}>>t_{relax}\) means that the system reaches thermal equilibrium with the heat bath at the end of this process. The state of the system comes back to the initial state \(\rho_{A}\), and the system releases some amount of heat in this process, which can be obtained as \(Q_{L}=\langle E_{A}\rangle-\langle E_{D}\rangle\).

### Operation of the quantum Otto cycle as different thermal machines

In this part, we study the parameter regimes of the different thermal-machine operations of the cycle [65; 66]. To do that, we have studied the thermodynamic quantities of the cycle, which are shown in **Fig. 2**. We observed that with different choices of the parameters, specifically \(T_{H}\) and \(\gamma\), the cycle can act as a HE, a refrigerator, an accelerator, or a heater. The cycle acts as an engine when the system absorbs some amount of heat from the hot bath (\(Q_{H}>0\)) and releases a portion of it to the cold bath (\(Q_{L}<0\)), the remaining portion being converted to work (\(W<0\)) in a complete cycle. It acts as a refrigerator when heat flows in the opposite direction, i.e., \(Q_{L}>0\) and \(Q_{H}<0\), with the help of a certain amount of work done on the system (\(W>0\)). It acts as a thermal accelerator when heat flows in the natural direction, i.e., \(Q_{H}>0\) and \(Q_{L}<0\), while work is done on the system (\(W>0\)). It operates as a heater when the system releases heat to both the hot and cold heat baths, i.e., \(Q_{H}<0\) and \(Q_{L}<0\), with the assistance of work done on the system (\(W>0\)). From **Fig. 2**, we can see that the operation regimes of the different thermal machines vary with the anisotropy parameter \(\gamma\). In our work, we mainly focus on the HE operation, as this mode is the most widely studied in thermodynamics. The thermodynamic quantities of the HE are as follows. The total work in a complete cycle can be obtained as \(W=W_{1}+W_{2}=-(Q_{H}+Q_{L})\). So, the efficiency of the HE is defined as \[\eta=-\frac{W_{1}+W_{2}}{Q_{H}}=\frac{Q_{H}+Q_{L}}{Q_{H}}\]

## III Operation of the heat engine in different time frames

In this section, we focus on the various limiting cases of the time over which the HE can be operated.

### Quasi-static operation

In this part, we consider that the two unitary processes (expansion and compression) in the cycle are carried out over a long time, such that these processes are adiabatic, i.e., there are no transitions between energy eigenstates. Furthermore, the two isochoric processes are carried out for long times, so the system is fully thermalized at the end of these processes. In these limiting cases of time, the cycle becomes quasi-static. The analytical expressions of the internal energies [for the derivation see **App. A**] of the working system at A, B, C, and D for a quasistatic cycle are given by \[\begin{split}\langle E_{A,B}\rangle&=-4K_{L,H}\frac{u_{1}}{Z_{1}}-4J\frac{v_{1}}{Z_{1}}\;,\\ \langle E_{C,D}\rangle&=-4K_{H,L}\frac{u_{2}}{Z_{2}}-4J\frac{v_{2}}{Z_{2}}\;,\end{split} \tag{5}\] where \(K_{L,H}=\sqrt{B_{L,H}^{2}+\gamma^{2}J^{2}}\), and \(Z_{1}=2\cosh(2K_{L}\beta_{L})+2\cosh(2J\beta_{L})\) and \(Z_{2}=2\cosh(2K_{H}\beta_{H})+2\cosh(2J\beta_{H})\) are the partition functions.
Also, \(u_{1}=\sinh(2K_{L}\beta_{L})\), \(u_{2}=\sinh(2K_{H}\beta_{H})\), \(v_{1}=\sinh(2J\beta_{L})\), and \(v_{2}=\sinh(2J\beta_{H})\). The thermodynamic quantities of the cycle can be obtained using **Eq. 5**, and the work in a complete cycle, \(W=W_{1}+W_{2}\), is given by
\[W=4(K_{L}-K_{H})\left(\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right). \tag{6}\]
Also, the heat absorption in the isochoric heating process is given by
\[Q_{H}=4K_{H}\left(\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right)+4J\left(\frac{v_{1}}{Z_{1}}-\frac{v_{2}}{Z_{2}}\right). \tag{7}\]
Therefore, the expression for the efficiency can be obtained as
\[\eta=\frac{-W}{Q_{H}}=1-\frac{K_{L}\left(\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right)+J\left(\frac{v_{1}}{Z_{1}}-\frac{v_{2}}{Z_{2}}\right)}{K_{H}\left(\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right)+J\left(\frac{v_{1}}{Z_{1}}-\frac{v_{2}}{Z_{2}}\right)}. \tag{8}\]
From the expression of the efficiency \(\eta\) (**Eq. 8**), we clearly observe that the efficiency depends on the temperatures of both the hot and cold baths, \(T_{H}\) and \(T_{L}\), and also on the values of the magnetic fields \(B_{L}\) and \(B_{H}\) and the anisotropy parameter \(\gamma\). Also, the two intermediate energy levels \(|\psi_{1,2}\rangle\) (**Eq. 2**) participate in the engine operation. By contrast, for the measurement-based QOE in a coupled two-spin system [22], the quasistatic efficiency does not depend on the temperature of the cold bath, and \(|\psi_{1,2}\rangle\) do not participate in the engine operation. Now, from **Fig. 3a**, we can observe that the quasistatic efficiency increases gradually and reaches a steady value at higher temperatures of the hot bath \(T_{H}\). To operate the engine with higher efficiency, we take the value \(T_{H}=10\) in the remaining part of the paper. Further, from the plot of efficiency as a function of work, **Fig. 3b**, we can observe that the work and the engine's efficiency both increase with the anisotropy parameter \(\gamma\), which is contrary to the measurement-based QOE, where the quasistatic efficiency decreases with increasing anisotropy parameter [22]. So, we can adjust the parameters of the cycle to achieve a higher efficiency of the engine.

Figure 2: Variation of the thermodynamic quantities \(W\), \(Q_{H}\) and \(Q_{L}\) as a function of the temperature (\(T_{H}\)) of the hot bath for different values of the anisotropy parameter: (a) \(\gamma=1\), and (b) \(\gamma=0\). The other parameters are \(B_{L}=1,B_{H}=4,J=1,T_{L}=1\).

Figure 3: (a) Variation of efficiency (\(\eta\)) as a function of the temperature (\(T_{H}\)) of the hot bath. (b) The parametric plot of the variable anisotropy (\(\gamma\)) on the work-efficiency plane when \(T_{H}=10\). \(\gamma\) varies from \(0\) to \(1\); the left side of the graph represents \(\gamma=0\) and the right side represents \(\gamma=1\). The other parameters are \(B_{L}=1,B_{H}=4,J=1,T_{L}=1\). The normalization parameter is \(J=1\) throughout this work; therefore, all quantities are in units of \(J\).
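For reference, the closed forms of Eqs. (6)-(8) are easy to evaluate numerically; the following sketch (ours, with the parameters of Fig. 3) scans the quasistatic work, heat, and efficiency over the anisotropy \(\gamma\).

```python
# Sketch: quasistatic W, Q_H and eta from Eqs. (6)-(8).
# Here u1, u2, v1, v2 are already divided by the partition functions Z1, Z2.
import numpy as np

def quasistatic(gam, BL=1.0, BH=4.0, J=1.0, TL=1.0, TH=10.0):
    bL, bH = 1.0 / TL, 1.0 / TH
    KL, KH = np.hypot(BL, gam * J), np.hypot(BH, gam * J)
    Z1 = 2 * np.cosh(2 * KL * bL) + 2 * np.cosh(2 * J * bL)
    Z2 = 2 * np.cosh(2 * KH * bH) + 2 * np.cosh(2 * J * bH)
    u1, u2 = np.sinh(2 * KL * bL) / Z1, np.sinh(2 * KH * bH) / Z2
    v1, v2 = np.sinh(2 * J * bL) / Z1, np.sinh(2 * J * bH) / Z2
    W = 4 * (KL - KH) * (u1 - u2)                    # Eq. (6)
    QH = 4 * KH * (u1 - u2) + 4 * J * (v1 - v2)      # Eq. (7)
    return W, QH, -W / QH                            # Eq. (8)

for gam in (0.0, 0.5, 1.0):
    W, QH, eta = quasistatic(gam)
    print(f"gamma={gam:.1f}: W={W:+.3f}  Q_H={QH:.3f}  eta={eta:.3f}")
```

Consistent with Fig. 3b, both the extracted work \(|W|\) and \(\eta\) grow with \(\gamma\) in this parameter range.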
### Unitary time evolution processes are time-dependent

In this part, we consider that the two unitary processes (expansion and compression) in the cycle are time-dependent, i.e., over a short time duration they become nonadiabatic, and over a long time duration they become adiabatic in nature. Also, we consider that the working system is completely thermalized in the two isochoric processes, i.e., the system is in thermal equilibrium with the baths at the end of the isochoric processes.

**Thermodynamic quantities in terms of transition probability:** Under the above physical conditions, the expressions of the internal energies [for the derivation see **App. A** and **App. B**] of the working system at A, B, C, and D of the cycle are given by
\[\begin{split}\langle E_{A,C}\rangle&=-4K_{L,H}\frac{u_{1,2}}{Z_{1,2}}-4J\frac{v_{1,2}}{Z_{1,2}}\;,\\ \langle E_{B,D}\rangle_{\tau}&=-4K_{H,L}(1-2\xi_{\tau})\frac{u_{1,2}}{Z_{1,2}}-4J\frac{v_{1,2}}{Z_{1,2}}\;,\end{split} \tag{9}\]
where \(\xi_{\tau}=|\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=|\langle\psi_{3}^{(2)}|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=|\langle\psi_{3}^{(1)}|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=|\langle\psi_{0}^{(1)}|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}\) represents the transition probability between the energy levels. The thermodynamic quantities of the HE can be calculated using **Eq. 9**. So, the work in a complete cycle is given by
\[W_{\tau}=4K_{L}\left[\frac{u_{1}}{Z_{1}}-(1-2\xi_{\tau})\frac{u_{2}}{Z_{2}}\right]-4K_{H}\left[(1-2\xi_{\tau})\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right]. \tag{10}\]
Also, the heat absorption in the isochoric heating process is given by
\[Q_{\tau}=4K_{H}\left[(1-2\xi_{\tau})\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right]+4J\left[\frac{v_{1}}{Z_{1}}-\frac{v_{2}}{Z_{2}}\right]. \tag{11}\]
Therefore, the expression for the efficiency, \(\eta_{\tau}=-W_{\tau}/Q_{\tau}\), is given by
\[\eta_{\tau}=1-\frac{K_{L}\left[\frac{u_{1}}{Z_{1}}-(1-2\xi_{\tau})\frac{u_{2}}{Z_{2}}\right]+J\left[\frac{v_{1}}{Z_{1}}-\frac{v_{2}}{Z_{2}}\right]}{K_{H}\left[(1-2\xi_{\tau})\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right]+J\left[\frac{v_{1}}{Z_{1}}-\frac{v_{2}}{Z_{2}}\right]}. \tag{12}\]
The plot of the transition probability (\(\xi_{\tau}\)) as a function of the time of the unitary processes (\(\tau\)) is shown in **Fig. 4a**. Now, if we put the value of \(\xi_{\tau}\) into the expression for the finite-time efficiency (**Eq. 12**), we get the plot of the efficiency (see **Fig. 4d**) as a function of \(\tau\). In **Fig. 4c,d**, we have shown how the work in a complete cycle (\(W_{\tau}\)) and the efficiency (\(\eta_{\tau}\)) vary with \(\tau\) for different values of \(\gamma\). These plots are produced using the QuTiP [67] package. The plots indicate that the engine's work output and efficiency are highly dependent on \(\tau\). The work and efficiency both degrade for very short durations and then gradually increase with increasing time, eventually reaching the adiabatic (quasistatic) values. As the Hamiltonian does not commute with itself at different times, the system cannot follow the instantaneous energy eigenstates. This induces nonadiabatic transitions between the instantaneous eigenstates of the Hamiltonian when the system is driven by an external control parameter [here \(B(t)\)] in finite-time unitary processes; therefore, the unitary processes become nonadiabatic. In this case, the work extraction in a complete cycle is reduced. The situation can be seen in this way: an extra amount of work needs to be performed in order to drive the system in finite time, which can be defined as the irreversible work
\[W_{\tau}^{Ir}=W_{\tau\rightarrow\infty}-W_{\tau}, \tag{13}\]
where \(W_{\tau\rightarrow\infty}\) is the quasistatic work given in **Eq. 6**. Once the driving process is completed and the system is coupled with the cold bath, the system dumps a larger amount of heat into the cold bath. This degrades the overall performance of the engine for finite-time unitary processes, which can be seen in **Fig. 4c,d**.
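The finite-time quantities can be checked numerically. The sketch below (ours; the anisotropic XY form of \(\hat{H}\) is again our assumption) computes \(\xi_{\tau}\) by direct propagation and then evaluates \(\eta_{\tau}\) from Eq. (12).

```python
# Sketch: xi_tau = |<psi_0^(2)|U(tau)|psi_3^(1)>|^2 by direct propagation,
# then eta_tau from Eq. (12).  gamma = 1 throughout this example.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = lambda B, J=1.0, g=1.0: (J * ((1 + g) * np.kron(sx, sx) + (1 - g) * np.kron(sy, sy))
                             + B * (np.kron(sz, I2) + np.kron(I2, sz)))

def xi(tau, BL=1.0, BH=4.0, steps=2000):
    dt = tau / steps
    U = np.eye(4, dtype=complex)
    for n in range(steps):
        U = expm(-1j * H(BL + (BH - BL) * (n + 0.5) / steps) * dt) @ U
    e1, P1 = np.linalg.eigh(H(BL)); e2, P2 = np.linalg.eigh(H(BH))
    psi3_1 = P1[:, np.argmax(e1)]   # top level (+2K_L) of H_1; valid here since 2K > 2J
    psi0_2 = P2[:, np.argmin(e2)]   # ground level (-2K_H) of H_2
    return abs(psi0_2.conj() @ U @ psi3_1) ** 2

def eta_tau(x, BL=1.0, BH=4.0, J=1.0, TL=1.0, TH=10.0):
    bL, bH = 1 / TL, 1 / TH
    KL, KH = np.hypot(BL, J), np.hypot(BH, J)       # gamma = 1
    Z1 = 2 * np.cosh(2 * KL * bL) + 2 * np.cosh(2 * J * bL)
    Z2 = 2 * np.cosh(2 * KH * bH) + 2 * np.cosh(2 * J * bH)
    u1, u2 = np.sinh(2 * KL * bL) / Z1, np.sinh(2 * KH * bH) / Z2
    v = np.sinh(2 * J * bL) / Z1 - np.sinh(2 * J * bH) / Z2
    num = KL * (u1 - (1 - 2 * x) * u2) + J * v
    den = KH * ((1 - 2 * x) * u1 - u2) + J * v
    return 1 - num / den                             # Eq. (12)

for tau in (0.1, 1.0, 10.0):
    x = xi(tau)
    print(f"tau={tau:5.1f}: xi={x:.4f}  eta={eta_tau(x):.4f}")
```

For \(\tau\to\infty\), \(\xi_{\tau}\to 0\) and the quasistatic efficiency of Eq. (8) is recovered.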
This is known as quantum internal friction [28; 29; 30; 31; 32] and is quantified by \(W_{\tau}^{Ir}\). The irreversible work (\(W_{\tau}^{Ir}\)) represents the irreversibility in the engine performance, which is also linked to entropy production in the system during the finite-time driving process. The plot of \(W_{\tau}^{Ir}\) with respect to \(\tau\) is shown in **Fig. 4b**. The plot indicates that in the short-time limit (nonadiabatic regime), the larger the anisotropy (\(\gamma\)), the larger the irreversible work. Therefore, we can say that the irreversibility increases with the increase of the anisotropy (\(\gamma\)). For \(\gamma=1\), the system becomes an Ising spin model, which gives rise to maximum irreversibility in finite-time operation, and for \(\gamma=0\), the system becomes a Heisenberg XX model, which gives rise to reversible operation of the cycle irrespective of the time duration of the unitary processes. In the adiabatic limit, i.e., \(\tau\rightarrow\infty\), there is no transition between the instantaneous energy eigenstates. Therefore, in the limit \(\tau\rightarrow\infty\), we can write \(\xi_{\tau}=|\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}\stackrel{{\tau\rightarrow\infty}}{{\longrightarrow}}0\), which gives rise to \(W_{\tau}=W\), \(W_{\tau}^{Ir}=0\), and \(\eta_{\tau}=\eta\). Therefore, the expression of the quasistatic efficiency (**Eq. 8**) is recovered by putting \(\xi_{\tau}=0\) into the expression of the finite-time efficiency (**Eq. 12**).

Figure 4: Variation of (a) the transition probability (\(\xi_{\tau}\)) between two instantaneous energy eigenstates, (b) the irreversible work \(W_{\tau}^{Ir}\), (c) the work (\(W_{\tau}\)) in a complete cycle, and (d) the efficiency \(\eta_{\tau}\), as a function of the time of the unitary processes (\(\tau\)) for different values of the anisotropy parameter (\(\gamma\)). The other parameters are the same as in **Fig. 3**.

### Hot isochoric process is time-dependent

In this part, we consider that the hot isochoric process is time-dependent [68; 23]. Therefore, we have different thermalization scenarios of the working system depending on the duration of this process. In the case \(t_{h}>>t_{relax}\), the system is completely thermalized; otherwise, the system is incompletely thermalized. We will investigate how the different time scales, particularly incomplete thermalization, affect the performance of the HE. We also consider that the time for the unitary processes is long enough that these processes are adiabatic in nature. Under the above-mentioned conditions, the states of the working system at points A and B can be represented by the states given in **App. A**. But, to determine the state at point C, we need to solve the master equation (**Eq. 3**), and after that, to determine the state at D, we need to solve the von Neumann equation (i.e., **Eq. 3** without its dissipative part). To understand the thermalization of the working system, the trace distance between two states, defined as \(D(\rho,\sigma)=\frac{1}{2}\operatorname{Tr}|\rho-\sigma|\)[68], has been studied, where the two states are the reference thermal state represented by **Eq. A1** and the time-evolved state obtained by solving **Eq. 3**. The plot of the trace distance (\(D\)) with respect to the time of the isochoric process (\(t_{h}\)) is shown in **Fig. 5c**. We found that the thermalization time increases with the increase of the anisotropy (\(\gamma\)) [22].
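For completeness, the trace distance used here is straightforward to implement; the relaxation model in the sketch below (ours) is only a crude exponential interpolation standing in for the dissipative dynamics of **Eq. 3**, which is not reproduced here.

```python
# Sketch: trace distance D(rho, sigma) = (1/2) Tr|rho - sigma| and a toy
# relaxation model (NOT Eq. (3)) illustrating how D decays towards zero.
import numpy as np

def trace_distance(rho, sigma):
    # for Hermitian rho - sigma, Tr|A| is the sum of |eigenvalues|
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def toy_relax(rho0, rho_th, Gamma, t):
    # crude stand-in: exponential interpolation towards the thermal state
    w = np.exp(-Gamma * t)
    return w * rho0 + (1 - w) * rho_th

rho0 = np.diag([1.0, 0.0])          # illustrative 2x2 states
rho_th = np.diag([0.7, 0.3])
for t in (0.0, 10.0, 50.0):
    print(t, trace_distance(toy_relax(rho0, rho_th, 0.1, t), rho_th))
```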
The plots of the heat absorption (\(Q_{Ht}\)) of the working system from the hot bath and the work done in a complete cycle as functions of \(t_{h}\) are shown in **Fig. 5a,b**. These plots show that \(Q_{Ht}\) increases with the increase of \(t_{h}\) and then reaches a steady value when the system is completely thermalized. Also, with the increase in \(Q_{Ht}\), the system has a larger amount of energy available to perform work in a complete cycle; therefore, the work done increases with \(t_{h}\) and reaches a steady value at longer \(t_{h}\). The plot of the efficiency (\(\eta_{t}\)) with respect to \(t_{h}\) is shown in **Fig. 5d**. For lower values of \(\gamma\), the work (\(W_{t}\)) increases more slowly with \(t_{h}\) than the significantly increasing heat absorption (\(Q_{Ht}\)), which gives rise to a slow increase in \(\eta_{t}\). For larger \(\gamma\), at very short \(t_{h}\), \(W_{t}\) increases significantly faster than \(Q_{Ht}\), which gives rise to a sudden increase in the efficiency; in the intermediate regime of \(t_{h}\), \(Q_{Ht}\) increases significantly while \(W_{t}\) does not increase as much, and therefore the efficiency decreases in the mid-regime of the isochoric heating time. At large values of \(t_{h}\), both \(Q_{Ht}\) and \(W_{t}\) become steady, and \(\eta_{t}\) becomes steady for all values of \(\gamma\), reaching the quasistatic value of the efficiency (see **Sec. III A**).

Figure 5: Variation of (a) the heat absorption of the system (\(Q_{Ht}\)), (b) the work in a complete cycle, (c) the trace distance (\(D\)) between two states, one for incomplete thermalization and the other the thermal state at C, and (d) the efficiency of the HE, as a function of the time of the isochoric heating process (\(t_{h}\)) for different values of the anisotropy parameter (\(\gamma\)). For \(\gamma=0\), the thermalization time is around 75, whereas for \(\gamma=1\) it is around 100 (in units of \(J\)), if the accuracy in the trace distance is taken to be of the order \(10^{-5}\). The other parameters are the same as in **Fig. 3**, and \(\Gamma=0.1\).

## IV Heat engine operation of a local system

In the previous section, we have considered that the coupled two-spin system, let's say a global system, is operated in the quantum Otto cycle as illustrated in **Sec. II.3**.

Figure 6: Schematic diagram of the quantum Otto cycle on the entropy (\(S\)) vs magnetic field (\(B\)) plane when it functions as a heat engine. We consider a single local spin as a working system when the coupled two-spin global system is operated in the Otto cycle.

In this section, we will consider a single spin which is a part of the global system, let's call it a local system, as the working system. We will study the QOE operation with such a local spin under the effect of the other spin. The primary aim is to investigate how the HE operation with a local spin differs from an engine operating with a single-spin working system that is not under the effect of another spin. We want to illustrate the thermodynamic benefits of a local approach to the HE operation. Quantum heat engines and refrigerators that function with local systems have received significant attention in recent studies [33; 35; 37; 39; 58; 59; 60; 61; 62]. These studies primarily focused on analyzing the quasistatic operation of the cycle and also employed Hamiltonians that commute at different times.
In contrast, our Hamiltonian does not commute with itself at different times (see **Eq. 1**), which may give rise to some unique characteristics [22] in the finite-time behaviour of the HE operating with a local spin working system. The primary objective is to explore how the non-commuting nature of the Hamiltonian impacts the performance of a local spin HE. Now, to study the thermodynamics of a local spin, we will trace out one spin from the states of the global two-spin system at A, B, C, and D of the cycle (**Sec. II.3**), which will give us the states of the local spin. If the states of the global two-spin system are represented by \(\rho_{j}\), where \(j\in A,B,C,D\) (see **App. A**, **App. B**), then the reduced density matrices for the first local spin are given by
\[\rho_{jL}=\langle 0_{2}|\rho_{j}|0_{2}\rangle+\langle 1_{2}|\rho_{j}|1_{2}\rangle,\]
where the subscript 2 indicates that the second spin is traced out (either spin can be traced out). Therefore, the internal energies of the local spin can be obtained as \(\langle E_{j}\rangle_{L}=\text{tr}(H_{jL}\rho_{jL})\), where \(H_{1L}=B_{L}\sigma_{z}\) for \(j\in A,D\), and \(H_{2L}=B_{H}\sigma_{z}\) for \(j\in B,C\), represent the Hamiltonians of the local spin. The thermodynamic quantities of a local spin can be defined in a similar way as for the global system (see **Sec. II.3**). Heat absorption in the isochoric heating process is given by \(Q_{HL}=\langle E_{C}\rangle_{L}-\langle E_{B}\rangle_{L}\), work in the unitary expansion is defined as \(W_{1L}=\langle E_{B}\rangle_{L}-\langle E_{A}\rangle_{L}\), and that in the unitary compression is defined as \(W_{2L}=\langle E_{D}\rangle_{L}-\langle E_{C}\rangle_{L}\), so the work in a complete cycle is \(W_{L}=W_{1L}+W_{2L}\).

### Quasistatic operation of the cycle

Let's consider that the cycle (see **Sec. II.3**) for the global system is carried out quasistatically; therefore, the two unitary processes are adiabatic, and the system is completely thermalized in the two isochoric processes. The expressions [for the derivation see **App. E**] of the internal energies for the local spin are given by
\[\langle E_{A,D,B,C}\rangle_{L}=-2B_{L,L,H,H}(1-a_{L,L,H,H}^{2})\frac{u_{1,2,1,2}}{Z_{1,2,1,2}}\;, \tag{14}\]
where \(a_{L,H}=\frac{B_{L,H}-K_{L,H}}{\sqrt{K_{L,H}^{2}-B_{L,H}K_{L,H}}}\). The thermodynamic quantities of the local spin are given by
\[\begin{split}& W_{L}=2\left[B_{L}(1-a_{L}^{2})-B_{H}(1-a_{H}^{2})\right]\left(\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right),\\ & Q_{HL}=2B_{H}(1-a_{H}^{2})\left(\frac{u_{1}}{Z_{1}}-\frac{u_{2}}{Z_{2}}\right).\end{split} \tag{15}\]

**Comparison between global and local work extraction:** Now, to find the potential figure of merit of the local approach, we will compare the local work extraction with the global work extraction for the two-spin system. To do that, we will study the quantity \(W_{G}-2W_{L}\), where \(W_{G}\) (**Eq. 6**) represents the work for the global two-spin system, \(W_{L}\) (**Eq. 15**) represents the work for a local spin, and the factor of 2 accounts for the contributions of the two local spins. The quantity \(W_{G}-2W_{L}\) can be calculated as
\[\begin{split}& W_{G}-2W_{L}=4\left[(K_{H}-B_{H})-(K_{L}-B_{L})+\right.\\ &\left.(B_{H}a_{H}^{2}-B_{L}a_{L}^{2})\right]\times\left(\frac{\sinh 2K_{L}\beta_{L}}{Z_{1}}-\frac{\sinh 2K_{H}\beta_{H}}{Z_{2}}\right).\end{split} \tag{16}\]
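Before turning to the \(\gamma\) dependence, a quick numerical check (ours) of the global-versus-local comparison can be made from the closed forms, evaluating \(W_{G}\) from Eq. (6) and \(2W_{L}\) from Eq. (15) with \(a_{L,H}^{2}=(K_{L,H}-B_{L,H})/K_{L,H}\), which follows from the definition of \(a_{L,H}\) above.

```python
# Sketch: global work W_G (Eq. 6) versus twice the local work 2 W_L (Eq. 15).
import numpy as np

def global_vs_local(gam, BL=1.0, BH=4.0, J=1.0, TL=1.0, TH=10.0):
    bL, bH = 1.0 / TL, 1.0 / TH
    KL, KH = np.hypot(BL, gam * J), np.hypot(BH, gam * J)
    Z1 = 2 * np.cosh(2 * KL * bL) + 2 * np.cosh(2 * J * bL)
    Z2 = 2 * np.cosh(2 * KH * bH) + 2 * np.cosh(2 * J * bH)
    du = np.sinh(2 * KL * bL) / Z1 - np.sinh(2 * KH * bH) / Z2  # u1/Z1 - u2/Z2
    WG = 4 * (KL - KH) * du                        # Eq. (6)
    aL2, aH2 = (KL - BL) / KL, (KH - BH) / KH      # a^2 = (K - B)/K
    WL = 2 * (BL * (1 - aL2) - BH * (1 - aH2)) * du  # Eq. (15)
    return WG, 2 * WL

for gam in (0.0, 0.5, 1.0):
    WG, WL2 = global_vs_local(gam)
    print(f"gamma={gam:.1f}: W_G={WG:+.3f}  2W_L={WL2:+.3f}  "
          f"|2W_L|-|W_G|={abs(WL2) - abs(WG):+.3f}")
```

With these conventions the extracted work is negative; the positive entries in the last column show that the summed local work extraction exceeds the global one for \(\gamma>0\), as stated in the text.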
For the isotropic interaction, i.e., in the limit \(\gamma\to 0\), \(K_{H}\to B_{H}\), \(K_{L}\to B_{L}\), \(a_{H}^{2}\to 0\), and also \(a_{L}^{2}\to 0\), so \(W_{G}-2W_{L}=0\). The variation of \(W_{G}-2W_{L}\) with respect to \(\gamma\) is shown in **Fig. 7**. The plot shows that \(W_{G}<2W_{L}\) if the two spins are coupled by an anisotropic interaction. The case \(\gamma>0\) gives rise to \((K_{H}-B_{H})<(K_{L}-B_{L})\) and also \(a_{H}^{2}<a_{L}^{2}\), so \(W_{G}-2W_{L}<0\), i.e., the sum of the local work from each local spin surpasses the global work from the global system. Therefore, we can say that extracting work locally is better than extracting it globally in the QOE operation with a two-spin system coupled by an anisotropic interaction.

Figure 7: Variation of the work difference \(W_{G}-2W_{L}\) as a function of the anisotropy parameter \(\gamma\). The figure in the inset represents the variation of the efficiency for a local system as a function of the anisotropy parameter (\(\gamma\)). The efficiency of a single-spin QOE is 0.75 for the values \(B_{L}=1,B_{H}=4\). The other parameters are the same as in **Fig. 3**.

**Comparison between the efficiencies of a local spin and a single-spin QOE:** The efficiency of the QOE cycle followed by the local spin, \(\eta_{Lq}=-\frac{W_{L}}{Q_{HL}}\), is given by
\[\eta_{Lq}=1-\frac{B_{L}\left(1-a_{L}^{2}\right)}{B_{H}\left(1-a_{H}^{2}\right)}. \tag{17}\]
The expression for the efficiency of the local spin shows that it depends on \(\gamma\) through \(a_{L,H}\). If a QOE operates with a single-spin working system under the same physical conditions of \(B_{L}\) and \(B_{H}\) (or the same compression ratio \(B_{L}/B_{H}\)), then the efficiency of a single-spin QOE is given by [66; 69]
\[\eta_{S}=1-\frac{B_{L}}{B_{H}}. \tag{18}\]
We can see that \(\gamma\geq 0\) makes the quantity \((1-a_{L}^{2})/(1-a_{H}^{2})\leq 1\), which gives rise to \(\eta_{Lq}\geq\eta_{S}\). Therefore, as \(\gamma\) increases, the quantity \((1-a_{L}^{2})/(1-a_{H}^{2})\) falls further below \(1\), which makes \(\eta_{Lq}\) (**Eq. 17**) increasingly larger than \(\eta_{S}\) (**Eq. 18**); for \(\gamma=0\), we get \(\eta_{Lq}=\eta_{S}\). All of this can be seen in the plot of the efficiency of the local spin QOE as a function of \(\gamma\) (inset of **Fig. 7**). The local spin QOE outperforms the single-spin QOE for \(\gamma>0\). Therefore, we can say that the efficiency of a QOE operating with a local spin working system, in conjunction with another spin with an anisotropic interaction between the spins, can surpass the standard quantum Otto limit.
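A one-line comparison of Eqs. (17) and (18) makes the local advantage explicit (our sketch; the same identity \(a^{2}=(K-B)/K\) as before):

```python
# Sketch: quasistatic local efficiency (Eq. 17) vs the single-spin Otto
# efficiency eta_S = 1 - B_L/B_H (Eq. 18).
import numpy as np

def eta_Lq(gam, BL=1.0, BH=4.0, J=1.0):
    KL, KH = np.hypot(BL, gam * J), np.hypot(BH, gam * J)
    aL2, aH2 = (KL - BL) / KL, (KH - BH) / KH
    return 1 - BL * (1 - aL2) / (BH * (1 - aH2))

eta_S = 1 - 1.0 / 4.0                    # 0.75 for B_L = 1, B_H = 4
for gam in (0.0, 0.5, 1.0):
    print(f"gamma={gam:.1f}: eta_Lq={eta_Lq(gam):.4f}  (eta_S={eta_S:.2f})")
```

For \(\gamma=1\) this gives \(\eta_{Lq}\approx 0.82\), visibly above the single-spin Otto value of 0.75.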
### Finite time operation: unitary processes are time-dependent

In this section, we consider that the two unitary processes in the cycle (see **Sec. II.3**) for the global two-spin system are carried out in a finite time \(\tau\), i.e., they are nonadiabatic in nature, while the thermalization of the working system in the hot isochoric process is complete. The expressions of the internal energies [for the derivation see **App. F**] of the local spin in terms of transition probabilities are given by
\[\begin{split}\langle E_{A,D}\rangle_{L}&=-2B_{L}(1-2\delta_{\tau\rightarrow\infty,\tau})\frac{u_{1,2}}{Z_{1,2}}\;,\\ \langle E_{B,C}\rangle_{L}&=-2B_{H}(1-2\lambda_{\tau,\tau\rightarrow\infty})\frac{u_{1,2}}{Z_{1,2}}\;,\end{split} \tag{19}\]
where \(\lambda_{\tau}=|\langle 00|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=|\langle 11|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}\), and \(\delta_{\tau}=|\langle 11|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=|\langle 00|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}\) represent the non-zero overlaps between the basis states of the two-spin system and the instantaneous energy eigenstates. In the adiabatic limit, i.e., \(\tau\rightarrow\infty\), \(\lambda_{\tau}\) and \(\delta_{\tau}\) become \(\lambda_{\tau\rightarrow\infty}=a_{H}^{2}/2\) and \(\delta_{\tau\rightarrow\infty}=a_{L}^{2}/2\), respectively, illustrating that the finite-time average internal energies (see **Eq. 19**) approach the quasistatic average internal energies (see **Eq. 14**). The thermodynamic quantities of the local spin are given by
\[\begin{split} W_{L\tau}=&-2\Big[\frac{u_{1}}{Z_{1}}\left[B_{H}(1-2\lambda_{\tau})-B_{L}(1-2\delta_{\tau\rightarrow\infty})\right]\\ &+\frac{u_{2}}{Z_{2}}\left[B_{L}(1-2\delta_{\tau})-B_{H}(1-2\lambda_{\tau\rightarrow\infty})\right]\Big],\\ Q_{HL\tau}=&-2B_{H}\left[\frac{u_{2}}{Z_{2}}(1-2\lambda_{\tau\rightarrow\infty})-\frac{u_{1}}{Z_{1}}(1-2\lambda_{\tau})\right].\end{split}\]
So, the efficiency, \(\eta_{L\tau}=-\frac{W_{L\tau}}{Q_{HL\tau}}\), of the HE cycle experienced by the local spin in finite time is given by
\[\eta_{L\tau}=1-\frac{B_{L}\left[\frac{u_{2}}{Z_{2}}(1-2\delta_{\tau})-\frac{u_{1}}{Z_{1}}\left(1-2\delta_{\tau\rightarrow\infty}\right)\right]}{B_{H}\left[\frac{u_{2}}{Z_{2}}\left(1-2\lambda_{\tau\rightarrow\infty}\right)-\frac{u_{1}}{Z_{1}}(1-2\lambda_{\tau})\right]}. \tag{20}\]
It can be seen that the finite-time local efficiency depends on the temperatures of the heat baths, since the coefficients \(u_{1},u_{2}\) depend on the temperatures, whereas the quasistatic local efficiency does not. Plots of the transition probabilities (\(\lambda_{\tau},\delta_{\tau}\)) with respect to \(\tau\) are shown in **Fig. 8**. If we put the values of \(\lambda_{\tau}\) and \(\delta_{\tau}\) into the expression for the efficiency (**Eq. 20**), we get the plot of the efficiency with respect to \(\tau\), which is also shown in **Fig. 8**. This plot shows that there is an oscillatory dependence of the efficiency on \(\tau\) for \(\gamma\neq 0\). Depending on the exact value of \(\tau\) in the short-time regime, a local spin QHE can either underperform or outperform the counterpart operating in the adiabatic limit. Thus, by adjusting the time of the unitary processes, the efficiency of a local spin QOE can be enhanced beyond its quasistatic limit. Over a long time duration, i.e., in the adiabatic limit (\(\tau\rightarrow\infty\)), the efficiency gradually approaches the adiabatic (quasistatic) value (see **Sec. IV.1**); in that case, the local spin efficiency represented by **Eq. 20** reduces to **Eq. 17**. In the sudden quench limit, i.e., \(\tau\to 0\), the external magnetic field is changed from \(B_{L}\) to \(B_{H}\) or vice versa suddenly; in this case, both \(\delta_{\tau}\) and \(\lambda_{\tau}\) attain their sudden-quench values, which can be obtained as \(\lambda_{\tau\to 0}=|\langle 00|\psi_{3}^{(1)}\rangle|^{2}=|\langle 11|\psi_{0}^{(1)}\rangle|^{2}\) and \(\delta_{\tau\to 0}=|\langle 11|\psi_{0}^{(2)}\rangle|^{2}=|\langle 00|\psi_{3}^{(2)}\rangle|^{2}\), since in this case \(\hat{U}(\tau),\hat{V}(\tau)\rightarrow\mathds{1}\). The engine's performance degrades in this case (see **Fig. 8**). Also, in the adiabatic limit, i.e., \(\tau\rightarrow\infty\), both \(\lambda_{\tau}\) and \(\delta_{\tau}\) reach their adiabatic values \(\lambda_{\tau\rightarrow\infty}\) and \(\delta_{\tau\rightarrow\infty}\). In between these two limiting cases, there is an oscillation in \(\delta_{\tau},\lambda_{\tau}\) with respect to \(\tau\).
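The overlaps \(\lambda_{\tau}\) and \(\delta_{\tau}\) can again be obtained by direct propagation (our sketch; the anisotropic XY form of \(\hat{H}\) is an assumption):

```python
# Sketch: lambda_tau = |<00|U(tau)|psi_3^(1)>|^2 and
# delta_tau = |<11|V(tau)|psi_0^(2)>|^2 by direct propagation.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = lambda B, J=1.0, g=1.0: (J * ((1 + g) * np.kron(sx, sx) + (1 - g) * np.kron(sy, sy))
                             + B * (np.kron(sz, I2) + np.kron(I2, sz)))

def ramp_propagator(B_from, B_to, tau, steps=2000):
    dt = tau / steps
    U = np.eye(4, dtype=complex)
    for n in range(steps):
        U = expm(-1j * H(B_from + (B_to - B_from) * (n + 0.5) / steps) * dt) @ U
    return U

def lam_delta(tau, BL=1.0, BH=4.0):
    U = ramp_propagator(BL, BH, tau)       # expansion U(tau)
    V = ramp_propagator(BH, BL, tau)       # compression V(tau), reversed ramp
    e1, P1 = np.linalg.eigh(H(BL)); e2, P2 = np.linalg.eigh(H(BH))
    psi3_1 = P1[:, np.argmax(e1)]          # +2K_L level of H_1
    psi0_2 = P2[:, np.argmin(e2)]          # -2K_H level of H_2
    ket00, ket11 = np.eye(4)[0], np.eye(4)[3]
    return abs(ket00 @ U @ psi3_1) ** 2, abs(ket11 @ V @ psi0_2) ** 2

for tau in (0.1, 0.5, 2.0, 20.0):
    lam, dlt = lam_delta(tau)
    print(f"tau={tau:5.1f}: lambda={lam:.4f}  delta={dlt:.4f}")
```

For large \(\tau\) these approach the adiabatic values \(a_{H}^{2}/2\) and \(a_{L}^{2}/2\) quoted above.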
The oscillation in the efficiency arises mainly from the oscillation of the transition probabilities \(\delta_{\tau},\lambda_{\tau}\) at finite times of the unitary processes, which can be attributed to interference-like phenomena between two probability amplitudes; this can be seen if we rewrite \(\lambda_{\tau},\delta_{\tau}\) in the form given in **Eq. 21**:
\[\begin{split}\lambda_{\tau},\delta_{\tau}=&\left|\frac{\sqrt{2}a_{H,L}}{a_{H,L}d_{H,L}-b_{H,L}c_{H,L}}\langle\psi_{3}^{(2)}|\hat{U}(\tau),\hat{V}(\tau)|\psi_{3}^{(1)}\rangle\right.\\ &-\left.\frac{\sqrt{2}c_{H,L}}{a_{H,L}d_{H,L}-b_{H,L}c_{H,L}}\langle\psi_{0}^{(2)}|\hat{U}(\tau),\hat{V}(\tau)|\psi_{3}^{(1)}\rangle\right|^{2},\end{split} \tag{21}\]
where \(b_{H,L}=\frac{\gamma J}{\sqrt{K_{H,L}^{2}-B_{H,L}K_{H,L}}}\), \(c_{H,L}=\frac{B_{H,L}+K_{H,L}}{\sqrt{K_{H,L}^{2}+B_{H,L}K_{H,L}}}\), and \(d_{H,L}=\frac{\gamma J}{\sqrt{K_{H,L}^{2}+B_{H,L}K_{H,L}}}\). Although the oscillation in \(\delta_{\tau}\) is less prominent than that in \(\lambda_{\tau}\) for the parameter values we are using for the engine operation (see **Fig. 8**), in other regions of parameter space, particularly of \(B_{H}\), the oscillation in \(\delta_{\tau}\) can be significant. From **Fig. 8** we can see that when \(\lambda_{\tau}\) goes below \(\lambda_{\tau\rightarrow\infty}\), the finite-time HE outperforms the counterpart operating in the adiabatic limit (\(\tau\rightarrow\infty\)). Also, it can be shown that for \(\gamma=0\) the efficiency does not change with \(\tau\), because there is no interference-like effect in this case [22]. The plot of the efficiency of the local spin HE with respect to the anisotropy parameter \(\gamma\) is shown in **Fig. 9**. It shows that the outperformance increases with the increase of the anisotropy (\(\gamma\)) for the finite-time operation of the engine, which is similar to the measurement-based QOE [22].

**Similarity with a measurement-based QOE:** It is worth mentioning that if we are able to construct a QHE model with a transition probability between the energy eigenstates and the bare basis states of the working system, then we may see an oscillation of the transition probability at finite times. This oscillation allows us to improve the performance of QHEs at finite times beyond the quasistatic limit, independently of the specific QHE model. In a recent study, this type of transition probability was derived from a non-selective measurement protocol, and it was shown that the performance of a measurement-based QOE can be enhanced in finite time [22]; here we obtain it from the perspective of local engine behaviour. Therefore, we can say that a QOE with a local working system can function like a measurement-based engine in the finite-time operation of both.

**Power analysis:** As we are studying the finite-time performance of the engine, it is imperative to explore the power of the engine and its relation to the efficiency for this type of local spin HE. The power of the local spin HE can be defined as
\[P_{L}=\frac{|W_{L}|}{t_{h}+t_{c}+2\tau}, \tag{22}\]
where it is assumed that the two isochoric processes are carried out over a long, but not infinite, time, so that the states of the working system come very close to the reference thermal states in the two isochoric processes. The 3D plot of the efficiency as a function of the power and the time of the unitary processes is shown in **Fig. 10**.
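Eq. (22) is simple enough that the efficiency-power trade-off can be scanned directly (our sketch; \(W_{L\tau}\) would come from the finite-time expressions above, and the stroke times match Fig. 10):

```python
# Sketch: power of the local spin engine, Eq. (22).
def power(W_L, tau, t_h=100.0, t_c=220.0):
    """P_L = |W_L| / (t_h + t_c + 2*tau); t_h, t_c as in Fig. 10."""
    return abs(W_L) / (t_h + t_c + 2.0 * tau)

print(power(W_L=-1.5, tau=5.0))   # illustrative numbers
```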
From this plot, it can be seen that we can have improved efficiency even at maximum power.

## V Discussion

A similar type of analysis can be done for the refrigerator operation of the cycle. In contrast to the HE operation, it can be shown that the COP of the refrigerator degrades as the anisotropy (\(\gamma\)) increases for the quasistatic operation of the cycle. The COP also declines when the refrigerator is operated for a finite time, similarly to the engine.

Figure 8: Variation of the transition probabilities \(\lambda_{\tau}\) and \(\delta_{\tau}\) on the left axis, and the efficiency of a local spin on the right axis, as a function of the time of the unitary processes. The solid line at the top represents the quasistatic value of \(\delta_{\tau}\), the one at the bottom the quasistatic value of \(\lambda_{\tau}\), and the one in the middle the quasistatic value of the local efficiency. Here \(\gamma=1\); the remaining parameters are the same as in **Fig. 3**.

Figure 9: Variation of the efficiency of the local spin heat engine as a function of the anisotropy parameter (\(\gamma\)) for different values of the unitary process time (\(\tau\)). \(\tau=20\) represents the adiabatic and \(\tau=0.3\) the non-adiabatic case of the unitary time evolution. The other parameters are the same as in **Fig. 3**.

Also, using a local analysis like that of the HE above, we can show that the COP of a local spin refrigerator can be enhanced by finite-time unitary processes, similarly to the local spin HE operation. Heisenberg's anisotropic XY interaction between two spins can be realized using state-of-the-art technologies [70], particularly in NMR systems or trapped-ion systems. In a typical trapped-ion system, the coupling constant \(J\) can range from a few hundred Hz to one kHz [71; 72]. Also, the external magnetic field can be of the order of a few kHz [72; 73; 74]. Therefore, depending on the value of \(J\), the time for the unitary processes \(\tau\) can range from \(2\,\mu\mathrm{s}\) to a few \(\mathrm{ms}\). Also, the working system needs to be cooled to \(T_{L}=50\) nK and \(T_{H}=500\) nK.

## VI Conclusions

We have studied the quantum Otto cycle with a two-spin working system coupled by an anisotropic interaction. The cycle can be operated as different thermal machines, including a heat engine, refrigerator, accelerator, and heater, depending on the temperature of the hot bath, for fixed values of the coupling constant and the cold bath temperature. Among all the thermal machines, the quantum Otto engine (QOE) is studied in different time frames, and the role of the anisotropy in the engine performance has been investigated. We found that the engine's efficiency increases with the increase of the anisotropy parameter (\(\gamma\)) for the quasistatic operation of the cycle. However, the efficiency decreases for finite-time engine operation due to quantum internal friction. We found that the decrease in efficiency grows with \(\gamma\), which signifies that the irreversibility in the engine operation increases with \(\gamma\). For the isochoric heating process, the effect of incomplete thermalization of the working system on the thermodynamic quantities is also discussed. We observed that the heat absorption and the work in a complete cycle both increase with the duration of the process and reach steady values after a long time.
Further, we studied the QOE performance with a local spin working system, which is obtained by tracing out one spin from the global two-spin system. We found that the combined local work extraction from all the spins is larger than the global work extraction from the two-spin system, and that the difference between these two types of work extraction increases with \(\gamma\). Also, for an anisotropic interaction between the two spins (\(\gamma>0\)), a local spin QOE outperforms, in terms of efficiency, a single-spin QOE when both function quasistatically with the same cycle parameters. We found that the efficiency of the local spin heat engine oscillates for finite-time unitary processes of the global two-spin system. Therefore, a local spin QOE can outperform the same engine operating in the long-time limit, and this outperformance in efficiency is also compatible with maximum power output by the engine. We have shown that the oscillation in the efficiency of the local spin QOE has the same origin, an interference-like effect between two probability amplitudes, as that of a non-selective measurement-based QOE.

###### Acknowledgements.

S. C. would like to acknowledge the funding through the NextGenerationEu Curiosity Driven Project "Understanding even-odd criticality" and the European Union-NextGenerationEU through the "Quantum Busses for Coherent Energy Transfer" (QUBERT) project, in the framework of the Curiosity Driven 2021 initiative of the University of Genova.

## Appendix A Derivation of internal energies for the quasistatic case for the global two-spin system

**At A:** The Hamiltonian at point A of the cycle can be expressed as \(H_{A}=H_{1}=\sum_{i=0}^{3}E_{i}^{(1)}|\psi_{i}^{(1)}\rangle\langle\psi_{i}^{(1)}|\), where \(\{|\psi_{i}^{(1)}\rangle\}\) are the eigenstates of the Hamiltonian \(H_{1}\). As we consider that the system at A is in thermal equilibrium with the cold heat bath, the thermal density matrix is given by \(\rho_{A}=\frac{e^{-\beta_{L}H_{1}}}{Z_{1}}=\sum_{i=0}^{3}P_{i}^{L}|\psi_{i}^{(1)}\rangle\langle\psi_{i}^{(1)}|\), where \(P_{i}^{L}=e^{-\beta_{L}E_{i}^{(1)}}/Z_{1}\) is the thermal occupation probability of the \(i\)th eigenstate. So, the average internal energy at point A is given by \(\langle E_{A}\rangle=\mathrm{Tr}(H_{1}\rho_{A})=\sum_{i=0}^{3}P_{i}^{L}E_{i}^{(1)}=-4K_{L}\frac{u_{1}}{Z_{1}}-4J\frac{v_{1}}{Z_{1}}\).
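The closed form for \(\langle E_{A}\rangle\) is easy to verify numerically (our sketch; the anisotropic XY form of \(H_{1}\) with spectrum \(\{\pm 2K_{L},\pm 2J\}\) is assumed):

```python
# Sketch: check <E_A> = -4 K_L u_1/Z_1 - 4 J v_1/Z_1 against Tr(H_1 rho_A).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
BL, J, gam, bL = 1.0, 1.0, 1.0, 1.0
H1 = (J * ((1 + gam) * np.kron(sx, sx) + (1 - gam) * np.kron(sy, sy))
      + BL * (np.kron(sz, I2) + np.kron(I2, sz)))

rhoA = expm(-bL * H1); rhoA /= np.trace(rhoA)
E_num = np.trace(H1 @ rhoA).real

KL = np.hypot(BL, gam * J)
Z1 = 2 * np.cosh(2 * KL * bL) + 2 * np.cosh(2 * J * bL)
E_cf = -4 * KL * np.sinh(2 * KL * bL) / Z1 - 4 * J * np.sinh(2 * J * bL) / Z1
print(E_num, E_cf)   # the two numbers agree
```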
Figure 10: Variation of the efficiency (\(\eta_{L\tau}\), on the z-axis) of the local spin heat engine as a function of the power and the time of the unitary processes (\(\tau\)) for \(\gamma=1\). Parameters for the isochoric processes are \(t_{h}=100\), \(t_{c}=220\), and \(\Gamma=0.1\). The other parameters are the same as in **Fig. 3**.

**At B:** The Hamiltonian at point B of the cycle can be expressed as \(H_{B}=H_{2}=\sum_{i=0}^{3}E_{i}^{(2)}|\psi_{i}^{(2)}\rangle\langle\psi_{i}^{(2)}|\), where \(\{|\psi_{i}^{(2)}\rangle\}\) are the eigenstates of the Hamiltonian \(H_{2}\). We consider that the unitary process AB is carried out adiabatically, i.e., the system follows the instantaneous eigenstates, so the state of the system at B can be written as \(\rho_{B}=\sum_{n}P_{n}^{L}|\psi_{n}^{(2)}\rangle\langle\psi_{n}^{(2)}|\). The average internal energy at point B, \(\langle E_{B}\rangle=\mathrm{Tr}(H_{2}\rho_{B})\), is given by
\[\langle E_{B}\rangle=P_{0}^{L}E_{0}^{(2)}+P_{1}^{L}E_{1}^{(2)}+P_{2}^{L}E_{2}^{(2)}+P_{3}^{L}E_{3}^{(2)}=-4K_{H}\frac{u_{1}}{Z_{1}}-4J\frac{v_{1}}{Z_{1}}\;.\]

**At C:** The thermal density matrix at C is given by
\[\rho_{C}=\frac{e^{-\beta_{H}H_{2}}}{Z_{2}}=\sum_{i=0}^{3}P_{i}^{H}|\psi_{i}^{(2)}\rangle\langle\psi_{i}^{(2)}|, \tag{A1}\]
where \(P_{i}^{H}=e^{-\beta_{H}E_{i}^{(2)}}/Z_{2}\) is the thermal occupation probability of the \(i\)th eigenstate. Similarly to point A, we can derive the expression of the average energy at C, which is given by \(\langle E_{C}\rangle=\mathrm{Tr}(H_{2}\rho_{C})=-4K_{H}\frac{u_{2}}{Z_{2}}-4J\frac{v_{2}}{Z_{2}}\).

**At D:** Similarly to the unitary process AB, we consider that the unitary process CD is also carried out adiabatically. Therefore, the density matrix at point D can be written as \(\rho_{D}=\sum_{n}P_{n}^{H}|\psi_{n}^{(1)}\rangle\langle\psi_{n}^{(1)}|\). Similarly to point B, we can derive the average internal energy at point D, which is given by
\[\langle E_{D}\rangle=\mathrm{Tr}(H_{1}\rho_{D})=-4K_{L}\frac{u_{2}}{Z_{2}}-4J\frac{v_{2}}{Z_{2}}\;.\]

## Appendix B Derivation of internal energies of the global two-spin system for finite-time unitary processes

**At B:** The density matrix at point B after the unitary process AB can be obtained as \(\rho_{B\tau}=\hat{U}(\tau)\rho_{A}\hat{U}^{\dagger}(\tau)=\sum_{i=0}^{3}P_{i}^{L}\hat{U}(\tau)|\psi_{i}^{(1)}\rangle\langle\psi_{i}^{(1)}|\hat{U}^{\dagger}(\tau)\). The average internal energy at point B, \(\langle E_{B}\rangle_{\tau}=\mathrm{Tr}(H_{2}\rho_{B\tau})\), is given by
\[\begin{split}\langle E_{B}\rangle_{\tau}&=P_{0}^{L}E_{0}^{(2)}(1-\xi_{\tau})+P_{3}^{L}E_{0}^{(2)}\xi_{\tau}+P_{1}^{L}E_{1}^{(2)}\\ &\quad+P_{2}^{L}E_{2}^{(2)}+P_{0}^{L}E_{3}^{(2)}\xi_{\tau}+P_{3}^{L}E_{3}^{(2)}(1-\xi_{\tau})\\ &=-4K_{H}(1-2\xi_{\tau})\frac{u_{1}}{Z_{1}}-4J\frac{v_{1}}{Z_{1}},\end{split} \tag{B1}\]
where we have used the microreversibility condition \(|\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=|\langle\psi_{3}^{(2)}|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=\xi_{\tau}\) (for the proof see **App. D**) and \(|\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=|\langle\psi_{3}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=1-\xi_{\tau}\).
In the unitary stages, for a short time interval \(\tau\), nonadiabatic transitions occur between energy eigenstates that are coupled [75]. In the present case, such transitions are induced between the levels \(|\psi_{0}\rangle\) and \(|\psi_{3}\rangle\), so terms like \(\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{1}^{(1)}\rangle\), \(\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{2}^{(1)}\rangle\), \(\langle\psi_{3}^{(2)}|\hat{U}(\tau)|\psi_{1}^{(1)}\rangle\), etc. vanish. More details of the proof can be found in [22].

**At D:** The density matrix at point D after the unitary process CD is given by \(\rho_{D\tau}=\hat{V}(\tau)\rho_{C}\hat{V}^{\dagger}(\tau)\). Similarly to point B, we can derive the average internal energy at point D, which is given by
\[\langle E_{D}\rangle_{\tau}=\mathrm{Tr}(H_{1}\rho_{D\tau})=-4K_{L}(1-2\xi_{\tau})\frac{u_{2}}{Z_{2}}-4J\frac{v_{2}}{Z_{2}}, \tag{B2}\]
where we have used the microreversibility condition \(|\langle\psi_{0}^{(1)}|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}=|\langle\psi_{3}^{(1)}|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=\xi_{\tau}\) (for the proof see **App. D**) and \(|\langle\psi_{0}^{(1)}|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=|\langle\psi_{3}^{(1)}|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}=1-\xi_{\tau}\).

## Appendix C Equivalence of the time evolution operators in the unitary expansion and compression processes

By utilizing the definitions (see **Sec. II.3**) of the unitary time evolution operators in the expansion and compression stages, one can obtain the equivalence between them [75; 68]:
\[\begin{split}\hat{U}(\tau)&=\mathcal{T}\exp\left[-i\int_{0}^{\tau}H^{exp}(t)dt\right]\\ &=\mathcal{T}\exp\left[-i\int_{0}^{-\tau}H^{exp}(-t)d(-t)\right]\\ &=\mathcal{T}\exp\left[-i\int_{\tau}^{0}H^{exp}(\tau-t^{\prime})d(\tau-t^{\prime})\right]\\ &=\mathcal{T}\exp\left[-i\int_{0}^{\tau}H^{exp}(\tau-t)dt\right]\\ &=\mathcal{T}\exp\left[-i\int_{0}^{\tau}H^{com}(t)dt\right]\\ &=\hat{V}(\tau).\end{split}\]

## Appendix D Proof of the microreversibility conditions for the total two-spin system

Using the completeness relation \(\sum_{i=0}^{3}|\psi_{i}^{(1)}\rangle\langle\psi_{i}^{(1)}|=\mathbb{I}\) and the conservation of probability \(|\langle\psi_{3}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}+|\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=1\), we can prove the relation \(|\langle\psi_{3}^{(2)}|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=|\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}\). For more details about the proof see [22]. Similarly, we can prove for the unitary compression stage that \(|\langle\psi_{3}^{(1)}|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=|\langle\psi_{0}^{(1)}|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}\). Also, using the equivalence between the two unitary time evolution operators \(\hat{U}(\tau)\) and \(\hat{V}(\tau)\) [see **App. C**], we can show that \(|\langle\psi_{3}^{(2)}|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=|\langle\psi_{0}^{(2)}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=|\langle\psi_{3}^{(1)}|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=|\langle\psi_{0}^{(1)}|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}\).
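The microreversibility relation of App. D can also be checked numerically (our sketch, with the same assumed Hamiltonian as in the earlier sketches):

```python
# Sketch: verify |<psi_3^(2)|U|psi_0^(1)>|^2 = |<psi_0^(2)|U|psi_3^(1)>|^2.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = lambda B, J=1.0, g=1.0: (J * ((1 + g) * np.kron(sx, sx) + (1 - g) * np.kron(sy, sy))
                             + B * (np.kron(sz, I2) + np.kron(I2, sz)))

def U_ramp(BL, BH, tau, steps=2000):
    dt = tau / steps
    U = np.eye(4, dtype=complex)
    for n in range(steps):
        U = expm(-1j * H(BL + (BH - BL) * (n + 0.5) / steps) * dt) @ U
    return U

U = U_ramp(1.0, 4.0, 0.5)
e1, P1 = np.linalg.eigh(H(1.0)); e2, P2 = np.linalg.eigh(H(4.0))
p0_1, p3_1 = P1[:, np.argmin(e1)], P1[:, np.argmax(e1)]
p0_2, p3_2 = P2[:, np.argmin(e2)], P2[:, np.argmax(e2)]
lhs = abs(p3_2.conj() @ U @ p0_1) ** 2
rhs = abs(p0_2.conj() @ U @ p3_1) ** 2
print(lhs, rhs)   # equal up to the Trotter step error
```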
## Appendix E Derivation of internal energies of a local spin system for quasistatic operation

### At A:

The density matrix of the local spin at A, \(\rho_{AL}=\langle 0_{2}|\rho_{A}|0_{2}\rangle+\langle 1_{2}|\rho_{A}|1_{2}\rangle\), is given by
\[\begin{split}\rho_{AL}=\frac{1}{2}\big[&P_{0}^{L}(b_{L}^{2}\left|0\right\rangle\left\langle 0\right|+a_{L}^{2}\left|1\right\rangle\left\langle 1\right|)+P_{1}^{L}(\left|1\right\rangle\left\langle 1\right|+\left|0\right\rangle\left\langle 0\right|)\\ &+P_{2}^{L}(\left|1\right\rangle\left\langle 1\right|+\left|0\right\rangle\left\langle 0\right|)+P_{3}^{L}(d_{L}^{2}\left|0\right\rangle\left\langle 0\right|+c_{L}^{2}\left|1\right\rangle\left\langle 1\right|)\big],\end{split} \tag{E1}\]
where \(P_{0}^{L}\) and \(P_{3}^{L}\) are the thermal probabilities of the 0th and 3rd energy levels at A. The average internal energy at point A, \(\langle E_{A}\rangle_{L}=\mathrm{Tr}(H_{1L}\rho_{AL})\), is given by
\[\begin{split}\langle E_{A}\rangle_{L}&=\sum_{j=0,1}\langle j|(-B_{L}\left|0\right\rangle\left\langle 0\right|+B_{L}\left|1\right\rangle\left\langle 1\right|)\rho_{AL}|j\rangle\\ &=\frac{B_{L}}{2}\left[P_{0}^{L}(a_{L}^{2}-b_{L}^{2})+P_{3}^{L}(c_{L}^{2}-d_{L}^{2})\right]\\ &=B_{L}\left[(P_{3}^{L}-P_{0}^{L})(1-a_{L}^{2})\right],\end{split} \tag{E2}\]
where we have used \(a_{L}^{2}=d_{L}^{2}\), \(b_{L}^{2}=c_{L}^{2}\), and \(a_{L}^{2}/2+b_{L}^{2}/2=1\).

### At B:

The density matrix of the local spin at B, \(\rho_{BL}=\langle 0_{2}|\rho_{B}|0_{2}\rangle+\langle 1_{2}|\rho_{B}|1_{2}\rangle\), is given by
\[\begin{split}\rho_{BL}=\frac{1}{2}\big[&P_{0}^{L}(b_{H}^{2}\left|0\right\rangle\left\langle 0\right|+a_{H}^{2}\left|1\right\rangle\left\langle 1\right|)+P_{1}^{L}(\left|1\right\rangle\left\langle 1\right|+\left|0\right\rangle\left\langle 0\right|)\\ &+P_{2}^{L}(\left|1\right\rangle\left\langle 1\right|+\left|0\right\rangle\left\langle 0\right|)+P_{3}^{L}(d_{H}^{2}\left|0\right\rangle\left\langle 0\right|+c_{H}^{2}\left|1\right\rangle\left\langle 1\right|)\big].\end{split} \tag{E3}\]
The average internal energy at point B, \(\langle E_{B}\rangle_{L}=\mathrm{Tr}(H_{2L}\rho_{BL})\), is given by
\[\begin{split}\langle E_{B}\rangle_{L}&=\sum_{j=0,1}\langle j|(-B_{H}\left|0\right\rangle\left\langle 0\right|+B_{H}\left|1\right\rangle\left\langle 1\right|)\rho_{BL}|j\rangle\\ &=\frac{B_{H}}{2}\left[P_{0}^{L}(a_{H}^{2}-b_{H}^{2})+P_{3}^{L}(c_{H}^{2}-d_{H}^{2})\right]\\ &=B_{H}\left[(P_{3}^{L}-P_{0}^{L})(1-a_{H}^{2})\right],\end{split} \tag{E4}\]
where we have used \(a_{H}^{2}=d_{H}^{2}\), \(b_{H}^{2}=c_{H}^{2}\), and \(a_{H}^{2}/2+b_{H}^{2}/2=1\).

### At C:

Similarly to point A, we can derive the average internal energy at point C, \(\langle E_{C}\rangle_{L}=\mathrm{Tr}(H_{2L}\rho_{CL})\), given by
\[\langle E_{C}\rangle_{L}=\frac{B_{H}}{2}\left[P_{0}^{H}(a_{H}^{2}-b_{H}^{2})+P_{3}^{H}(c_{H}^{2}-d_{H}^{2})\right]=B_{H}\left[(P_{3}^{H}-P_{0}^{H})(1-a_{H}^{2})\right], \tag{E5}\]
where \(P_{0}^{H}\) and \(P_{3}^{H}\) are the thermal probabilities of the 0th and 3rd energy levels at C.
### At D:

Similarly to point B, we can derive the average internal energy at D, \(\langle E_{D}\rangle_{L}=\mathrm{Tr}(H_{1L}\rho_{DL})\), given by
\[\langle E_{D}\rangle_{L}=\frac{B_{L}}{2}\left[P_{0}^{H}(a_{L}^{2}-b_{L}^{2})+P_{3}^{H}(c_{L}^{2}-d_{L}^{2})\right]=B_{L}\left[(P_{3}^{H}-P_{0}^{H})(1-a_{L}^{2})\right]. \tag{E6}\]

## Appendix F Derivation of internal energies of a local spin system for finite-time operation

### At B:

The density matrix at B, \(\rho_{BL\tau}=\langle 0_{2}|\rho_{B\tau}|0_{2}\rangle+\langle 1_{2}|\rho_{B\tau}|1_{2}\rangle\), is given by
\[\begin{split}\rho_{BL\tau}=&\frac{1}{2}\big[P_{1}^{L}(\left|1\right\rangle\left\langle 1\right|+\left|0\right\rangle\left\langle 0\right|)+P_{2}^{L}(\left|1\right\rangle\left\langle 1\right|+\left|0\right\rangle\left\langle 0\right|)\big]\\ &+P_{0}^{L}\langle 0_{2}|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle\langle\psi_{0}^{(1)}|\hat{U}^{\dagger}(\tau)|0_{2}\rangle+P_{3}^{L}\langle 0_{2}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle\\ &\times\langle\psi_{3}^{(1)}|\hat{U}^{\dagger}(\tau)|0_{2}\rangle+P_{0}^{L}\langle 1_{2}|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle\langle\psi_{0}^{(1)}|\hat{U}^{\dagger}(\tau)|1_{2}\rangle\\ &+P_{3}^{L}\langle 1_{2}|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle\langle\psi_{3}^{(1)}|\hat{U}^{\dagger}(\tau)|1_{2}\rangle\;.\end{split} \tag{F1}\]
The average internal energy at B, \(\langle E_{B}\rangle_{L\tau}=\mathrm{Tr}(H_{2L}\rho_{BL\tau})\), is given by
\[\begin{split}\langle E_{B}\rangle_{L\tau}&=\sum_{j=0,1}\langle j|(-B_{H}\left|0\right\rangle\left\langle 0\right|+B_{H}\left|1\right\rangle\left\langle 1\right|)\rho_{BL\tau}|j\rangle\\ &=-P_{0}^{L}B_{H}|\langle 00|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}-P_{3}^{L}B_{H}|\langle 00|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}\\ &\quad+P_{0}^{L}B_{H}|\langle 11|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}+P_{3}^{L}B_{H}|\langle 11|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}\\ &=-2B_{H}(1-2\lambda_{\tau})\frac{u_{1}}{Z_{1}},\end{split} \tag{F2}\]
where we have used the microreversibility conditions [for the derivation see **App. G**] \(|\langle 00|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=1-\lambda_{\tau}\), \(|\langle 00|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=\lambda_{\tau}\), \(|\langle 11|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=\lambda_{\tau}\), and \(|\langle 11|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=1-\lambda_{\tau}\).

### At D:

Similarly, the reduced state at D follows from \(\rho_{D\tau}\), and the average internal energy can be derived as \(\langle E_{D}\rangle_{L\tau}=-2B_{L}(1-2\delta_{\tau})\frac{u_{2}}{Z_{2}}\) (cf. **Eq. 19**), where we need to use the microreversibility conditions [for the derivation see **App. G**] \(|\langle 00|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=1-\delta_{\tau}\), \(|\langle 00|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}=\delta_{\tau}\), \(|\langle 11|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=\delta_{\tau}\), and \(|\langle 11|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}=1-\delta_{\tau}\).

## Appendix G Proof of the microreversibility condition for the local spin system

We can prove the relation \(|\langle 00|\hat{U}(\tau)|\psi_{3}^{(1)}\rangle|^{2}=|\langle 11|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}\) using the completeness relation \(\sum_{i=0}^{3}|\psi_{i}^{(1)}\rangle\langle\psi_{i}^{(1)}|=\mathbb{I}\) and the conservation of probability \(|\langle 00|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}+|\langle 11|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=1\), while the other two terms vanish: \(|\langle 01|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=0\) and \(|\langle 10|\hat{U}(\tau)|\psi_{0}^{(1)}\rangle|^{2}=0\). Similarly, we can prove that \(|\langle 00|\hat{V}(\tau)|\psi_{3}^{(2)}\rangle|^{2}=|\langle 11|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}\), where we need to use the conservation of probability \(|\langle 00|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}+|\langle 11|\hat{V}(\tau)|\psi_{0}^{(2)}\rangle|^{2}=1\).
2303.17770
Description of the newly observed $Ω^{*}_c$ states as molecular states
In this work, we study the strong decays of the newly observed $\Omega^{*}_c(3185)$ and $\Omega^{*}_c(3327)$, assuming that the $\Omega^{*}_c(3185)$ and $\Omega^{*}_c(3327)$ are $S$-wave $D\Xi$ and $D^{*}\Xi$ molecular states, respectively. Since the $\Omega_c^{*}$ states were observed in the $\Xi_c^{+}K^{-}$ invariant mass distributions, the partial decay widths of the $\Omega^{*}_c(3185)$ and $\Omega^{*}_c(3327)$ into $\Xi_c^{+}K^{-}$ through hadronic loops are evaluated with the help of effective Lagrangians. Moreover, the decay channel $\Xi_c^{'}\bar{K}$ is also included. The decay process is described by $t$-channel $\Lambda$, $\Sigma$ baryon and $D_s$, $D_s^{*}$ meson exchanges, respectively. By comparison with the LHCb observation, the current results support the $\Omega^{*}_c(3327)$ with $J^P=3/2^{-}$ as a pure $D^{*}\Xi$ molecule, while the $\Omega^{*}_c(3327)$ with $J^P=1/2^{-}$ cannot be well reproduced in the molecular-state picture. In addition, the spin-parity $J^P=1/2^{-}$ $D\Xi$ molecular assumption for the $\Omega^{*}_c(3185)$ cannot be conclusively determined; it may be a meson-baryon molecule with a large $D\Xi$ component. Although the decay width of the $\Omega_c^{*}\to{}\bar{K}\Xi_c^{'}$ is of the order of several MeV, it can be well employed to test the molecular interpretations of the $\Omega^{*}_c(3185)$ and $\Omega^{*}_c(3327)$.
Jingwen Feng, Feng Yang, Cai Cheng, Yin Huang
2023-03-31T02:20:31Z
http://arxiv.org/abs/2303.17770v1
# Description of the newly observed \(\Omega_{c}^{*}\) states as molecular states

###### Abstract

In this work, we study the strong decays of the newly observed \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\), assuming that the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) are \(S\)-wave \(D\Xi\) and \(D^{*}\Xi\) molecular states, respectively. Since the \(\Omega_{c}^{*}\) states were observed in the \(\Xi_{c}^{+}K^{-}\) invariant mass distributions, the partial decay widths of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) into \(\Xi_{c}^{+}K^{-}\) through hadronic loops are evaluated with the help of effective Lagrangians. Moreover, the decay channel \(\Xi_{c}^{\prime}\bar{K}\) is also included. The decay process is described by \(t\)-channel \(\Lambda\), \(\Sigma\) baryon and \(D_{s}\), \(D_{s}^{*}\) meson exchanges, respectively. By comparison with the LHCb observation, the current results support the \(\Omega_{c}^{*}(3327)\) with \(J^{P}=3/2^{-}\) as a pure \(D^{*}\Xi\) molecule, while the \(\Omega_{c}^{*}(3327)\) with \(J^{P}=1/2^{-}\) cannot be well reproduced in the molecular-state picture. In addition, the spin-parity \(J^{P}=1/2^{-}\) \(D\Xi\) molecular assumption for the \(\Omega_{c}^{*}(3185)\) cannot be conclusively determined; it may be a meson-baryon molecule with a large \(D\Xi\) component. Although the decay width of the \(\Omega_{c}^{*}\to\bar{K}\Xi_{c}^{\prime}\) is of the order of several MeV, it can be well employed to test the molecular interpretations of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\).

## I Introduction

After the discovery of five \(\Omega_{c}^{*}\) states in a single observation back in 2017 [1], the latest experimental results of the LHCb Collaboration reported the existence of two additional \(\Omega_{c}\) states in the \(\Xi_{c}^{+}K^{-}\) invariant mass spectrum, originating from \(pp\) collisions [2]. The measured parameters of these newly observed states are as follows:
\[M_{\Omega_{c}(3185)} =3185.1\pm 1.7^{+7.4}_{-0.9}\pm 0.2\quad\text{MeV},\]
\[\Gamma_{\Omega_{c}(3185)} =50\pm 7^{+10}_{-20}\quad\text{MeV}, \tag{1}\]
\[M_{\Omega_{c}(3327)} =3327.1\pm 1.2^{+0.1}_{-1.3}\pm 0.2\quad\text{MeV},\]
\[\Gamma_{\Omega_{c}(3327)} =20\pm 5^{+13}_{-1}\quad\text{MeV}.\]
Compared to the earlier discovery of the two ground states \(\Omega_{c}^{*}(2695)\) and \(\Omega_{c}^{*}(2770)\) [3], the findings presented in Ref. [2] not only expand our understanding of the charmed baryons with quantum numbers \(C=1\) and \(S=2\), which are composed of one charm quark and two strange quarks, but also help us to understand the formation mechanism of exotic hadron states. This is mainly because the newly observed \(\Omega_{c}^{*}\) states may display a more complex internal structure than the ground states \(\Omega_{c}^{*}(2695)\) and \(\Omega_{c}^{*}(2770)\), which are made up of three quarks. Indeed, the \(\Omega_{c}^{*}(3050)\) and \(\Omega_{c}^{*}(3119)\) were suggested to be exotic pentaquarks in the chiral quark-soliton model [4; 5]. Similar qualitative results can also be found in Refs. [6; 7]. In Refs. [8; 9], based on the analysis of the mass spectrum, the \(\Omega_{c}^{*}(3050)\) was identified as a meson-baryon molecule with \(J^{P}=1/2^{-}\). This is consistent with the results in Ref. [10], where the total decay width of the \(\Omega_{c}^{*}(3050)\) was well reproduced under the assumption that the \(\Omega_{c}^{*}(3050)\) is an \(S\)-wave \(\Xi D\) bound state with \(J^{P}=1/2^{-}\).
By solving a coupled-channel Bethe-Salpeter equation using a \(SU(6)_{tsj}\times\) HQSS extended WT interaction as a kernel, three meson-baryon molecular states were obtained [11] that can be associated to the experimental states \(\Omega_{c}^{*}(3000)\), \(\Omega_{c}^{*}(3050)\), and \(\Omega_{c}^{*}(3119)\) or \(\Omega_{c}^{*}(3090)\). There also exist some clues supporting the interpretation of the newly observed \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) as molecular states. In Ref. [12], the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) were proposed to be the \(2S(3/2^{+})\) and \(1D(3/2^{+})\) states, respectively. However, a completely different conclusion was drawn in Ref. [13], namely that the \(\Omega_{c}^{*}(3327)\) is a good candidate for the \(\Omega_{c}^{*}(1D)\) state with \(J^{P}=5/2^{+}\). This spin-parity puzzle for the \(\Omega_{c}^{*}(3327)\) indicates that it is difficult to accommodate the \(\Omega_{c}^{*}(3327)\) among the conventional \(css\) states. Since the mass of the \(\Omega_{c}^{*}(3327)\) is very close to the threshold of \(D^{*}\Xi\), a hadronic molecular configuration for the \(\Omega_{c}^{*}(3327)\) is possible. Although the \(\Omega_{c}^{*}(3185)\) can be considered as a conventional charmed baryon [12], the hadronic molecule interpretation cannot be excluded because the mass of the \(\Omega_{c}^{*}(3185)\) lies about 6.37 MeV below the \(D\Xi\) threshold. However, no work has been conducted to study whether the newly observed \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) can be explained as \(D\Xi\) and \(D^{*}\Xi\) molecular states, respectively. In this work, we estimate the possible strong decay modes by assuming the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) to be molecular states. By comparing the calculated decay widths with the experimental data, we can evaluate the validity of the proposed molecular explanations for the structures of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\). This paper is organized as follows. In Sec. II, we will present the theoretical formalism. In Sec. III, the numerical results will be given, followed by discussions and conclusions in the last section.

## II Formalism and ingredients

Assuming the newly observed \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) to be \(S\)-wave molecular states, the decays of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) into \(\bar{K}\Xi_{c}\) and \(\bar{K}\Xi_{c}^{\prime}\) are allowed. The corresponding Feynman diagrams are illustrated in Fig. 1, which includes the \(t\)-channel \(\Sigma\), \(\Lambda\) baryon exchanges and \(D_{s}\), \(D_{s}^{*}\) meson exchanges. To evaluate these diagrams, the Lagrangian for the coupling between the mesons and the baryons is obtained using the \(SU(4)\) invariant interaction Lagrangians [14]
\[\begin{split}\mathcal{L}_{PBB}&=ig_{P}\left(a\,\bar{\phi}^{\,\mu\alpha\nu}\gamma^{5}P_{\alpha}^{\beta}\phi_{\beta\mu\nu}+b\,\bar{\phi}^{\,\mu\alpha\nu}\gamma^{5}P_{\alpha}^{\beta}\phi_{\mu\nu\beta}\right),\\ \mathcal{L}_{VBB}&=ig_{V}\left(c\,\bar{\phi}^{\,\mu\alpha\nu}\gamma^{\rho}V_{\rho,\alpha}^{\beta}\phi_{\beta\mu\nu}+d\,\bar{\phi}^{\,\mu\alpha\nu}\gamma^{\rho}V_{\rho,\alpha}^{\beta}\phi_{\mu\nu\beta}\right),\end{split} \tag{2}\]
where \(V_{\mu}\) and \(P\) are the \(SU(4)\) vector meson and pseudoscalar meson matrices, respectively.
The meson matrices are
\[P=\begin{pmatrix}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}+\frac{\eta_{c}}{\sqrt{12}}&\pi^{+}&K^{+}&\bar{D}^{0}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}+\frac{\eta_{c}}{\sqrt{12}}&K^{0}&D^{-}\\ K^{-}&\bar{K}^{0}&-\frac{2\eta}{\sqrt{6}}+\frac{\eta_{c}}{\sqrt{12}}&D_{s}^{-}\\ D^{0}&D^{+}&D_{s}^{+}&-\frac{3\eta_{c}}{\sqrt{12}}\end{pmatrix} \tag{3}\]
and
\[V=\begin{pmatrix}\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega_{8}}{\sqrt{6}}+\frac{J/\psi}{\sqrt{12}}&\rho^{+}&K^{*+}&\bar{D}^{*0}\\ \rho^{-}&-\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega_{8}}{\sqrt{6}}+\frac{J/\psi}{\sqrt{12}}&K^{*0}&D^{*-}\\ K^{*-}&\bar{K}^{*0}&-\frac{2\omega_{8}}{\sqrt{6}}+\frac{J/\psi}{\sqrt{12}}&D_{s}^{*-}\\ D^{*0}&D^{*+}&D_{s}^{*+}&-\frac{3J/\psi}{\sqrt{12}}\end{pmatrix}, \tag{4}\]
with \(\omega_{8}=\omega\cos\theta+\phi\sin\theta\) and \(\sin\theta=-0.761\). The tensor \(\phi_{\beta\mu\nu}\) in the above equations represents the 20-plet containing the proton [14],
\[p=\phi_{112},\,n=\phi_{221},\,\Lambda=\sqrt{\frac{2}{3}}(\phi_{321}-\phi_{312}),\]
\[\Sigma^{+}=\phi_{113},\,\Sigma^{0}=\sqrt{2}\phi_{123},\,\Sigma^{-}=\phi_{233},\]
\[\Xi^{0}=\phi_{311},\,\Xi^{-}=\phi_{332},\,\Sigma_{c}^{++}=\phi_{114},\]
\[\Sigma_{c}^{+}=\phi_{124},\,\Sigma_{c}^{0}=\phi_{224},\,\Xi_{c}^{+}=\phi_{134},\]
\[\Xi_{c}^{0}=\phi_{234},\,\Xi_{c}^{+^{\prime}}=\sqrt{\frac{2}{3}}(\phi_{413}-\phi_{431}),\,\Xi_{c}^{0^{\prime}}=\sqrt{\frac{2}{3}}(\phi_{423}-\phi_{432}),\]
\[\Lambda_{c}^{+}=\sqrt{\frac{2}{3}}(\phi_{421}-\phi_{412}),\,\Omega_{c}^{0}=\phi_{334},\,\Xi_{cc}^{++}=\phi_{441},\]
\[\Xi_{cc}^{+}=\phi_{442},\,\Omega_{cc}^{+}=\phi_{443}, \tag{5}\]
where the indices \(\beta\), \(\mu\), \(\nu\) denote the quark content of the baryon fields with the identification \(1\leftrightarrow u,2\leftrightarrow d,3\leftrightarrow s\), and \(4\leftrightarrow c\). Hence, the tensors \(\phi_{\beta\mu\nu}\) satisfy the following conditions:
\[\phi_{\mu\nu\lambda}+\phi_{\nu\lambda\mu}+\phi_{\lambda\mu\nu}=0,\qquad\phi_{\mu\nu\lambda}=\phi_{\nu\mu\lambda}. \tag{6}\]
By using the exact form of the matrices and the above relationships, the interaction vertices between the baryons and pseudoscalar mesons can be estimated by expanding the \(SU(4)\) invariant interaction Lagrangians. The values of the coupling constants adopted in this work can be computed by comparing with the coefficients of the interaction Lagrangians \(\mathcal{L}_{\pi NN}\) and \(\mathcal{L}_{\rho NN}\) and determining the constants \(g_{P}\), \(g_{V}\), \(a\), \(b\), \(c\), and \(d\) in terms of \(g_{\pi NN}=13.5\) and \(g_{\rho NN}=3.25\) [14]. The values of the coupling constants computed in this way are listed in Tab. 1. In addition to the Lagrangians shown in Eq. 2, the effective Lagrangians of the relevant interaction vertices are also needed [15]:
\[\mathcal{L}_{PPV}=\frac{iG}{2\sqrt{2}}(\partial^{\mu}P(PV_{\mu}-V_{\mu}P)), \tag{7}\]
\[\mathcal{L}_{VVP}=\frac{G^{\prime}}{\sqrt{2}}\epsilon^{\mu\nu\alpha\beta}(\partial_{\mu}V_{\nu}\partial_{\alpha}V_{\beta}P), \tag{8}\]
\[\mathcal{L}_{VVV}=\frac{iG}{2\sqrt{2}}(\partial^{\mu}V^{\nu}(V_{\mu}V_{\nu}-V_{\nu}V_{\mu})), \tag{9}\]
where the coupling constants are \(G=12.0\) and \(G^{\prime}=55.51\) [15].
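As a quick consistency check of Eq. (3) (our sketch, not from the paper), the \(SU(4)\) pseudoscalar matrix can be encoded symbolically; its trace must vanish.

```python
# Sketch: the SU(4) pseudoscalar matrix P of Eq. (3); check that Tr P = 0.
import sympy as sp

pi0, eta, etac = sp.symbols('pi0 eta etac')
pip, pim, Kp, K0, Km, K0b = sp.symbols('pip pim Kp K0 Km K0b')
D0, D0b, Dp, Dm, Dsp, Dsm = sp.symbols('D0 D0b Dp Dm Dsp Dsm')

s2, s6, s12 = sp.sqrt(2), sp.sqrt(6), sp.sqrt(12)
P = sp.Matrix([
    [pi0/s2 + eta/s6 + etac/s12, pip, Kp, D0b],
    [pim, -pi0/s2 + eta/s6 + etac/s12, K0, Dm],
    [Km, K0b, -2*eta/s6 + etac/s12, Dsm],
    [D0, Dp, Dsp, -3*etac/s12],
])
assert sp.simplify(P.trace()) == 0
print("Tr P = 0: OK")
```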
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(g_{\Xi_{c}^{*}D^{*+}\Xi^{0}}\) & \(g_{\Xi^{*}K^{*}\Xi^{0}}\) & \(g_{\Xi^{*}K^{*}\Xi^{0}}\) & \(g_{\Xi^{*}K^{*}\Xi^{*}}\) & \(g_{\Xi^{0}K^{*}K^{*}}\) & \(g_{\Xi^{0}K^{*}\Xi^{*}}\) \\ \(3.78\) & \(13.5\) & \(6.55\) & \(3.43\) & \(5.35\) & \(-19.09\) & \(4.60\) \\ \(g_{D^{*+}\Xi^{0}\Xi^{*}}\) & \(g_{D^{*+}\Lambda^{0}\Xi^{*}}\) & \(g_{D^{*+}\Xi^{*}}\) & \(g_{\Xi^{0}D^{*+}\Xi^{*}}\) & \(g_{D^{*+}\Xi^{*}}\) & \(g_{D^{*+}\Lambda^{0}\Xi^{*}}\) & \(g_{D^{*+}\Lambda^{0}\Xi^{*}}\) \\ \(3.25\) & \(-5.63\) & \(-4.60\) & \(5.35\) & \(9.48\) & \(5.47\) & \(13.41\) \\ \(g_{\Xi_{c}^{*}D^{*-}\Xi^{0}}\) & \(g_{D^{*+}\Xi^{0}_{c}\Xi^{*}}\) & \(g_{D^{*+}\Xi^{0}_{c}\Xi^{*}}\) & \(g_{D^{*+}\Xi^{*}}\) & \(g_{\Xi^{0}D^{*+}\Xi^{*}}\) & \(g_{\Xi^{0}D^{*+}\Xi^{*}}\) \\ \(-5.63\) & \(3.98\) & \(-2.30\) & \(5.63\) & \(-13.40\) & \(12.00\) & \(12.00\) \\ \hline \hline \end{tabular} \end{table} Table 1: Values of the effective coupling constants. Since the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) states are considered as \(S\)-wave bound states of \(D\Xi\) and \(D^{*}\Xi\), respectively, the couplings of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) to their components are written as \[\mathcal{L}_{\Omega_{c}^{*}(3185)}^{1/2^{-}}=\sum_{j=\Xi^{-}D^{+},\,\Xi^{0}D^{0}}C_{j}\,g_{\Omega_{c}^{*}(3185)\Xi D}\,\bar{\Omega}_{c}(3185)(x)\] \[\times\int dy\,\Phi(y^{2})\,\Xi(x+\omega_{D}y)D(x-\omega_{\Xi}y),\] \[\mathcal{L}_{\Omega_{c}^{*}(3327)}^{1/2^{-}}=\sum_{j=\Xi^{-}D^{*+},\,\Xi^{0}D^{*0}}C_{j}\,g_{\Omega_{c}^{*}(3327)\Xi D^{*}}\,\bar{\Omega}_{c}(3327)(x)\gamma^{\mu}\gamma^{5}\] \[\times\int dy\,\Phi(y^{2})\,\Xi(x+\omega_{D^{*}}y)D_{\mu}^{*}(x-\omega_{\Xi}y),\] \[\mathcal{L}_{\Omega_{c}^{*}(3327)}^{3/2^{-}}=-i\sum_{j=\Xi^{-}D^{*+},\,\Xi^{0}D^{*0}}C_{j}\,g_{\Omega_{c}^{*}(3327)\Xi D^{*}}\,\bar{\Omega}_{c}(3327)^{\mu}(x)\] \[\times\int dy\,\Phi(y^{2})\,\Xi(x+\omega_{D^{*}}y)D_{\mu}^{*}(x-\omega_{\Xi}y), \tag{10}\] where \(\omega_{\Xi}=m_{\Xi}/(m_{\Xi}+m_{D^{(*)}})\) and \(\omega_{D^{(*)}}=m_{D^{(*)}}/(m_{\Xi}+m_{D^{(*)}})\), with \(m_{\Xi}\) and \(m_{D^{(*)}}\) the masses of the \(\Xi\) and \(D^{(*)}\), respectively. \(C_{j}=1/\sqrt{2}\) is the isospin coefficient, which is calculated from the following isospin assignments for the \(\Xi\) and \(D^{(*)}\) \[\begin{pmatrix}\Xi^{-}\\ \Xi^{0}\end{pmatrix}\sim\begin{pmatrix}-|\frac{1}{2},-\frac{1}{2}\rangle\\ |\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix};\quad\begin{pmatrix}D^{(*)+}\\ D^{(*)0}\end{pmatrix}\sim\begin{pmatrix}|\frac{1}{2},+\frac{1}{2}\rangle\\ |\frac{1}{2},-\frac{1}{2}\rangle\end{pmatrix}.\] In the above equations, \(\Phi(y^{2})\) is the effective correlation function, which serves two purposes: (i) it describes the distribution of the components in the hadronic molecule, and (ii) it plays the same role as a form factor, regularizing the ultraviolet divergences of the Feynman diagrams. The correlation function must therefore vanish quickly in the ultraviolet region. Here we choose the Fourier transform of the correlation function to have a Gaussian form \[\Phi(-p_{E}^{2})\doteq\exp(-p_{E}^{2}/\Lambda^{2}), \tag{11}\] where \(p_{E}\) is the Euclidean Jacobi momentum and \(\Lambda\) is a free parameter, which will be discussed later. The coupling constants \(g_{\Omega_{c}^{*}(3185)\Xi D}^{1/2^{-}}\), \(g_{\Omega_{c}^{*}(3327)\Xi D^{*}}^{1/2^{-}}\), and \(g_{\Omega_{c}^{*}(3327)\Xi D^{*}}^{3/2^{-}}\) appearing in Eq. (10) are determined from the self-energy operators given in Eq. 
(12) \[\Sigma_{\Omega_{c}^{*}(3185)}^{1/2^{-}}(k_{0})=(g_{\Omega_{c}^{*}\Xi D}^{1/2^{-}})^{2}\int_{0}^{\infty}d\alpha\int_{0}^{\infty}d\beta\sum_{j}C_{j}\mathcal{Y}(\omega_{D},m_{D})\] \[\times\mathcal{Z}(\omega_{D},m_{D}),\] \[\Sigma_{\Omega_{c}^{*}(3327)}^{1/2^{-}}(k_{0})=(g_{\Omega_{c}^{*}\Xi D^{*}}^{1/2^{-}})^{2}\int_{0}^{\infty}d\alpha\int_{0}^{\infty}d\beta\sum_{j}C_{j}\mathcal{Y}(\omega_{D^{*}},m_{D^{*}})\] \[\times[2\mathcal{Z}(\omega_{D^{*}},m_{D^{*}})+\frac{k_{0}^{3}(-4\omega_{D^{*}}-2\beta)^{3}}{8m_{D^{*}}^{2}z^{3}}\] \[-\frac{3k_{0}\Lambda^{2}(-4\omega_{D^{*}}-2\beta)}{2m_{D^{*}}^{2}z^{2}}+\frac{k_{0}^{3}(-4\omega_{D^{*}}-2\beta)}{4m_{D^{*}}^{2}z^{2}}\] \[-\frac{k_{0}^{2}(-4\omega_{D^{*}}-2\beta)^{2}m_{\Xi}}{4m_{D^{*}}^{2}z^{2}}+\frac{k_{0}\Lambda^{2}}{m_{D^{*}}^{2}z^{2}}+\frac{\Lambda^{2}m_{\Xi}}{m_{D^{*}}^{2}z}],\] \[\Sigma_{\Omega_{c}^{*}(3327)}^{T,3/2^{-}}(k_{0})=(g_{\Omega_{c}^{*}\Xi D^{*}}^{3/2^{-}})^{2}\int_{0}^{\infty}d\alpha\int_{0}^{\infty}d\beta\sum_{j}C_{j}\mathcal{Y}(\omega_{D^{*}},m_{D^{*}})\] \[\times[\mathcal{Z}(\omega_{D^{*}},m_{D^{*}})+\frac{k_{0}\Lambda^{2}(-4\omega_{D^{*}}-2\beta)}{4m_{D^{*}}^{2}z^{2}}\] \[+\frac{k_{0}\Lambda^{2}}{2m_{D^{*}}^{2}z}+\frac{\Lambda^{2}m_{\Xi}}{2m_{D^{*}}^{2}z}], \tag{12}\] with \[\mathcal{Y}(\omega_{D^{(*)}},m_{D^{(*)}})=\frac{1}{16\pi^{2}z^{2}}\exp\{-\frac{1}{\Lambda^{2}}[-2k_{0}^{2}\omega_{D^{(*)}}^{2}+\alpha m_{D^{(*)}}^{2}\] \[+\beta(-k_{0}^{2}+m_{\Xi}^{2})+\frac{(-4\omega_{D^{(*)}}-2\beta)^{2}k_{0}^{2}}{4z}]\},\] \[\mathcal{Z}(\omega_{D^{(*)}},m_{D^{(*)}})=k_{0}+m_{\Xi}+\frac{k_{0}(-4\omega_{D^{(*)}}-2\beta)}{2z}, \tag{13}\] where \(z=2+\alpha+\beta\) and \(k_{0}^{2}=m_{\Omega_{c}^{*}}^{2}\), with \(k_{0}\) and \(m_{\Omega_{c}^{*}}\) the four-momentum and the mass of the newly observed \(\Omega_{c}^{*}\), respectively. If the \(\Omega_{c}^{*}\) is a \(D^{(*)}\Xi\) molecular state with \(J^{P}=1/2^{-}\), \(\Sigma^{1/2^{-}}\) is the self-energy operator of the hadronic molecule \(\Omega_{c}^{*}\), while \(\Sigma^{T,3/2^{-}}\) is the transverse part of the self-energy operator \(\Sigma_{\Omega_{c}^{*}}^{\mu\nu}\) when we assume the \(\Omega_{c}^{*}\) to be a \(D^{*}\Xi\) molecular state with \(J^{P}=3/2^{-}\). Once the self-energy operator and its transverse part are obtained, the coupling constants of the hadronic molecule \(\Omega_{c}^{*}\) to its constituents \(D^{(*)}\Xi\) can be determined using the compositeness condition based on the work in Ref. [16]. This condition requires that the renormalization constant of the hadronic molecular wave function be equal to zero, \[1-\frac{d\Sigma_{\Omega_{c}^{*}(3185/3327)}^{1/2^{-}}}{dk_{0}}=0,\qquad J=\frac{1}{2},\] \[1-\frac{d\Sigma_{\Omega_{c}^{*}(3327)}^{T,3/2^{-}}}{dk_{0}}=0,\qquad J=\frac{3}{2}. \tag{14}\] With the above preparation, we can obtain the general expressions of the amplitudes corresponding to the Feynman diagrams in Fig. 
1 \[\mathcal{M}_{u}^{1/2^{-}} =\mu(p_{2})[\frac{1}{\sqrt{2}}g_{\Omega_{c}^{(*)}D^{*}\Xi^{0}}g_{ \Xi^{-}\mathcal{K}\cdot\Sigma^{0}}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] \[\times\Phi[(p\omega_{D^{*}}-q\omega_{\Xi^{-}})^{2}]\gamma^{5} \frac{i(\not{k}_{1}+m_{\Sigma^{0}})}{k_{1}^{2}-m_{\Sigma^{0}}^{2}}\gamma^{5}\] \[\times\frac{i(\not{p}+m_{\Xi^{-}})}{p^{2}-m_{\Xi^{-}}^{2}}\frac{i} {q^{2}-m_{D^{*}}^{2}}+\frac{1}{\sqrt{2}}g_{\Omega 11}g_{\Xi^{(*)}D^{*}\Lambda^{0}}\] \[\times g_{\Xi^{-}\mathcal{K}\cdot\Lambda^{0}}\int\frac{d^{ \[\mathcal{M}_{c}^{1/2^{-}} = i\frac{1}{2}g_{\Omega 12}g_{D^{\pm}_{c}\cdot\,K^{-}}g_{\Xi\Xi^{0}_{c} \cdot\,D^{-}_{c}}\cdot\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (17) \[\times\Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{0}})^{2}]\mu(p_{2}) \gamma^{\nu}\frac{i(k_{1}+m_{\Sigma^{0}})}{k_{1}^{2}-m_{D^{-}}^{2}}\] \[\times(iq^{\mu}-ip_{1}^{\mu})\frac{i}{q^{2}-m_{D^{0}}^{2}}\mu(k_{ 0})\frac{i(\not{p}+m_{\Xi^{0}})}{p^{2}-m_{\Xi^{0}}^{2}},\] \[\mathcal{M}_{d}^{1/2^{-}} = \mu(p_{2})[\frac{1}{\sqrt{2}}g_{\Omega 21}g_{\Xi^{0}_{c}\cdot \,D^{-}\geq 0}g_{\Xi^{-}\cdot\,K^{-}\geq 0}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (18) \[\times \Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{0}})^{2}]\gamma^{\mu}\frac{ i(\not{k}_{1}+m_{\Sigma^{0}})}{k_{1}^{2}-m_{\Sigma^{0}}^{2}}\gamma^{5}\] \[\times \frac{i(\not{p}+m_{\Xi^{-}})}{p^{2}-m_{\Xi^{-}}^{2}}\gamma^{5} \frac{i(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{D^{\pm}}^{2}})}{q^{2 }-m_{D^{+}}^{2}}\] \[+ \frac{1}{\sqrt{2}}g_{\Omega 21}g_{\Xi^{0}_{c}\cdot\,D^{-}\pm 0 }g_{\Xi^{-}\cdot\,K^{-}\geq 0}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] \[\times \Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{0}})^{2}]\gamma^{\mu}\frac{ i(\not{k}_{1}+m_{\Lambda^{0}})}{k_{1}^{2}-m_{\Lambda^{0}}^{2}}\gamma^{5}\] \[\times \frac{i(\not{p}+m_{\Xi^{-}})}{p^{2}-m_{\Xi^{-}}^{2}}\gamma^{5} \frac{i(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{D^{\pm}}^{2}})}{q^{2 }-m_{D^{+}}^{2}}\] \[\mathcal{M}_{d}^{3/2^{-}} = \mu(p_{2})[-i\frac{1}{\sqrt{2}}g_{\Omega 31}g_{\Xi^{0}_{c}\cdot \,D^{-}\pm 0}g_{\Xi^{-}\cdot\,K^{-}\geq 0}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (19) \[\times \Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{-}})^{2}]\gamma^{\mu}\frac{ i(\not{k}_{1}+m_{\Lambda^{0}})}{k_{1}^{2}-m_{\Lambda^{0}}^{2}}\gamma^{5}\] \[\times \frac{i(\not{p}+m_{\Xi^{-}})}{p^{2}-m_{\Xi^{-}}^{2}}g^{\mu \lambda}\frac{i(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{D^{\pm}}^{2}}) }{q^{2}-m_{D^{+}}^{2}}\] \[- i\frac{1}{\sqrt{2}}g_{\Omega 31}g_{\Xi^{0}_{c}\cdot\,D^{-} \pm\Lambda^{0}}g_{\Xi^{-}\cdot\,K^{-}\geq 0}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] \[\times \Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{-}})^{2}]\gamma^{\mu}\frac{ i(\not{k}_{1}+m_{\Sigma^{0}})}{k_{1}^{2}-m_{\Lambda^{0}}^{2}}\gamma^{5}\] \[\times \frac{i(\not{p}+m_{\Xi^{0}})}{p^{2}-m_{\Xi^{0}}^{2}}g^{\mu \lambda}\frac{i(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{D^{\pm}}^{2}}) }{q^{2}-m_{D^{+}}^{2}}\] \[\mathcal{M}_{c}^{1/2^{-}} = \frac{1}{\sqrt{2}}g_{\Omega 22}g_{\Xi^{0}_{c}\cdot\,D^{0}\geq 0}g_{ \Xi^{0}\cdot\,K^{-}\geq 0}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (20) \[\times \Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{0}})^{2}]\mu(p_{2})\gamma^{ \mu}\frac{i(\not{k}_{1}+m_{\Sigma^{0}})}{k_{1}^{2}-m_{\Sigma^{0}}^{2}}\gamma^ {5}\] \[\times \frac{i(\not{p}+m_{\Xi^{0}})}{p^{2}-m_{\Xi^{0}}^{2}}\mu(k_{0}) \gamma^{\nu}\gamma^{5}\frac{i(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{ D^{0}}^{2}})}{q^{2}-m_{D^{+}}^{2}},\] \[\mathcal{M}_{c}^{3/2^{-}} = -\frac{i}{\sqrt{2}}g_{\Omega 22}g_{\Xi^{0}_{c}\cdot\,D^{0}\geq 0}g_{ \Xi^{0}\cdot\,K^{-}\geq 0}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (21) \[\times 
\Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{0}})^{2}]\mu(p_{2})\gamma^{\mu}\frac{i(\not{k}_{1}+m_{\Sigma^{0}})}{k_{1}^{2}-m_{\Sigma^{0}}^{2}}\gamma^{5}\] \[\times\frac{i(\not{p}+m_{\Xi^{0}})}{p^{2}-m_{\Xi^{0}}^{2}}\mu(k_{0})\frac{i(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{D^{0}}^{2}})}{q^{2}-m_{D^{+}}^{2}},\] \[\mathcal{M}_{f}^{1/2^{-}}=\frac{i}{2}g_{\Omega 31}g_{D_{s}^{\pm}K^{-}}g_{\Xi\Xi_{c}^{0}D_{s}^{-}}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (22) \[\times\Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{0}})^{2}]\mu(p_{2})\gamma^{5}\frac{1}{k_{1}^{2}-m_{D_{s}^{-}}^{2}}\] \[\times\gamma^{5}\frac{(\not{p}+m_{\Xi^{0}})}{p^{2}-m_{\Xi^{0}}^{2}},\] \[\mathcal{M}_{f}^{3/2^{-}}=\frac{1}{2}g_{\Omega 32}g_{D_{s}^{\pm}K^{-}}g_{\Xi\Xi_{c}^{0}D_{s}^{-}}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (23) \[\times(p_{1}^{\mu}-k_{1}^{\mu})\frac{(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{D^{0}}^{2}})}{q^{2}-m_{D^{+}}^{2}}\mu(k_{0})\gamma^{\nu}\] \[\times\gamma^{5}\frac{(\not{p}+m_{\Xi^{0}})}{p^{2}-m_{\Xi^{0}}^{2}},\] \[\mathcal{M}_{f}^{3/2^{-}}=\frac{1}{2}g_{\Omega 32}g_{D_{s}^{\pm}K^{-}}g_{\Xi\Xi_{c}^{0}D_{s}^{-}}\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\] (24) \[\times\Phi[(p\omega_{D^{\pm}}-q\omega_{\Xi^{0}})^{2}]\mu(p_{2})\gamma^{5}\frac{1}{k_{1}^{2}-m_{D_{s}^{-}}^{2}}\] \[\times(p_{1}^{\mu}-k_{1}^{\mu})\frac{(-g^{\mu\nu}+\frac{q^{\prime}\vec{q}^{\prime}}{m_{D^{0}}^{2}})}{q^{2}-m_{D^{+}}^{2}}\mu^{\nu}(k_{0})\frac{(\not{p}+m_{\Xi^{0}})}{p^{2}-m_{\Xi^{0}}^{2}}.\] ## III Results and discussions Considering the \(\Lambda\) values adopted in this work, we first discuss the results for the calculated coupling constants. Substituting Eq. (12) into Eq. (14), the dependence of the coupling constants on the parameter \(\Lambda\) is obtained. Performing the integrations over the parameters \(\alpha\) and \(\beta\) from 0 to infinity, the numerical results are presented in Fig. 2, which shows the variation of the coupling constants with respect to \(\Lambda\) in the range of 0.90 to 1.10 GeV. From the results, we find that the values of the coupling constants decrease with increasing \(\Lambda\) and that they are not very sensitive to \(\Lambda\). It is important to note that the coupling constant \(g_{\Omega_{c}^{*}(3327)\Xi D^{*}}\) decreases sharply at small \(\Lambda\) values. Due to the presence of ultraviolet divergences in the calculation, the average value of the molecular mass was used. Here, the divergences originate from the thresholds of the \(D^{0}\Xi^{0}\) and \(D^{*0}\Xi^{0}\) channels lying slightly below the masses of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\), respectively. After obtaining the coupling constants, we compute the partial decay widths for the transitions \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}^{(\prime)}\) and \(\Omega_{c}^{*}(3327)\to\bar{K}\Xi_{c}^{(\prime)}\), and plot the results in Fig. 3 as functions of the parameter \(\Lambda=0.9-1.1\) GeV. Note that only the partial decay widths in the channels \(\Omega_{c}^{*}(3185)\to K^{-}\Xi_{c}^{(\prime)+}\) and \(\Omega_{c}^{*}(3327)\to K^{-}\Xi_{c}^{(\prime)+}\) are estimated here; the other channels, \(\Omega_{c}^{*}(3185)\to\bar{K}^{0}\Xi_{c}^{(\prime)0}\) and \(\Omega_{c}^{*}(3327)\to\bar{K}^{0}\Xi_{c}^{(\prime)0}\), can be obtained by isospin symmetry. 
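Each partial width then follows from the standard two-body formula \(\Gamma=|\vec{p}\,|\,\overline{|\mathcal{M}|^{2}}/(8\pi m_{\Omega_{c}^{*}}^{2})\), with \(|\vec{p}\,|\) the final-state momentum. A minimal numerical sketch of this last step (our own illustration, not the authors' code; `msq_avg` stands in for the spin-averaged squared amplitude built from the loop expressions above, and the hadron masses are approximate PDG values):

```python
import numpy as np

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z)."""
    return x**2 + y**2 + z**2 - 2.0*(x*y + y*z + z*x)

def p_cm(M, m1, m2):
    """Momentum of the final-state particles for M -> m1 m2 (GeV)."""
    return np.sqrt(kallen(M**2, m1**2, m2**2)) / (2.0*M)

def gamma_two_body(M, m1, m2, msq_avg):
    """Two-body width: Gamma = |p| <|M|^2> / (8 pi M^2)."""
    return p_cm(M, m1, m2) * msq_avg / (8.0*np.pi*M**2)

# Phase-space comparison for Omega_c*(3185) (masses in GeV, approximate)
print(p_cm(3.185, 0.494, 2.468))  # Kbar Xi_c  channel: ~0.46 GeV
print(p_cm(3.185, 0.494, 2.578))  # Kbar Xi_c' channel: ~0.32 GeV
```

The smaller momentum available to the \(\bar{K}\Xi_{c}^{\prime}\) channel is the phase-space suppression invoked below when comparing the two transitions.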
The sum of these partial decay widths gives the total decay width of the \(\Omega_{c}^{*}(3185)\) or \(\Omega_{c}^{*}(3327)\), which is also shown in Fig. 3. From the results of Fig. 3(a), we find that the total decay width for the transitions \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}^{(\prime)}\) increases with increasing \(\Lambda\), except for a decrease when \(\Lambda\) varies from 0.940 to 0.945 GeV. Compared with the total decay width for the \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}^{(\prime)}\) reactions, the line shape of the total decay width of the \(\Omega_{c}^{*}(3327)\to\bar{K}\Xi_{c}^{(\prime)}\) reactions is very different. To see how different, we take the \(J^{P}=1/2^{-}\) case of the \(\Omega_{c}^{*}(3327)\) as an example: the obtained total decay width for the \(\Omega_{c}^{*}(3327)\to\bar{K}\Xi_{c}^{(\prime)}\) reaction first decreases and then begins to increase at \(\Lambda=0.935\) GeV. We now discuss whether the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) can be interpreted as \(D^{(*)}\Xi\) molecules. From Fig. 3, we observe that the total decay width for \(\Omega_{c}^{*}(3327)\to\bar{K}\Xi_{c}^{(\prime)}\) with \(J^{P}=1/2^{-}\) and with \(J^{P}=3/2^{-}\) is estimated to be about 104.39-167.54 MeV and 15.55-21.04 MeV, respectively, where the theoretical value 15.55-21.04 MeV is close to the experimental data \(20\pm 5^{+13}_{-1}\) MeV. This suggests that if the spin-parity of the \(\Omega_{c}^{*}(3327)\) is \(J^{P}=3/2^{-}\), its assignment as an \(S\)-wave pure \(D^{*}\Xi\) molecular state is supported. However, if the \(\Omega_{c}^{*}(3327)\) has a spin-parity of \(1/2^{-}\), the predicted total decay width is much larger than the experimental total width, which disfavors such a spin-parity assignment for the \(\Omega_{c}^{*}(3327)\) in the \(D^{*}\Xi\) molecular picture. Moreover, the spin-parity \(J^{P}=1/2^{-}\) \(D\Xi\) molecular assignment for the \(\Omega_{c}^{*}(3185)\) cannot be conclusively determined. This is because the obtained total decay width for the \(\Omega_{c}^{*}(3185)\) with the \(J^{P}=1/2^{-}\) \(D\Xi\) assignment is comparable with the experimental total width only in the ranges \(\Lambda=0.900-0.915\) and \(\Lambda=0.945-0.990\) GeV. It may be a meson-baryon molecule containing a large \(D\Xi\) component. Indeed, there exist several coupled-channel states composed of \(D\Xi\) and \(D^{*}\Xi\) components [20]. However, the authors of Ref. [20] claim that the interaction between the \(D^{(*)}\) meson and the \(\Xi\) baryon is not strong enough to form a pure bound state with the quantum numbers considered in the current work. A possible reason is that the potential kernels they obtained only consider light-meson exchanges and do not include the contact term that describes the short-distance interaction between the \(\Xi\) and the charmed meson. Fig. 3 also tells us that the decay width of the \(\Omega_{c}^{*}(3185)\) into \(\bar{K}\Xi_{c}\) is about 64.60-82.98 MeV, which almost fully accounts for the total width of the \(\Omega_{c}^{*}(3185)\). In other words, the transition \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}\), which is the experimental observation channel, provides the dominant contribution to the total decay width. 
Figure 2: Coupling constants for the newly observed \(\Omega_{c}^{*}\) with different spin-parity as a function of the parameter \(\Lambda\). However, the decay width of \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}^{\prime}\) is only up to several MeV, which accounts for 9.86%-10.70% of its total width. That means the transition \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}^{\prime}\) gives a minor contribution. A possible explanation for this is that the phase space for the transition \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}^{\prime}\) is smaller than that of the \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}\) reaction. The same conclusion can also be drawn for the \(\Omega_{c}^{*}(3327)\to\bar{K}\Xi_{c}^{(\prime)}\) reactions. It should be noted that the authors of Ref. [12] only compute the decay widths of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) into \(\bar{K}\Xi_{c}\), assigning the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) to the \(2S(3/2^{+})\) and \(1D(3/2^{+})\) states, respectively. The predicted decay widths reach 78.16 MeV and 25.60 MeV, respectively, which are both larger than the experimental central values. This suggests that if a newly observed \(\Omega_{c}^{*}\) is a conventional three-quark state, the transition \(\Omega_{c}^{*}\to\bar{K}\Xi_{c}^{\prime}\) is very small and can be ignored. Thus, we propose an experimental search in the \(\Omega_{c}^{*}\to\bar{K}\Xi_{c}^{\prime}\) reaction, which offers a nice channel to test the molecular nature of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\). ## IV Summary Inspired by the newly observed baryons \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\), the strong decay widths of the \(\Omega_{c}^{*}\to\bar{K}\Xi_{c}\) and \(\Omega_{c}^{*}\to\bar{K}\Xi_{c}^{\prime}\) transitions were studied in an effective Lagrangian approach. Our theoretical approach is based on the assumption that the newly observed \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) can be explained as \(S\)-wave \(D\Xi\) and \(D^{*}\Xi\) molecular states, respectively. With the \(D^{(*)}\Xi\) assignment, the partial decay widths of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) into the \(\bar{K}\Xi_{c}^{(\prime)}\) final states through hadronic loops are calculated. The decay processes are described by \(t\)-channel exchanges of \(\Lambda\) and \(\Sigma\) baryons and of \(D_{s}\) and \(D_{s}^{*}\) mesons. The current results support the \(\Omega_{c}^{*}(3327)\) with \(J^{P}=3/2^{-}\) as a pure \(D^{*}\Xi\) molecule, while the \(\Omega_{c}^{*}(3327)\) with \(J^{P}=1/2^{-}\) cannot be well reproduced in the molecular-state picture. In addition, the spin-parity \(J^{P}=1/2^{-}\) \(D\Xi\) molecular assignment for the \(\Omega_{c}^{*}(3185)\) cannot be conclusively determined; it may be a meson-baryon molecule with a large \(D\Xi\) component. We also find that the transition \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}\), which is the experimental observation channel, provides the dominant contribution to the total decay width, while the transition \(\Omega_{c}^{*}(3185)\to\bar{K}\Xi_{c}^{\prime}\) gives a minor contribution. Nevertheless, the decay channel \(\Omega_{c}^{*}\to\bar{K}\Xi_{c}^{\prime}\) can be well employed to test the molecular interpretations of the \(\Omega_{c}^{*}(3185)\) and \(\Omega_{c}^{*}(3327)\) by comparing our results with those in Ref. [12]. ###### Acknowledgements. Yin Huang acknowledges the YST Program of the APCTP and the support from the National Natural Science Foundation of China under Grant No. 12005177.
2309.09515
Sparse and Privacy-enhanced Representation for Human Pose Estimation
We propose a sparse and privacy-enhanced representation for Human Pose Estimation (HPE). Given a perspective camera, we use a proprietary motion vector sensor (MVS) to extract an edge image and a two-directional motion vector image at each time frame. Both edge and motion vector images are sparse and contain much less information (i.e., enhancing human privacy). We advocate that edge information is essential for HPE, and motion vectors complement edge information during fast movements. We propose a fusion network leveraging recent advances in sparse convolution used typically for 3D voxels to efficiently process our proposed sparse representation, which achieves about 13x speed-up and 96% reduction in FLOPs. We collect an in-house edge and motion vector dataset with 16 types of actions by 40 users using the proprietary MVS. Our method outperforms individual modalities using only edge or motion vector images. Finally, we validate the privacy-enhanced quality of our sparse representation through face recognition on CelebA (a large face dataset) and a user study on our in-house dataset.
Ting-Ying Lin, Lin-Yung Hsieh, Fu-En Wang, Wen-Shen Wuen, Min Sun
2023-09-18T06:53:41Z
http://arxiv.org/abs/2309.09515v1
# Sparse and Privacy-enhanced Representation for Human Pose Estimation ###### Abstract We propose a sparse and privacy-enhanced representation for Human Pose Estimation (HPE). Given a perspective camera, we use a proprietary motion vector sensor (MVS) to extract an edge image and a two-directional motion vector image at each time frame. Both edge and motion vector images are sparse and contain much less information (_i.e._, enhancing human privacy). We advocate that edge information is essential for HPE, and motion vectors complement edge information during fast movements. We propose a fusion network leveraging recent advances in sparse convolution used typically for 3D voxels to efficiently process our proposed sparse representation, which achieves about 13x speed-up and 96% reduction in FLOPs. We collect an in-house edge and motion vector dataset with 16 types of actions by 40 users using the proprietary MVS. Our method outperforms individual modalities using only edge or motion vector images. We also demonstrate the generalizability of our approach on the MMHPSD and HumanEva datasets. Finally, we validate the privacy-enhanced quality of our sparse representation through face recognition on CelebA (a large face dataset) and a user study on our in-house dataset. The code and dataset are available on the project page: [https://lyhsieh.github.io/sphp/](https://lyhsieh.github.io/sphp/). ## 1 Introduction With the advance of deep learning [1] for visual perception and the lower cost of connected camera systems, human society is at the beginning of the era of smart camera systems [3] that facilitate our daily life. However, there are two key challenges to be addressed. First, there is a dilemma between convenience and privacy: we want the systems (_e.g._, in our home) to recognize our behavior and assist us, but we also need to ensure they protect our privacy. Second, applying deep models to visual data in the cloud is costly and introduces delays. We aim to run these models efficiently on edge devices, which supports real-time responses and protects our privacy. Therefore, a sparse and privacy-enhanced representation for human pose estimation is desired. An event-based camera is a sparse representation approach that only records intensity changes at active pixels. Hence, privacy is enhanced, and recorded identities cannot be directly recognized as they can be with traditional cameras. Human pose estimation from event data has been studied in recent years [B, E]. However, networks trained with event data usually have significantly lower HPE performance because no events are recorded for stationary body parts. To achieve better HPE performance with a sparse representation, a new approach is necessary for privacy-enhanced smart cameras on edge devices. Inspired by the seminal edge (_i.e._, boundary) [D] and motion [D] literature before deep learning, we advocate that edge information is essential for HPE, and motion vectors complement edge information during fast movements. Besides, both edges and motion vectors are sparse representations that can be efficiently processed and enhance privacy. Hence, we propose to use edges to capture the contours of all body parts and to replace event data with two-directional motion vectors. Specifically, we use a proprietary motion vector sensor (MVS) to extract an edge image and a two-directional motion vector image at each time frame. Thus, our method differs from pure software solutions, whose inputs may already contain privacy-sensitive data. 
We propose a fusion network leveraging these complementary sparse representations (_i.e._, edge and motion vectors) to perform better than each representation. Moreover, we exploit the sparsity of our data to replace standard convolution with submanifold sparse convolutions [D] typically used for 3D voxels to speed up the inference time by 13 times. Unlike RGB images, our proposed edge and motion vector representation lacks labeled data. Therefore, we collect an in-house dataset called Sparse and Privacy-enhanced Dataset for Human Pose Estimation (**SPHP**), as shown in Figure 1. This dataset includes recordings of 40 individuals performing 16 exercise and fitness-related actions, such as stretching or jogging, captured from 4 distinct viewing angles. We also collect grayscale images synchronized with edge and motion vectors to speed up joint position annotation. First, we apply a pre-trained natural-image-based HPE network on the grayscale images to obtain high-quality "initial" joint labels. Next, we ask human annotators to check and correct the errors if needed. This approach is estimated to save about 2,000 hours of manual joint labeling effort. To demonstrate the applicability of our proposed framework, we begin by conducting extensive experiments on our **SPHP** dataset. Our fusion method surpasses the performance of individual modalities (_i.e._, edge or motion vectors) for human pose estimation, especially for fast-moving joints, with a maximum relative improvement of 11% over using only the edge modality. We also compare traditional and submanifold sparse convolutions to demonstrate the advantage of our sparse data. The results show that sparse convolution networks can reduce the number of FLOPs of HPE by 96% while maintaining an acceptable error rate. Figure 1: Examples from SPHP. Our motion vector sensor (MVS) can extract an edge image and a two-directional motion vector image (visualized as horizontal MV\({}_{X}\) and vertical MV\({}_{Y}\), respectively) at each time frame. We also provide annotated body joints for Human Poses (HP) and corresponding grayscale images. In addition, we achieve a 13x acceleration in inference time, which is particularly beneficial for real-time applications. Besides, we confirm the generalizability of our approach on additional HPE datasets, such as MMHPSD[] and HumanEva[]. The results on MMHPSD highlight that motion vectors capture more valuable information compared to traditional event cameras. To verify the privacy-enhanced attribute of our data, we use ArcFace[] to evaluate the cosine similarity between various modalities of faces on the large-scale face attributes dataset, CelebA[]. Using edge as input results in a 10.2% recall drop compared to RGB images, indicating that privacy is enhanced by converting RGB images to edge images. Moreover, we conduct a user study to check if humans can identify a specific individual from 10 leaked edge images, given a grayscale reference face. The significantly low accuracy of 18.8% demonstrates the limited ability of humans to recognize faces in edge modality. Our main contributions are summarized as follows: 1. We introduce the Sparse and Privacy-enhanced Dataset for Human Pose Estimation (**SPHP**), which consists of synchronized, complementary images of edge and motion vectors along with ground truth labels for 13 joints. 2. 
Our fusion method outperforms individual modalities (_i.e._, edge or motion vectors) for human pose estimation, particularly for fast-moving joints, with a maximum relative improvement of 11% on our dataset when compared to using only the edge modality. Additionally, we further demonstrate the generalizability on other HPE datasets. 3. The high sparsity of **SPHP** enables us to achieve a maximum 13x acceleration in inference time and decrease FLOPs by 96% after applying sparse convolution, significantly reducing computational costs compared to traditional convolutional neural networks. 4. We demonstrate the privacy-enhanced nature of our sparse representation through face recognition experiments. We utilize edge images as input and validate a recall drop of over 10.2% in comparison to RGB images on the CelebA dataset. Furthermore, our user study shows that humans have limited ability to recognize faces in the edge modality. ## 2 Related work ### Human Pose Estimation Human Pose Estimation (HPE) is a rapidly developing field of research in computer vision, aiming to predict the 2D/3D positions of joints from various types of input, including RGB, grayscale, or even depth images []. HPE has numerous potential applications, such as action recognition, health monitoring [], and physical education. Currently, CNN-based methods [] achieve state-of-the-art performance. Some focus on single-person pose estimation [], while others can perform HPE on multiple persons []. These methods are typically trained and evaluated on RGB-based image datasets such as MS-COCO [], MPII [], and CrowdPose []. Top-down and bottom-up approaches are two common strategies in HPE. A top-down approach is a two-stage method that performs human detection on each input image to find the region of interest before predicting joint positions. The accuracy of a top-down approach may therefore rely on the quality of the human detection process. Representative networks for the top-down paradigm include HRNet [], Mask-RCNN [], CFN [], CPN [], G-RMI [] and SimpleBaseline []. A bottom-up approach detects each joint position and assembles them into groups. Representative works are DeepCut [12], DeeperCut [13] and OpenPose [14]. Bottom-up approaches have shown greater efficiency in HPE for multiple humans compared to top-down approaches and can effectively handle complex poses involving self-occlusion. In this study, we adopt a bottom-up approach for our backbone networks. ### Sparse Convolution Convolutional Neural Networks (CNNs) have been proven effective for many visual recognition tasks, including human pose estimation. However, CNNs' high computational cost makes them challenging to use on resource-limited embedded systems. The cost increases significantly when higher-dimensional convolutions, such as 3D CNNs on 3D point clouds, are needed. Despite the challenge, several sparse convolution techniques have been proposed to leverage the sparsity property within the data. For instance, [13, 14] propose sparse convolutional layers for processing 3D data efficiently. [14] proposes submanifold sparse convolution, which leads to computational savings while retaining state-of-the-art performance. [15] designs a sparse data structure and realizes fast computation for 3D shape analysis. [15, 14] use sparse convolutional layers to maintain efficiency when processing point clouds. Other than 3D data, handwritten characters are an example of 2D sparse data demonstrating sparse convolution's effectiveness for character recognition [13]. 
Nevertheless, fewer studies [14] have been conducted to leverage sparse convolution for human pose estimation. ### Privacy Enhancing Using RGB images to recognize human behavior has many useful applications but raises privacy concerns. Hence, enhancing privacy using software- or hardware-level techniques has become an essential research direction. At the software level, early methods [13] involved algorithms such as encryption, image filtering, and object/people removal. Besides, [12] introduces inverse super-resolution, generating multiple low-resolution training videos (_e.g._, 16\(\times\)12) from high-resolution videos for human activity recognition. [14, 15] adopt adversarial learning to remove privacy-sensitive information from images while balancing the trade-off between privacy and task performance. At the hardware level, privacy-enhanced optics are designed to filter out private information while still retaining functionality such as human pose estimation and action recognition. [14] applies privacy-enhanced optics to block sensitive information from the incident light field before sensor measurements. [13, 14, 15] use optimal encoders to protect privacy and convolutional neural networks to extract features for specific tasks. In this work, we use a proprietary motion vector sensor (MVS) to extract an edge image and a two-directional motion vector image at the hardware level. We show that MVS significantly enhances privacy through our designed face recognition experiments. ## 3 Approach We describe the detailed process to obtain our sparse representation (Section 3.1). Then, we introduce a fusion model (Section 3.2), which benefits from both edges and motion vectors. To enhance the efficiency of our fusion model, we utilize sparse convolution (Section 3.3). ### Sparse Representation We use a proprietary Motion Vector Sensor (MVS) to extract an edge image and a two-directional motion vector image at each time frame. Our MVS provides a sparse and privacy-enhanced representation, as detailed below, compared to traditional RGB/grayscale cameras. **Edge Image.** MVS uses an efficient hardware implementation of edge detection, similar to Canny edge detection [B], to generate edge images. Each pixel in the edge image has a value within the range [0, 255]. A higher value indicates a stronger edge. **Motion Vector.** Inspired by the motion detection of the Drosophila visual system [E] and designed with patent-pending pixel-sensing technology, MVS detects horizontal and vertical motion vectors, denoted as MV\({}_{X}\) and MV\({}_{Y}\) in Figure 1, by analyzing changes in illumination. Each value falls within the range [-128, 128]. The magnitude and sign of a value represent the strength and direction of motion, respectively. Since MVS only produces non-zero values when detecting edges or motion, the resulting data can be very sparse. This greatly reduces resource usage during computation and storage. Additionally, the output data from MVS only captures changes in illumination over time and space, reducing the risk of privacy exposure (_i.e._, it is privacy-enhanced). ### Fusion Model Edge and motion vector information complement each other. While edges are sufficient for detecting clear and non-blurred body joints, incorporating motion vectors into our model can effectively address the challenges posed by fast movements and overlapping boundaries, which may confuse edge-based HPE models. 
Hence, we aim to combine the complementary information of edge and motion vectors while keeping our model compact and efficient. We directly concatenate an edge image and a two-directional motion vector image, proposing the early fusion model (referred to as FS) as illustrated in Figure 2. Our FS model can leverage various single-branch network architectures designed for compactness and efficiency. ### Sparse Convolution Sparse convolution methods, studied mainly on 3D point clouds [U] and 2D hand-written characters data [B], have shown on-par performance with dense methods while requiring substantially less computation. Our proposed edge and motion vector representation is 96.23% and 99.13% sparse on average, respectively. Hence, we leverage the submanifold sparse convolution network introduced in [B] to train our compact fusion model. This approach offers a powerful tool for processing sparse data, allowing us to effectively exploit the sparsity of our representation while capturing the necessary information for accurate predictions. We also achieve on-par performance with 96% fewer FLOPs compared to traditional convolution. ## 4 Our SPHP Dataset We use our proprietary MVS to collect an in-house edge and motion vector dataset with 16 types of actions performed by 40 users, as shown in Figure 1. **Data Format.** Motion Vector Sensor (MVS) captures three channels of 8-bit information, conveying edge, horizontal, and vertical motion vectors at each time frame. MVS has a resolution of 640\(\times\)480 and a field-of-view of 60 degrees. When collecting data, MVS records videos for 10 seconds per clip, with 30 frames per second. **Environment Setting.** We collect the data in a room with an area of 6\(\times\)5m\({}^{2}\). Two MVSs are located 3 meters away from the subject, with a viewing angle of 45 degrees centered on the subject's position as shown in Figure 3. This setup allows us to collect data from two different viewing angles simultaneously, which increases the data collection efficiency. ### Participants and Actions We collected data from 40 subjects, having a balanced distribution of 20 male and 20 female participants. The average ages of male and female participants are 32 years and 22 years, respectively. Besides, the average heights of male and female participants are 172 and 161 centimeters, respectively. Participants performed 16 fitness-related actions, which are listed in Table 1 and categorized into four classes based on the type of movement: C1 for upper-body movements, C2 for lower-body movements, C3 for slow whole-body movements, and C4 for fast whole-body movements. To diversify the viewing angles of our dataset, we apply a novel strategy to capture each action from multiple perspectives. Firstly, we place two cameras within an interval of 45 degrees. Then, we instruct the participants to face various directions (_i.e._, 0, 15, 30, and 45 degrees, respectively) while capturing their actions. In each direction, every participant will perform four actions, totaling 16 actions, as listed in Table 1. ### Efficient Annotation Our dataset is annotated with the positions of 13 joints: nose, left/right shoulder, left/right elbow, left/right hand, left/right hip, left/right knee and left/right foot. We use a well-trained grayscale HPE model to speed up the annotation process to mark the initial joint positions. \begin{table} \begin{tabular}{l l l l} \hline \hline C1 & C2 & C3 & C4 \\ \hline 1. Arm abduction & 5. Leg knee lift & 8. Squat & 12. Elbow-to-knee \\ 2. 
Arm bicep curl & 6. Leg abduction & 9. Walk in place & 13. Jump in place \\ 3. Wave hello & 7. Leg pulling & 10. Standing side bend & 14. Jumping jack \\ 4. Punch up forward & & 11. Roll wrists \& ankles & 15. Hop on one foot \\ & & & 16. Jog in place \\ \hline \hline \end{tabular} \end{table} Table 1: The 16 actions in our SPHP dataset are categorized into four classes: C1 for upper-body movements, C2 for lower-body movements, C3 for slow whole-body movements, and C4 for fast whole-body movements. Then, we manually confirm each initial position as acceptable or make fine-grained adjustments on each frame. All videos in our dataset are fully annotated. ### Dataset Comparison We compare our **SPHP** dataset with two event-based datasets, namely DHP19 [] and MMHPSD [], in Table 2. Our dataset outperforms these datasets in various aspects. Firstly, our multi-modality data includes two-directional motion vectors, providing more detailed and nuanced information about actions. Secondly, SPHP boasts the highest number of subjects and frames, and exhibits greater diversity of unique action types compared to DHP19, which has a large proportion of symmetric actions. Additionally, unlike DHP19 and MMHPSD, SPHP captures 10-second videos for each action rather than recording a fixed number of repetitions, thus providing more comprehensive information. ## 5 Experimental results ### Human Pose Estimation To showcase the superiority of our sparse data in HPE, we conduct experiments on three datasets: **SPHP**, MMHPSD [] and HumanEva []. Within our SPHP dataset, we compare the performance across various input modalities, including edge, motion vectors, a fusion of edge and motion vectors, and grayscale images. Besides traditional convolution, we also employ submanifold sparse convolution to assess the computational efficiency gained from exploiting the sparsity of the input data. Additionally, we assess the generalization capability of our method on MMHPSD and HumanEva. Notably, within the MMHPSD dataset, we incorporate the provided event data for further comparison with our motion vectors. **Implementation.** We test 3 CNN backbones: the model proposed by DHP19 [] (218K parameters), U-Net-Small (1.9M), and U-Net-Large (7.7M). U-Net-Small and U-Net-Large are built based on the architecture proposed by [], incorporating three downsampling and upsampling operations. The output dimensions of the \(3\times 3\) convolutions in U-Net-Large are twice as large as those of U-Net-Small. The input frames are resized to 288 \(\times\) 384 for the SPHP dataset, whereas the MMHPSD dataset retains its original size of 256 \(\times\) 256. For each joint, the model outputs a heatmap that indicates the likelihood of the joint position at each pixel. To generate a target heatmap for a joint, we initialize an all-zero map of the same size as the input frame and set a value of 1 at the pixel corresponding to the annotated joint position. The heatmap is then smoothed using a Gaussian blur with \(\sigma=4\). Mean Squared Error (MSE) is employed as the loss function. **Evaluation Metrics.** Mean Per Joint Position Error (**MPJPE**) is chosen for evaluation. MPJPE \(=\frac{1}{N}\sum_{i}\|y_{i}-\hat{y_{i}}\|\) calculates the mean Euclidean distance between predicted positions \(\hat{y_{i}}\) and ground truth positions \(y_{i}\) over all joints, where \(N\) is the number of joints. \begin{table} \begin{tabular}{c|c c c c c} \hline \hline Dataset & Sub\# & Seq\# & Frame\# & Dir. & MM \\ \hline DHP19 [] & 17 & 33 & 87K & & \\ MMHPSD [] & 15 & 12 & 240K & & ✓ \\ SPHP (ours) & 40 & 16 & 384K & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison of our SPHP dataset with event-based datasets in terms of the number of subjects (Sub\#), action sequences (Seq\#) per subject, frames (Frame\#), directional motion (Dir.), and multi-modality (MM). **HPE Performance on SPHP.** The results in Table 3 show that our fusion (FS) model achieves the best results compared to the other sparse inputs (_i.e._, Edge (ED) and Motion Vector (MV)) for both traditional and sparse convolution networks. Besides, the gap between traditional and sparse convolutions is small for the U-Net-Small/U-Net-Large backbones, whereas the gap becomes more significant when using the smallest model (DHP19). In Table 4, we evaluate MPJPE with sparse convolution networks based on the speed of each joint, categorized into three levels: slow, medium, and fast. For a 640\(\times\)480 image, a joint that moves less than 4 pixels compared to the previous frame is considered slow; between 4 and 6 pixels is medium; more than 6 pixels is fast. The results show that the gap between FS and ED is largest for fast-moving joints. U-Net-Small shows the most noteworthy relative improvement of 11% (from 4.33 to 3.85), validating that motion vectors complement edges in detecting fast movements. See the supplementary materials for more performance analysis of backbones and joint speed. The qualitative results are shown in Figure 4. According to the results, using edge images alone can lead to motion blur and decreased accuracy; for instance, the joint predictions in Figure 4(a)(b) are incorrect due to motion blur. \begin{table} \begin{tabular}{c c c c c} \hline \hline Backbone & Input & Slow & Medium & Fast \\ \hline & MV & 36.66 & 9.66 & 9.65 \\ DHP19 [] & ED & 5.66 & 6.90 & 8.22 \\ & FS & 5.69 & **6.76** & **7.34** \\ \hline & MV & 31.07 & 5.33 & 4.77 \\ U-Net-Small & ED & 3.51 & 3.95 & 4.33 \\ & FS & **3.48** & **3.62** & **3.85** \\ \hline & MV & 29.84 & 5.10 & 4.57 \\ U-Net-Large & ED & 3.47 & 3.82 & 4.22 \\ & FS & **3.44** & **3.62** & **3.84** \\ \hline \hline \end{tabular} \end{table} Table 4: MPJPE at different speed levels. Each joint is classified into three levels. “ED”, “MV”, and “FS” stand for edge, motion vector, and fusion (edge and motion vectors). \begin{table} \begin{tabular}{c c c c c} \hline \hline Backbone & Params & Conv. & GFLOPs & FPS \\ \hline DHP19 [] & 218K & C & 275.45 & 26.89 \\ & & SC & 33.25 (\(\downarrow\)87\%) & 38.88 \\ \hline U-Net-Small & 1.9M & C & 46.74 & \\ & & SC & (\(\downarrow\)96\%) & 36.13 (3x) \\ \hline U-Net-Large & 7.7M & C & 186.80 & \\ & & SC & (\(\downarrow\)96\%) & 13.89 (13x) \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of GFLOPs and FPS (frames per second). All of the experiments are conducted with the fusion input (FS). 
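To make the evaluation protocol concrete, here is a short sketch of the target-heatmap construction, the MPJPE metric, and the speed binning described above (our own illustration, not the authors' released code; array shapes and names are assumptions):

```python
import numpy as np

def target_heatmap(h, w, joint_xy, sigma=4.0):
    """Target for one joint: a delta at the annotated pixel smoothed by a
    Gaussian blur (sigma = 4), written here directly in closed form."""
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0 = joint_xy
    return np.exp(-((xs - x0)**2 + (ys - y0)**2) / (2.0 * sigma**2))

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: mean Euclidean distance over N joints.
    pred, gt: (N, 2) arrays of 2D joint positions."""
    return np.linalg.norm(pred - gt, axis=1).mean()

def speed_level(prev_xy, curr_xy):
    """Per-frame joint displacement on a 640x480 frame: <4 px is slow,
    4-6 px is medium, >6 px is fast (thresholds as stated in the text)."""
    d = np.linalg.norm(np.asarray(curr_xy) - np.asarray(prev_xy))
    return "slow" if d < 4 else ("medium" if d <= 6 else "fast")
```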
Moreover, an unclear contour may also lead to inaccurate predictions, as shown in Figure 4(c)(d). Performing early fusion on edge and motion data enables the models to rectify such pose estimates. **Computational Efficiency.** As shown in Table 5, sparse convolution can achieve a noteworthy 96% reduction in FLOPs. We also evaluate frames per second (FPS) on an Intel Core i9-7940 3.1GHz CPU and observe 3x and 13x accelerations for the U-Nets. With sparse convolution, U-Net-Small strikes the best balance between accuracy (MPJPE) and speed (FPS). **Experiments on more complex datasets.** We evaluate our method on MMHPSD [5], which provides event frames. On MMHPSD, we generate MV data using an off-the-shelf approach. The experimental results are presented in Table 6. Notably, our "MV + edge" fusion approach exhibits superior performance compared to using the edge or MV modalities alone. This shows the generalizability of our method across various datasets. Furthermore, using "MV" as input demonstrates superior performance compared to using the "event" data provided by MMHPSD, both with and without edges. This serves as evidence that MV captures better information than traditional event cameras. Besides MMHPSD, we include the experimental results on the HumanEva [5] dataset in the technical report on our project page. **Cross-modality Face Recognition.** To verify the privacy-enhanced quality of our sparse representation, we convert CelebA's RGB images into grayscale and edge images for comparison. Note that facial motion is typically small or absent, so it is not included. We perform face recognition on each type of image using the ResNet-50 [] backbone with the ArcFace [] loss. Table 7 shows that face recognition performances are similar for RGB and grayscale images. However, when using edge input, the accuracy drops by 4.1% compared to RGB images, while the recall drops even more significantly, by 10.2%. This indicates that using the edge format can help reduce individuals' privacy exposure compared to the other two formats. **Human Ability for Cross-modality Face Matching.** We further test the cross-modality face-matching ability of humans on our **SPHP** dataset. To simulate people identifying different faces, we design a survey to analyze the ability of 100 participants to match leaked edge faces with grayscale faces from different angles and appearances. Each participant is asked ten questions. In each question, the participant should identify the person from ten choices of edge faces given a grayscale reference face. In order to make the test more complicated, faces in the choices are presented from various angles and differ from those in the questions. The average test accuracy is only 18.8%, whereas the accuracy increases to 87.3% if we change the choices to grayscale images. These results suggest that people have a restricted ability to recognize everyday faces from leaked edge images. ## 6 Conclusion We introduce the novel **SPHP** dataset, which contains sparse edge images and two-directional motion vectors. Our proposed fusion model exhibits enhanced performance in human pose estimation compared to individual modalities using only edges or motion vectors. By leveraging our sparse data with submanifold sparse convolutions, we further reduce the FLOPs by 96% and achieve a 13x speed-up compared to traditional convolutional neural networks. In addition, we showcase the generalization capability of our method through experiments on the MMHPSD and HumanEva datasets. 
Furthermore, to verify the privacy-enhanced nature of our sparse representation, we conduct a face recognition experiment using edge images as inputs, which results in a recall drop of 10.2% compared to the use of RGB images. Finally, we carry out a user study to demonstrate the limited ability of humans to recognize faces across leaked edge images and everyday grayscale images. ## 7 Acknowledgements This work is supported in part by the Ministry of Science and Technology of Taiwan (NSTC 111-2634-F-002-022) and Novatek Microelectronics Corp. We thank the National Center for High-performance Computing (NCHC) for computational and storage resources. Besides, we appreciate Novatek's MVS technology in efficiently collecting edge and motion data for this project. \begin{table} \begin{tabular}{c|c c|c c} \hline \hline Input & Acc. & Drop & Recall & Drop \\ \hline RGB & 88.9 & - & 84.1 & - \\ Grayscale & 88.7 & 0.2 & 84.2 & -0.1 \\ Edge & **84.8** & 4.1 & **73.9** & 10.2 \\ \hline \hline \end{tabular} \end{table} Table 7: Face recognition performance for various input types on CelebA []. We train the ResNet-50 backbone with the ArcFace [] loss to perform face recognition. The recall of the edge input drops by 10.2%.
2303.17980
Interstellar Objects and Exocomets
In this chapter we review our knowledge of our galaxy's cometary population outside our Oort Cloud - exocomets and Interstellar Objects (ISOs). We start with a brief overview of planetary system formation, viewed as a general process around stars. We then take a more detailed look at the creation and structure of exocometary belts, as revealed by the unprecedented combination of theoretical and observational advances in recent years. The existence and characteristics of individual exocomets orbiting other stars is summarized, before looking at the mechanisms by which they may be ejected into interstellar space. The discovery of the first two ISOs is then described, along with the surprising differences in their observed characteristics. We end by looking ahead to what advances may take place in the next decade.
Alan Fitzsimmons, Karem Meech, Luca Matrà, Susanne Pfalzner
2023-03-31T11:32:52Z
http://arxiv.org/abs/2303.17980v1
# Interstellar Objects and Exocomets ###### Abstract In this chapter we review our knowledge of our galaxy's cometary population outside our Oort Cloud - exocomets and Interstellar Objects (ISOs). We start with a brief overview of planetary system formation, viewed as a general process around stars. We then take a more detailed look at the creation and structure of exocometary belts, as revealed by the unprecedented combination of theoretical and observational advances in recent years. The existence and characteristics of individual exocomets orbiting other stars is summarized, before looking at the mechanisms by which they may be ejected into interstellar space. The discovery of the first two ISOs is then described, along with the surprising differences in their observed characteristics. We end by looking ahead to what advances may take place in the next decade. ## 1 Introduction At the time of publication of _Comets II_ (_Festou et al._ 2004), the presence of cometary bodies around other stars had been well established. Since that time there has been a significant increase in theoretical investigations of cometary formation in protoplanetary disks (see §2.3 and _Simon et al., this volume_). Resolved imaging and spectroscopy of circumstellar disks has become almost commonplace, giving exquisite insights into the birthplaces of comets (see §2). Increasingly common detections of gas in extrasolar Kuiper belts have opened a new approach to studying exocometary gas and composition, combined with a new era of dust detection at IR and sub-mm wavelengths. At the same time, observational studies have resulted in the discovery of new exocomet host systems, the detection of dust continuum transits for the first time, and further understanding of extant systems like \(\beta\) Pictoris (hereafter \(\beta\) Pic; see Fig. 1 and §3.2). White dwarf atmospheric modelling has revealed a variety of compositions of extrasolar small bodies, including potentially volatile-rich exocomets. Interstellar Objects (ISOs) are planetesimals unbound to stars, and generally thought to originate by ejection from their planetary systems by various physical mechanisms. These could be either asteroids, which are generally considered to be inert rocky bodies, or comets, which contain a large amount of ice that can lead to outgassing and mass loss, the most common method of telling the difference between them (however, see the chapter by Jewitt & Hsieh in this volume on the blurred boundaries between these objects). Assuming the same physics and dynamics occur in other planetary systems as in our own, the observable ISO population would primarily consist of cometary bodies of similar sizes to those orbiting our Sun, although see the discussion in §6.4. In contrast to exocomets, the existence of ISOs was merely hypothesised in _Comets II_, although their discovery was widely anticipated. Detection of ISOs passing through the Solar system would in principle allow remote (and eventually in-situ) sampling of bodies from other planetary systems. Figure 1: Artist's impression of the \(\beta\) Pic system, illustrating the extensive families of exocomets observed within this system (credit ESO/L. Calcada). The surety that such a discovery would eventually occur grew with increasingly refined models of planetary system evolution, and a growing understanding of how cometary bodies are lost to interstellar space and of potential evolutionary processes, outlined in §4. 
Increasing observational capabilities of wide-field surveys led to the first discovery of an ISO, 1I/'Oumuamua, in 2017 (described in §6). That said, the second ISO discovery, 2I/Borisov, was made by an individual effort with a smaller telescope (see §6.2). 2I/Borisov was clearly an active comet from first detection, matching the expectation that most ISOs would be exocometary in nature. But 1I/'Oumuamua was very different, with initial observations failing to detect any sign of activity, even though it was observed at a heliocentric distance of 1.2-2.8 au. Hence, it is already clear that ISO properties span a wider range than previously thought, to the extent that it is difficult to predict what will be found in the next decade. In this chapter we will discuss the formation of and the observational evidence for exocomets, the mechanisms that eject them into interstellar space to become ISOs, and the effects that this has on their physical properties, including processing in the interstellar medium. We then describe the discovery and characterization of the first two ISOs observed passing through the solar system and the next decade of ISO science. ## 2 Planetary System Evolution ### Star and primordial disk formation ISOs travel through the interstellar medium (ISM), which is the primary galactic repository from which stars, including their surrounding planetary systems, form. Since the discovery of 1I/'Oumuamua, it has become clear that the ISM contains, apart from gas and dust, a third component: interstellar objects. These are much smaller than the free-floating planets that have been detected via microlensing (_Mroz et al._, 2017) and directly imaged (_Miret-Roig et al._, 2022), but are much more numerous, by many orders of magnitude. Thus ISOs are present during the entire process described in the following text. The star and planet formation process starts in the ISM when molecular clouds form through turbulent compression and global instabilities. Stars begin to form when denser parts of the clouds become unstable and start to collapse gravitationally. Once a molecular cloud begins to collapse, its central density increases considerably, eventually leading to star formation, often with an entire star cluster emerging (_Bate et al._, 2003). A large fraction of the formed stars are not single but binaries. The binary fraction is 30-50% for Solar-type stars in the local neighbourhood (_Raghavan et al._, 2010) and up to \(\approx\)70% in young clusters (e.g. _Raghavan et al._, 2010; _Duchene and Kraus_, 2013). This fact might be important because some of the suggested ISO ejection mechanisms rely on binaries being present (see §4.1). In this phase, a nascent individual protostar grows in mass via accretion from the infalling envelope until the available gas reservoir is exhausted or stellar feedback effects become important and remove the parental cocoon (for details, see _Klessen and Glover_, 2016, and Bergin _et al._ in this volume). As a natural consequence of angular momentum conservation, a disk develops during the formation of the protostar. Any initial nudge that imparts some core rotation means that material collapsing from its outer regions (with higher angular momentum) is channelled onto a disk, rather than the protostar itself (_Terebey et al._, 1984). These young stellar objects (YSOs) consist of several components: the accreting protostar, the cloud (core) it is forming from, a disk surrounding it, and outflows of material that fails to be accreted onto either the star or the disk. 
Observationally, YSOs are classified based on their continuum dust spectrum, and thus the developmental stage of their disks. Class I objects are still deeply embedded within the cloud and mainly emit at mm and sub-mm wavelengths, leading to a characteristic positive IR spectral index. Class II objects reveal sources with NIR and MIR excess, and Class III objects show only slight NIR excess. This classification scheme mainly reflects the development of the disks surrounding the stars from disk formation up to disk dispersal after several Myr.

### Disk properties and their development

Flattened disks of cold dust and gas rotate around almost all low-mass stars shortly after their birth (_Williams and Cieza_, 2011). These are the sites where planetesimals form. ISOs are generally believed to be ejected planetesimals. Therefore, we review the planetesimal formation process in order to better understand the properties of ISOs.

Observations mainly at IR to millimeter wavelengths determine the frequency of disks and their mass, size, structure, and composition as a function of age. The ages of individual young stars can only be determined with a relatively large error margin. Therefore, the temporal development of disks is usually determined in young stellar clusters, where many relatively coeval stars are located in close proximity. The mean age of the cluster stars is taken to trace the temporal evolution of the disks in this environment.

Disk masses seem to decrease rapidly with system age (_Ansdell et al._, 2016). In the extremely early phases, the disk mass can be comparable to the mass of the protostar, but after just 1 Myr, it is often less than 0.1% of the star's mass. Some material is lost through outflows; some matter accretes onto the star, or the star's radiation photo-evaporates it. Significantly in this context, some material also condenses into centimeter- or larger-sized bodies, including planetesimals. It also seems that disk dust masses correlate with stellar mass (\(M_{dust}\propto M_{*}^{0.9}\)), but this correlation steepens with time, with a more substantial drop for low-mass stars (_Pascucci et al._, 2016; _Ansdell et al._, 2016).

Observations show that a surprisingly large fraction of disk dust masses appears low compared to the solid content of observed exoplanets (_Najita and Kenyon_, 2014). This has been interpreted as a sign of early planet formation (_Manara et al._, 2018; _Tychoniec et al._, 2020). However, recent work suggests that detection biases might play a role here and that disks contain similar amounts of solids as found in exoplanetary systems (_Mulders et al._, 2021).

Observations suggest a shallower density profile in the inner disk (\(\Sigma\propto r^{-1}\)) and a steeper one at larger distances (\(\Sigma\propto r^{-3}\)) (_Andrews_, 2020). However, the classical assumption of a smooth gas disk is not always appropriate. High-resolution observations with ALMA provide evidence for rings, gaps and spiral structures in at least some of the most massive disks. Pronounced gap structures are often interpreted as being carved by protoplanets, implying that planetesimal formation starts early in these environments. The current sample of very high-resolution measurements in the mm continuum or scattered light is biased in favor of larger, brighter disks that preferentially orbit more massive host stars (_Andrews et al._, 2018; _Garufi et al._, 2018). Therefore, any conclusions about the prevalence of substructure have to be carefully assessed.
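As a concrete reading of the quoted radial profiles, the minimal Python sketch below evaluates a broken power-law surface density, with \(\Sigma\propto r^{-1}\) inside a break radius and \(\Sigma\propto r^{-3}\) outside it; the normalization and break radius are illustrative assumptions, not values from the observations cited above.

```python
import numpy as np

def surface_density(r_au, sigma0=100.0, r_break=30.0):
    """Broken power-law disk surface density [arbitrary units].

    Sigma ~ r^-1 inside r_break and ~ r^-3 outside, continuous at the
    break. sigma0 and r_break (in au) are illustrative assumptions.
    """
    r = np.asarray(r_au, dtype=float)
    return np.where(r <= r_break,
                    sigma0 * (r / r_break) ** -1.0,
                    sigma0 * (r / r_break) ** -3.0)

for r in (1, 10, 30, 100, 300):
    print(f"r = {r:3d} au  ->  Sigma = {float(surface_density(r)):10.3f}")
```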
The typical disk sizes in dust measured by mm continuum observations lie in the range of 10 - 500 au. Gas disk sizes measured from CO line emission are often larger than the dust size of the same disk. The disk dust radius appears to be correlated with the disk dust mass (_Andrews_, 2020; _Sanchis et al._, 2021).

The disk lifetime is not only of prime importance for planet formation theory but equally for determining the timescale on which planetesimals form. The mean disk lifetime is mostly determined from the observed disk frequency of clusters of different mean ages. It is found that the disk fraction decreases rapidly with cluster age, with \(<\)10% of cluster stars retaining their disks for longer than 2-6 Myr (_Haisch et al._, 2001; _Mamajek_, 2009; _Richert et al._, 2018) in solar neighborhood clusters. However, recently several disks have been found with ages \(>\) 10 Myr, some as old as 40 Myr, that still have the potential of forming planets (_Flaherty et al._, 2019). In addition, the disk lifetime seems to be much shorter in dense clusters and longer in co-moving groups. The method of using disk fractions in clusters for determining disk lifetimes has been demonstrated to suffer from several biases (_Pfalzner et al._, 2022). Looking at the transition phase from Class II to Class III objects, _Michel et al._ (2021) found a higher typical disk lifetime of \(\approx\) 8 Myr. A more complex picture is emerging, where disk lifetimes can range from \(<\)1 Myr to a few tens of Myr and the mean disk lifetime lies in the range 1-10 Myr, but this is still not definitively determined.

### Dust and planetesimal growth

For planets that reside close to their star, core accretion is the standard formation scenario. Here planet formation takes place over a variety of size scales (see Fig. 2). Initially, the \(\mu\)m-sized dust particles largely follow the movement of the gas. However, Brownian motion means that collisions between the dust particles take place nevertheless. The low relative velocity of the dust particles means that they stick upon contact, so that first larger porous fractal aggregates form. These compact in subsequent collisions and settle toward the midplane (_Dubrulle et al._, 1995), where the growth sequence continues. Once particles reach a size where they start to decouple from the gas aerodynamically, these solids start to migrate radially inwards. Further collisions lead to the formation of boulders, and eventually, planetesimals emerge. The largest planetesimals accrete nearby smaller ones, grow further, and become terrestrial planets or the cores of gas giants. However, it is the planetesimals that are of particular interest in this chapter. Today's comets and interstellar objects are thought to be largely unchanged ancient planetesimals (for details, see, for example, _Blum and Wurm_, 2008; _Morbidelli and Raymond_, 2016; _Armitage_, 2018, and the chapter by Simon _et al._ in this volume).

This model has two potential problems: growth barriers during the accretion process and a relatively long formation timescale. The various growth barriers are the bouncing, fragmentation and drift barriers. The times expected for the different growth phases are \(10^{2}-10^{4}\) yr to form mm- to cm-sized pebbles, \(10^{4}-10^{6}\) yr until the planetesimal stage is reached, \(10^{6}-10^{7}\) yr to form terrestrial-type planets, and an additional \(10^{5}\) yr for the gas giants to accumulate their gas (_Pollack et al._, 1996).
The cumulative time seems at odds with observations of the disk frequency in young clusters, which indicate the median protoplanetary disk lifetime to be merely 1-3 Myr for both dust and gas (_Haisch et al._, 2001). However, over the last decade, mechanisms have been devised that might solve the timescale problem while at the same time overcoming the growth barriers (_Armitage_, 2018). These mechanisms are the streaming instability (_Youdin and Goodman_, 2005) and pebble accretion (_Lambrechts and Johansen_, 2012).

In the outer parts of the disks, planets also might form via a second process. In the initial phases, these disks are still very massive and can become gravitationally unstable. Simulations show that such disk instabilities lead to spiral arm formation, which can fragment directly to form large protoplanets (_Boss_, 1997). However, the material near the star stays too hot to go unstable, so this process is expected to generate planets typically only at large distances (\(\geq\) 100 au) from the star. Planetesimals would be produced as a by-product of planet formation. Details of the growth process, including relevant references, are described in the chapter on planetesimal formation by Simon _et al._ in this volume.

### Outer areas of young disks

In §2.2, we saw that the gas and dust disks quite often differ in size. The difference in size is a result of the grain growth in the disk. Initially, the pressure gradient of a disk exerts an additional force that causes gas to orbit at a slightly subkeplerian speed. As long as the dust is coupled to the gas, it just follows the gas movement. However, as dust grains grow to about mm sizes, the orbiting grains experience a frictional force and drift inwards. At the same time, the disk's viscosity means that the gas spreads out to conserve angular momentum and enable gas close in to accrete onto the star. Observations at (sub-)mm wavelengths typically trace the large dust grains; thus, disks appear more extended in gas than in dust.

The combined effects of growth and vertical and radial migration of dust particles mean that the disks' mean particle size should decrease with distance to the midplane and the central star. Alternatively, in terms of timescales, dust growth should proceed more slowly off the midplane and in the outer disk areas (_Dullemond and Dominik_, 2004; _Birnstiel and Andrews_, 2014). The question is: where do the planetesimals that are ejected and become ISOs primarily form?

In the outer disk areas, at particular distances from the central protostar, the temperature is so low that volatile compounds such as water, ammonia, methane, carbon dioxide, and carbon monoxide can condense into solid ice grains. These distances are referred to as the snowlines. These snowlines are essential in the context of dust growth. Ices can change the effective particle strengths and thereby affect collision outcomes. _Pinilla et al._ (2017) argue that the critical velocity for fragmentation increases at the (CO or) CO\({}_{2}\) and NH\({}_{3}\) snowlines and decreases at the H\({}_{2}\)O snowline.

However, direct imaging has observed exoplanets at large distances (\(>\) several tens of au) from their parent stars. Either these planets formed there or they migrated outwards to these orbits. Among these directly imaged planet hosts, there are several that are thought to be relatively young (\(<\) 20 Myr; for example, PDS 70 (_Mesa et al._, 2019)), which illustrates that even in the outer disks, planetesimals must be able to grow relatively fast.
If protoplanets can grow on such a timescale, even in the outer disk, planetesimals and the resulting ISOs should form on even shorter timescales.

## 3 Exocomets

Exocomets are small bodies orbiting other stars that exhibit signs of activity, through the release of dust and/or gas (_Strom et al._, 2020). As such, their definition is broader compared to the solar system population, which is much more refined dynamically and compositionally. Evidence for exocomets dates back to the 1980s, when exocomets were inferred from variable absorption features in the spectrum of \(\beta\) Pic (_Ferlet et al._, 1987), and when the dust from an exocometary belt was first discovered around Vega (_Aumann et al._, 1984). In general, exocometary release of gas and dust is detected in both the inner (few stellar radii to few au) and the outer (\(\sim\) 10 to 100s au) regions of extrasolar planetary systems (as depicted in Fig. 3), and we will distinguish between these two populations in the following sections.

### Exocometary belts: extrasolar cometary reservoirs

#### 3.1.1 Detection and basic observational properties

Exocometary belts (also known as _debris disks_) are extrasolar analogs of our Kuiper belt, and are the reservoirs of icy exocomets in planetary systems. These belts are first detected through the dust produced by exocomets as they collide and grind down, releasing small, observable dust grains. Thermal emission from, as well as starlight scattered by, this dust can be detected and used to study these belts.

The majority of detectable belts lie at distances of tens of au from their host star, and are cold, with typical temperatures of tens of K (_Chen et al._, 2014). They are therefore first detected by mid- to far-IR surveys, most sensitive to the peak of the dust's thermal emission, with space telescopes such as IRAS, _Spitzer_, WISE, and _Herschel_ (e.g. _Su et al._, 2006; _Sibthorpe et al._, 2018). These surveys show belts to be ubiquitous; they are detected around \(\sim\)20% of nearby, several Gyr-old field stars, even though the surveys' sensitivity at best enabled detection of belts a few times more massive than the Kuiper belt (_Eiroa et al._, 2013). This makes this number very much a lower limit; studies of younger, less evolved belts at a few tens of Myr indeed show occurrences as high as \(\sim\)75% (_Pawellek et al._, 2021).

The typical observable (i.e. not the total) mass of belts, only considering solids of sizes up to \(\sim\)cm in diameter, ranges between \(\sim\)0.1-25% of an Earth mass, or about a tenth to 20 times the mass of the Moon (_Holland et al._, 2017). This is but a small fraction of the belts' total mass, which is dominated by the largest, unobservable bodies. The total mass of the Kuiper belt is estimated from observations of its largest bodies, and is about 6% of an Earth mass (_Di Ruscio et al._, 2020).

Figure 2: Size scales during dust growth. Images from: (a) Chondritic interplanetary dust particle from _Jessberger et al._ (2001) (CC license Attribution 2.5); (b) dust aggregate simulation from _Simon et al._ (2022); (c) 4.5 Gyr old Allende meteorite with chondrules (CC Generic License 2.0); (d) boulders on comet 67P imaged by _Rosetta_ on 16 May 2016 from a distance of 8.9 km (ESA/Rosetta/NAVCAM - CC BY-SA IGO 3.0); (e) nucleus of 67P seen by _Rosetta_ on 7 July 2015 from a distance of 154 km with a resolution of 13.1 m/pixel (ESA/Rosetta/NAVCAM - CC BY-SA IGO 3.0).
The Kuiper belt's dust mass is however predicted to be much lower than that of exocometary belts, to the point that a true Kuiper belt analog would not be detectable around nearby stars (_Vitense et al._, 2012). In summary, detectable exocometary belts have dust masses typically \(\sim 10-10^{5}\) times higher than inferred for the present day Kuiper belt, where this broad range represents the spread of belt dust masses observed.

#### 3.1.2 Birth of exocometary belts

Exocometary belts are born at tens to hundreds of au from the central star within protoplanetary disks, but they cannot be identified until after the protoplanetary disk is dispersed. During dispersal, primordial gas and dust are removed, leaving behind second-generation dust (and gas), produced destructively in collisions of larger bodies. The dispersal of the protoplanetary disk takes place rapidly once the star is a few to \(\sim\)10 Myr old (_Ercolano and Pascucci_, 2017, and §2.2). The emergence of an exocometary belt from this dispersal is evidenced by the presence of an infrared excess, but with a decrease of \(\sim\)2 orders of magnitude in dust mass (_Holland et al._, 2017) compared to protoplanetary levels.

In practice, the largest exocomets within belts must form during the protoplanetary phase to access the gas needed for the planetesimal formation process (see the chapter by Simon _et al._, this volume), but observationally validating this is difficult. An interesting proposal is that exocometary belts may originate in (one or more of) the large rings observed in structured protoplanetary disks (_Michel et al._, 2021), a scenario which has been shown to potentially explain the population of observed exocometary belts (_Najita et al._, 2022). Another possibility is that the photoevaporative dispersal of the gas-rich protoplanetary disk induces high dust-to-gas ratios (_Throop and Bally_, 2005) that may trigger the streaming instability and produce massive belts of planetesimals/(exo)comets (_Carrera et al._, 2017). However, such massive, large belts are not observed, and other models of dust evolution in a photoevaporating disk indicate that the grains efficiently drift inwards faster than they can pile up to trigger planetesimal formation (_Sellek et al._, 2020).

Figure 3: Top-down view of a young, \(\sim 10-100\) Myr-old planetary system after dissipation of the protoplanetary disk. The system has largely formed outer gas and ice giant planet(s) (at few to tens of au, like the Neptune analog with its orbit depicted as the dashed line), has an exocometary belt at tens of au producing cold gas and dust, and star-grazing exocomets in the inner region, producing gas and potentially warm exozodiacal dust as they approach and recede from the star. Exocomets can be probed in a variety of ways. In the outer regions (blue/cyan symbols), cold exocomets (extrasolar KBO analogs) can be probed in the belts they formed in, through cold dust and gas emission, and cold absorption against the star for edge-on systems. In the inner regions (red symbols), exocomets are probed through warm exozodiacal dust emission, as well as blue- and red-shifted absorption (see also Fig. 5) and asymmetric 'shark-fin' transits for edge-on systems.
One difficulty in the protoplanetary disk - exocometary belt observational comparison is that most of the protoplanetary disks in nearby star-forming regions surround low-mass stars, whereas the majority of detectable exocometary belts orbit A-F stars, likely due to the current lack of sensitivity to belts around later-type stars, particularly M dwarfs (_Luppe et al._, 2020). In general, the presence of evolved Class III disks in young star-forming regions, with masses consistent with young exocometary belts, suggests that exocomets can form early on, within the first \(\sim\)2 Myr (_Lovell et al._, 2021).

A key component in the transition from a protoplanetary disk to an exocometary belt is the presence of gas. This produces drag, slowing collisions and favoring dust growth in protoplanetary disks; on the other hand, gas dispersal enables the more collisionally destructive, dust-producing environment of exocometary belts (_Wyatt et al._, 2015). Unfortunately, distinguishing primordial from second-generation gas is even harder, due to the surprisingly low CO abundances observed in many protoplanetary disks (_Krijt et al._, 2020), combined with the presence of gas now discovered in a growing number of exocometary belts (see §3.1.5), and the difficulty in measuring total gas masses in both evolutionary phases.

#### 3.1.3 Evolution of exocometary belts

The physics of observable (massive) exocometary belts is driven by collisions, causing the grind-down of large exocomets (analogs to the solar system's observable Kuiper belt objects) into small, observable grains. This produces a collisional cascade, which rapidly reaches a quasi-steady state size distribution where the number of solid particles scales as \(n(a)\propto a^{q}\), where \(q=-3.5\) for a cascade extending to infinitely large and small bodies (_Dohnanyi_, 1969). More detailed cascade simulations show that this \(q\) value can vary between -3 and -4 (_Gaspar et al._, 2012; _Pan and Schlichting_, 2012). Observations of mm-cm wave spectral slopes imply values varying between \(\sim-(2.9-3.8)\) for observable grains in the mm-cm size regime (_Norfolk et al._, 2021), although the interpretation of measurements in terms of size distributions is complex (_Lohne_, 2020). These slopes imply that what extrasolar observations are sensitive to (the solids' cross-sectional area) is dominated by small, up to \(\sim\)cm-sized grains, but the total mass is dominated by the largest, unobservable bodies. This is the opposite of observations of the Kuiper belt, where we are most sensitive to the largest bodies.

At the bottom end of the cascade, grains are sensitive to radiation forces, and are typically removed by radiation pressure from the central star (_Burns et al._, 1979), or by stellar winds, which are particularly important for young late-type stars (_Strubbe and Chiang_, 2006; _Augereau and Beust_, 2006). For a radiation pressure-dominated environment, the smallest, _blow-out_ grain size in the cascade is the one where the radiation pressure force dominates over gravity, pushing smaller grains onto unbound orbits and rapidly ejecting them from the system (_Backman and Paresce_, 1993). This removal of small grains causes a net mass loss of material from the cascade as a function of time.
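The statement that the observable cross-sectional area is dominated by the smallest grains while the mass is dominated by the largest bodies follows directly from the \(n(a)\propto a^{q}\) distribution. A minimal numerical check (only the Dohnanyi exponent comes from the text; the 1 \(\mu\)m to 10 km size limits are illustrative assumptions):

```python
import numpy as np

q = -3.5                              # Dohnanyi (1969) cascade exponent
a = np.logspace(-6, 4, 4000)          # body radii [m]: 1 micron to 10 km
da = np.gradient(a)                   # size-bin widths

n = a ** q                            # relative number per unit size
area_w = n * a ** 2 * da              # cross-section weighting ~ a^(q+2) da
mass_w = n * a ** 3 * da              # mass weighting          ~ a^(q+3) da

i_cm = np.searchsorted(a, 1e-2)       # index of the ~1 cm size bin
print(f"area in bodies < 1 cm: {area_w[:i_cm].sum() / area_w.sum():.4f}")
print(f"mass in bodies < 1 cm: {mass_w[:i_cm].sum() / mass_w.sum():.2e}")
```

For \(q=-3.5\), essentially all of the emitting area sits in the smallest grains while a negligible fraction of the mass does, which is why observed belts trace dust even though their total mass is set by unseen large bodies.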
In the simplest model of a collisional cascade, this would cause a belt's mass to initially stay constant - until the largest bodies begin colliding - and then decrease as \(t^{-1}\) (_Dominik and Decin_, 2003), although models with a more realistic treatment of cascade processes predict somewhat different time decays (_Lohne et al._, 2008; _Kenyon and Bromley_, 2017). Nevertheless, a decrease in dust emission as a function of time is unambiguously observed in surveys of debris disks of different ages, and can largely be explained by collisional evolution models (_Sibthorpe et al._, 2018). Additional mass removal effects could come into play that may speed up this mass loss, as hinted at by recent observations (_Pawellek et al._, 2021). Notably, faster mass removal could be caused by gravitational interaction between planets and exocometary belts, potentially ejecting small bodies from the planetary system and contributing to the population of interstellar objects (see §4). Such interaction could be akin to the late dynamical instability and/or to the outward migration of Neptune inferred to have depleted the vast majority of material in our solar system's Kuiper belt (_Malhotra_, 1993; _Gomes et al._, 2005).

#### 3.1.4 Resolved dust imaging: structure and (exo)planet-belt interaction

Resolved imaging is ultimately needed to study the location and structure of exocometary belts. Given the star's luminosity, the observed temperature from the dust's unresolved spectrum can be used to calculate its location/radius (distance from the star), under the assumption that the dust behaves as a blackbody. In reality, the small dust dominating the spectrum's emission is an inefficient emitter, and thus such temperature-derived _blackbody radii_ typically underestimate the true belt locations by a factor of a few (_Booth et al._, 2013).

Belts are detected around stars from as far as nearby star-forming regions (\(\sim 150\) pc) to our nearest neighbors (a few pc away), with typical angular sizes of a few tenths to a few tens of arcseconds on-sky, so easily resolvable with the latest generation of high resolution facilities from optical to mm wavelengths. Given their low masses and large on-sky extents, the difficulty in imaging exocometary belts arises often from their low surface brightness, requiring deep integrations. Nevertheless, several famous belts are bright enough for structural characterization at high resolution. The first resolved exocometary belt was that around \(\beta\) Pic (_Smith and Terrile_ 1984). Almost four decades later we now have a considerable inventory of images for a few tens of belts (e.g. Fig. 4). These have been obtained across the wavelength spectrum, from space and ground-based optical/near-IR scattered light images (milliarcsec resolution, e.g. _Apai et al._ 2015; _Wahhaj et al._ 2016; _Esposito et al._ 2020), to space-based IR (few arcsec resolution, e.g. _Booth et al._ 2013) and mm-wave ground-based interferometric images (sub-arcsecond resolution, e.g. _Marino et al._ 2018b; _Sepulveda et al._ 2019).

The most basic measurable quantity of a resolved belt is its location (radius). Exocometary belts resolved across the wavelength range show typical radii of a few tens to 100s of au, with a shallow trend indicating that more luminous stars tend to host larger belts (_Matra et al._ 2018a; _Marshall et al._ 2021).
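To make the blackbody-radius estimate discussed above explicit, here is a minimal sketch assuming the standard blackbody equilibrium relation \(T_{bb}=278.3\,{\rm K}\,(L_{*}/L_{\odot})^{1/4}(r/{\rm au})^{-1/2}\), as commonly used in the debris disk literature; the example star and dust temperature are arbitrary choices, not values from the text.

```python
def blackbody_radius_au(t_dust_k, lstar_lsun):
    """Radius [au] at which blackbody grains reach temperature t_dust_k
    around a star of luminosity lstar_lsun [Lsun], inverting
    T_bb = 278.3 K * (L/Lsun)^0.25 * (r/au)^-0.5."""
    return (278.3 / t_dust_k) ** 2 * lstar_lsun ** 0.5

# Illustrative 50 K dust around a 10 Lsun A-type star:
print(f"{blackbody_radius_au(50.0, 10.0):.0f} au")   # ~98 au
# Real belts typically lie a factor of a few further out than this
# blackbody radius, because small grains emit inefficiently and so are
# warmer than blackbodies at the same distance (Booth et al. 2013).
```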
While a combination of observational bias and radius-dependent collisional evolution can explain this positive trend, the low scatter suggests that belts (including our own Kuiper belt) may form at preferential locations in protoplanetary disks. If confirmed with a larger sample of belts, this would imply a potential connection to either temperature-dependent exocomet formation processes (e.g. the CO ice line), or to the timescale of planet formation processes preventing exocomets from growing further into planets at these distances.

While the classic picture of an exocometary belt is that of a narrow ring, recent imaging at IR and mm wavelengths shows that broad belts (with width/radius, \(\Delta R/R\), approaching \(\sim 1\)) are at least as common as narrow rings (\(\Delta R/R\ll 1\); see Fig. 4). Some of these belts may be born narrow, but others, like the famous HR 8799 belt, may have evolved to host a scattered disk of high eccentricity particles akin to our Kuiper belt's, through gravitational interaction with interior planets (_Geiler et al._ 2019).

Beyond a simple radius and width, exocometary belts observed at sufficiently high resolution show structural features which have long been linked to the gravitational action of planets (or dwarf planets) interior, exterior, or within these belts (e.g. _Pearce et al._ 2022). (Sub-)structures can be broadly separated into radial, vertical, and azimuthal features.

In the radial direction, a belt's inner/outer edge and its sharpness can be linked to dynamical truncation by planets just interior or exterior to it (_Chiang et al._ 2009). This is particularly true for inner edges with surface density profiles steeper than the \(r^{\alpha}\) with \(\alpha\sim 2-2.3\) expected from collisional evolution of an undisturbed disk (_Schuppler et al._ 2016; _Matra et al._ 2020). In the absence of truncating planets, the sharpness of belt edges may also be used to probe the eccentricity dispersion, and hence the level of dynamical excitation of exocomets within the belt (_Marino_ 2021; _Rafikov_ 2023).

Dynamical excitation of exocometary material within belts can also be probed by the belts' observed vertical structure in edge-on systems (_Matra et al._ 2019b; _Daley et al._ 2019). Like the Kuiper belt, this has the imprint of the system's dynamical history; for example, the vertical structure of the \(\beta\) Pic belt supports the presence of a double population of exocomet inclinations (_Matra et al._ 2019b), akin to the hot and cold populations of Kuiper belt objects (KBOs) in the solar system (_Brown_ 2001).

While the radial distribution of some wide exocometary belts appears to be smooth (_Matra et al._ 2020; _Faramaz et al._ 2021), at least three systems so far (e.g. _Marino et al._ 2018b) show the presence of rings separated by a gap, reminiscent of the rings observed in younger protoplanetary disks (_Andrews et al._ 2018). In the simplest scenario, a single stationary planet at the gap location (tens of au) can clear its chaotic zone and carve a gap of width and depth related to the planet's mass and the age of the system (_Morrison and Malhotra_ 2015).

Figure 4: Images of debris disks and exocometary belts. Fomalhaut credit: ALMA (ESO/NAOJ/NRAO), M. MacGregor; NASA/ESA Hubble, P. Kalas, B. Saxton (NRAO/AUI/NSF); other images are adapted from _Marino_ (2022).
However, these gaps may be too wide and require planets to migrate (_Friebe et al._ 2022), or may require different scenarios, such as secular effects due to an eccentric planet interior to both rings (_Pearce and Wyatt_ 2015).

A few narrow belts, most prominently the famous Fomalhaut (_Kalas et al._ 2005; _MacGregor et al._ 2017) and HD 202628 (_Krist et al._ 2012; _Faramaz et al._ 2019) rings, are observed to be significantly eccentric (\(e\sim 0.12\)). Secular perturbation by eccentric planets interior to the belts can produce and maintain eccentric belts (e.g. _Wyatt et al._ 1999), though the narrower-than-predicted width of these rings may point to alternative scenarios, such as belts being born eccentric in the protoplanetary phase of evolution (_Kennedy_ 2020).

Azimuthal asymmetries in the brightness of exocometary belts are often detected in scattered light observations, particularly with the Hubble Space Telescope (HST; _Schneider et al._ 2014), though these may be simply produced by a combination of disk eccentricity, grain scattering phase functions, and viewing geometry (_Lee and Chiang_ 2016), rather than true azimuthal asymmetries in their density distribution. Similarly, eccentric disks observed in thermal emission may produce brightness asymmetries due to pericenter and/or apocenter glow (e.g. _MacGregor et al._ 2022), which are expected to vary with wavelength (_Pan et al._ 2016; _Lynch and Lovell_ 2022). The strongest evidence for azimuthal asymmetry arises from the edge-on \(\beta\) Pic belt, which displays a strong clump in dust emission at mid-IR wavelengths (_Telesco et al._ 2005; _Han et al._ 2023) and in gas emission (_Dent et al._ 2014; _Cataldi et al._ 2018), but not as prominently in scattered light or ALMA dust observations (_Apai et al._ 2015; _Matra et al._ 2019b). Such clumps may be produced by resonant trapping by a migrating planet at the inner edge of the belt (_Wyatt_ 2003), akin to Neptune's migration into the Kuiper belt in the young solar system (_Malhotra_ 1993), but also by giant impacts between planet-sized bodies (_Jackson et al._ 2014).

With the clear dynamical link between exocometary belt structure and the presence of exoplanets interacting with them, it is natural to ask whether the very existence of a belt is linked to the presence (or not) of exoplanets in the same system. For example, if the brightest debris disks form from the most massive protoplanetary disks that formed exoplanets most efficiently, one may expect a positive correlation between the presence of a belt and planets (_Wyatt et al._ 2007). On the other hand, the presence of outer giant planets may efficiently eject and deplete exocometary belts (producing ISOs), while enabling terrestrial planet formation to take place interior to them without significant disruption; in this case, one would expect an anticorrelation with outer giant planets, and a correlation with low-mass planets (_Raymond et al._ 2011, 2012). Evidence of trends continues to be debated in the observational literature: a correlation between low-mass planets and belts (through their infrared excess) had initially been found (_Wyatt et al._ 2012; _Marshall et al._ 2014), but a study with an expanded sample found no correlations with either low- or high-mass planets (_Moro-Martin et al._ 2015). Similarly, no correlation was found between the presence of radial velocity planets and the properties of exocometary belts (_Yelverton et al._ 2020).
It is notable however that a significant number of long-period, young, directly imaged planets are found in systems with bright debris disks, with some evidence of a correlation (_Meshkat et al._ 2017).

In summary, the structure of exocometary belts at tens of au probes the orbits of the icy bodies within them, and, as is the case in our solar system's Kuiper belt, can reveal dynamical interactions with mostly unseen outer planets. As proven by dynamical simulations successfully reproducing these structures, these gravitational interactions can remove exocomets in two ways. They can either eject exocomets from outer belts, potentially producing Oort cloud analogs or directly turning them into ISOs (§4.1.3), or scatter them inwards, which could eventually produce the transiting exocomets observed close to the central stars (§3.2) and deliver volatile-rich material to planets in the habitable zones of their host stars.

#### 3.1.5 Evidence for gas within outer exocometary belts

Since their discovery almost 4 decades ago, exocometary belts have been considered to be gas free, to the point that this was long viewed as a defining difference between younger protoplanetary and more evolved debris disks (_Zuckerman et al._ 1995). However, evidence for circumstellar gas in these systems was discovered as early as 1975 around \(\beta\) Pic (_Slettebak_ 1975), in the form of narrow Ca II H and K absorption at the stellar velocity (i.e. not significantly red- or blue-shifted, as is instead treated in §3.2), which was later spatially resolved to extend out to 100s of au from the star, where the dusty belt lies (_Brandeker et al._ 2004). For a long time, evidence for cold gas in these systems remained limited to the \(\beta\) Pic and 49 Ceti belts. Their near edge-on geometry allowed detection of gas in both emission and absorption against the central star (_Zuckerman et al._ 1995; _Dent et al._ 2005; _Roberge et al._ 2000, 2014).

The evidence for exocometary gas has rapidly strengthened in the past decade. The sensitivity advance for cold gas emission brought by _Herschel_ and especially ALMA has led to detections of atomic (C I, O I and/or C II) and/or molecular (CO) emission within over 20 exocometary belts, largely around \(\sim\)10-40 Myr old stars (_Moor et al._ 2017), but also as old as 440 Myr (Fomalhaut, _Matra et al._ 2017b). Early ALMA CO gas detections highlighted a dichotomy between belts with very high CO masses, approaching those of protoplanetary disks (\(10^{-2}-10^{-1}\) M\({}_{\oplus}\), e.g. _Kospal et al._ 2013), and more tenuous CO gas disks (\(10^{-7}-10^{-5}\) M\({}_{\oplus}\), e.g. _Matra et al._ 2017a).

The origin of the gas remains an open question for high CO mass systems. In a _primordial_ scenario, the gas is a remnant of the protoplanetary disk (contrary to the dust, which is second-generation, leading to the term _hybrid_ disk); H\({}_{2}\) dominates the gas mass, allowing CO to remain shielded from stellar and interstellar photodissociation over the lifetime of the system (_Kospal et al._ 2013). In a _secondary_ scenario, CO gas is continuously produced by exocomets within the belt, just like the dust (_Zuckerman and Song_ 2012; _Matra et al._ 2015; _Marino et al._ 2016). However, the production rate is unlikely to be sufficient to maintain the high CO masses observed, so a shielding agent other than H\({}_{2}\) (which is only produced in trace amounts in solar system comets) is needed.
Shielding of CO photodissociation (which produces C and O) by C photoionization is a promising scenario to produce what would be second-generation, shielded, high-mass CO disks (_Kral et al._ 2019). For low CO mass belts, like Fomalhaut, HD 181327 and \(\beta\) Pic, the secondary scenario is heavily favored, as the CO photodissociation timescale, even when accounting for the potential of unseen H\({}_{2}\) with primordial CO/H\({}_{2}\) ratios, is much shorter than the age of the system (e.g. _Marino et al._ 2016). This simple argument implies that CO must have been recently (and likely continuously) produced, and must be of secondary, exocometary origin.

#### 3.1.6 Gas release processes, composition, and evolution

The process by which exocomets within belts release gas is likely different from that of solar system comets. In the very cold (\(\sim\)tens of K) environments at tens of au, the heating- and sublimation-driven process producing gas in single solar system comets and ISOs (as they approach the Sun) is unlikely to be efficient; while activity suggesting CO outgassing takes place out to Kuiper Belt distances in at least some long period comets (e.g. C/2017 K2 (Pan-STARRS), _Meech et al._ 2017a), no activity is detected from KBOs on stable orbits (_Jewitt et al._ 2008), including Arrokoth ((486958) 2014 MU69, _Lisse et al._ 2021).

Care should be taken in associating locations with temperatures in the Kuiper belt versus exocometary belts. As exocometary belts are most commonly observed around more luminous, A-F type stars, they will have warmer temperatures at the same radial location. This is however mitigated by the temperature dependence on stellar luminosity being shallow for blackbody-like grains heated by starlight (\(T\propto L^{0.25}\)), combined with the fact that belts tend to be located at larger radii around more luminous stars (§3.1.4).

One possibility is that we are observing a vigorous sublimation period for exocomets immediately following the dispersal of the protoplanetary disk (_Steckloff et al._ 2021; _Kral et al._ 2021). In this period, hypervolatiles like CO, N\({}_{2}\) and CH\({}_{4}\) sublimate at high rates due to the heating by newly visible starlight, previously shielded by large amounts of protoplanetary dust. In this scenario, Arrokoth-sized objects (\(\sim\)10s of km diameter) at \(\sim\)45 au from the central star take 10-100 Myr for the heat to reach their interior, which could explain the higher detection rate around young exocometary belts in that age range. However, in exocometary belts a large range of sizes would have to be considered, and it remains unclear whether the production rates could be sufficiently high to explain the observed CO masses.

Other physical mechanisms have been proposed to release gas in the collisionally active environment of young exocometary belts. While typical collision velocities are unlikely to result in sufficient heating, collisional vaporization of very small grains accelerated by radiation pressure could lead to ice release into the gas (_Czechowski and Mann_ 2007). Additionally, UV photodesorption could very effectively remove volatiles such as H\({}_{2}\)O from the surface of icy grains/objects (_Grigorieva et al._ 2007), given these are continuously produced/resurfaced through the collisional cascade.

Once released, in the absence of shielding, molecular gas is expected to be rapidly destroyed by the stellar and interstellar UV fields.
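The recent-production argument above can be made quantitative with a steady-state balance between exocometary release and photodissociation, \(M_{\rm CO}=\dot{M}_{\rm CO}\,t_{\rm ph}\). A minimal sketch, assuming the \(\sim\)120 yr unshielded CO photodissociation timescale for interstellar-strength UV commonly adopted in this context (e.g. _Matra et al._ 2015); the CO mass below is an illustrative value for a tenuous belt, not a measurement from the text:

```python
def co_production_rate(m_co_mearth, t_photo_yr=120.0):
    """Steady-state CO production rate [Earth masses / Myr] needed to
    sustain an observed CO gas mass against photodissociation:
    M_CO = Mdot * t_photo  =>  Mdot = M_CO / t_photo."""
    return m_co_mearth / (t_photo_yr / 1e6)   # t_photo converted to Myr

# Illustrative tenuous CO belt holding ~1e-5 Earth masses of CO gas:
print(f"{co_production_rate(1e-5):.2f} MEarth/Myr")   # ~0.08 MEarth/Myr
# Sustained over tens of Myr, this implies Earth masses of CO ice
# destroyed, pointing to ongoing exocometary release rather than a
# primordial remnant.
```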
CO (together with N\({}_{2}\)) is the most photo-resistant molecule, which may explain why it is the only one detected so far (_Matra et al._ 2018b). The CO ice mass fraction of exocomets can then be estimated from the ALMA mass measurements and steady state production/photodestruction arguments, indicating an exocometary CO content within an order of magnitude of solar system comets (_Matra et al._ 2017b, 2019a), showing that accessing exocometary compositions similar to those of solar system comets is now possible. Unfortunately, the short photodissociation timescales and/or weak transitions of other molecules make other molecular detections currently challenging (_Matra et al._ 2018b; _Klusmeyer et al._ 2021).

On the other hand, atomic gas is abundant and detected both in emission and absorption for edge-on belts. The detection of atomic N (_Wilson et al._ 2019) and S, as well as C and O (see _Roberge et al._ 2006, and references therein) in the \(\beta\) Pic belt suggests that molecules other than CO are being released, arguing against the gas release being limited to hypervolatile molecules. Atomic gas accumulating over time at the belt location is expected to produce a disk that viscously spreads over time, at a rate determined by the poorly-constrained gas viscosity, and eventually form an accretion disk (_Kral et al._ 2016). This model can largely explain the population of CO masses observed (_Marino et al._ 2020), including high CO mass disks through C shielding. However, resolved observations of C I gas by ALMA show inner holes inconsistent with accretion disk profiles, suggesting a more complex evolution (_Cataldi et al._ 2018, 2020).

#### 3.1.7 Detectability of exo-Oort clouds

Extrasolar Oort cloud analogs may also exist, and exocomets in those clouds would be heated by absorption of radiation from their central star, nearby stars and the Cosmic Microwave Background (CMB). The challenge in detecting extrasolar Oort clouds is their low surface density, low temperature and large sky areas, which is why long-wavelength surveys around nearby stars are best suited for detection. An additional factor is whether dust might be produced in Oort clouds, and if so how long it might survive. Searching around early-type stars is a key factor, as this increases stellar heating of the exo-Oort body surfaces and aids detection.

_Stern et al._ (1991) used _IRAS_ to search for excess far-IR emission around 17 nearby stars but did not achieve any detections. More recently, _Baxter et al._ (2018) used _Planck_ survey data to average the sub-mm emission around selected stars within 300 pc, and also performed a directed search around Fomalhaut and Vega, but again did not detect any signal. Intriguingly, when stacking 43 hot stars within 40-80 pc, they detected a statistically significant excess flux at radial distances of \(10^{4}-10^{5}\) au. However, they cautioned that this could be a false positive due to the fine structure of galactic dust on scales of the _Planck_ beam width. Alternatively, it could be a real signal caused by weak nebular emission or dust grains ejected from debris disks near the stars.

Future CMB missions such as LiteBIRD (_Hazumi et al._, 2020) may approach the sensitivity required for unambiguous detection. However, the \(0.5^{\circ}\) mission resolution would restrict it to searches around the nearest stars within 10 pc, for which \(10^{4}\) au spans \(\sim 0.3^{\circ}\).
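As a worked check of that angular scale (using only the small-angle relation and 1 pc = 206,265 au):

\[\theta=\frac{10^{4}\,{\rm au}}{10\,{\rm pc}\times 206{,}265\,{\rm au\,pc^{-1}}}\approx 4.8\times 10^{-3}\,{\rm rad}\approx 0.28^{\circ},\]

confirming that only the very nearest stars would host exo-Oort clouds resolvable at \(0.5^{\circ}\) resolution.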
Unresolved emission from more distant stars would be ambiguous, as it could also be caused by thermal emission from dust closer to the star (see §3.1.4).

### Exocomets in inner planetary systems

#### 3.2.1 The \(\beta\) Pictoris system

The star \(\beta\) Pic is a nearby (\(d=19.3\) pc) A6V star that was the first to have a circumstellar debris disk optically imaged (_Smith and Terrile_, 1984). Soon after, transient redshifted absorption features (see Fig. 5) in the UV spectrum were detected in the wings of the stellar Al III, Mg II and Fe II lines (_Lagrange-Henri et al._, 1988), and in numerous high-resolution optical spectra that include the strong Ca II H&K lines (_Ferlet et al._, 1987). These were interpreted as originating from planetesimals undergoing sublimation when within a few stellar radii of \(\beta\) Pic, with most features being redshifted from motion towards the star. They were subsequently termed 'Falling Evaporating Bodies', or FEBs. In modern parlance these are exocomets, but care must be taken. The observation of metal lines may imply sublimation of refractory elements, and hence not necessarily a volatile-rich body as is common in our solar system.

Subsequent studies of these exocomets have tended to use the strong Ca II H & K lines in high-resolution optical spectra, although Na I D-line absorption is also seen. Unfortunately, only metal lines in these exocomets have been measured, and the lack of detections of the volatile species seen in solar system comets has prevented a direct comparison. While studies of the gas in the outer circumstellar disk show it is likely to be released by exocomets in those regions, as discussed in §3.1.6, there is as yet no direct link to the exocomets seen close to the star.

Akin to solar system comets, these exocomets are clearly undergoing significant mass loss, which argues for a source region in the system. _Beust and Morbidelli_ (2000) and _Thebault and Beust_ (2001) had previously shown that a Jupiter-sized planet within 20 au could sustain such a population by strong orbital evolution via the 4:1 and/or 3:1 mean-motion resonances. This would place the source region at 4-5 au from \(\beta\) Pic, within the imaged debris disk. The discovery by direct imaging of the Jovian exoplanet \(\beta\) Pic b by _Lagrange et al._ (2009), orbiting at \(a=9.9\) au, gave credence to this hypothesis (see §3.1.4 above). The more recent discovery of the closer massive planet \(\beta\) Pic c at \(a=2.7\) au (_Lagrange et al._, 2019) will result in more complex dynamical evolution, but clearly enhances the possibility of scattering exocomets existing in the observed disk into star-approaching orbits.

Although exocomets typically transit the star over a couple of hours, _Kennedy_ (2018) was able to measure the radial acceleration of individual bodies, and constrain their orbital parameters via MCMC methods. A larger analysis was made by _Kiefer et al._ (2014) of 252 individual exocomets seen in \(\sim 6,000\) spectra obtained in 2003-2011. The velocities and absorption strengths display a bimodal distribution, implying at least two dynamical populations. Exocomets creating shallow absorption lines tend to have small total absorption and higher velocities, with periastron distances \(\simeq 0.08\) au (9 stellar radii) over a wide range of longitudes. The less numerous exocomets that produce deep absorption exhibit smaller radial velocities due to larger periastron distances of \(\simeq 0.16\) au (18 stellar radii).
A small range of longitudes of periastron implies these less active objects all share similar orbits, and may result from the breakup of a single progenitor, similar to the sungrazing Kreutz family (_Jones et al._ 2018 and references therein).

Figure 5: Spectra of the core of the Ca II K stellar line in \(\beta\) Pic at three separate epochs over 4 days. The stable central narrow absorption is due to gas in the circumstellar disk, marking the stellar radial velocity. Transient Ca II absorption features caused by individual exocomets can be seen both blue-shifted and red-shifted relative to the star (see §3.1, and Fig. 3). Data from the ESO 3.6-m+HARPS public archive (see acknowledgments).

Exocomets orbiting \(\beta\) Pic have now also been detected via broadband optical light curves using the Transiting Exoplanet Survey Satellite (TESS) mission (_Zieba et al._ 2019). The transit signals from individual exocomets show a 'shark fin' appearance caused by the asymmetric morphology of dust comae and tails, as predicted by _Lecavelier des Etangs et al._ (1999). Analysing all TESS light curves at that time, _Lecavelier des Etangs et al._ (2022) identified 30 individual transits. Assuming that each exocomet on average has the same fractional surface area sublimating, the differential size distribution has a power-law exponent of \(-3.6\pm 0.8\), similar to that seen in solar system comets.

#### 3.2.2 Other systems exhibiting exocomet signatures

As explored in Section 2, the decreasing mass of circumstellar material as a young star begins its main-sequence lifetime implies that exocomets may also be more abundant at early times. Indeed, transient spectroscopic absorption features similar to \(\beta\) Pic are seen around many Herbig Ae/Be stars (_Grady et al._ 1996). With \(\beta\) Pic as the archetype main-sequence host star, exocomets have been confirmed via similar spectroscopic or photometric transient features around 3 other stars: 49 Cet, HD 172555 and KIC 3542116 (_Strom et al._ 2020 and references therein). In addition there are likely exocomet systems such as c Aql (HD 183324), which exhibits Ca II variability on timescales of minutes (_Montgomery and Welsh_ 2017; _Iglesias et al._ 2018). In total, spectroscopic and photometric signatures indicating probable exocomet activity have been reported for another 30 or more stars (_Strom et al._ 2020).

The common factor in all of these stars is that they are early-type A-F stars with bright apparent magnitudes. The brightness allows high signal-to-noise photometry from space-based missions such as Kepler and TESS to reveal weak transit signals from stars with less stellar activity than later K/M-type stars, while B/A stars also exhibit relatively clean optical spectra with fewer stellar absorption lines, facilitating spectroscopic detection. In terms of total numbers, a directed spectroscopic survey of 117 B-G stars by _Rebollido et al._ (2020) found \(\sim 15\)% exhibited transient Ca II and Na I absorption. In summary, there is strong evidence that exocomets are commonly associated with early-type B/A stars. The evidence for later-type stars is less extensive, given that a search of 200,000 _Kepler_ light curves by _Kennedy et al._ (2019) only found 5 potential photometric transits occurring for 3 stars.

#### 3.2.3 Warm exozodiacal dust

Aside from the more distant cometary belts described in §3.1, approximately 20% of A-K stars exhibit thermal signatures of warm (\(T\sim 300\) K) dust within a few au (see _Kral et al._ 2018).
The combination of the Poynting-Robertson effect on large dust grains and radiation pressure on small dust grains implies very short lifetimes without continuous resupply. Potential sources of this dust are either asteroidal collisions or dust ejection from comets. This may especially be the case for systems with colder outer dust disks, where the source of the dust is expected to be ice-rich bodies. Given that the generation of the solar system's zodiacal cloud is dominated by short-period comets (_Rigley and Wyatt_ 2022), it is plausible that this may also be the case for some exosystems (_Marboeuf et al._ 2016). A survey of exozodiacal dust around 38 stars showed a significant correlation with the existence of cold dust disks, but no correlation with stellar age, as would be expected by analogy with the solar system (_Ertel et al._ 2020).

An exemplar star where this appears to be the case is the main-sequence F2 star \(\eta\) Corvi. This star possesses a cold outer belt centered at \(\sim 165\) au, plus a warm inner dust ring whose inner boundary is as close as 2-3 au (_Duchene et al._ 2014 and references therein). Near- and mid-IR spectra were modelled by _Lisse et al._ (2012) to reveal large quantities of amorphous carbon and water ice-rich dust in the inner ring, as well as silicates. They interpret this as the recent break-up of a large centaur-like icy body due to a collision with an even larger silicate body, resulting in most of the currently observed warm dust. Inwards perturbation of icy bodies from the outer cold belt by undetected planets, as simulated by _Bonsor et al._ (2014), is supported by the large gap between the inner and outer disks, plus the subsequent detection of CO at \(\sim 20\) au, which could be due to inwardly evolving exocomets undergoing CO sublimation (_Marino et al._ 2017).

### Late stages of stellar evolution

#### 3.3.1 Exocomets during the late stages of stellar evolution

Once a star evolves off the main-sequence, the large changes in luminosity in the Red Giant/Asymptotic Giant Branch (RG/AGB) stages, together with the mass-loss at the end of the AGB phase, will result in significant physical and dynamical changes to any orbiting cometary bodies (_Veras_ 2016). For example, during the Sun's RG phase the water sublimation boundary will move from \(\sim 3\) au to \(\sim 230\) au, leading to cometary activity throughout the Trans-Neptunian region (_Stern et al._ 1990; _Stern_ 2003).

_Melnick et al._ (2001) used the Submillimeter Wave Astronomy Satellite (_SWAS_) to discover a large amount of water vapor around the carbon-rich AGB star IRC+10216. This was initially interpreted as originating from ongoing sublimation of exocomets in that star's Kuiper Belt. The inferred mass of cometary bodies was \(\sim 10\,{\rm M}_{\oplus}\) (_Saavik Ford and Neufeld_ 2001). However, a later survey of eight AGB stars with _Herschel_ (_Neufeld et al._ 2011) showed a cometary source was unlikely, as the water line profiles matched those of the circumstellar CO, with widths as expected from the circumstellar expansion velocity. If the water originated from Kuiper belts, then at least some would be viewed near face-on, and the line widths of cometary-produced water would be much narrower. Instead, water formation appears to be either by shock-induced dissociation of CO, where the oxygen subsequently combines with hydrogen to produce H\({}_{2}\)O, or due to UV photodissociation of CO and SiO to again produce oxygen and water vapor (_Lombaert et al._, 2016).
Therefore, there currently exist no strong indicators of exocomets around stars at the RG/AGB stage. However, we know they are present due to observations of stars at the very end of their stellar evolution: white dwarfs.

#### 3.3.2 Evidence for small bodies around white dwarfs

There is now undisputed evidence that planetary systems can (in some form) survive post-main sequence evolution. The origin of heavy element absorption lines in White Dwarf (WD) photospheres is now accepted as being caused by pollution from infalling planetary material. The paradigm is that small bodies near the WD undergo tidal disruption to form a debris ring, where subsequent collisions create dust that is slowly removed via the Poynting-Robertson effect to "pollute" the stellar photosphere. Strong evidence for this is found in the association between WD infrared excesses due to circumstellar dust and atmospheric pollutants (see _Farihi_, 2016 and references therein). The relatively short timescales for this material to sink below the observed photosphere mean that this process is currently occurring where observed.

The simplicity of WD photospheres results in high-precision elemental abundances being derived via model atmosphere fitting. There is no doubt that almost all small bodies providing this material were volatile-poor (_Gansicke et al._, 2012; _Xu et al._, 2019). However, two WDs have been identified as being polluted by water-rich bodies (_Farihi et al._, 2013; _Raddi et al._, 2015). This dominance of silicate-rich small bodies may not reflect the original small-body population that survives into the WD phase. A theoretical study of water retention by _Malamud and Perets_ (2016) found that larger water-rich bodies should survive the RGB and AGB stages of stellar evolution. They propose that the observed deficit is due to water being lost prior to tidal disruption or during the disk formation stage. If the WD is in a wide binary system, then dynamical evolution of exocomets from a Kuiper belt via Kozai-Lidov resonances can result in WD pollution over a wide range of timescales of \(10^{8}-10^{10}\) yr (_Stephan et al._, 2017). Given that 25-50% of all WDs exhibit pollution by small bodies, exocomets may therefore be relatively common around white dwarfs.

## 4 Dynamical ejection and evolution of cometary bodies

The discovery of 1I/'Oumuamua and 2I/Borisov lends support to the hypothesis that over a star's lifetime, part of its remnant small body population becomes unbound (e.g., _Fernandez and Ip_, 1984; _Duncan et al._, 1987; _Wyatt_, 2008). Thus ISO production seems to be a natural consequence of the existence of reservoirs of small bodies in planetary systems. These small bodies can become unbound from their parent system in several ways (see Fig. 6). In the following, we describe the various ISO production processes in chronological order of their occurrence during a star's lifetime. We give order-of-magnitude estimates of the total mass of ISOs released by the individual processes. However, these values are highly sensitive to the assumed (debris) disk masses.

### Individual ejection mechanisms

#### 4.1.1 ISO formation during disk dispersal

In §2.2 we saw that disk masses decrease rapidly within the first 1-3 Myr. While initially this mass loss will be in the form of small dust particles, later on larger particles and eventually planetesimals will contribute to this mass loss. So far, the matter that becomes unbound during this early stage has received little attention.
The main challenge is the significant uncertainty concerning the timescale on which planetesimals form. Therefore, it is unclear how many planetesimals are present in disks at this stage. Very young disks have masses comparable to the mass of their host star (\(m_{d}\approx M_{s}\)). Considerable amounts of ISOs could, in principle, already be produced during this stage. The disk masses decrease to \(m_{d}\approx 0.01M_{s}\) within just a few tens of thousands of years; if any planetesimals are already present during that phase, they could be released due to the shallower gravitational potential caused by the gas loss.

In recent years, several old disks (\(>10\) Myr) have been discovered still containing considerable amounts of gas. These disks likely also contain a sizable planetesimal population. When they eventually disperse their gas, the potential energy change due to the gas loss might be enough that a fraction of the planetesimals becomes unbound from the outer disk regions.

Most proposed disk dispersal mechanisms are relatively gentle. Thus the ISOs would leave their parent star at relatively low velocities (\(\ll 0.5\) m/s). The only processes that would potentially lead to asteroidal ISOs ejected at higher velocity are those involving disk jets. However, as jets only exist in the very early phases of disk development (\(<\)1 Myr), it is unlikely that planetesimals had already formed.

#### 4.1.2 Cluster environment induced planetesimal release

Planetesimals can also be released during another star's close flyby (_Hands et al._, 2019). Here mostly planetesimals of the outer disk become unbound, so that the ISOs produced by this process are usually icy objects. Close flybys happen most frequently during the first 10 Myr of a star's life, when the star is still part of a cluster (e.g. _Adams et al._, 2006). However, debris disks can also be affected, as their longer lifetimes can sometimes outbalance the lower frequency of such events during later stages.

The frequency of close flybys varies enormously between different types of stellar groups. In long-lived compact open clusters, stellar flybys are much more common than in the short-lived, more extended clusters typical of the current solar neighborhood. Based on typical disk truncation radii in diverse cluster environments, _Pfalzner et al._ (2021a) find that clusters like the Orion nebula cluster are likely to generate the equivalent of 0.85 \(\,{\rm M}_{\oplus}\) of ISOs per star. In contrast, compact clusters like NGC 3603 can produce up to 50\(\,{\rm M}_{\oplus}\) of ISOs per star. Our solar system probably created the equivalent of 2-3 \(\,{\rm M}_{\oplus}\) of ISOs by this process. In clusters, ISOs leave their parent system typically with velocities in the range of 0.5-2 km/s.

External photo-evaporation and viscous spreading can, under certain circumstances, strongly influence the gas component of disks (_Adams_ 2010; _Concha-Ramirez et al._ 2019). However, the decoupling of the gas and planetesimals means that these two processes release only a small number of ISOs. The velocity of these few ISOs should be much smaller than those released due to close flybys.

#### 4.1.3 Early scattering by planets

During the phase when a planet is still accreting, planetesimals can receive gravitational kicks from planets. Such a perturbation can lead to two outcomes: scattering and collisions.
The ratio of ejection to collision rates (assuming constant density for the planet) is given by \(R_{e}/R_{c}=(M_{p}^{4/3}a^{2})/(M_{*}^{2})\), where \(M_{p}\) and \(M_{*}\) are the masses of the planet and the star, and \(a\) is the planet's semi-major axis (_Cuk_ 2018). At large distances, collisions become less likely, while scattering and, therefore, ejection becomes more efficient. The reason is that the Hill radius, which controls scattering, expands linearly with the orbital radius, whereas the physical size of the planet is fixed. A planet can eject a planetesimal if the Safronov number \(\Theta=\frac{1}{2}(v_{esc}/v_{orb})^{2}\gg 1\), where \(v_{esc}\) is the escape velocity from the planet's surface and \(v_{orb}\) the orbital velocity of the planetesimal (_Raymond et al._ 2018b). While the solar system's giant planets all have similar Safronov numbers, it is actually the structure of the system itself that makes Jupiter responsible for the vast majority of planetesimals that have been ejected from the solar system (_Dones et al._ 2015). Most investigations of this process assumed solar system-like environments (_Raymond et al._ 2020). The early solar system likely had a planetesimal disk with a mass of \(\approx 5\)-65 \(\,{\rm M}_{\oplus}\) (_Deienno et al._ 2017; _Liu et al._ 2022). The estimates of how many planetesimals have been ejected range from 0.1 \(\,{\rm M}_{\oplus}\) (_Raymond et al._ 2018b) to 20 \(\,{\rm M}_{\oplus}\) (_Trilling et al._ 2017) and even 40 \(\,{\rm M}_{\oplus}\) (_Do et al._ 2018) per star. The differences in estimated total masses are primarily due to the unknown size distribution of the ejected planetesimals. The ejected planetesimals from these regions would leave the systems with a typical velocity of 4-8 km/s (_Adams and Spergel_ 2005), significantly higher than for the above mechanisms. The amount of planetesimals ejected by scattering depends on the structure of the planetary system. Systems of densely packed giant planets that orbit close to a common plane are most efficient in ejecting planetesimals. In our solar system, the ejected planetesimals would have been primarily icy, because ejection is much more efficient outside the gas giants than closer in. Planetesimal ejection from the asteroid belt will be much less common. It is mainly caused by gravitational perturbations by Jupiter and Saturn and the mutual perturbations amongst the largest asteroids. It is currently an open question whether the early asteroid belt was high-mass and depleted (e.g. _Petit et al._ 2001) or whether it was born low-mass and later on populated by dynamical processes (_Raymond and Izidoro_ 2017). If it was initially of high mass, ejections have depleted the asteroid belt by a factor of \(\sim 100\) in number, the original asteroid belt being \(\approx 10\)-20 times more massive than today (_O'Brien and Greenberg_ 2003). In this case, the total mass of ejected asteroids would amount to \(\approx 0.004\)-\(0.008\)\(\,{\rm M}_{\oplus}\).

Figure 6: Schematic picture of various planetesimal ejection mechanisms during the different stages of stellar evolution. Credits left to right, top to bottom: star cluster NGC 346 (NASA/STScI), planet formation (ESO/L. Calçada), solar system (NASA), red giant R Sculptoris (ALMA/ESO/NAOJ/NRAO/M. Maercker et al.), white dwarf planetary evolution (ESO/M. Kornmesser).
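As a rough numerical check of the Safronov criterion discussed above, the sketch below evaluates \(\Theta=\frac{1}{2}(v_{esc}/v_{orb})^{2}\) for the solar system's giant planets. The planetary data are standard textbook values, and the variable names are our own; this is an illustration of the scaling, not a reproduction of any cited calculation.

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]

# (mass [kg], radius [m], semi-major axis [au]) -- standard values
planets = {
    "Jupiter": (1.898e27, 6.991e7, 5.20),
    "Saturn":  (5.683e26, 5.823e7, 9.58),
    "Uranus":  (8.681e25, 2.536e7, 19.2),
    "Neptune": (1.024e26, 2.462e7, 30.1),
}

for name, (m_p, r_p, a_au) in planets.items():
    v_esc = math.sqrt(2 * G * m_p / r_p)        # escape velocity from planet surface
    v_orb = math.sqrt(G * M_SUN / (a_au * AU))  # circular orbital velocity at a
    theta = 0.5 * (v_esc / v_orb) ** 2          # Safronov number
    print(f"{name:8s} v_esc = {v_esc/1e3:5.1f} km/s  "
          f"v_orb = {v_orb/1e3:5.1f} km/s  Theta = {theta:5.1f}")
```

All four planets give \(\Theta\approx 5\)-11, illustrating the statement above that the giant planets have similar Safronov numbers and are all, in principle, efficient ejectors; Jupiter's dominance comes from the system architecture rather than from \(\Theta\) alone.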
However, the system's giant planets eject ISOs not only while still accreting gas (_Raymond and Izidoro_ 2017; _Portegies Zwart et al._ 2018). In compact planet configurations, interactions between the planets themselves or between the planets and the planetesimal disk can trigger instabilities. The planets change positions during such a phase of planet-planet scattering. During such an event, the bulk of the planetesimal disk would be ejected (_Raymond et al._ 2010). The structure of today's Kuiper belt and the Oort cloud can be explained by this mechanism. Many exoplanetary systems are in compact configurations that might become unstable. Moreover, the solar system's giant planets were likely originally on more compact, resonant orbits (_Morbidelli et al._ 2007; _Pierens and Nelson_ 2008). Originally it was proposed that such an instability happened in the solar system well after planet formation finished (500-700 Myr); however, a consensus is emerging that the instability happened early, no later than 100 Myr after the formation of the solar system (_Zellner_ 2017; _Morbidelli et al._ 2018).

#### 4.1.4 Drifting from Oort clouds

Even though the solar system's Oort cloud formation still has many unknowns, we can also expect exo-Oort clouds to exist around other stars. Oort cloud bodies are icy exocomets and possibly asteroidal objects (_Weissman_ 1999; _Meech et al._ 2016) and are only weakly bound to their parent system. Being situated at a significant distance from their parent star, they are subject to external influences, like Galactic tides and stellar flybys. Consequently, some planetesimals will always drift gently away from the star's Oort cloud over the entire main-sequence lifetime of the star (_Moro-Martin_ 2019b; _Portegies Zwart_ 2021). Thus their escape velocity is, on average, very low, probably \(<\) 0.1 m/s. _Hanse et al._ (2018) estimate that the Sun will lose 25-65% of its Oort cloud's mass, mainly due to stellar encounters. Such former Oort cloud members will contribute to, but amount to \(<\) 10% of, the total ISO population (_Portegies Zwart_ 2021).

#### 4.1.5 White dwarf phase

Finally, there is an increased planetesimal ejection rate towards the end of a star's giant branch phase (_Veras_ 2016; _Veras et al._ 2020). During that phase, expansion of the stellar envelope can directly engulf closely orbiting planets and can tidally draw into the envelope planets which reside beyond its maximum extent. While the planetesimal disks are mainly unaffected by engulfment, stellar mass loss plays a role. During the post-main-sequence evolution, all objects, planets and planetesimals alike, move outward due to orbital expansion caused by mass loss (_Villaver et al._ 2014). In the solar system, planetesimals within \(\approx\)10\({}^{3}\) au would all double their semi-major axes (_Veras et al._ 2020) and would therefore be less strongly bound to their host star. The ISOs produced during the giant branch phase gently drift away from their parent star at very low relative velocities of 0.1-0.2 km/s (_Pfalzner et al._ 2021a). In addition, mass loss also destabilizes the planetary system, resulting in the gravitational scattering of planetesimals by massive perturbers. _Rafikov_ (2018) suggests tidal disruption events of (initially bound) planetary objects by white dwarfs as an additional ISO production mechanism. Some of these objects are scattered towards the white dwarf on almost radial orbits and get tidally shredded apart. Some of the fragments that are produced are ejected into interstellar space.
The highest-mass white dwarf progenitors would yield the greatest giant branch excitation and are most efficient in the ejection of planetesimals (_Veras et al._ 2020). This study points out that most stars in the Milky Way are less massive than the Sun, and concludes that the production of ISOs from within \(40-1000\) au of the evolved star is insignificant when compared to the number created during early stellar and planetary evolution.

#### 4.1.6 Binary scenarios

In §2.1, we noted that binary and multiple stars are very common in the Galaxy. Moreover, binaries are also more efficient scatterers, providing more substantial dynamical perturbations than even a gas giant planet (_Raymond and Izidoro_ 2017; _Cuk_ 2018; _Jackson et al._ 2018). All the processes discussed before for single stars could also take place in binary systems. So far, almost exclusively scattering processes have been considered in detail. Close binary stars have well-defined dynamical stability limits, and any objects entering within a critical orbital radius are destabilized. _Jackson et al._ (2018) assume that planetesimals form beyond this stability limit and drift inwards across the limit due to aerodynamic gas drag. Their simulations show that all planetesimals that drift inside the stability limit are ejected. Especially for close binaries, the snow line is often exterior to the stability limit, such that a considerable portion of their ejected planetesimals may be refractory. However, this mechanism is only efficient if a significant fraction of the solid disk mass drifts interior to the binary stability limit. For the aerodynamic forces to work efficiently, the planetesimals have to be relatively small (\(r<1\) km). Only then can they enter the dynamically unstable zone close to the binary, rather than piling up at the pressure maximum in the gaseous circumbinary disk. It has been proposed that 1I/'Oumuamua could be a fragment of a planet that formed in a binary system and was tidally disrupted after passing too close to one of the stars (_Cuk_ 2018). However, tidal disruption is a rare event. Nevertheless, tight binary systems can eject an amount of rocky material comparable to the predominantly icy material thrown out by single-star and wide-binary systems. It is estimated that the mass of ISOs ejected from binary systems is at least the same as that ejected by planet scattering (_Jackson et al._ 2018), and the ISOs leave their parent binary system with \(\approx\) 6.2 km/s (_Adams and Spergel_ 2005).

#### 4.1.7 Relative importance of the different ISO formation mechanisms

All mechanisms mentioned in this section produce ISOs. The question is which of them is/are the dominant ISO production mechanism(s). All the processes discussed above produce of the order of 0.1-30 \(\,{\rm M}_{\oplus}\) per star. However, given the strong dependence on the assumed mass of the planetesimal reservoir, these numbers fail to give a clear winner. Thus different criteria have to be employed. The various models make different predictions concerning the properties of the ISOs. As soon as a large enough sample of ISOs is known, these model predictions can be used to identify the dominant ISO production mechanism. Differences are mainly in terms of the composition and the velocity of the ISOs. Most mechanisms lead predominantly to icy ISOs that formed outside the snow line in their parent systems. However, 1I/'Oumuamua did not show the cometary coma morphology one would expect from an icy ISO.
Therefore, many mechanisms have been tested concerning their ability to produce refractory ISOs. In general, scattering processes tend to produce more asteroidal ISOs than stellar flybys, drifting from the Oort cloud, and the processes at the end of the main sequence. However, even scattering processes produce mainly icy ISOs. There are significant differences in the ejection velocities of the various processes (_Pfalzner et al._ 2021a). The slowest ISOs are those that drift away from the Oort cloud or are shed during the stellar post-main-sequence phase (\(\approx\) 0.1-0.2 km/s). Stellar flybys lead to ejection velocities in the range of 0.5-2 km/s; even higher velocities are achieved by planet scattering (\(\approx\) 4-8 km/s), only exceeded by processes where two or more stars kick out the planetesimals together. Most processes discussed have a connection between velocity and composition. Due to their closeness to the star, the refractory planetesimals require a higher velocity to become unbound than the icy planetesimals (unless, of course, the rocky planetesimals resided in the Oort cloud of the host star). As soon as a statistically significant sample of ISOs is discovered, the combined information of their observed velocities and compositions might help constrain the dominant production process. However, the observed ISO velocity is only indirectly linked to the ejection velocity, and other effects, such as dynamical heating for older ISOs, may make such studies difficult. The complete ISO velocity distribution contains multiple components reflecting various ejection speeds and the parent systems' different ages. In the future, disentangling the different components will be one of the major challenges.

### Resulting ISO population

Most of the above-described planetesimal ejection mechanisms produce considerably more icy ISOs than rocky ones. The ejection of volatile-free asteroids is possible, but they are thought to be a relatively small fraction of the planetesimals that are ejected. However, this impression might be partly caused by our heliocentric view. In the solar system, the giant planets all orbit beyond the "snow line". Therefore, the majority of bodies within \(<\)2.5 au on unstable orbits end up colliding with the Sun (_Gladman et al._ 1997; _Minton and Malhotra_ 2010). In the solar system there is indeed a much greater supply of icy planetesimals; however, this may not apply elsewhere (_Cuk_ 2018). If we look at the known exoplanet systems, many of them have relatively massive planets much further in than in our solar system. The ejection of rocky planetesimals would be more efficient where there are close-in massive planets around high-mass stars. However, even then it appears unlikely that the rocky planetesimals could dominate the galactic population of scattered ISOs. The reason is that the reservoir of volatile-rich planetesimals beyond the snow line is always much larger than that of the rocky planetesimals in the inner systems. Naturally, the size distribution of ISOs is connected to the size distributions of the reservoirs of planetesimals the ISOs originate from. Unfortunately, there is still a large uncertainty in the size distribution of planetesimals. Often a mass function of the form \(N_{ISOs}\propto m^{-p}\) is assumed.
This means that the number of ejected bodies with masses above \(m\), \(N(m)\), is then given by

\[N(m)=\frac{2-p}{p-1}\frac{M_{ISO}}{m^{p-1}m_{up}^{2-p}} \tag{1}\]

(_Adams and Spergel_ 2005), where \(M_{ISO}\) is the total mass of ISOs and \(m_{up}\) the largest mass possible. Sometimes \(p=5/3\) is chosen (_Adams and Spergel_ 2005). However, starting from a Dohnanyi distribution and from a distribution obtained by simulations of the streaming instability (\(p=1.6\)), _Raymond et al._ (2018b) argue that the underlying masses are uncertain by 2-4 orders of magnitude. Observational constraints on the small-body population of the solar system can also provide some constraints on the initial size distribution of ISOs. Beyond Neptune, the small-body population is thought to be the least collisionally evolved (_Abedin et al._ 2022). For these planetesimals a turnover in the size-number distribution is observed around a radius of 50-70 km (_Bernstein et al._ 2004; _Fraser et al._ 2014), which is likely of primordial origin (_Vitense et al._ 2012). Therefore, one can expect that the size distribution of the ISOs before expulsion was similar. The size distribution of the planetesimals before expulsion sets the stage for the ISO size distribution. However, the two distributions are not necessarily identical. First, there might be selection effects concerning the planetesimal mass (and therefore size) in some of the ISO production mechanisms. For example, in three-body scattering processes, ejection is more likely the lower the mass of the body. Second, the ejection processes themselves can alter ISOs, such that the properties of the ISOs differ from those of the planetesimals in the parent system. While ISOs that are relatively gently released from their parent system during the Oort cloud phase will remain unaltered, ISOs ejected in a more violent fashion might be disrupted during ejection (_Raymond et al._, 2020). For example, planetesimals gravitationally scattered in very close encounters with a giant planet can be subjected to tidal disruption (e.g. _Asphaug and Benz_, 1996). In particular, scattering events that include a close passage to the parent star can lead to the decline or loss of cometary activity (e.g. _Levison and Duncan_, 1997), and sublimation-driven activity may also flatten future ISOs' shapes (_Seligman and Laughlin_, 2020; _Zhao et al._, 2021). Third, ISOs might be affected to different degrees by their journey through interstellar space (see §4.4). By now, it should have become clear that planet formation and the production of ISOs are intrinsically linked processes. Wherever planets formed, a population of planetesimals was produced at the same time. A considerable portion of these planetesimals is sooner or later released from the parent star(s) and becomes ISOs. This means that the interstellar medium is steadily enriched with newly released ISOs (_Pfalzner and Bannister_, 2019). Just like the Galaxy's stellar metallicity increases over time, so does the density of interstellar objects. The ISO age distribution should be directly linked to the age distribution of the planets and stars. The stellar age distributions within the Milky Way are shown in Fig. 7. It demonstrates that star formation peaked at the early ages of the Galaxy. However, it is unclear whether the first stars produced ISOs at the same rate as today. It is known that planet formation is much more efficient around high-metallicity stars, and ISO formation is linked to planet formation.
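To make Eq. (1) concrete, the sketch below evaluates the number of ejected bodies above a given mass for an assumed total ejected mass. The chosen values of \(M_{ISO}\), \(p\), \(m_{up}\), and the reference mass are illustrative assumptions, not measurements, and the function name is our own.

```python
# Number of ejected bodies with masses above m, following Eq. (1)
# (Adams and Spergel 2005).  All input values below are illustrative
# assumptions, chosen only to show the sensitivity to the parameters.
M_EARTH = 5.97e24  # kg

def n_above(m, m_iso, m_up, p):
    """Cumulative number of ISOs with mass > m for a power-law
    mass function n(m) ~ m**-p with 1 < p < 2."""
    assert 1.0 < p < 2.0
    return (2.0 - p) / (p - 1.0) * m_iso / (m**(p - 1.0) * m_up**(2.0 - p))

m_iso = 10.0 * M_EARTH   # assumed total ejected mass per star
m_up = 1.0e19            # assumed largest body, ~100-km class [kg]
m_ref = 1.0e12           # ~1-km class body [kg]

for p in (5.0 / 3.0, 1.6):
    print(f"p = {p:.3f}: N(>{m_ref:.0e} kg) = {n_above(m_ref, m_iso, m_up, p):.2e}")
```

With these assumptions a single star would release of order \(10^{11}\) km-sized bodies. Varying \(p\) within its plausible range changes the result here by only a factor of a few, but the choices of \(m_{up}\) and of the reference mass move it by orders of magnitude, in line with the caveats above.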
If we take the age distribution of the planet host stars (Fig. 7 bottom) as a proxy for the planetesimal production distribution, it differs considerably from the general stellar age distribution in the Milky Way. The age distribution of planet host stars shows a clear maximum at approximately 4 Gyr. The age distribution of ISOs might shift even more to younger ages, as some portion of the ISOs may dissolve during their interstellar journey, since the frequency of volatile loss and extinction is far higher for ejected planetesimals than for surviving ones (_Raymond et al._, 2020).

Figure 7: Top: Stellar age distributions in the Milky Way, adapted from _Gallart et al._ (2019). Bottom: Age distribution of planet hosting stars as listed in the public database at _exoplanets.eu_. The observational bias towards surveying Sun-like stars is clearly evident.

### ISO timescales

A planetesimal that has been ejected from its host star and has become an ISO does not necessarily remain an ISO forever. ISOs may be recaptured. Recapture is most likely if the planetesimal has been ejected from a host star which is a member of a star cluster. This situation is most common for young host stars, because most stars are born in a star cluster environment (_Lada and Lada_, 2003). However, some stars remain for hundreds of Myr in open clusters and for several Gyr in globular clusters. In all these situations, recapture is a real option. In such an environment, the ISO might be very quickly (\(<1\) Myr) recaptured by a different member of the same star cluster and lose its status as an ISO (_Hands et al._, 2019). The likelihood of an ISO being recaptured rises with the cluster mass, as the escape speed from the cluster increases accordingly. Equally, ISOs escaping from stars in the cluster centre are more likely to be recaptured. For ISOs that are not in a star cluster environment, the likelihood of recapture is much lower. In any case, the velocity of the ejected ISO is the key parameter for being recaptured. The lower the ISO's velocity, the higher the probability that it is recaptured. This correlation is valid in every capture environment, whether it is another star, a disk surrounding a young star, or a molecular cloud (_Grishin et al._, 2019; _Pfalzner et al._, 2021a; _Moro-Martin and Norman_, 2022).

### Physical processing in the ISM

ISOs are often viewed as travelling through interstellar space in calm cryogenic conditions. Therefore, ISOs are expected to be pristine samples of the planetesimals of other planetary systems. As planetesimals are leftovers from the planet formation process, ISOs should therefore give us direct information about planet formation elsewhere. Generally, however, an ISO's journey through interstellar space might not be as uneventful as often imagined. The environmental effects on ISOs can be expected to be similar to those on Oort cloud comets. Just like the comets residing in the Oort cloud (_Stern_, 2003), a variety of thermal, collisional, radiation, and ISM processes might affect ISOs during their long journey (_Stern_, 1990). The ISOs' surfaces are subject to these environmental influences, which can potentially lead to the erosion or even complete destruction of an ISO. How destructive these processes are depends on the composition of the ISO, with a rocky ISO being much less affected than, for example, a hydrogen-rich ISO. During their journey, ISOs experience cosmic microwave background radiation and radiation from stars.
The temperature increase caused by the cosmic microwave background on the surface of an ISO is probably negligible. In contrast, the effect of stellar radiation depends strongly on the ISO's individual journey. Any ISO passing close to a star can be expected to be modified considerably by such an encounter. The prime effect is the reduction or complete loss of volatiles. Such an event is expected to be rare. However, parsec-range and closer encounters with highly luminous O and supergiant stars can heat ISO surfaces to temperatures capable of removing the most volatile ices, such as neon or oxygen (_Stern_, 2003). ISOs are also subjected to the effects of cosmic rays and stellar energetic particles. Both have a broad spectrum of energies and interact with the ISO surface and subsurface. Cosmic rays are the primary source of space weathering for the comets in the Oort Cloud and, therefore, likely also for ISOs. While low-energy particles interact only with the ISO surface, the most energetic ones deposit a significant amount of energy down to tens of meters. These processes can modify the isotopic ratios in cometary ices and create secondary compounds through radiolysis (_Gronoff et al._, 2020). The penetration depth of cosmic rays is relatively small in comparison to the ISOs of size \(\geq 100\) m that are likely to be detected by telescopes (see §5). Even for the extreme case of pure hydrogen ISOs, it would be \(<\) 10 m (_Hoang and Loeb_, 2020), and for rocky material considerably shallower. Thus the immediate effect of cosmic rays on ISOs is small, but over time they can erode an ISO gradually from the surface or affect the composition of the upper layers (see §7.3.2). The interstellar medium contains large amounts of gas and dust. The effect of this gas and dust on ISOs depends on the gas and dust densities and the relative velocity of the individual particles. Gas and dust densities are relatively low in the interstellar medium itself (\(n_{H}\approx 10\) cm\({}^{-3}\)), considerably higher in molecular clouds (\(n_{H}\approx 10^{4}\) cm\({}^{-3}\)), and higher still when passing through the protoplanetary disks of newly forming stars (\(n_{H}\approx 10^{5-7}\) cm\({}^{-3}\)). Collisions of ISOs with the ambient gas at high speeds can heat the frontal area, possibly resulting in transient evaporation. Such a situation occurs when the ISO passes through a molecular cloud. Each particle directly impacting on the surface of an ISO delivers an energy \(E_{p}=m_{p}v_{p}^{2}/2\). Therefore, ISOs are most affected by high-velocity particles. At least for Oort cloud comets, simulations show that dust particles could significantly erode the cometary surface in the range of several meters (_Stern_, 1990; _Mumma et al._, 1993), preferentially removing the sub-micron grains. However, erosion by ISM grains is a complex process depending on the grains' composition and structure. While an ISO moves through a relatively massive molecular cloud that is forming a large star cluster, there is a non-negligible probability that one of the most massive stars explodes as a supernova. Therefore, an ISO might be subjected to the explosion products of such a supernova. Despite supernova explosions being brief (\(\approx 0.1\) yr), they are extremely luminous (\(L=10^{9}L_{sun}\)) and can therefore heat ISOs from considerable distances. Their intense but shorter thermal pulses could propagate 0.1-2 m into ISO surfaces (_Stern_, 2003).
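The kinetic-energy argument above is easy to quantify. The sketch below, under assumed grain properties, cloud density, and encounter speed (all illustrative, none taken from the cited works), evaluates \(E_{p}=m_{p}v_{p}^{2}/2\) for a single ISM grain and the corresponding energy flux onto the leading face of an ISO crossing a molecular cloud.

```python
import math

# Illustrative assumptions: a compact silicate grain hitting an ISO
# during a molecular cloud crossing.
a_grain = 0.1e-6          # grain radius [m] (~0.1 micron, assumed)
rho_grain = 3000.0        # grain density [kg m^-3] (assumed)
v_rel = 20e3              # ISO-cloud relative velocity [m s^-1] (assumed)

m_p = 4.0 / 3.0 * math.pi * a_grain**3 * rho_grain  # grain mass [kg]
E_p = 0.5 * m_p * v_rel**2                          # E_p = m_p v_p^2 / 2

# Kinetic-energy flux onto the leading face, for an assumed cloud:
n_H = 1.0e10              # hydrogen number density [m^-3] (10^4 cm^-3)
m_H = 1.67e-27            # hydrogen-atom mass [kg]
rho_dust = 0.01 * n_H * m_H          # ~1% dust-to-gas mass ratio (assumed)
flux = 0.5 * rho_dust * v_rel**3     # [W m^-2]

print(f"grain mass    = {m_p:.2e} kg")
print(f"impact energy = {E_p:.2e} J per grain")
print(f"energy flux   = {flux:.2e} W m^-2")
```

Each individual impact carries only nanojoules, deposited in a sub-micron spot, so the process acts as slow mechanical sand-blasting of the leading face rather than bulk heating, consistent with the meter-scale erosion estimates quoted above.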
These processes complicate the interpretation of ISO observations; the surface layers of ISOs might be modified over time. In summary, all these processes erode the surfaces of ISOs to different degrees. However, as the distinctive properties of the ISOs in terms of their size and composition are not well constrained, it remains unclear how strongly their surfaces are affected, whether they decrease in size with age, and how many small ISOs are destroyed.

## 5 ISO Searches and Discovery

### Dynamically recognising ISOs

Upon discovery, ISOs will appear as either comets or asteroids depending on their level of activity. Hence the single most important discriminant is showing that the orbit is hyperbolic, with \(e>1\) and \(a<0\) at the 3-\(\sigma\) level or higher. Dynamically, this is equivalent to saying that their velocity at infinite distance \(v_{\infty}>0\) (also known as the hyperbolic excess velocity). This requires accurate astrometric measurements of the object over a period of time, and hence sensitive wide-field surveys that allow early detection. For ISOs close to the Earth it may take only a few days of observations before the orbital arc is large enough to show this unambiguously; more distant ISOs may take weeks (see §6). It should be noted that many comets are discovered each year on apparently hyperbolic orbits. These are in reality dynamically new comets from the Oort cloud, but are listed as hyperbolic for a number of reasons. First, an orbit that is near-parabolic in the solar system barycentric reference frame may appear to be hyperbolic in the heliocentric reference frame, in which most orbital elements are published. For example, the dynamically new comet C/2021 A9 (PANSTARRS) has an osculating heliocentric orbital eccentricity of \(e=1.004\). Transforming to the correct barycentric frame gives \(e=1.0004\), much closer to a parabolic orbit. Secondly, an originally weakly bound object may become unbound due to gravitational perturbations or non-gravitational forces due to outgassing, see _Królikowska and Dybczyński_ (2010). To illustrate this, using data from the JPL Small-Body Database (https://ssd.jpl.nasa.gov/tools/sbdb_query.html), 17 Long-Period Comets (LPCs) were discovered in 2020-2021 with osculating orbital eccentricities \(e\geq 1.0\). Yet all eccentricities were so close to unity that either these comets have \(e\leq 1.0\) within the orbital uncertainties and/or within a barycentric reference frame, or they would have possessed an originally parabolic/elliptical orbit before entering our planetary system. Understanding these factors is crucial for confirming the status of suspected ISOs with \(e\simeq 1\).
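For a quick check of whether a nominal orbit is meaningfully hyperbolic, the excess velocity follows directly from the semi-major axis via \(v_{\infty}=\sqrt{GM_{\odot}/|a|}\). The sketch below, using only the published \(q\) and \(e\) of the two known ISOs, is a minimal illustration; a real 3-\(\sigma\) test would additionally propagate the orbit-fit covariance, which is omitted here.

```python
import math

GM_SUN = 1.32712440018e20   # heliocentric gravitational parameter [m^3 s^-2]
AU = 1.495978707e11         # astronomical unit [m]

def v_infinity(q_au, e):
    """Hyperbolic excess velocity [km/s] from perihelion distance and eccentricity."""
    if e <= 1.0:
        return 0.0                   # bound or parabolic: no excess velocity
    a = q_au * AU / (1.0 - e)        # semi-major axis, negative for e > 1
    return math.sqrt(-GM_SUN / a) / 1e3

print(f"1I/'Oumuamua: v_inf = {v_infinity(0.256, 1.201):.1f} km/s")   # ~26 km/s
print(f"2I/Borisov:   v_inf = {v_infinity(2.007, 3.356):.1f} km/s")   # ~32 km/s
```

Both values reproduce the \(v_{\infty}\) entries in Table 1, confirming that the tabulated \(q\) and \(e\) are self-consistent.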
### ISO Studies pre-1I/'Oumuamua

There were many papers that made predictions about the space density of ISOs based on their non-detection in surveys. Some very early work was reported by _Safronov_ (1972); _McCrea_ (1975); _Sekanina_ (1976); _Duncan et al._ (1987); _Valtonen and Zheng_ (1990); _Zheng et al._ (1990). The advent of wide-field CCD-based sky surveys in the 1990s allowed quantitative upper limits to be assessed. _Francis_ (2005) derived a 95% upper confidence limit of \(4.5\times 10^{-4}\) au\({}^{-3}\) using the LINEAR survey. Just before the discovery of 1I/'Oumuamua, _Engelhardt et al._ (2017) numerically integrated ISOs and linked them to the non-detection by the combined major sky surveys operational at that time: the Catalina Sky Survey, the Mt. Lemmon Survey, and the Pan-STARRS project. Although they assumed isotropic approach trajectories, which disregarded the local standard of rest distribution of stellar velocities, they presciently derived upper limits for both inactive and active (cometary) ISOs. They reported 90% confidence upper limits for 1 km diameter ISOs of \(\leq 2.4\times 10^{-2}\) au\({}^{-3}\) and \(\leq 1.4\times 10^{-4}\) au\({}^{-3}\), respectively. An important question is why the first ISO was discovered only now. To find these fast-moving objects, one needs to survey the whole sky rapidly down to very faint limiting magnitudes, and surveys capable of doing this have existed only in the last couple of decades. _Heinze et al._ (2021) show that current near-Earth object (NEO) surveys are poorer than expected at finding fast-moving objects. However, it is expected that future surveys will do much better (see §7.3.1).

## 6 Observed Characteristics of ISOs

### 1I/'Oumuamua

Some of the most important questions about the first ISO include: (1) Where did it come from? and (2) What is it made of? As discussed below, we still don't know the answers to these questions. 1I/'Oumuamua was easily observable from the ground for a little over a week, but as it moved away from the Earth it faded quickly, and large telescopes were able to observe it for only about 1 month. The last Hubble Space Telescope observations were made in Jan. 2018. It is remarkable that we know as much about this object as we do, because all of the large telescope time had to be secured through Director's requests. In total, approximately 100 hrs on 2.5-10-m ground-based telescopes were devoted to characterizing this exceptional object. To date, over 200 refereed papers have been written on the two interstellar objects, and nearly 450 papers including non-refereed material. It is not practical to cite everything, so key papers have been highlighted, and we have included some reviews of the field.

#### 6.1.1 Discovery of 1I/'Oumuamua

On 2017 Oct. 19 the Pan-STARRS survey found an object, designated P10Ee5V, moving quickly with respect to the stars (see Fig. 8a) during the normal near-Earth object survey. Rob Weryk then found pre-discovery images in Pan-STARRS data from Oct. 18. Follow-up data obtained by the 1-m ESA ground station on Oct. 20 were rejected by the Minor Planet Center because the data implied a large eccentricity, and P10Ee5V was classified as an Earth-orbit crossing asteroid.

Figure 8: Images of the two ISOs passing through the inner solar system show the dramatic difference in their characteristics. Top row: 1I/'Oumuamua: (a) Pan-STARRS discovery image (top) taken on 2017 Oct. 19, with follow-up images from the CFHT near min/max brightness on Oct. 27; (b) Gemini 8-m color composite image made from 192 images (1.6 hrs) showing no hint of a dust coma (Gemini Observatory/AURA/NSF); (c) artist's depiction of two possible nucleus shapes based on the large light curve range (Credit: ESO, M. Kornmesser; William Hartmann) and artist's view of the ISO after the discovery of non-gravitational acceleration (ESA/Hubble, NASA, ESO, M. Kornmesser). Bottom row: 2I/Borisov: (d) CFHT image taken 10 days post-discovery (2019 Sep. 10); (e) HST image on 2019 Oct. 12 (NASA/ESA); (f) radially normalized ratio HST images median averaged over an orbit from 2019 Oct. 12, Dec. 24 (00:31 UT), and Dec. 24 (02:21 UT) showing outgassing jets, and 2020 Jul. 6 showing the split nucleus (NASA/ESA), processed by H. Boehnhardt.
Data from Oct. 20 obtained by the Catalina Sky Survey suggested that the object should be classified as a short-period comet. However, observations obtained from the CFHT on Oct. 22 showed that the orbit was hyperbolic, with an eccentricity of 1.188. On Oct. 24 the object was designated C/2017 U1 (MPEC 2017-U181). This was corrected on Oct. 26, after deep images from the CFHT from Oct. 22 showed no coma, and the object was renamed A/2017 U1 (MPEC 2017-U183). Within a week of discovery, a request was made to the Hawaiian cultural group that advises the Maunakea observatories management to propose a name for the new object. They proposed the name 'Oumuamua, meaning "a messenger from afar arriving first", or a scout or messenger sent from our distant past to reach out to us or build connections with us. In an extraordinarily fast effort, the IAU approved this on 2017 Nov. 6, and the new name became 1I/'Oumuamua. This became the foundation of a project called A Hua He Inoa, which blends traditional indigenous practices into the official naming of astronomical discoveries (_Kimura et al._, 2019; _Witze_, 2019).

#### 6.1.2 Nuclear Characteristics

One of the most straightforward measurements to make is the brightness, and from this one can estimate the object's radius for an assumed albedo:

\[pr_{N}^{2}=2.235\times 10^{22}r_{h}^{2}\Delta^{2}10^{0.4(m_{\odot}-m)}10^{0.4\beta\alpha} \tag{2}\]

where \(r_{h}\) and \(\Delta\) are the helio- and geocentric distances [au], \(m_{\odot}\) and \(m\) are the apparent magnitudes of the Sun and comet, \(\beta\) is the phase coefficient [mag/deg], and \(\alpha\) the phase angle [deg] (_Russell_, 1916). It was apparent from the earliest imaging observations of 1I/'Oumuamua that it had a very large rotational light curve range, so various estimates of the size were reported, even for the same assumed albedo. Combining several nights' worth of observations from several observatories yields \(H_{V}=22.4\), which gives an average radius of 0.11 km assuming a typical cometary albedo of 0.04 (_Meech et al._, 2017). _Spitzer_ observations of 1I/'Oumuamua could, in principle, have provided measurements of both the nucleus size and albedo; however, because of strict solar avoidance angles these observations could not be made until late 2017 Nov., and only upper limits on the flux were obtained (_Trilling et al._, 2018). Thermal models were used to estimate the corresponding radius and albedo, with a preference for an albedo of 0.1 and a radius consistent with the previous estimate. The rotational light curve shown in Fig. 9 has a brightness range of \(\Delta m\sim 3\) mag, implying an axis ratio for an elongated body of \(a/b=10^{0.4\Delta m}\approx 15\). However, this does not take into account the phase angle, which can make the object look more elongated than it is (see Fig. 8c). It also does not take into account that the rotation pole position is completely unknown; if the pole were more closely aligned with the ecliptic, then the object could appear even more elongated than the light curve suggests. The consensus is that the ratio is likely a/b \(\gtrsim\) 6 ('_Oumuamua ISSI Team et al._, 2019; _Mashchenko_, 2019). Solar system objects typically do not have axis ratios this large, and the cause of the unusual shape of 1I/'Oumuamua remains one of the enduring mysteries (see §7.3.2).
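Both the size and elongation estimates above reduce to one-line formulas. The sketch below uses the common equivalent of Eq. (2), \(D\,[\mathrm{km}]=(1329/\sqrt{p_{V}})\,10^{-H/5}\), together with the light-curve axis-ratio relation; the function names are ours, and the inputs are the values quoted above.

```python
import math

def radius_km(H, albedo):
    """Effective radius [km] from absolute magnitude and geometric albedo,
    using the standard diameter relation D [km] = (1329/sqrt(p)) * 10**(-H/5)."""
    return 0.5 * (1329.0 / math.sqrt(albedo)) * 10 ** (-H / 5.0)

def axis_ratio(delta_mag):
    """Apparent a/b for an elongated body viewed equator-on,
    from the rotational light curve range delta_mag."""
    return 10 ** (0.4 * delta_mag)

# Values quoted in the text for 1I/'Oumuamua:
print(f"radius     = {radius_km(22.4, 0.04):.2f} km")   # ~0.11 km
print(f"axis ratio = {axis_ratio(3.0):.1f}")            # ~15.8
```

Both numbers reproduce the estimates quoted above; as noted in the text, phase-angle and pole-orientation effects can push the true axis ratio in either direction relative to this apparent value.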
Attempts to find a rotation period from the data produced a variety of periods near 8 hr, all differing depending on the length of the data set. The most comprehensive model concluded that 1I/'Oumuamua is in a complex rotation state, rotating around its shortest axis with a period of 8.67\(\pm\)0.34 hours, with rotation around the long axis taking 54.48 hours (_Belton et al._, 2018). The damping timescale for a body this small is long enough that an excited rotation state can be preserved from the time of ejection from the host star. The shape of 1I/'Oumuamua as interpreted from the rotational light curve can change significantly depending on the rotation state (see Fig. 8c). _McNeill et al._ (2018) used the shape and rotation of 1I/'Oumuamua to place some constraints on the strength and density of the ISO, finding it was more likely to have a density typical of asteroids.

#### 6.1.3 Constraints on composition

Several groups obtained spectra of 1I/'Oumuamua, both from the ground and from _Spitzer_, but there was no evidence of outgassing (see '_Oumuamua ISSI Team et al._ (2019) for a summary). Many groups used either spectra or filter photometry to estimate the spectral reflectivity of the surface. The spectral reflectivity, \(S\), was found to increase with wavelength and ranged between \(S\) = (7-23)\(\pm\)3%/1000 Å (_Jewitt and Seligman_, 2022). This measurement was challenging because the rotational signature had to be removed. Regardless of the value, the surface of 1I/'Oumuamua was red, typical of organic-rich comet surfaces, but also consistent with iron-rich minerals and space-weathered surfaces. Deep stacks of images from many nights of data from ground-based and space-based optical imaging showed no dust at all, to a limit of \(\lesssim\) 10\({}^{-3}\) kg/s of micron-sized grains being produced (_Meech et al._, 2017; _Jewitt et al._, 2017). In contrast, the dust production upper limit from _Spitzer_ observations for 10 \(\mu\)m grains was \(<\) 9 kg/s (_Trilling et al._, 2018).

#### 6.1.4 Trajectory and potential origins

One of the primary goals for the HST time awarded for 1I/'Oumuamua was to extend the astrometric arc length to be able to determine its orbit with sufficient precision to trace its path backwards and determine its home star system. The chances of being able to trace the trajectory backwards to find its parent star were low, because gravitational scattering from stellar encounters would limit the search to the past few tens of millions of years (_Zhang_, 2018). As 1I/'Oumuamua faded, astrometry was obtained with the VLT in Chile and with HST, with the last observations from HST being obtained on 2018 Jan. 2, when the ISO was fainter than V \(\sim\) 27 at 2.9 au. An analysis of the combined ground and HST observations showed that the orbit could not be well fit with a gravity-only solution. By adding a non-gravitational acceleration term directed radially away from the Sun with an \(r_{h}^{-1}\) dependence, the orbit was well fit (_Micheli et al._, 2018). The authors ruled out a number of other hypotheses, e.g. the Yarkovsky effect, frictional drag forces, impulsive velocity changes, a photocenter offset from a binary or non-uniform albedo, and interaction with the solar wind or radiation pressure. Most were rejected because the acceleration produced by these mechanisms was either too small or in the wrong direction, would not match the continuous nature of the observed acceleration, or implied a non-physical bulk density for 1I/'Oumuamua. By assuming that the same non-gravitational acceleration was operating on the pre-perihelion trajectory, an attempt was made to find the originating star system for 1I/'Oumuamua.
_Dybczyński and Królikowska_ (2018) attempted a search for the parent star taking advantage of the _Gaia_ satellite data release 1 (DR1) of stellar positions, but did not find an obvious parent star for 1I/'Oumuamua. The _Gaia_ DR2 data release provided the astrometric position and _velocity_ information needed to trace the path of 1I/'Oumuamua back to its home star using its Keplerian orbit (_Bailer-Jones et al._, 2018). Plausible home stars would be those which had been within 0.5 pc of 1I/'Oumuamua's trajectory (the approximate size of the Sun's Oort cloud), and which had low encounter (ejection) velocities \(<\) 10 km/s (see §4.1). An initial set of stars which passed within 10 pc of the trajectory was selected, and the candidates and the path of 1I/'Oumuamua were integrated through a smooth Galactic potential. This resulted in four potential "home" systems with encounter distances between 0.6-1.6 pc and encounter velocities between 10.7-24.7 km/s, all with ejection times less than 6.3 Myr ago. Unfortunately, all of these systems had ejection speeds much higher than expected for ISOs.

#### 6.1.5 Cause of the non-gravitational acceleration

In order to explain the non-gravitational acceleration of 1I/'Oumuamua in the absence of any apparent activity (dust coma or gas), _Micheli et al._ (2018) estimated the mass loss needed to accelerate the nucleus to the observed value for a range of comet and asteroid densities. They found mass-loss requirements between 0.7-140 kg/s, but adopted 10 kg/s as the best estimate. A thermal model matching the acceleration would require sublimation of water plus a more volatile species (such as CO\({}_{2}\) or CO). With a high CO/H\({}_{2}\)O ratio, the model could produce mass loss within a factor of 2-3 of what was needed to accelerate 1I/'Oumuamua. _Trilling et al._ (2018) found 3\(\sigma\) upper limits for the dust production of 9 kg/s (for 10 \(\mu\)m grains), and for gas production of \(Q\)(CO\({}_{2}\)) = 9 \(\times\) 10\({}^{22}\) mol s\({}^{-1}\) and \(Q\)(CO) = 1.1 \(\times\) 10\({}^{24}\) mol s\({}^{-1}\) (after correction). None of the optical spectra were sensitive enough to detect gas with such a low CO or H\({}_{2}\)O production rate. The consistency within a factor of a few between the model and observations suggests that volatile outgassing is a plausible scenario for the acceleration of 1I/'Oumuamua. _Stern_ (1990) postulated that any comet passing through a molecular cloud would have the small grains eroded from its surface, and this could explain the lack of dust in the presence of outgassing. Other more exotic volatile sublimation scenarios to explain the non-gravitational acceleration are discussed in §7.3.2. Finally, _Seligman et al._ (2021) investigated the spin dynamics of 1I/'Oumuamua with outgassing consistent with the non-gravitational acceleration and found that this need not cause a spin-up of the nucleus. They were able to reproduce the observed light curve with a model of CO outgassing, but not with water-ice jets.
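As a rough plausibility check on these mass-loss numbers, the sketch below converts the radial non-gravitational term \(A_{1}\) from Table 1 into the outgassing rate \(\dot{m}\approx a\,M_{N}/v_{gas}\) required to supply it by momentum conservation. The bulk density and gas outflow speed are assumed values, so this is an order-of-magnitude estimate only, not the _Micheli et al._ (2018) calculation itself.

```python
import math

AU = 1.496e11                 # [m]
DAY = 86400.0                 # [s]

# Radial non-gravitational term for 1I/'Oumuamua from Table 1:
A1 = 27.90e-8 * AU / DAY**2   # convert au/d^2 -> m/s^2 (acceleration at 1 au)

# Assumed nucleus properties (illustrative, not measured):
r_N = 110.0                   # radius [m], from H_V = 22.4 and p = 0.04
rho = 1000.0                  # bulk density [kg m^-3] (assumed)
v_gas = 400.0                 # gas outflow speed [m s^-1] (assumed)

M_N = 4.0 / 3.0 * math.pi * r_N**3 * rho   # nucleus mass
mdot = A1 * M_N / v_gas                    # required mass-loss rate

print(f"a(1 au) = {A1:.2e} m/s^2")
print(f"M_N     = {M_N:.2e} kg")
print(f"mdot    = {mdot:.0f} kg/s")
```

With these assumptions the answer lands near the upper end of the 0.7-140 kg/s range quoted above; a lower bulk density or a faster gas outflow brings it down toward the adopted best estimate of 10 kg/s.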
### 2I/Borisov

#### 6.2.1 Discovery of 2I/Borisov

The second ISO was discovered on 2019 Aug. 30 by Gennadiy Borisov, an engineer at the Crimean Astronomical Station and amateur astronomer, using a 0.65-m f/1.5 astrograph (_Borisov and Shustov_, 2021). Borisov reported it as a potential new comet with a compact 7 arcsec diameter coma, and detected a short 15 arcsec tail two days later. It was placed on the Minor Planet Center Potential Comet Confirmation Page as object gb00234, and by 2019 Sep. 11 sufficient additional astrometry had been reported to give a cometary designation of C/2019 Q4 (Borisov) and an initial orbit with an eccentricity \(e=3.08\) (MPEC 2019-R106). On 2019 Sep. 24 the permanent designation of 2I/Borisov was announced (MPEC 2019-S72). As further observations were reported, by mid-October the derived orbit had stabilized with \(e=3.354\), \(q=2.006\) au and \(i=44.05^{\circ}\), giving discovery circumstances of \(r_{h}=2.99\) au and \(\Delta=3.72\) au. This allowed _Ye et al._ (2020) to identify precovery images in archival survey data dating back to December 2018, when it was at a heliocentric distance of 7.8 au.

Figure 9: Rotational light curve of 1I/'Oumuamua compiled from data taken from various observatories in 2017 October.

#### 6.2.2 Trajectory and potential origins

The significantly longer observation arc for 2I/Borisov allowed the measurement of standard cometary non-gravitational forces, implying that searches for progenitor systems might suffer less uncertainty than for 1I/'Oumuamua. However, _Hallatt and Wiegert_ (2020) did not identify any systems with the golden combination of small encounter distance and low encounter velocity. _Bailer-Jones et al._ (2020) investigated a range of non-gravitational force models and found that 2I/Borisov passed \(\sim 0.053-0.091\) pc from the M0V star Ross 573 about 910 kyr ago. They also demonstrated that such a close encounter is unlikely to occur by chance over this short time period. However, the predicted encounter velocity was \(22.6\) km/s, significantly higher than expected for the ejection of ISOs, even assuming giant planet interactions (_Bailer-Jones et al._ 2018). Hence, like 1I/'Oumuamua, the home system of 2I/Borisov remains unidentified. It is likely that, unless the ejection was from a nearby star very recently, tracing the path back to the home system is not possible. As soon as an ISO passes through a molecular cloud, back-tracking is no longer possible.

#### 6.2.3 Nuclear Characteristics

Due to its activity, there were no direct detections of the nucleus of 2I/Borisov. In their archival analysis, _Ye et al._ (2020) did not detect 2I/Borisov in deep co-added images from November 2018. This gave a strong upper limit to the nucleus radius of \(r_{N}\leq 7\) km, assuming a geometric albedo of \(0.04\). Rigorous imaging constraints came from HST observations by _Jewitt et al._ (2020), who measured a limit of \(H_{V}\geq 16.60\pm 0.04\), assuming a nominal cometary phase function of \(0.04\) magnitudes/degree. Fitting the coma surface brightness distribution showed no significant excess over a central nuclear PSF, giving an improved upper limit to the nucleus radius of \(r\leq 0.5\) km, or \(H_{V}\geq 19.0\). _Hui et al._ (2020) calculated the non-gravitational parameters for a range of heliocentric distance laws, and combined them with outgassing measurements (see below) and an assumed nuclear density of \(\rho=0.5\) g cm\({}^{-3}\) to derive \(r\leq 0.4\) km. _Xing et al._ (2020) used measured water production rates to derive a minimum radius of \(r\geq 0.4\) km, assuming the entire sunward surface of the nucleus was active. Several authors used apparent jet structures visible in HST imaging of the inner coma to attempt to constrain the spin axis.
_Kim et al._ (2020) found it difficult to explain the overall dust coma anisotropy in terms of a single isolated emission source, and preferred a model of general mass loss from the afternoon nuclear surface, giving a pole orientation of \((\alpha,\delta)=(205^{\circ},+52^{\circ})\). From the same data, _Manzini et al._ (2020) interpreted the appearance of two jet/fan structures as implying localized emission near the rotation equator, and derived a spin pole direction of \((260^{\circ},-35^{\circ})\). Combining the _Jewitt et al._ (2020) data with additional HST imaging, _Bolin and Lisse_ (2020) interpreted the positional evolution of the jets as implying a near-polar source with a direction \((322^{\circ},+37^{\circ})\). Although all three studies reported uncertainties of at least \(\pm 10^{\circ}\), the disparity between these results led _Bolin and Lisse_ (2020) to conclude that it was not possible to sufficiently constrain the pole orientation with the extant data. The latter study also reported a tentative detection of a 5.3 hr periodicity in the light curve, but the low amplitude of \(\sim 0.05\) magnitudes implies a low significance. The lack of precise determinations of these nuclear properties is unremarkable when compared to remote studies of solar system comets (see _Knight et al._, in this volume). The similarity of 2I/Borisov to solar system comets extended to the detection by _Drahus et al._ (ATEL 13549) of two outbursts on 2020 Mar. 4 and 8, brightening the comet by \(\sim 0.7\) magnitudes. A sequence of HST imaging from 2020 Mar. 23 to Apr. 20 to investigate possible nuclear fragmentation was reported by _Jewitt et al._ (2020). These images showed a discrete nuclear secondary on Mar. 30, 26 days after the first outburst, that was not visible in images taken 4 days later. They explained this transient nature by positing that the secondary consisted of one or more boulders ejected from the nucleus during the outburst, which disrupted weeks later, near the time of their observations, due to rotational spin-up. The measured properties of 1I/'Oumuamua and 2I/Borisov are summarized in Table 1.

#### 6.2.4 Composition and evolution

The first characterizations of the coma of 2I/Borisov were obtained even before it was given a provisional designation, with early optical photometry and spectra of the coma giving a red reflectance spectrum typical of normal comets (_Jewitt and Luu_ 2019; _de Leon et al._ 2019; _Guzik et al._ 2020) and similar to 1I/'Oumuamua (_Fitzsimmons et al._ 2018). Gas emission via the bright CN(0-0) emission band at 388 nm was first detected on 2019 Sep. 20-24 by _Fitzsimmons et al._ (2019) and _de Leon et al._ (2019). Spectroscopy and narrow-band photometry over the following month were reported by _Opitom et al._ (2019). As with the earlier spectroscopy, they did not detect the C\({}_{2}\)(0-0) emission band at 517 nm and concluded the comet was similar to carbon-chain depleted solar system comets.
However, weak detections of C\({}_{2}\)(0-0) were reported by several observers from November (e.g. _Lin et al._, 2020), and by perihelion on 8 December 2019 the C\({}_{2}\)/CN ratio was formally consistent with both depleted and normal carbon-chain cometary abundances (see Fig. 4 of _Aravind et al._, 2021). These spectra and those presented by _Opitom et al._ (2021) also showed other normal cometary molecular emission features due to species such as NH, NH\({}_{2}\), and CH. Hence the optical species showed a significant evolution during this time as the heliocentric distance decreased from 2.7 au to 2.0 au. The first constraint on the sublimation rates of primary ice species came from detection of the [OI] 6300 Å line by _McKay et al._ (2020). Assuming this came solely from the sublimation and subsequent dissociation of water, they derived a production rate of \(Q(H_{2}O)=(6.3\pm 1.5)\times 10^{26}\) mol s\({}^{-1}\) on 29 October at \(r_{h}=2.38\) au. _Xing et al._ (2020) used the UVOT telescope onboard the NASA Swift satellite to observe the OH (0-0) and (1-1) emission between \(280-320\) nm, again resulting from the dissociation of water molecules. The derived water production rates showed a peak of \(Q(H_{2}O)=(1.1\pm 0.1)\times 10^{27}\) mol s\({}^{-1}\) near perihelion, but with higher production rates pre-perihelion than post-perihelion. Given the significant activity at \(r_{h}\geq 3\) au, it was clear that other volatile species such as CO or CO\({}_{2}\) should be present in 2I. _Bodewits et al._ (2020) used HST to measure the CO fluorescence bands at 140-170 nm and the corresponding production rates during and after perihelion. _Cordiner et al._ (2020) used ALMA to measure CO (\(J=3-2\)) emission along with HCN (\(J=4-3\)). Both these teams showed that the CO abundance was exceptionally high in 2I/Borisov, being on par with the H\({}_{2}\)O abundance at \(Q(CO)/Q(H_{2}O)\simeq 0.35-1.55\). Only rare CO-rich comets such as C/2016 R2 had previously exhibited such CO-rich compositions (see _McKay et al._ (2019) and the chapter by _Biver et al._ in this volume). _Bodewits et al._ (2020) found that the CO production rate stayed relatively constant around perihelion, while that of H\({}_{2}\)O fell soon after, implying a significant change in the near-surface abundance ratios of these ices. This may have been due to seasonal effects of nucleus insolation, coupled with a non-homogeneous composition or erosion of the surface, similar to that seen by ESA's Rosetta in some species at comet 67P/Churyumov-Gerasimenko (_Läuter et al._, 2020), although for that comet the CO/H\({}_{2}\)O ratio was relatively constant within 2.5 au. Similarly, as mentioned above, the abundance ratios of the primary optical species C\({}_{2}\)/CN also changed significantly as the comet moved towards perihelion.

\begin{table}
\begin{tabular}{l l l l l l}
\hline
Quantity & & 1I/'Oumuamua & Ref\({}^{\ddagger}\) & 2I/Borisov & Ref\({}^{\ddagger}\) \\
\hline
**Dynamical Properties** & & & & & \\
Perihelion date & \(T_{p}\) & 2017 September 09.51 & [1] & 2019 December 08.55 & [11] \\
Perihelion distance & \(q\) [au] & 0.256 & [1] & 2.007 & [11] \\
Eccentricity & \(e\) & 1.201 & [1] & 3.356 & [11] \\
Earth close approach & \(\Delta\) [au] & 0.162 & [1] & 1.937 & [11] \\
Incoming velocity\({}^{\dagger}\) & \(v_{\infty}\) [km s\({}^{-1}\)] & \(26.420\pm 0.002\) & [2] & 32.288 & [11] \\
Non-grav acceleration & \(A_{1}\times 10^{8}\) [au d\({}^{-2}\)] & 27.90 & [1] & 7.09 & [11] \\
Non-grav acceleration & \(A_{2}\times 10^{8}\) [au d\({}^{-2}\)] & 1.44 & [1] & -1.44 & [11] \\
Non-grav acceleration & \(A_{3}\times 10^{8}\) [au d\({}^{-2}\)] & 1.57 & [1] & 0.065 & [11] \\
\hline
**Physical Properties** & & & & & \\
Absolute magnitude & \(H_{V}\) & \(22.4\pm 0.04\) & [3] & \(>\)19.1 & [12] \\
Albedo & \(p_{V}\) & 0.01 - 0.2 & [4] & - & \\
Radius (for \(p_{V}\)=0.04) & \(r_{N}\) [m] & 110 & [3] & \(<\) 500 & [12] \\
Rotation state & & complex, long-axis mode & [5] & - & \\
Rotation period & \(P\) [hr] & \(8.67\pm 0.34\) & [5] & - & \\
Axis ratio & a:b & \(>\)6:1 & [6] & - & \\
Spectral slope & \(S_{V}\) [\% per 100 nm] & (7-23)\(\pm\)3 & [6,7] & 12\(\pm\)1 & [8] \\
H\({}_{2}\)O production & \(Q\)(H\({}_{2}\)O) [molec s\({}^{-1}\)] & \(<\) 6.1\(\times 10^{25}\) @ 0.38 au (obs) & [9] & 1.1 \(\times 10^{27}\) @ 2.0 au (obs) & [13] \\
H\({}_{2}\)O production & \(Q\)(H\({}_{2}\)O) [molec s\({}^{-1}\)] & 4.9 \(\times 10^{25}\) @ 1.4 au (model) & [10] & - & \\
Hyper volatile (CO\({}^{\ddagger}\)) & \(Q\)(X) [molec s\({}^{-1}\)] & 4.5 \(\times 10^{25}\) @ 1.4 au (model) & [10] & - & \\
CO\({}_{2}\) production & \(Q\)(CO\({}_{2}\)) [molec s\({}^{-1}\)] & \(<\) 9 \(\times 10^{22}\) @ 2.0 au (obs) & [4] & - & \\
CO production & \(Q\)(CO) [molec s\({}^{-1}\)] & \(<\) 1 \(\times 10^{24}\) @ 2.0 au (obs) & [4] & 4.4 \(\times 10^{26}\) @ 2.0 au (obs) & [14] \\
Dust production & \(Q\)(dust) [kg s\({}^{-1}\)] & \(<\) 10\({}^{-3}\) @ 1.4 au (obs) & [3,7] & \(2-50\) @ 2.6 au (obs) & [15,16] \\
\hline
\end{tabular}
\({}^{\dagger}\)\(v_{\infty}\) is the velocity an object on a hyperbolic orbit has infinitely far from the central body (here, the Sun); \({}^{\ddagger}\)Reference Key: [1] Osculating orbit: JPL Horizons orbital solution #16; [2] speed relative to the Sun while in interstellar space (_Bailer-Jones et al._ 2018); [3] _Meech et al._ (2017b); [4] the CO production rate is corrected from the number reported by _Trilling et al._ (2018); [5] _Belton et al._ (2018); [6] _'Oumuamua ISSI Team et al._ (2019); [7] _Jewitt et al._ (2017); [8] _Jewitt and Seligman_ (2022); [9] _Hui and Knight_ (2019); [10] _Micheli et al._ (2018); [11] JPL Horizons orbital solution #53; [12] _Jewitt et al._ (2020a); [13] _Xing et al._ (2020); [14] _Cordiner et al._ (2020); [15] _Jewitt and Luu_ (2019); [16] _de Leon et al._ (2019). For production rates of other species for 2I/Borisov, see the summary in _Jewitt and Seligman_ (2022).
\end{table}
Table 1: A summary of measured properties of interstellar objects
Finally, shortly after the discovery of significant neutral metal line emission in the optical spectra of non-sungrazing comets (_Hutsemékers et al._, 2021), these lines were also found in spectra of 2I/Borisov (_Guzik and Drahus_, 2021; _Opitom et al._, 2021). The abundance ratio of \(\log(Q({\rm Ni\,I})/Q({\rm Fe\,I}))=0.21\pm 0.1\) was within the range measured for solar system comets, as was \(Q({\rm Ni\,I}+{\rm Fe\,I})/Q({\rm CO})\). Taken together, although several authors concluded that the optically active species were consistent with normal solar system comet abundances near perihelion, these studies point to a significantly evolving coma composition as 2I/Borisov passed the Sun. It is unclear to what degree this evolution was caused by its interstellar nature, although the CO-rich composition is clearly unusual.

### ISO number densities and size distribution

With the discovery of 1I/'Oumuamua and 2I/Borisov, astronomers could finally estimate the true space density of ISOs in the solar neighborhood, albeit with large uncertainties given \(N=2\). A complication also arises from their apparently different natures during their encounters, which raises the question of whether they come from different populations of ISOs. Inert ISOs like 1I/'Oumuamua appear to have a local space density of \(\sim 0.1\) to \(0.3\) au\({}^{-3}\) at \(H=22\) (_Meech et al._, 2017; _Do et al._, 2018). This approximately agrees with pre-discovery upper limits for inert ISOs (_Engelhardt et al._, 2017) when scaled to this absolute magnitude. Some authors reported a possible tension between the inferred mass density contained in ISOs and the galactic stellar density, with 1I/'Oumuamua implying an exceptional amount of mass loss from planetary systems (e.g. _Rafikov_, 2018). However, the _'Oumuamua ISSI Team et al._ (2019) demonstrated that the space density of \(\sim 10^{15}\) pc\({}^{-3}\equiv 0.1\) au\({}^{-3}\) was compatible with plausible size distributions. The space density of ISOs exhibiting normal cometary activity like 2I/Borisov is less well constrained, even though most earlier upper limits _assumed_ cometary activity. _Engelhardt et al._ (2017) found a pre-2I/Borisov 90% upper limit of \(\leq 1.4\times 10^{-4}\) au\({}^{-3}\) for a nuclear absolute magnitude \(H\leq 19\), assuming the entire sunward surface was active. _Eubanks et al._ (2021) used the statistics of LPC discoveries, together with the assumption that ISOs share the same velocity distribution as stars within 100 pc, to derive a lower density of \(7\times 10^{-5}\) au\({}^{-3}\). This is only a factor of 2 less than _Engelhardt et al._ (2017), and is plausible given the extra survey time since that study. If 1I/'Oumuamua and 2I/Borisov come from the same ISO population, then Fig. 10 provides constraints on the luminosity distribution. We use absolute magnitudes to avoid any assumptions about albedo, although it is important to remember that these have been derived using assumed phase laws. If the two objects share the same albedo, then this also constrains the size distribution. For the interstellar space density from 1I/'Oumuamua we take the space density and confidence limits from _Levine et al._ (2021), and for 2I/Borisov we take the space density of _Eubanks et al._ (2021), together with Poisson uncertainty ranges calculated for a single detection at both magnitudes. We assume a single power-law relationship for the number density against size, as with only 2 points a more complex relationship is unwarranted.
An obvious size distribution to test is the theoretical differential size distribution for a population in collisional equilibrium, where the number of objects at a radius \(R\) is given by \(dN/dR\propto R^{-q}\) with \(q=3.5\) (_Dohnanyi_, 1969). A second size distribution would be that measured for solar system comets, normally given as a cumulative size distribution \(N(>R)\propto R^{1-q}\). Several investigators have found \(q\simeq 2.9\) (_Meech et al._, 2004; _Snodgrass et al._, 2011; _Fernandez et al._, 2013). A more recent analysis of NEOWISE comet observations by _Bauer et al._ (2017) found a debiased value of \(q=3.3\) for JFCs similar to the earlier studies, but a shallower debiased \(q=2.0\) for LPCs. Note that in comparison with the mass distribution \(N(m)\propto m^{-p}\) used in §4.2, the power-law exponents are related by \(q=(3p-2)\). If we transform these size distributions to differential absolute magnitude distributions \(N(H)\propto 10^{\alpha H}\), the exponents of the two forms are related by \(\alpha=(q-1)/5\). The above size distributions imply a theoretical magnitude distribution with \(\alpha=0.5\), and measured values of \(\alpha=0.2-0.46\).

It is clear from Fig. 10 that 1I/'Oumuamua and 2I/Borisov do not match these distributions. There are a number of possible explanations. First, they could be from different populations of ISOs, each of which has a more standard size distribution but with significantly different space densities. If they instead reflect the true size distribution of ISOs, this implies the size distribution is steeper than expected. As explained in §4.2, ejection processes can produce an increase in the relative number of smaller bodies, consistent with Fig. 10. An ejection-produced steepening of the size distribution would also explain the mismatch with the observed LPC population that originates from our Oort cloud, where objects have undergone similar long-term exposure to the ISM but not ejection. Finally, there could be a strong variation in albedo between bodies, possibly due to their individual histories in the ISM or, with 1I/'Oumuamua, by its close perihelion passage pre-discovery. Identifying which of these explanations is most likely will require further ISO detections for a much better measurement of the magnitude/size distribution.

### CNEOS 2014-01-08

While 1I/'Oumuamua and 2I/Borisov are _macroscopic_ ISOs, the existence of interstellar dust particles - _microscopic_ ISOs - has been known for some time from spacecraft detection (see the review by _Sterken et al._, 2019). The Advanced Meteor Orbit Radar multi-station complex has provided extensive data on interplanetary dust characteristics. The dynamical information that is collected allows identification of discrete source regions. One of the main sources appears to be the \(\beta\) Pic disk (_Baggaley_, 2000). During its cruise phase, the _Stardust_ mission captured particles from the oncoming interstellar dust stream (_Westphal et al._, 2014). In between dust and ISO sizes should lie objects \(\sim\)1 cm - 10 m in size. Fireballs produced by objects of these sizes are regularly detected via US DoD satellites, which measure both the luminosity of the fireball and (above a luminosity limit) the entry velocity vector. A search through the publicly accessible fireball catalogue led _Siraj and Loeb_ (2022) to identify a potential interstellar impactor, CNEOS 2014-01-08.
This object entered the Earth's atmosphere over the Western Pacific with a reported entry velocity of 44.8 km/s. Although formal measurement uncertainties are not published, DoD personnel communicated that "the velocity estimate reported to NASA is sufficiently accurate to indicate an interstellar trajectory". When integrated backwards, the reported velocity vector implied an original orbit with \(e=2.4\pm 0.3\), hyperbolic at the 3-\(\sigma\) level.

While the identification of an interstellar origin appears sound based on the available satellite data, there is a tension with the fireball and meteor data regularly obtained by ground-based optical and radar meteor surveys. _Musci et al._ (2012) identified 2 hyperbolic orbits out of 1739 meteors observed with the Canadian Automated Meteor Observatory, but careful inspection ruled them out as measurement error. Similarly, in an analysis of 824 fireballs detected by the European Fireball Network, _Borovicka et al._ (2022) identified two objects that had orbital eccentricities \(e>1\) at the 3-\(\sigma\) level, but the absolute values were near unity and they concluded there was no strong evidence for a hyperbolic nature. In an analysis of nearly 4000 meteors observed by the FRIPON project, _Colas et al._ (2020) report an upper limit on the fraction of interstellar meteors of \(0.1\)% but suspect the true value to be much lower. Collating several meteor orbit catalogues derived from photographic, video and radar systems, _Hajdukova et al._ (2020) concluded that there was no convincing evidence of any interstellar meteors being detected in 25 years of data covering many thousands of objects. That said, a careful analysis of \(4\times 10^{5}\) meteors detected by the Canadian Meteor Orbit Radar led _Froncisz et al._ (2020) to identify 5 possible interstellar meteors. We note that this possible interstellar fraction of \(\sim\)1 in 80,000 meteors is significantly lower than the \(\sim\)1 in 900 implied for the current CNEOS fireball database.

Figure 10: Number densities for 1I/'Oumuamua and 2I/Borisov as a function of their absolute magnitudes. Also shown are differential brightness power-laws with exponents \(\alpha=0.5\) for collisional equilibrium, and the range \(\alpha=0.46\) to \(0.2\) measured for solar system comets. All are anchored to the absolute magnitude of 1I/'Oumuamua.

It is clear from these latter studies that measurement uncertainties are extremely important in interstellar meteor identification, due to the short timespan over which the observations are obtained. Until such a time that a quantitative description of the uncertainties on the DoD satellite measurements is forthcoming (and indeed their actual values), some uncertainty unfortunately still remains concerning an interstellar origin for CNEOS 2014-01-08. It therefore follows that the pursuit of interstellar meteor detections with quantifiable uncertainties is a highly worthwhile endeavor.

### ISO impact hazard

As reported in the Planetary Decadal survey (_National Academies of Sciences and Medicine_ 2022), as of 2021 about 95% of the NEOs greater than 1 km in diameter have been found. These objects are capable of causing global effects upon impact. Objects that are larger than 140 m in diameter can cause regional destruction, and to date only about 33% of these have been discovered. Estimates show that there might be more than \(10^{5}\) objects \(\geq\)50 m in diameter which could cause destruction on urban scales, and less than 2% of these have been found.
In 2005 Congress directed NASA to find 90% of all NEOs larger than 140 m, and to date this directive has not been met. Once it is met, the threat from impacts by near-Earth asteroids will be minimized, but this will not address the risk from objects on long-period Oort cloud comet (OCC) trajectories. It is likely that OCCs are responsible for most of the very large impacts on Earth (_Jeffers et al._ 2001). The typical NEO encounter velocity peaks around 15 km/s, with a range up to 40 km/s (_Heinze et al._ 2021); however, the encounter velocities for OCCs will typically be around 54 km/s, ranging from 16 up to 72 km/s (_Jeffers et al._ 2001). ISO trajectories will be very similar to those of OCCs, but the encounter velocities can be even higher. At the time of close approach to the Earth on 2017 Oct. 14, the relative velocity of 1I/'Oumuamua with respect to the Earth was 60.3 km/s. It passed within 63 Earth-moon distances, and we did not and could not have discovered it until after it had passed.

Objects like 1I/'Oumuamua, which are on OCC-like orbits but which do not have any detectable activity, have small nuclei, and low albedo (such as the Manx comets, see _Meech et al._ 2016), are particularly dangerous because they will be harder to detect. _Heinze et al._ (2021) also noted that there is a larger bias than expected against finding small fast-moving objects because of the streak length on the detector, so inactive ISOs will be particularly hard to find and we are likely to be missing more of these. These OCC and ISO trajectories will be distributed across a wide range of inclinations. As discussed in §7.3.1, the powerful new Vera C. Rubin Observatory (Rubin) Legacy Survey of Space and Time (LSST) will begin in mid-2025. There is a concerted effort to optimize the survey effort to benefit solar system science, including detection of OCCs, NEOs and potentially hazardous objects (_Schwamb et al._ 2023). 1I/'Oumuamua was discovered at 1.22 au moving at a rate of 6.2\({}^{\circ}\) per day, i.e. typical of a faint nearby NEO that the LSST will be optimized to detect.

## 7 The Next Decade of Exocomet and ISO Studies - What Don't we Know?

### Exocomet Systems

#### 7.1.1 Probing outer exocomet populations

Current and under-construction facilities should lead to significant advances in the field of exocomet science. An important area of study will be to understand the dynamical evolution of exocomets in planetary systems, which requires both high angular resolution and sensitivity at dust-emission wavelengths. Near- to mid-IR dust imaging will take place with the 6.5-m James Webb Space Telescope (JWST), the 39-m ESO Extremely Large Telescope (ELT) with its MICADO and METIS cameras (_Brandl et al._ 2010; _Davies et al._ 2018), and large, deep ALMA surveys such as the ongoing ARKS (The ALMA survey to Resolve exoKuiper belt Substructures, Marino et al. in prep.). These will deliver unprecedentedly detailed images of dust emission from exocometary belts, revealing planet-belt interaction and allowing us to infer the dynamical fate of exocomets within belts, be it ejection, inward scattering, or survival within the belt. Gas observations offer a unique window into exocometary compositions in young planetary systems just after formation.
Near- to mid-IR spectroscopy with JWST combined with ground-based near-IR high-resolution spectrographs (like the newly-installed CRIRES+ instrumentation on the ESO Very Large Telescope and later the ELT+HARMONI) will be used to probe ro-vibrational transitions of new molecules undetectable at mm wavelengths, such as CO\({}_{2}\), CH\({}_{4}\) and of course H\({}_{2}\)O and its photodissociation product OH. This will constrain exocometary molecular gas compositions within outer belts, with the potential to confirm the gas origin, determine the physical release mechanism, and ultimately the ice composition of young exocomets - a missing evolutionary link between outer protoplanetary disks and solar system cometary/KBO compositions. In addition, ALMA CO and C I line imaging will constrain the radial and vertical distribution and hence the evolution of gas within exocometary belts; this will be combined with detailed kinematic analysis, thanks to ALMA's maximum spectral resolution, to reveal the environmental conditions (temperature, bulk density) of the gas, and evaluate the dynamical influence of planets on the evolution of exocometary gas.

#### 7.1.2 Probing the inward transport of exocomets

The highest priority for the future of exocomets in the inner regions should be to expand the number of detections and detected systems - currently dominated by \(\beta\) Pic - and study their compositions and dynamics as they enter the innermost regions of a planetary system. This will enable us to probe their formation location and the mechanism leading to their inward transport - with the potential of linking their origin to detected exoplanets, and imaged populations of exocomets in outer belts. JWST is especially sensitive to warm dust emission in the interior regions of nearby systems, either through its direct imaging or aperture masking modes, and this can be used to set limits on inward scattering and transport of icy exocomets from the outer belt towards the terrestrial region (_Marino et al._, 2018). The Roman Space Telescope is a 2.4-m wide-field optical and near-IR facility, currently planned for launch in 2027. Its coronagraph instrument is also expected to expand the number of systems with exozodiacal light detections in the terrestrial planet region (_Douglas et al._, 2022). Imaging of exozodiacal dust may for the first time also be possible around the nearest stars with the ELT (_Roberge et al._, 2012).

The identification and confirmation of more \(\beta\) Pic-like systems requires high-resolution echelle spectrographs with large-aperture telescopes to provide high signal-to-noise per wavelength bin. Observations with new high-resolution spectrographs on the ELT such as ANDES (_Maiolino et al._, 2013) and METIS (_Brandl et al._, 2010) will exploit the ELT's large collecting area with their high spectral (R\(\sim\)100,000) resolution and wide wavelength coverage. This will allow more sensitive searches for absorption in edge-on systems, both from narrow, stable gas at the stellar velocity and variable red- and blue-shifted features from exocometary gas. Monitoring observations looking for high-velocity exocomet features should aim to cover volatiles as well as metallic atomic species across a broad wavelength range, as one of the highest priorities is to understand the nature and volatile content of star-grazing exocomets. Finally, continued exploration of TESS and CHEOPS stellar light-curves to look for asymmetric exocomet transits is likely to lead to new discoveries.
These will be further supported by the PLATO transiting exoplanet mission (_Rauer et al._, 2014), with its factor \(\sim 5\) increased photometric accuracy over TESS, currently scheduled for launch in 2026. This will contribute to understanding the occurrence rate of star-grazing exocomets around large samples of stars, pushing towards smaller transits.

### Galactic evolution of ISOs

#### 7.2.1 Galactic effects on ISOs

In §4.2 we saw that ISOs experience various influences from the environment in the form of thermal heating and exposure to radiation and particles of various kinds. The question is whether ISOs also affect the environments they are passing through. The discovery of 1I/'Oumuamua taught us that the ISM contains not only gas and dust but that ISOs are a natural third component of the ISM. ISOs also pass through molecular clouds (see §4.3), with young ISOs more likely to have such an encounter than older ones (_Pfalzner et al._, 2020). Such a cloud passage might alter not only the ISO surface and structure but also the direction of their path, while at the same time decelerating or accelerating them. Depending on the size of the molecular cloud, \(10^{19}-10^{20}\) ISOs reside in it at any time. Being so small in mass, their individual effect is negligible, but it could be different if co-moving streams of ISOs pass through molecular clouds.

Although every ISO travels through the ISM, certainly not every ISO passes through a molecular cloud, let alone a protoplanetary disk. Yet the probability of ISOs passing through molecular clouds is surprisingly high. In the solar neighborhood, an ISO spends \(\approx\) 0.1%-0.2% of its journey passing through molecular clouds (_Pfalzner et al._, 2020). This value increases for young ISOs close to the Galactic center. In comparison, passing through an existing protoplanetary disk is a much rarer event due to its much smaller cross-section. Nevertheless, _Grishin et al._ (2019) estimate that at least \(10^{4}\) ISOs larger than 1 km cross any protoplanetary disk around field stars, with the number increasing to \(10^{5}\) for star cluster environments.

Under certain conditions, molecular clouds collapse and form stars. It seems that ISOs take part in this process to some extent, meaning that ISOs could become concentrated in collapsing molecular clouds (_Pfalzner et al._, 2021). However, these simulations indicate that there is a competitive capture process at work that favors the capture of ISOs by massive star clusters.

In the current planet formation scenario, there exist some difficulties in proceeding from m-sized to km-sized objects (see §2.3). It has been suggested that ISOs might easily overcome the meter-sized growth barrier by acting as seeds to catalyze planet formation. Two different scenarios have been suggested. _Grishin et al._ (2019) theorize that ISOs could be captured from the local star formation region into the disks surrounding young stars. Alternatively, _Pfalzner and Bannister_ (2019) propose that ISOs in the ISM taking part in molecular cloud collapse would become a natural component of the forming disk, without the need for additional capture. They give a first conservative estimate of the order of \(10^{11}\) ISOs typically being incorporated into forming star-disk systems. _Moro-Martin and Norman_ (2022) found a similar total number when considering metre-scale ISOs and larger, but also noted the number in the disk could increase by 2-3 orders of magnitude in cluster environments.
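As a rough order-of-magnitude check of the quoted cloud occupancy, one can combine the local ISO space density from §6 (\(\sim 10^{15}\) pc\({}^{-3}\)) with typical cloud sizes; a minimal sketch, in which the cloud radii are assumed placeholders:

```python
import numpy as np

# How many ISOs sit inside a molecular cloud at any instant?
# Assumes the local ISO space density ~1e15 pc^-3 (~0.1 au^-3) quoted
# earlier; the cloud radii below are illustrative placeholders.
N_ISO_DENSITY = 1e15  # ISOs per pc^3

for r_pc in (10.0, 30.0):
    volume_pc3 = 4.0 / 3.0 * np.pi * r_pc**3   # spherical cloud volume
    print(f"R = {r_pc:4.0f} pc -> N_ISO ~ {N_ISO_DENSITY * volume_pc3:.1e}")
# R = 10 pc -> ~4e18; R = 30 pc -> ~1e20, bracketing the quoted 1e19-1e20
```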
If ISOs take part in cloud collapse leading to star formation, this would also mean that ISOs are incorporated into forming stars. The question is then how many ISOs end up in stars. When ISOs pass through molecular clouds and are captured in planet-forming disks, they can be affected by these environments. Possible mechanisms are devolatilization and erosion. Both will lead to changes in the ISO size distribution. Whether the ISOs are altered depends primarily on the gas/dust density, which increases from an ISM environment to molecular clouds and protoplanetary disks.

Finally, _Do et al._ (2018) point out that exo-Oort clouds around stars that produce supernovae will be irradiated and lose surface volatiles, but the exocomets there will survive. As they then drift away from the supernova remnant they will form a natural population of devolatilized ISOs, possibly somewhat similar to 1I/'Oumuamua. However, supernovae are rare events, and such ISOs should not form a major component of the population.

#### 7.2.2 ISO effects on the galaxy

Over the last four years, it has become apparent that ISOs are a natural component of molecular clouds. The question is whether the presence of the huge number of low-mass ISOs "contaminates" star-forming regions. This depends on the mass of the ISOs present in molecular clouds, but also on the different chemistry between star-forming systems and on whether ISOs are evenly spread or come in streams from their parent systems. So far this question has been studied more from the perspective of individual, already forming planetary systems. The question of whether material can be spread from one planetary system to another is of long-standing interest (_Valtonen and Innanen_ 1982; _Melosh_ 2003; _Adams and Spergel_ 2005; _Brasser et al._ 2006). More recently, estimates of the likelihood of ISO capture events have been performed for the early phase in young star-forming clusters and the local galactic neighborhood (_Hands et al._ 2019; _Portegies Zwart_ 2021; _Napier et al._ 2021). In young clusters the stellar density is much higher than in the field; therefore, the ISO capture rate per Myr is naturally much higher. However, whether the capture probability is in general higher during the short cluster phase than over the entire lifetime of a star depends strongly on the assumed cluster properties and the stellar mass. The latter determines the capture cross section and the lifetime of a star. Besides, Gaia data strongly indicate that clusters are not completely dissolved within 10 Myr, but move together in an unbound state for a considerable length of time. Thus assuming field densities for the stars and the ISOs underestimates the capture rates in the first few hundred Myr. This requires further investigation in the future.

On average every star will contribute ISOs to the galactic population, and this material might be injected into the inner regions of star systems. _Seligman et al._ (2022a) suggest that this material can impact both the host stars and their planets, enriching their atmospheres, and that understanding the composition of the ISO population will have implications for post-formation exoplanet atmospheric composition. Currently there exists no definitive evidence that any gravitationally bound objects in the solar system are of extrinsic origin (_Morbidelli et al._ 2020).
However, the discovery of 1I/'Oumuamua and 2I/Borisov sparked wide-ranging speculation regarding the possibility that our solar system is being more broadly contaminated by minor bodies of extrasolar origin (_Siraj and Loeb_ 2019; _Namouni and Morais_ 2020; _Hands and Dehnen_ 2020). There exists a wide range in the estimated masses of planetesimals accreted from other stars while the Sun lived within its birth cluster, from \(10^{-5}\,{\rm M}_{\oplus}\) to a third of the Oort cloud population. _Napier et al._ (2021) find that about \(10^{6}\) more ISOs were captured during the cluster phase than accreted subsequently from the field, while the current steady-state factor is \(10^{4}\) and the total mass of surviving captured ISOs is \(\sim 10^{-9}\,{\rm M}_{\oplus}\). To put this into perspective, the mass of the Oort cloud is estimated to range from 0.75 \(\pm\) 0.25 \(\,{\rm M}_{\oplus}\) (_Brasser_ 2008) to 1.9 \(\,{\rm M}_{\oplus}\) (_Weissman_ 1996). With a typical comet mass of a few times \(10^{12}\) to \(10^{14}\) kg (_Sosa and Fernandez_ 2011), the Oort cloud contains approximately \(10^{10}\)-\(10^{12}\) comet-sized objects. Thus the captured ISO population is probably extremely small in comparison to the primordial comet population in the Oort cloud.

Current capture of ISOs into the solar system has also been explored. _Dehnen et al._ (2022) suggest that planets capture \(\sim\)2 ISOs every 1000 years, which would result in 8 ISOs captured within 5 au of the Sun at any time. A perennial candidate for a captured ISO is comet 96P/Machholz, due to its highly anomalous optical spectrum that is dominated by NH\({}_{2}\), with strong depletions of carbon-based molecules (_Langland-Shula and Smith_ 2007; _Schleicher_ 2008). Its orbital elements of \(q=0.124\) au, \(i=58^{\circ}\) and \(e=0.959\) mark it as one of the lowest-perihelion comets known, although its high inclination is due to strong Kozai-Lidov oscillations (_McIntosh_ 1990). Its nucleus was studied by _Eisner et al._ (2019) to compare with 1I/'Oumuamua and to search for possible effects of small perihelion passages. The strongest evidence for it being a captured ISO remains its peculiar coma composition, but this is not a definitive marker.

#### 7.2.3 Intergalactic Objects

The existence of ISOs in the Milky Way makes it likely that other galaxies are also populated with large numbers of ISOs. One can also speculate about the number of objects that have been ejected from their parent galaxy and become intergalactic objects. These would clearly be difficult to create, given the huge velocities required; at the Sun's position the escape velocity from the Milky Way is \(\simeq 500\) km/s (_Koppelman and Helmi_ 2021). While some tens of hypervelocity stars escaping the Milky Way have now been identified (_Li et al._ 2021), most are thought to have attained their velocities via encounters with massive black holes, whose dynamical and radiation environments are hardly conducive to ISO survival. However, we note that while escape to intergalactic space is highly unlikely for an ISO, we can let the galaxy carry it to a new home. Galactic mergers have taken place throughout the history of the Universe, with 5 minor mergers identified for the Milky Way (_Kruijssen et al._, 2020). An unknown fraction of the ISOs in those galaxies would have merged with the galactic population, and of course subsumed stellar systems would have ejected their own ISOs through the usual methods described in §4.
Hence, given enough time as ISO detections increase, there is a finite but as-yet unquantified probability of detecting an ISO carried here by its own galaxy.

### The range of ISO properties

#### 7.3.1 Future detections and surveys

The trajectory of 1I/'Oumuamua brought it in from above the plane of the solar system from the direction of the constellation Lyra (Fig. 11). It came to perihelion inside the orbit of Mercury at \(q=0.255\) au on 2017 Sep. 9, and was discovered post-perihelion at \(r=1.22\) au after it had made its close approach to Earth on 2017 Oct. 14. This close approach was at 0.162 au, passing within 63 lunar distances. Images from the Catalina Sky Survey recorded it on Oct. 14, but these were noted only after it had been discovered. 1I/'Oumuamua passed through the SOHO and STEREO fields of view in early 2017 Sep., near perihelion (_Hui and Knight_, 2019). Because of the large phase angle and the extreme forward scattering, this would have enhanced the brightness of any dust around the nucleus by \(\sim\) two orders of magnitude, but neither satellite detected it. It could not have been discovered any earlier than it was because 1I/'Oumuamua was just coming out of solar conjunction as seen from the ground and was faint (mag\(\sim\)23.7 at 0.79 au on Oct. 2). Prior to moving into solar conjunction in early August it would have been fainter than mag\(\sim\)25 - both times too faint to be discovered by any of the existing surveys. 1I/'Oumuamua was inside the orbit of Jupiter for less than 1.3 yrs. The second ISO was discovered by an amateur searching close to the horizon in twilight - something that the large telescope surveys cannot do (see Bauer _et al._, this volume).

The Rubin observatory in Chile has an 8.3-m survey telescope with a camera that covers a FOV of 9.6 deg\({}^{2}\). It will begin regular survey operations in mid-2025 that will provide a much greater depth than all previous surveys, reaching mag\(\sim\)24.7 and scanning the accessible sky every 3 days (_Jones et al._, 2009). _Hoover et al._ (2022) have estimated the number of discoveries that might be made by the LSST by generating a synthetic population of ISOs, assuming no cometary activity. They predict that the survey will find on the order of 1 ISO per year, but the number can be larger when cometary activity is considered. Ironically, however, even had the LSST survey been operational in 2017, it would have been unlikely to discover 1I/'Oumuamua because the ISO would have had an _average_ brightness brighter than mag\(\sim\)24.7 for less than a week before moving into solar conjunction in early 2017 Aug.

#### 7.3.2 Probing origins and evolution

The standard mechanisms of ISO creation via ejection of exocomets from their home systems are described in detail in §4. 1I/'Oumuamua's discovery resulted in the proposal of additional possible creation routes. Many formation mechanisms have been suggested in an attempt to explain the origin of the unusual shape of 1I/'Oumuamua. These include fluidization to a Jacobi ellipsoid during the red giant phase of a star (_Katz_, 2018), interstellar ablation (_Vavilov and Medvedev_, 2019), collisional elongation (_Sugiura et al._, 2019), formation as an icy fractal aggregate (_Moro-Martin_, 2019a) and a tidally disrupted planetesimal that passed close to growing giant planets, from which the volatiles were removed during a close stellar passage (_Raymond et al._, 2018a; _Zhang and Lin_, 2020).
Two 1I/'Oumuamua origin scenarios suggested a volatile composition based on homonuclear diatomic molecules with no dipole moments and no vibrational spectral lines, which would explain why no gas was detected. In an attempt to explain both the shape and acceleration of 1I/'Oumuamua, _Seligman and Laughlin_ (2020) suggested that it was composed of molecular hydrogen ice. In their model, mass wasting from sublimation far out in the solar system could explain both the shape and the acceleration. They proposed that the ISO formed in a cold dense molecular cloud core. The freezing point of H\({}_{2}\) is around 14 K, and the lowest temperature cloud cores are around 10 K. H\({}_{2}\) has no dipole moment and can only be detected in the far-UV or, from space, in the infrared through its rotational lines. An alternate suggestion was made by _Desch and Jackson_ (2021), who proposed that 1I/'Oumuamua was a collisional fragment from an exo-Kuiper belt. Many large KBOs in our solar system exhibit N\({}_{2}\) ice on their surfaces. Gaseous N\({}_{2}\) has no dipole-allowed vibrational spectral lines and is infrared inactive. Its spectral features are in the UV, where no observations were taken. This makes it an attractive volatile to explain how 1I/'Oumuamua could have unobserved outgassing. _Levine et al._ (2021) suggest that neither scenario is plausible, because large H\({}_{2}\)-ice bodies are not likely to form in cloud cores and the impacts in a Kuiper belt are not sufficient to generate large N\({}_{2}\) fragments. In order to reconcile some of the observational and model inconsistencies, _Bergner and Seligman_ (2023) propose that the acceleration of 1I/'Oumuamua is caused by release of molecular hydrogen that formed by cosmic-ray processing of water ice. These authors argue that laboratory experiments show that H\({}_{2}\) is efficiently produced in water ice during irradiation by high-energy particles. They propose that the H\({}_{2}\) gas is released over a wide range of temperatures during annealing of amorphous water ice and that there is sufficient gas released to account for the observed non-gravitational acceleration. Finally, regarding origin location within a disk, _Seligman et al._ (2022b) propose that measuring the C/O ratio of an ISO can be a tracer of whether the ISO formed inside or outside the ice line in its home star system.

Figure 11: Path of 1I/'Oumuamua as it entered the solar system, illustrating why it was not discovered sooner.

### In-situ Observations: Space missions

1I/'Oumuamua was accessible to ground-based telescopes for less than a month, a little longer using space facilities. After this brief period of observation it was found that the characteristics were quite different from what was expected from the first ISO, and this one discovery has energized a new interdisciplinary awareness in the study of planet formation. However, a more detailed investigation of ISOs with an in-situ mission presents unique challenges: the orbits may have high inclinations and the encounter speeds are typically high (tens of km/s), which means a short encounter. There are increased risks with high-velocity flybys if the ISO ejects large dust particles. The Giotto spacecraft was destabilized by a cm-sized impactor at a relative velocity of 68 km/s (_Bird et al._ 1988). NASA's competed mission calls are not compatible with missions that are responsive to new discoveries, i.e., missions that do not have a target at the time they are proposed.
This is relevant for ISOs as well as OCCs, Manx comets, and hazardous NEOs. Two approaches have been suggested to explore these targets: spacecraft in storage, ready to launch following target discovery, and spacecraft in a standby orbit, as is being done by ESA's Comet Interceptor mission (see Snodgrass _et al._, this volume, and _Snodgrass and Jones_ 2019; _Sanchez et al._ 2021). Typically, the target will be unknown at the time of spacecraft development, and this has an effect on the definition of basic spacecraft capability (e.g., \(\Delta\)v) and on the payload. Launch following the discovery of an ISO offers a greater flexibility in terms of target access but requires a very fast turnaround of a launch vehicle. A spacecraft in a standby orbit is more responsive to a target but has a more limited target accessibility.

Using known OCC orbits, an assessment of the phase space of targets - for available launch energies, a maximum encounter speed, times of flight of less than 10 years, and launch while the comet is more than 0.5 years from coming to perihelion - shows that for low launch energies, C\({}_{3}\), there are almost no targets. (In astrodynamics, the characteristic energy C\({}_{3}\), in km\({}^{2}\)/s\({}^{2}\), is the measure of the excess specific energy over that required to just barely escape a massive body; C\({}_{3}\) = v\({}_{\infty}^{2}\).) There are also almost no targets unless the encounter speeds are greater than 10 km/s. This suggests that at present only fast flyby missions are an option for ISOs. For a comprehensive mission, this likely requires significant advancements in payload capabilities (e.g. small deployable satellites and autonomous navigation) (_Donitz et al._ 2021, 2022).

There were several concepts developed for missions that could reach 1I/'Oumuamua. _Seligman and Laughlin_ (2018) estimated that for launch-on-detection scenarios, the wait time would be on the order of 10 years for a favorable mission opportunity. _Hibberd et al._ (2020) explored a range of flyby trajectories which could launch in the early 2030s and deliver a spacecraft to 1I/'Oumuamua with relative velocities between 15-20 km/s, arriving between 2048-2052. Finally, _Miller et al._ (2022) proposed high-performance solar sail scenarios that would allow for a rendezvous with an ISO by maintaining a high potential energy position until the detection of an ISO and then matching velocity through a controlled fall toward the Sun.

The LSST survey will increase the number of discoveries by going much fainter, and therefore to larger discovery distances. As shown by _Engelhardt et al._ (2017), ISOs are more common at larger perihelion distances. The fainter limiting survey magnitude would enable detection of a 1 km inactive nucleus with 4% albedo out to 5.5 au, and active ISOs much further. Although most detected ISOs may be too distant to easily reach with spacecraft, 1I/'Oumuamua and 2I/Borisov demonstrate that ISOs exist that come within 2 au, and detection of an incoming ISO at large distances may provide enough warning time for either the storage or standby concepts discussed above.

## 8 Summary

From the multitude of studies described in this chapter, it is clear that exocomets should be and are common in nearby stellar systems. Together with the physical mechanisms by which they can be lost to interstellar space, the local stellar neighborhood should be rich in ISOs, a prediction finally confirmed through the discovery of 1I/'Oumuamua and 2I/Borisov.
So the question remains: why is 1I/'Oumuamua not visibly active while 2I/Borisov is strongly active - is this due to origin, age, or the unseen 1I/'Oumuamua perihelion passage? From models of our solar system (_Shannon et al._ 2015), some fraction of ISOs will not be cometary but more like C- or S-type asteroids. Based on the observation of Manx comets (_Meech et al._ 2016), we should be able to distinguish asteroidal objects from inert comets. Is there a continuum of properties for ISOs, or have we already found two separate populations arriving from different origin mechanisms? This sample of two very different ISOs makes it difficult to predict what 3I will be like - will it be like planetesimals from our solar system or something different (see Fig. 12)? Given the increasing sensitivity of sky surveys, the current and next generation of optical, IR and sub-mm facilities, plus the theoretical advances in cometary/exocomet structure and evolution, these open questions have a chance of being answered in _Comets IV_.

## Acknowledgments

We thank the reviewers, Matthew Knight, Sean Raymond and an anonymous referee, for the very thorough and helpful reviews on a very short timescale! K.J.M. acknowledges support through NASA Grant 80-NSSC18K0853, in addition to support for HST programs GO/DD-15405, -15447, -16043, -16088, and -16915 provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA contract NAS 5-26555. A.F. acknowledges support from UK STFC award ST/X000923/1. Figure 5 is based on data obtained from the ESO Science Archive Facility with DOI [https://doi.org/10.18727/archive/33](https://doi.org/10.18727/archive/33).
2309.05307
FOREVER22: Gas and metal outflow from massive galaxies in protocluster regions
We study gas and metal outflow from massive galaxies in protocluster regions at $z=3-9$ by using the results of the FOREVER22 simulation project. Our simulations contain massive haloes with $M_{\rm h} \gtrsim 10^{13}~\rm M_{\odot}$, showing high star formation rates of $> 100~\rm M_{\odot}~yr^{-1}$ and hosting supermassive black holes with $M_{\rm BH} \gtrsim 10^{8}~\rm M_{\odot}$. We show that the mass loading factor ($\eta_{\rm M}$) sensitively depends on the halo mass and it is $\eta_{\rm M} = 1.2~(9.2)$ for $M_{\rm h} = 10^{13}~(10^{11})~\rm M_{\odot}$. Once the halo mass exceeds $\sim 10^{12.5}~\rm M_{\odot}$, the outflow velocity of the gas rapidly decreases near a virial radius, and the gas returns to a galactic centre finally as a fountain flow. Also, the metal inflow and outflow rates sensitively depend on the halo mass and redshift. At $z=3$, the inflow rate becomes larger than the outflow one if $M_{\rm h} \gtrsim 10^{13.0}~\rm M_{\odot}$. Thus, we suggest that massive haloes cannot be efficient metal enrichment sources beyond virial radii that will be probed in future observations, e.g., studies of metal absorption lines with the Prime Focus Spectrograph on the Subaru telescope.
Naoki Harada, Hidenobu Yajima, Makito Abe
2023-09-11T08:51:55Z
http://arxiv.org/abs/2309.05307v1
# FOREVER22: Gas and metal outflow from massive galaxies in protocluster regions

###### Abstract

We study gas and metal outflow from massive galaxies in protocluster regions at \(z=3-9\) by using the results of the FOREVER22 simulation project. Our simulations contain massive haloes with \(M_{\rm h}\gtrsim 10^{13}\) M\({}_{\odot}\), showing high star formation rates of \(>100\) M\({}_{\odot}\) yr\({}^{-1}\) and hosting supermassive black holes with \(M_{\rm BH}\gtrsim 10^{8}\) M\({}_{\odot}\). We show that the mass loading factor (\(\eta_{\rm M}\)) sensitively depends on the halo mass and it is \(\eta_{\rm M}=1.2\) (9.2) for \(M_{\rm h}=10^{13}\) (\(10^{11}\)) M\({}_{\odot}\). Once the halo mass exceeds \(\sim 10^{12.5}\) M\({}_{\odot}\), the outflow velocity of the gas rapidly decreases near a virial radius, and the gas returns to a galactic centre finally as a fountain flow. Also, the metal inflow and outflow rates sensitively depend on the halo mass and redshift. At \(z=3\), the inflow rate becomes larger than the outflow one if \(M_{\rm h}\gtrsim 10^{13.0}\) M\({}_{\odot}\). Thus, we suggest that massive haloes cannot be efficient metal enrichment sources beyond virial radii that will be probed in future observations, e.g., studies of metal absorption lines with the Prime Focus Spectrograph on the Subaru telescope.

keywords: galaxies: evolution - galaxies: formation - galaxies: high-redshift

## 1 Introduction

In the modern theory of cosmological structure formation, galaxies evolve via mergers and matter accretion from the intergalactic medium (IGM). The growth rate of galaxies sensitively depends on the environment in the large-scale structures. In overdense regions, the matter accretion rate and merger frequency are expected to be larger than in the cosmic mean fields. Thus, multiple massive/bright galaxies can collectively form in such overdense regions even in the early Universe. According to the hierarchical structure formation scenario, such particular regions finally evolve into local galaxy clusters (Overzier, 2016). Therefore, the clustering regions of massive galaxies in the early Universe are dubbed protoclusters (PCs).

Thanks to the brightness of massive galaxies in PCs, recent galaxy surveys have discovered many PCs and allowed us to study the galaxy evolution in overdense regions (e.g., Matsuda et al., 2005; Toshikawa et al., 2018; Higuchi et al., 2019; Umehata et al., 2019; Miller et al., 2018; Hill et al., 2020). These observations indicate that a diversity of galaxy populations emerges in PC regions. However, the origin of the diversity has not been understood well.

It is believed that the baryon cycle, i.e., accretion of matter, outflow, and recycling of gas, has an inseparable relationship with the evolution of galaxies. Star formation activities in galaxies are closely related to the supply of gas accreted from the circumgalactic medium (CGM) and IGM, and from galaxy mergers. The interstellar gas and metals can be returned into the CGM/IGM via stellar/active galactic nuclei (AGN) feedback. Therefore, the physical states of the CGM/IGM are tightly related to the activities of the galaxies. Some parts of the ejected components would fall back again into the galaxy due to its deep gravitational potential. This recycling process can affect the subsequent star formation histories. One of the central issues in galactic astronomy is to correctly follow the baryon cycle with the stellar/AGN feedback.
So far, various theoretical works have proposed numerical feedback models to reproduce the baryon cycle through simulations of galaxy formation (e.g., Springel and Hernquist, 2003; Schaye et al., 2015; Yajima et al., 2017; Pillepich et al., 2018). However, there remains considerable freedom in the parameters of the star formation and feedback models in cosmological simulations of galaxy formation. By introducing the models and parameters appropriately, previous studies have successfully reproduced observational properties, e.g., the star formation history (e.g., Springel and Hernquist, 2003; Oppenheimer and Dave, 2006; Pillepich et al., 2018), the galaxy luminosity and mass functions (e.g., Benson et al., 2003; Oppenheimer et al., 2010; Schaye et al., 2015; Pillepich et al., 2018), the galaxy size (e.g., Pillepich et al., 2018), the mass-metallicity relation (MZR) (e.g., Finlator and Dave, 2008; De Rossi et al., 2017), and the presence of metals in the diffuse IGM (e.g., Aguirre et al., 2001; Oppenheimer and Dave, 2006).

Recent observations have been studying the baryon recycling process between galaxies, the circumgalactic medium (CGM) and the IGM, which can provide hints to understand the feedback mechanisms related to the enhancement and quenching of star formation (Tumlinson et al., 2017). Bertone et al. (2007) introduced a model of recycling in their semi-analytical studies. Also, many theoretical studies have been devoted to clarifying the relation between the baryon cycle and galaxy evolution, for example, from the viewpoint of gas dynamics in the CGM (Oppenheimer and Dave, 2008; Oppenheimer et al., 2010; Ford et al., 2014; Christensen et al., 2016; Hafen et al., 2019; Borrow et al., 2020; Lochhaas et al., 2020; Mitchell et al., 2020; Hafen, 2020; Wright et al., 2021; Mitchell and Schaye, 2022), the impacts of stellar and AGN feedback (Shen et al., 2013; Tollet et al., 2019; Wright et al., 2020; Donahue and Voit, 2022), stellar masses and star formation rates (SFRs) (Dave et al., 2011), the metal enrichment history (Brook et al., 2014; Muratov et al., 2017), and galaxy mass assembly (Angles-Alcazar et al., 2017; Mitchell and Schaye, 2022). Among them, the particle tracking method in smoothed-particle hydrodynamics (SPH) simulations is a promising way to study and visualize the paths of gas flows in the CGM and/or IGM (Oppenheimer and Dave, 2008; Oppenheimer et al., 2010; Ford et al., 2014; Christensen et al., 2016; Hafen et al., 2019; Borrow et al., 2020). However, the mass range of galaxies investigated is still limited.

Recent discoveries of high-redshift PC regions and passive galaxies motivate the study of feedback and outflows from very massive galaxies. Progenitors of today's high-mass "Coma-type" clusters with \(M_{\rm h}>10^{15}\) M\({}_{\odot}\) are thought to have halo masses of \(M_{\rm h}>10^{13}\) M\({}_{\odot}\) at \(z\sim 3\) (Chiang et al., 2013). Simulations of massive PCs are challenging because of their rareness in space, which requires large simulation volumes. In this paper, we perform cosmological SPH simulations with a large simulation box of (714.3 \(\rm{cMpc}\))\({}^{3}\), and study the baryon cycle in massive dark matter (DM) haloes at redshifts of \(3-9\) that are expected to be progenitors of massive galaxy clusters in the local Universe.

In Section 2, we explain the FOREVER22 project and the star formation and feedback models. The inflow and outflow of gas in massive galaxies are presented in Section 3.
We show the metal outflow rate and spatial distribution around massive haloes in Section 4. In Section 5, we summarize our main results.

## 2 Simulation

### The FOREVER22 simulation

In this study, we use the results of the FOREVER22 project, which is a new series of cosmological hydrodynamic simulations focusing on 10 PC regions (Yajima et al., 2022). We utilize a modified version of the smoothed particle hydrodynamics (SPH)/\(N\)-body code GADGET-3 (Springel et al., 2001; Springel, 2005). The code incorporates star formation, radiation feedback and supernova (SN) feedback from stars, and AGN feedback from massive black holes (BHs). Here we use a standard SPH scheme that can suppress the mixing of clouds and background gas (see also Agertz et al., 2007; Schaller et al., 2015; Braspenning et al., 2023). Therefore, the outflowing gas can propagate more easily. On the other hand, we carefully tune the subgrid models to be fit for use in cosmological simulations with the standard SPH. Here we summarize the implementation of the code. More details of the simulations are described in Yajima et al. (2022).

### Star formation

In the FOREVER22 project, star formation is followed by using the model developed in Schaye and Dalla Vecchia (2008) that is based on the Kennicutt-Schmidt law \(\dot{\Sigma}_{*}=A(\Sigma_{\rm g}/1\ {\rm M}_{\odot}\ {\rm pc}^{-2})^{n}\) of local galaxies. The star formation rate \(\dot{m}_{*}\) is evaluated from the total pressure \(P\) of the interstellar medium (ISM) as follows

\[\dot{m}_{*}=m_{\rm SPH}\,A\,(1\ {\rm M}_{\odot}\ {\rm pc}^{-2})^{-n}\left(\frac{\gamma}{G}f_{\rm g}P\right)^{(n-1)/2}, \tag{1}\]

where \(m_{\rm SPH}\) is the SPH particle mass, \(\gamma=5/3\) is the ratio of specific heats, \(f_{\rm g}\) is the gas mass fraction (set to unity), and \(n\) and \(A\) are free parameters. During the simulation time step \(\Delta t\), the conversion probability from an SPH particle to a stellar particle is evaluated by (Schaye and Dalla Vecchia, 2008)

\[p_{*}=\min\left(\frac{\dot{m}_{*}\Delta t}{m_{\rm SPH}},1\right). \tag{2}\]

In the simulation, a stellar particle is treated as a star cluster assuming a Chabrier initial mass function (IMF) with the mass range of \(0.1-100\) M\({}_{\odot}\). We confirmed that the star formation and feedback models of the FOREVER22 project reproduce observational properties, e.g., the cosmic star formation rate density and the stellar mass function (Yajima et al., 2022), and most parameters follow the values derived in Schaye et al. (2015). The slope \(n=1.4\) and \(A=1.5\times 10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) are the fiducial values, but the slope \(n\) is set to 2.0 if the gas density exceeds the metallicity-dependent critical value \(n_{0}(Z/0.002)^{-0.64}\) cm\({}^{-3}\). Also, if the gas density satisfies \(n_{\rm H}>n_{0}\), an effective equation of state with the adiabatic index \(\gamma_{\rm eff}\) is used. To prevent undesirable gravitational fragmentation on unresolved scales, we set \(\gamma_{\rm eff}=4/3\). The critical density \(n_{0}\) and the floor temperature at \(n_{\rm H}=n_{0}\) are parameters which depend on the resolution of the simulation (see §2.1 in Yajima et al., 2022). Here we set \(n_{0}=0.1\) cm\({}^{-3}\).
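A minimal sketch of this stochastic recipe (eqs. 1-2) is given below; it assumes \(\gamma=5/3\), \(f_{\rm g}=1\) and the fiducial \(A\) and \(n\) quoted above, with all quantities in cgs units. It is an illustration of the scheme, not the production implementation.

```python
import numpy as np

# Pressure-based star formation recipe of Schaye & Dalla Vecchia (2008),
# following eqs. (1)-(2) with the fiducial parameters quoted in the text.
G = 6.674e-8                              # gravitational constant (cgs)
M_SUN, PC, YR = 1.989e33, 3.086e18, 3.154e7
A = 1.5e-4 * M_SUN / YR / (1e3 * PC)**2   # 1.5e-4 Msun/yr/kpc^2 -> g/s/cm^2
n, gamma, f_g = 1.4, 5.0 / 3.0, 1.0
SIGMA_UNIT = M_SUN / PC**2                # 1 Msun pc^-2 in g cm^-2

def sfr_particle(m_sph, pressure):
    """Star formation rate (g/s) of a gas particle with ISM pressure P (cgs)."""
    return m_sph * A * SIGMA_UNIT**(-n) * (gamma * f_g * pressure / G)**((n - 1.0) / 2.0)

def forms_star(m_sph, pressure, dt, rng):
    """Stochastically decide whether the particle becomes a star particle (eq. 2)."""
    p_star = min(sfr_particle(m_sph, pressure) * dt / m_sph, 1.0)
    return rng.random() < p_star

rng = np.random.default_rng(42)
# Illustrative particle: 1e6 Msun, P ~ 1e-12 dyn cm^-2, timestep 1 Myr
print(forms_star(1e6 * M_SUN, 1e-12, 1e6 * YR, rng))
```

For these illustrative inputs the implied gas consumption time, \(m_{\rm SPH}/\dot{m}_{*}\), is of order a Gyr, so the conversion probability per Myr timestep is small, as expected for self-regulated star formation.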
The gas temperature is estimated with a net cooling rate derived from the collisional and photoionization equilibrium states of each gas and metal species using cloudy v07.02 (Ferland, 2000). We take into account the line cooling processes of metals and primordial gas, and the cooling by continuum radiation, such as thermal bremsstrahlung and recombination.

### Stellar feedback

Young stars have an impact on the physical states of the interstellar medium (ISM), the CGM, and the IGM via radiation and SN feedback. These feedback processes are considered in our simulations as follows.

#### 2.3.1 Radiation feedback

First, the simulations take into account the photoionization feedback. We consider UV radiation only from star clusters with an age younger than \(10^{7}\) yr. The photoionized region is evaluated by taking the balance between the ionizing photon production rate \(\dot{N}_{\rm ion}\) of a stellar particle and the recombination rate of neighbour gas particles as

\[\dot{N}_{\rm ion}=\sum_{i}n_{\rm H,i}^{2}\alpha_{\rm B}\frac{m_{i}}{\rho_{i}}, \tag{3}\]

where \(n_{\rm H,i}\), \(m_{i}\) and \(\rho_{i}\) are the hydrogen number density, mass and density of the \(i\)-th SPH particle, and \(\alpha_{\rm B}\) is the case-B recombination coefficient (Hui & Gnedin, 1997). The local recombination rate is summed up in order of the distance from the stellar particle until the recombination rate reaches \(\dot{N}_{\rm ion}\). The \(\dot{N}_{\rm ion}\) of each stellar particle is evaluated by using a synthesized SED of the Chabrier IMF considering its age and metallicity. Once the gas is ionized, the temperature of the photoionized SPH particles is set to \(3\times 10^{4}\) K as hot H II bubbles, and star formation is forbidden in the H II region. This feedback can affect the state of the ISM if we have sufficient resolution to resolve the cold gas component. Therefore, this feedback can play an important role in suppressing star formation in the high-resolution series of the FOREVER22 simulations, the First0 and First1 runs.

In addition to the photoionization process, the UV photon absorption by dust gives momentum to gas. The simulation models the feedback of the UV radiation pressure by giving a momentum kick to the SPH particle. We evaluate the mean free path of the UV continuum photons against dust as \(l_{\rm mfp}=1/(\kappa_{\rm d}\rho)\), where \(\kappa_{\rm d}\) is the opacity of dust. If gas particles are within \(l_{\rm mfp}\) from a star cluster, they are influenced by the radiation pressure as

\[\mathbf{F}_{\rm rad}=\frac{\kappa_{\rm d}\rho L_{\rm UV}}{4\pi r^{2}c}\frac{\mathbf{r}}{r}, \tag{4}\]

where \(r\) is the distance from the stellar particle to the gas particle and \(L_{\rm UV}\) is the luminosity of the UV continuum (\(1000-5000\) Å).

Besides, we take into account the UV background radiation at \(z\lesssim 10\) by using the model of Haardt & Madau (2001). Once the UV background turns on, the chemical compositions and the cooling rates can change due to the photoionization process. We consider the modifications of the cooling rates for 11 elements (H, He, C, N, O, Ne, Mg, Si, S, Ca, and Fe) under the UV background following Wiersma et al. (2009a). Also, the self-shielding effect for dense gas is incorporated. We assume that if the hydrogen number density is larger than 0.01 cm\({}^{-3}\), the flux decreases by a factor (\(n_{\rm H}/0.01\) cm\({}^{-3}\))\({}^{-2}\) (Yajima et al., 2012; Rahmati et al., 2013). Thus, the UV background suppresses the cooling of the low-density gas.
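The neighbour summation in eq. (3) amounts to flagging gas particles as ionized, in order of distance, until the photon budget is exhausted. A minimal sketch of that loop follows; the case-B coefficient value and the cgs inputs are assumptions for illustration.

```python
import numpy as np

ALPHA_B = 2.59e-13   # assumed case-B coefficient at ~1e4 K, cm^3 s^-1

def flag_hii_region(pos_star, n_ion, pos_gas, n_H, m, rho):
    """Return indices of gas particles ionized by one star cluster (eq. 3)."""
    order = np.argsort(np.linalg.norm(pos_gas - pos_star, axis=1))
    ionized, budget = [], n_ion
    for i in order:
        # recombinations per second in particle i, with volume V_i = m_i / rho_i
        rate_i = n_H[i]**2 * ALPHA_B * m[i] / rho[i]
        if rate_i > budget:
            break
        budget -= rate_i
        ionized.append(i)   # set T = 3e4 K and disable star formation here
    return ionized
```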
#### 2.3.2 Supernova Feedback

The SN feedback is modelled by using the recipe proposed by Dalla Vecchia & Schaye (2012). In this scheme, the SN energy is injected into neighbour gas particles as thermal energy. To avoid the over-cooling problem, the gas particles to be heated are stochastically chosen and are heated to \(10^{7.5}\) K. Dalla Vecchia & Schaye (2012) argued that the injected thermal energy is converted into kinetic energy if the cooling time is longer than the local sound crossing time. Thus, the SN feedback effectively works if the gas density is less than the following critical value,

\[n_{\rm H}\sim 100~{\rm cm}^{-3}~\left(\frac{T}{10^{7.5}~{\rm K}}\right)^{3/2}\left(\frac{m_{\rm SPH}}{10^{4}~{\rm M}_{\odot}}\right)^{-1/2}. \tag{5}\]

As in Schaye et al. (2015), we introduce a multiplication factor \(f_{\rm th}\) to avoid the over-cooling problem even in high-density star-forming regions (\(n_{\rm H}>100\) cm\({}^{-3}\)),

\[f_{\rm th}=f_{\rm th,min}+\frac{f_{\rm th,max}-f_{\rm th,min}}{1+\left(\frac{Z}{0.1~Z_{\odot}}\right)^{n_{\rm Z}}\left(\frac{n_{\rm H,birth}}{n_{\rm H,0}}\right)^{-n_{\rm n}}}, \tag{6}\]

where \(n_{\rm H,birth}\) is the gas density where a stellar particle forms, and \(n_{\rm Z}\), \(n_{\rm n}\) and \(n_{\rm H,0}\) are free parameters. The SN feedback is likely to be strong in low-metallicity and low-density regions because of inefficient radiative cooling. Therefore, the multiplication factor is set as a function of metallicity \(Z\) and density \(n_{\rm H}\), with \(n_{\rm Z}=n_{\rm n}=2/\ln(10)\) and \(n_{\rm H,0}=0.67\) cm\({}^{-3}\) (Schaye et al., 2015). We set \(f_{\rm th,max}=2.5\) and \(f_{\rm th,min}=0.3\) as the upper/lower limits of \(f_{\rm th}\). The multiplication factor can exceed unity. This accounts for additional feedback processes that are not included in the simulation, e.g., stellar winds, cosmic rays, and/or more energetic SN yields than assumed in the simulation. Note that the upper limit of the multiplication factor is lower than that of previous works (\(f_{\rm th,max}=3.0\) is chosen in Schaye et al., 2015), by considering the additional radiative feedback channels as described above. With these parameters, our previous work reproduced properties of observed high-redshift galaxies (Yajima et al., 2022).

In our simulations, stellar particles continuously release hydrogen, helium, and metals into the neighbour gas. Based on the prescription in Wiersma et al. (2009b), we consider the gas and metal production with tabulated yields for Type Ia and II SNe, and from asymptotic giant branch stars.
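A minimal sketch of the boost factor in eq. (6), using the parameter values quoted above; the solar metallicity \(Z_{\odot}=0.0127\) is an assumed value for illustration.

```python
import numpy as np

# SN energy multiplication factor f_th (eq. 6) with the quoted parameters.
F_MAX, F_MIN = 2.5, 0.3
N_Z = N_N = 2.0 / np.log(10.0)
N_H0 = 0.67          # cm^-3
Z_SUN = 0.0127       # assumed solar metallicity (mass fraction)

def f_th(Z, n_H_birth):
    x = (Z / (0.1 * Z_SUN))**N_Z * (n_H_birth / N_H0)**(-N_N)
    return F_MIN + (F_MAX - F_MIN) / (1.0 + x)

# Strong boost in low-Z, dense birth clouds; weak in enriched, diffuse gas:
print(f"{f_th(1e-3 * Z_SUN, 10.0):.2f}")  # ~2.5 (close to F_MAX)
print(f"{f_th(Z_SUN, 0.1):.2f}")          # ~0.36 (close to F_MIN)
```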
### Black holes

The AGN feedback is expected to suppress star formation in massive galaxies via heating and evacuating the ISM (e.g., Dubois et al., 2012; Schaye et al., 2015). Our simulations include the growth and feedback processes of massive black holes (BHs) in galaxies. Since resolving an accretion disc around a BH is still difficult even in current galaxy formation simulations, we introduce subgrid models for the growth and feedback of BHs. First, a seed BH with a mass of \(10^{5}~h^{-1}~{\rm M}_{\odot}\) is placed at the galactic centre when the halo mass reaches \(10^{10}~h^{-1}~{\rm M}_{\odot}\). As for the orbital evolution of BH particles, a simple repositioning technique is adopted to mimic the effect of dynamical friction on the BH. Recent work suggested that this technique is important when considering the feedback from supermassive BHs (SMBHs) in large-scale cosmological hydrodynamic simulations (Bahe et al., 2022).

We evaluate the gas accretion rate onto the BH particle based on the Bondi-Hoyle-Lyttleton model as

\[\dot{m}_{\rm Bondi}=\frac{4\pi G^{2}M_{\rm BH}^{2}\rho}{(c_{\rm s}^{2}+v_{\rm rel}^{2})^{3/2}}, \tag{7}\]

where \(\rho\) and \(c_{\rm s}\) are the density and the sound speed of the gas around the BH, and \(v_{\rm rel}\) is the relative velocity between the gas and the BH. We consider the suppression of the gas accretion due to the angular momentum of the gas as

\[\dot{m}_{\rm acc}=\dot{m}_{\rm Bondi}\times{\rm min}(C_{\rm visc}^{-1}(c_{\rm s}/V_{\phi})^{3},~1), \tag{8}\]

where \(V_{\phi}\) is the circular speed of the gas around the BH and \(C_{\rm visc}\) is a numerical parameter regarding the viscosity of the accretion disc; \(C_{\rm visc}=200\pi\) is chosen as the fiducial setup (Schaye et al., 2015). Using the accretion rate, the growth rate of BHs is estimated by \(\dot{m}_{\rm BH}=(1-f_{\rm r})\dot{m}_{\rm acc}\), where \(f_{\rm r}=0.1\) is the radiative efficiency. The physical properties of the neighbour gas are evaluated by using the nearest 100 gas particles. The fiducial model takes into account the upper limit of the accretion rate given by the Eddington limit,

\[\dot{m}_{\rm Edd}=\frac{4\pi GM_{\rm BH}m_{\rm p}}{f_{\rm r}\sigma_{\rm T}c}. \tag{9}\]

The FOREVER22 project also includes runs with an option allowing super-Eddington accretion, i.e., \(\dot{m}_{\rm acc}>\dot{m}_{\rm Edd}\). Throughout this paper, we investigate only the results from the fiducial models with the Eddington limit. In the simulations, during the timestep \(\Delta t\) the released energy is estimated as \(\Delta E=f_{\rm e}f_{\rm r}\dot{m}_{\rm BH}c^{2}\Delta t\), where \(f_{\rm e}=0.15\) is the thermal coupling factor. As described below, the FOREVER22 project considers two types of feedback mechanisms, thermal (quasar mode) or kinetic (radio mode) feedback.

#### 2.4.1 Quasar mode feedback

In the quasar mode feedback, the released energy \(\Delta E\) is injected into neighbour gas particles by raising their temperature by \(\Delta T=10^{9}\) K. Unlike the SN feedback, the gas particle to be heated is selected as the nearest one. If the released energy from a BH is large, further particles are heated in order of distance. If the BH mass is smaller than \(10^{9}\) M\({}_{\odot}\), only the quasar mode feedback is used.

#### 2.4.2 Radio mode feedback

The jet-like kinetic feedback is activated when the BH mass exceeds \(10^{9}\) M\({}_{\odot}\). In radio mode feedback, half of \(\Delta E\) is injected as kinetic energy, and the other half is used as thermal energy. The target gas particle of the radio mode feedback receives a momentum kick with a velocity of 3000 km s\({}^{-1}\). The direction axis of the momentum is determined along the total angular momentum vector \(\mathbf{n}=\mathbf{L}/|\mathbf{L}|\) of the neighbour gas particles. Then the direction \(\mathbf{n}\) or \(-\mathbf{n}\) is randomly chosen.
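A minimal sketch of this accretion model (eqs. 7-9), combining the Bondi rate, the viscous suppression and the Eddington cap; the cgs units and the example inputs are illustrative assumptions.

```python
import numpy as np

# BH accretion rate following eqs. (7)-(9), in cgs units.
G, M_P, SIGMA_T, C = 6.674e-8, 1.673e-24, 6.652e-25, 2.998e10
M_SUN = 1.989e33
C_VISC = 200.0 * np.pi
F_R = 0.1                     # radiative efficiency

def accretion_rate(m_bh, rho, c_s, v_rel, v_phi):
    bondi = 4.0 * np.pi * G**2 * m_bh**2 * rho / (c_s**2 + v_rel**2)**1.5
    suppression = min((c_s / v_phi)**3 / C_VISC, 1.0)            # eq. (8)
    m_edd = 4.0 * np.pi * G * m_bh * M_P / (F_R * SIGMA_T * C)   # eq. (9)
    return min(bondi * suppression, m_edd)

# Example: a 1e8 Msun BH in gas with n_H ~ 1 cm^-3 and ~10 km/s velocities
mdot = accretion_rate(1e8 * M_SUN, rho=1.67e-24, c_s=1e6, v_rel=1e6, v_phi=1e6)
mdot_bh = (1.0 - F_R) * mdot    # BH growth rate after radiative losses
print(f"{mdot_bh * 3.154e7 / M_SUN:.2e} Msun/yr")
```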
### The Models

The FOREVER22 project focuses on ten massive protocluster (PC) regions chosen from a \((714.3~{\rm cMpc})^{3}\) parent volume.

Figure 1: Spatial distributions of gas column densities in the most massive haloes at \(z=3.0\) in the three protocluster regions: PCR0, PCR1, and PCR2. The halo masses are \(1.2\times 10^{14}\) M\({}_{\odot}\) (PCR0), \(8.4\times 10^{13}\) M\({}_{\odot}\) (PCR1), and \(5.5\times 10^{13}~h^{-1}\) M\({}_{\odot}\) (PCR2). The projection depth is 50 ckpc h\({}^{-1}\) and white arrows represent streamlines of gas.

Figure 2: Halo mass \(M_{\rm h}\) scaling of the mass loading factor \(\eta_{\rm M}\), the median of the radial velocity \(v_{\rm r-50}\), and the outflow rate \(\dot{M}_{\rm out}\), extracted from the top 500 massive haloes in PCR0 at \(z=3.16\). Red circles indicate the medians and gray shades show the quartiles (25–75 percent) in each bin, with a bin width of 0.4 dex. Red dashed lines in the top panels are power-law fits to the medians, \(\eta_{\rm M}=7.43\times 10^{5}(M_{\rm h}/{\rm M}_{\odot})^{-0.45}\) for the top-left panel and \(\eta_{\rm M}=7.24\times 10^{3}(M_{\rm h}/{\rm M}_{\odot})^{-0.27}\) for the top-right panel. Blue lines in the middle panels are the escape velocity estimated assuming the NFW profile. In the left panels, each quantity is evaluated at radius \(r=0.1R_{\rm vir}\) (galaxy-scale outflow), whereas in the right panels it is evaluated at \(r=R_{\rm vir}\) (halo-scale outflow).

We fit the galaxy-scale medians with the power law \(\eta_{\rm M}=10^{\beta}(M_{\rm h}/{\rm M}_{\odot})^{\alpha}\) and obtain the best-fit parameters \(\alpha=-0.45\) and \(\beta=5.9\). This slope is intermediate between the two characteristic scaling laws in the \(\eta_{\rm M}-M_{\rm h}\) relation: the momentum-conservation model (\(\eta_{\rm M}\propto M_{\rm h}^{-1/3}\)) and the energy-conservation model (\(\eta_{\rm M}\propto M_{\rm h}^{-2/3}\)) (Murray et al., 2005; Veilleux et al., 2020). Which model applies depends sensitively on the cooling efficiency in the post-shock region; the adiabatic case corresponds to the energy-conservation model. Our results suggest that the cooling is moderately effective on the galactic scale. On the halo scale, \(\eta_{\rm M}\) slowly decreases at \(M_{\rm h}\lesssim 10^{13}\) M\({}_{\odot}\) and becomes almost flat at \(M_{\rm h}\gtrsim 10^{13}\) M\({}_{\odot}\). We perform the fitting at \(M_{\rm h}<10^{13}\) M\({}_{\odot}\) as with the galaxy-scale analysis, and the best-fit slope is \(\alpha=-0.27\). This indicates that the outflow propagates in the momentum-conservation mode on the halo scale. Our results at \(M_{\rm h}\lesssim 10^{13}\) M\({}_{\odot}\) are similar to Mitchell et al. (2020) on the galaxy scale, but the slope is shallower than in their work (\(\alpha\sim-1\)) on the halo scale.
This may indicate that in our simulations the cooling works more efficiently at the halo scale than in Mitchell et al. (2020). Although the number of samples is not sufficient to discuss statistical features, our simulation contains more massive haloes. Our results show that even massive haloes with \(M_{\rm h}\sim 10^{14}\) M\({}_{\odot}\) can keep \(\eta_{\rm M}\) greater than unity, which indicates that protoclusters in the early Universe can provide gas and metals to the IGM. Radial velocities of the outflowing gas are also shown in the figure. On the galactic scale, the median outflow velocities (\(v_{\rm r-50}\)) monotonically increase with the halo mass, from 38 km s\({}^{-1}\) at \(M_{\rm h}=10^{10.8}\) M\({}_{\odot}\) to 304 km s\({}^{-1}\) at \(10^{14}\) M\({}_{\odot}\). One characteristic velocity of a galactic halo is the escape velocity \(V_{\rm esc}=(2GM_{\rm h}/R_{\rm vir})^{1/2}\), which is \(\sqrt{2}\) times the circular velocity \(V_{\rm c}\). Assuming \(M_{\rm h}\propto R_{\rm vir}^{3}\), we obtain the halo mass scaling of the escape/circular velocities as \(\propto M_{\rm h}^{1/3}\). In the middle panels of Fig. 2, we overplot \(V_{\rm esc}\) as a function of \(M_{\rm h}\). Our simulations show that \(v_{\rm r-50}\) has a halo mass dependence similar to \(V_{\rm esc}\), whereas its absolute value is smaller. Martin (2005) pointed out a tight relationship between the outflow velocity and the circular velocity. The velocity ratio of \(v_{\rm r-50}\) to \(V_{\rm c}\) is \(\sim 0.4\) at \(10^{10.8}\) M\({}_{\odot}<M_{\rm h}<10^{12.8}\) M\({}_{\odot}\). For the halo-scale outflow, however, the increasing trend disappears beyond \(M_{\rm h}\sim 10^{12}\) M\({}_{\odot}\). The difference between \(V_{\rm esc}\) and \(v_{\rm r-50}\) increases with the halo mass, suggesting that most gas particles cannot escape from massive haloes. This will be discussed in Sections 3.4 and 4. The bottom panels show outflow rates. As shown in Yajima et al. (2022), massive galaxies are on the observed main sequence (Pearson et al., 2018), having SFRs of \(\gtrsim 1000\) M\({}_{\odot}\) yr\({}^{-1}\). A similar amount of gas is evacuated, resulting in \(\eta_{\rm M}\gtrsim 1\). Given that the gas accretion rate onto a halo is low, most gas can be lost within the cosmic time scale. Note, however, that Yajima et al. (2022) suggested that the most massive haloes in PC regions remain gas-rich. Massive stars and BHs are thought to be the two main drivers of galactic outflows. Fig. 3 shows the dependence of \(\eta_{\rm M}\) on the specific star formation rate (sSFR) and the gas accretion rate onto BHs for \(r=0.1~R_{\rm vir}\). We find that massive haloes have a lower mass loading factor compared with low-mass haloes even if they have similar sSFR or \(\dot{M}_{\rm BH}\). This is consistent with the negative \(\eta_{\rm M}-M_{\rm h}\) relation in Fig. 2. We find that \(\eta_{\rm M}\) decreases with sSFR. Low-mass haloes show frequent fluctuations in their star formation histories due to cycles of outflow and replenishment of gas (see also, Yajima et al., 2017; Arata et al., 2020; Abe et al., 2021). Galaxies can then maintain high star formation rates as the halo mass increases. Therefore, low values of sSFR may indicate a phase of suppressed star formation after a starburst. Note that such fluctuations of the SFR are also seen in simulations with resolution sufficient to resolve the structure of low-mass galaxies (e.g., Crain et al., 2015).
Considering the time lag between the launching due to the starburst and the large-scale outflow, \(0.1R_{\rm vir}/V_{\rm c}\sim 31.5\left(\frac{1+z}{4}\right)^{-3/2}\) Myr, the anti-correlation can be explained. The decreasing tendency of \(\eta_{\rm M}\) shifts to a flat (orange line) or slightly increasing tendency (shade) at sSFR \(\gtrsim 1\) Gyr\({}^{-1}\). Some of the galaxies with sSFR \(\gtrsim 1\) Gyr\({}^{-1}\) are thought to be massive galaxies in the starburst phase. As a result of the large amount of SN energy from stars, the mass loading factor may keep a constant value slightly less than 10 (as the median value). Contrary to the result for sSFR, we find no strong dependence of \(\eta_{\rm M}\) on the BH activity. Here, the BH activity is measured at each time step. However, it can change rapidly over time scales shorter than the dynamical time of the galactic outflow. This time-scale difference may weaken the dependence. Note that energetic AGN feedback has the potential to drive outflowing gas at rates comparable to or greater than those driven by stars. As we described in §2.4, the BH particles release an energy \(E=f_{\rm e}f_{\rm r}\dot{m}_{\rm BH}c^{2}\Delta t\) during the time interval \(\Delta t\). Thus, we estimate the energy release rate of a BH as \[\dot{E}_{\rm BH} = f_{\rm e}f_{\rm r}\dot{m}_{\rm BH}c^{2} \tag{11}\] \[= 8.5\times 10^{44}\ {\rm erg\ s^{-1}}\ \left(\frac{f_{\rm e}f_{\rm r}}{0.015}\right)\left(\frac{\dot{m}_{\rm BH}}{\rm M_{\odot}\ yr^{-1}}\right).\] On the other hand, the available SN energy released per unit stellar mass is \(\epsilon_{\rm SNII}=8.73\times 10^{15}\) erg g\({}^{-1}\), assuming the Chabrier IMF and an energy per SN event of \(10^{51}\) erg (Dalla Vecchia and Schaye, 2012). Considering continuous star formation over \(\sim 10\) Myr (the typical lifetime of massive stars), the energy release rate due to the SN feedback becomes \[\dot{E}_{\rm SN}=\epsilon_{\rm SNII}\times{\rm SFR}=5.5\times 10^{43}\ {\rm erg\ s^{-1}}\ \left(\frac{{\rm SFR}}{100\ {\rm M_{\odot}\ yr^{-1}}}\right). \tag{12}\] In the case of an energy-conserving outflow, the energy release rate \(\dot{E}\) is equated with the production rate of kinetic energy as \(\frac{1}{2}\dot{M}_{\rm out}V_{\rm out}^{2}=\dot{E}\). In the case of the momentum-conserving mode, on the other hand, the production rate of momentum can be estimated as \(\dot{M}_{\rm out}V_{\rm out}=\dot{P}\sim\dot{N}m_{\rm SPH}v_{\rm th}\sim\dot{E}/v_{\rm th}\), where \(v_{\rm th}\) is the thermal velocity of the gas particles heated by the SN/BH feedback. Given that \(V_{\rm out}\sim V_{\rm esc}\), the ratio of the outflow rates due to the BH and the SN feedback is evaluated as \[\frac{\dot{M}_{\rm out}^{\rm BH}}{\dot{M}_{\rm out}^{\rm SN}}=\xi\left(\frac{f_{\rm e}f_{\rm r}}{0.015}\right)\left(\frac{\dot{m}_{\rm BH}/{\rm SFR}}{0.01}\right), \tag{13}\] where the factor is \(\xi=15.3\) for the energy-conserving case and \(\xi=2.7\left(\frac{\Delta T_{\rm SN}}{10^{7.5}~{\rm K}}\right)^{1/2}\left(\frac{\Delta T_{\rm BH}}{10^{9}~{\rm K}}\right)^{-1/2}\) for the momentum-conserving case. Hence, the BH feedback contributes significantly to the outflow rate if \(\dot{m}_{\rm BH}/{\rm SFR}\) is larger than \(6.5\times 10^{-4}\) (\(3.7\times 10^{-3}\)) for the energy (momentum) conserving case. As indicated in McAlpine et al. (2017), BHs can grow rapidly once the halo mass exceeds \(\sim 10^{12}\) M\({}_{\odot}\).
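The scalings above are easy to check numerically. The following minimal Python sketch evaluates Equations (11)–(13) for a given SFR and BH accretion rate; the function names are ours, and the momentum-conserving \(\xi\) is hardcoded at the fiducial heating temperatures.

```python
def edot_bh(mdot_bh, fefr=0.015):
    """BH energy injection rate [erg/s], Eq. (11); mdot_bh in Msun/yr."""
    return 8.5e44 * (fefr / 0.015) * mdot_bh

def edot_sn(sfr, eps_snii=8.73e15):
    """SN energy injection rate [erg/s], Eq. (12); sfr in Msun/yr."""
    gram_per_sec = 1.989e33 / 3.156e7  # 1 Msun/yr expressed in g/s
    return eps_snii * sfr * gram_per_sec

def outflow_ratio(mdot_bh, sfr, mode="energy", fefr=0.015):
    """Ratio of BH- to SN-driven outflow rates, Eq. (13)."""
    # xi = 2.7 assumes the fiducial dT_SN = 10^7.5 K and dT_BH = 10^9 K
    xi = 15.3 if mode == "energy" else 2.7
    return xi * (fefr / 0.015) * (mdot_bh / sfr) / 0.01

# Example: a starburst with SFR = 100 Msun/yr accreting 1 Msun/yr onto its BH
print(outflow_ratio(1.0, 100.0, "energy"))    # ~15: BH feedback dominates
print(outflow_ratio(1.0, 100.0, "momentum"))  # ~2.7
```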
A similar trend is seen in the FOREVER22 simulations (see Figs. 8 and 11 in Yajima et al., 2022). In the numerical simulations, the activity of the BH feedback changes drastically over time, and we find no clear dependence of \(\eta_{\rm M}\) on \(\dot{M}_{\rm BH}\). The BH growth and activity differ considerably between simulations (e.g., Dave et al., 2019; Dubois et al., 2021). However, Nelson et al. (2019) showed a similarly weak dependence despite different resolutions and BH models. Nevertheless, the BH feedback has the potential to boost \(\eta_{\rm M}\) even in high-mass galaxies with \(M_{\rm h}\gtrsim 10^{12}\) M\({}_{\odot}\) (see Fig. 2).

### 3.2 Radial profile of inflow and outflow rate

The velocity of the outflowing gas gradually decreases as interstellar matter or CGM is swept up. Once the velocity falls below the escape velocity at a given radius, the outflowing gas starts to return as inflow. To study this point, we examine radial profiles of the inflow and outflow rates. Fig. 4 presents the specific inflow and outflow rates, defined as \(f_{\rm s-in,s-out}\equiv\dot{M}_{\rm in,out}(r)/M_{\rm h}\). We divide haloes into four mass bins and measure the median values. \(f_{\rm s-out}\) of the most massive group (top-left panel) declines at \(r\gtrsim R_{\rm vir}\). This indicates that a part of the gas returns into the halo at \(r\sim R_{\rm vir}\) as a halo-scale fountain flow. As the radial distance increases, the difference between \(f_{\rm s-in}\) and \(f_{\rm s-out}\) increases and reaches roughly a factor of ten. In the case of lower-mass haloes, the outflow rates remain constant or increase even beyond \(r\sim R_{\rm vir}\), suggesting that metal-enriched gas from star-forming regions is supplied to the IGM. The specific outflow rates exceed \(\sim 3\times 10^{-2}\) Gyr\({}^{-1}\). Therefore, considering the gas fraction \(M_{\rm gas}/M_{\rm h}\sim\Omega_{\rm b}/\Omega_{\rm M}\), low-mass haloes could lose most of their gas within the cosmic age at \(z\sim 3\) if there were no gas inflow. However, we confirm that \(f_{\rm s-in}\) is higher than \(f_{\rm s-out}\) at all radii within \(2R_{\rm vir}\). Therefore, most high-redshift galaxies grow in a gas-rich state. Mitchell et al. (2020) also showed radial profiles of the outflowing gas within a virial radius at \(z=0\). Our simulation results show similar structures, in that the outflow rate increases with radial distance within a virial radius. Note that the outflow rate is higher than in the radial profile of Mitchell et al. (2020) by a factor of a few because of the different redshifts. Here, by expanding the range of radial distance and halo mass, we show that the outflow rates rapidly decrease just beyond the virial radii in the case of massive haloes. The structures of the inflow and outflow gas are likely to be reflected in the properties of absorption lines in the SEDs of background objects like QSOs or bright galaxies, which will be one of the main topics of future observational missions, e.g., the Subaru Prime Focus Spectrograph. Wright et al. (2020) suggested that the baryon accretion rate depends sensitively on sub-grid physics like stellar or AGN feedback. In particular, the accretion rate for low-mass and low-redshift galaxies is sensitive to the sub-grid models because of the efficient galaxy outflows. We will investigate reasonable sub-grid models via comparisons with upcoming observational data in future work.

### 3.3 Redshift evolution of inflow and outflow
Fig. 5 shows the redshift evolution of the specific gas inflow/outflow rates at \(r=R_{\rm vir}\). We pick haloes with stellar masses larger than \(10^{8}\) M\({}_{\odot}\) and divide them into three groups according to their halo mass \(M_{\rm h}\) at each redshift. Overall, we find no clear differences between the three groups. All cases indicate an increasing trend in \(f_{\rm s-in}\) between \(3<z<9\). Dekel et al. (2013) and Dekel and Mandelker (2014) developed a simple toy model of galaxy evolution and expressed the mass accretion rate as a function of the scale factor, \(a=(1+z)^{-1}\). For an Einstein–de Sitter universe, the specific dark matter accretion rate onto a galaxy increases monotonically with \(z\) and is proportional to \((1+z)^{n}\) with \(n=5/2\). Dekel et al. (2013) assumed that the specific gas inflow rate follows the same relation, which was confirmed by their simulations (Dekel and Mandelker, 2014). For halo masses of \(<10^{12}\) M\({}_{\odot}\), the inflow rates increase with redshift as \((1+z)^{2.5}\), which indicates the haloes simply grow via mergers with gas-rich haloes. For massive haloes with \(M_{\rm h}>10^{12}\) M\({}_{\odot}\), the slope is somewhat shallower. This can be due to matter accretion or halo mergers with a low baryon fraction. The outflow rates, on the other hand, are remarkably different from the inflow rates and do not show clear evolution. They stay fairly constant between \(3<z<9\), with some small variations: an increasing trend at \(z\sim 3\) and a decreasing feature at high redshifts. Since the difference between the inflow and outflow produces the change of baryon mass \(M_{\rm b}\) in haloes, the large difference at high redshift suggests active halo growth. At a lower redshift of \(z\sim 3\), however, the difference becomes small and the baryon mass growth may be suppressed. Note that \(f_{\rm s-out}\) of the massive halo group decreases at \(z\gtrsim 6\). This is likely due to the deep gravitational potential, which retains most of the gas against the SN feedback.

### 3.4 Trajectories of outflowing gas particles

Thanks to the Lagrangian nature of our code, we can track outflowing gas particles (Oppenheimer and Dave, 2008; Ford et al., 2014; Christensen et al., 2016; Angles-Alcazar et al., 2017; Borrow et al., 2020; Hafen, 2020). Fig. 6 shows the radial distances of outflowing gas particles for two sample haloes with halo masses of \(1.3\times 10^{14}\) (Halo0) and \(1.1\times 10^{12}\) M\({}_{\odot}\) (Halo37). We follow the motions of gas particles from \(z=4.8\) to \(z=2.9\), corresponding to a time period of 1.0 Gyr. We classify target particles into two groups to study galaxy-scale and halo-scale outflows. For the galaxy-scale samples (group A), all gas particles with initial radial positions between 0.05 and \(0.1R_{\rm vir}\) are selected. For the halo scale (group B), we consider gas particles between 0.5 and 0.6 \(R_{\rm vir}\). The black and blue thick lines respectively represent the median values for groups A and B at each time, while thin lines show trajectories of 10 randomly selected particles. As shown in the top panel of Fig. 6, in Halo0 both median values reach \(r\sim 0.8~R_{\rm vir}\) at \(t\sim 1.8\) Gyr and then remain at a similar distance. This indicates that most of the outflowing gas contributes to the hot halo gas.
We find that a part of the gas particles return to the galactic centre after reaching radii of \(\gtrsim 0.5~R_{\rm vir}\), i.e., large-scale fountain flows (see the black thin line in the top panel of Fig. 6). This feature indicates that most of the outflowing gas has outflow velocities similar to the halo escape velocity but does not exceed it. Thus, in the case of massive haloes, the outflow rate decreases near the virial radius as shown in Fig. 4. In Halo37 (bottom panel of Fig. 6), on the other hand, most gas particles pass beyond the virial radius within \(\sim 0.3\) Gyr and finally reach \(\sim 2.5~R_{\rm vir}\) at \(t\sim 1\) Gyr. Thus, we suggest that the baryon cycle between the galaxy and the CGM depends sensitively on the host halo mass, which will be investigated with future observations with the Prime Focus Spectrograph on the Subaru telescope. We measure the maximum distances of outflowing gas particles (\(R_{\rm max}\)). Fig. 7 shows the mass dependence of \(R_{\rm max}\) for 44 haloes with masses ranging from \(3.66\times 10^{11}\) M\({}_{\odot}\) to \(2.83\times 10^{13}\) M\({}_{\odot}\) at \(z=3.0\). Here we pick the gas particles initially distributed within 0.1 \(R_{\rm vir}\) and consider their trajectories over the redshift range from \(z=4.0\) to 3.0. We confirm that these 44 haloes are always major progenitors of the haloes at \(z=3\), i.e., they do not experience mergers with heavier haloes, because it is difficult to distinguish fountain flows from accretion onto merger companions. We find that most gas in haloes with masses larger than \(10^{12}\) M\({}_{\odot}\) does not go beyond the virial radii, and \(R_{\rm max}\) is distributed at \(\sim R_{\rm vir}\) or \(\lesssim 0.5~R_{\rm vir}\). On the other hand, some haloes with \(M_{\rm h}<10^{12}\) M\({}_{\odot}\) show large \(R_{\rm max}\). Note that, in the case of haloes with \(M_{\rm h}<10^{12}\) M\({}_{\odot}\), there is a large dispersion, with \(R_{\rm max}/R_{\rm vir}\) ranging between \(\sim 0.02\) and 2.3, which can be induced by gas structures around star-forming regions. Borrow et al. (2020) showed that a large fraction of gas can move beyond 1 Mpc by \(z=0\) even in the case of massive galaxies. They suggested that jet feedback could induce the outflow. Therefore, the migration distance of the gas can depend on the redshift and the feedback models. If young stars remain embedded in dense clouds even at the end of the lifetime of massive stars, \(\sim 10\) Myr, the thermal energy released by SN feedback can be lost via radiative cooling before the galactic wind is launched. The haloes with low values of \(R_{\rm max}\) are likely to keep forming stars smoothly, while the star formation of those with large \(R_{\rm max}\) can be quenched until the gas is recovered. The quenched time scale can be \(\tau\sim R_{\rm vir}/V_{\rm c}\sim 315\) Myr at \(z\sim 3\).

Figure 4: Radial profiles of the specific inflow and outflow rates (normalized by halo mass) in four halo mass bins at \(z=3.0\). Red solid and blue dashed lines are the median values of the outflow and inflow rates in each halo mass range. Shades represent the quartiles (25–75 percent).

Figure 3: Mass loading factor \(\eta_{\rm M}\) as a function of sSFR (left panel) and the gas accretion rate onto BHs divided by stellar mass (right panel) at \(r=0.1~R_{\rm vir}\). The top 500 massive haloes at \(z=3.16\) are analyzed. Red lines and shades show the median values and quartiles (25–75 percent) in each bin. Each point is coloured by halo mass.
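Identifying \(R_{\rm max}\) from the tracked trajectories is straightforward; a minimal sketch follows, where the array layout and function name are our assumptions rather than the simulation pipeline.

```python
import numpy as np

def max_outflow_distance(r_traj, R_vir, r0_max=0.1):
    """Maximum distance reached by tracked outflow particles (cf. Fig. 7).

    r_traj : (T, n_part) array of particle radii over T snapshots
    R_vir  : (T,) virial radius of the major-progenitor halo at each snapshot
    r0_max : select particles with initial r/R_vir below this value
    Returns R_max/R_vir for each selected particle.
    """
    x = r_traj / R_vir[:, None]      # radii in units of R_vir
    selected = x[0] < r0_max         # initial galaxy-scale sample
    return x[:, selected].max(axis=0)
```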
## 4 Metal outflow

In this section, we investigate metal outflows from galaxies. The metal-enriched CGM can be a powerful tool for revealing the feedback and star formation processes via comparisons between observations and theoretical models (e.g., Tumlinson et al., 2017; Fujimoto et al., 2019). Also, the metal distribution in the IGM can reflect the star formation history over the cosmic age (e.g., Cowie & Songaila, 1998; Aguirre et al., 2001). The complex nature of metal recycling and distribution is related not only to star formation but also to AGN activity (Moll et al., 2007). Previous numerical simulations investigated the outflow of metals from galaxies while considering the variation of halo mass. For instance, Christensen et al. (2018) performed high-resolution cosmological hydrodynamic simulations to investigate the history of metal recycling from dwarf to Milky Way-sized galaxies. They found that the gas outflow effectively removed metals from galactic discs and that the metals were dispersed even beyond the virial radii, while more massive galaxies retained more metals within their virial radii. Recently, Pandya et al. (2021) performed state-of-the-art cosmological hydrodynamic simulations and investigated the properties of the gas/metal outflow. They found that the metal loading factor for low-mass galaxies could be of order unity, which implies that most of the metals were ejected with the outflow. Also, for the low-mass galaxies, the metal loading factors for the ISM and halo regions exhibited similar values, suggesting an efficient supply of metals to the CGM and IGM. However, the ratio drops for massive galaxies, and a smaller fraction of the metals leaves the haloes. In this study, we extend the mass range of the galaxies up to \(\gtrsim 10^{13}\) M\({}_{\odot}\) and investigate the properties of metals between the galaxies and their CGM. Fig. 8 presents the metal inflow and outflow rates at \(r=R_{\rm vir}\) for the 500 most massive haloes at \(z=3,4,5,\) and 6. The inflow and outflow rates are calculated as \[\dot{M}_{\rm in,out}^{\rm metal}=\sum_{i=1}^{n}\frac{m_{i}Z_{i}}{\delta r}|v_{{\rm r},i}(r_{i},t)|. \tag{14}\] Similar to the case of the gas flow, the metal outflow rate increases with the halo mass.

Figure 5: Redshift evolution of outflow and inflow rates normalized by halo masses for three halo groups: \(10^{10}<M_{\rm h}/{\rm M}_{\odot}<10^{11}\) (left panel), \(10^{11}<M_{\rm h}/{\rm M}_{\odot}<10^{12}\) (centre panel), and \(10^{12}<M_{\rm h}/{\rm M}_{\odot}\) (right panel). Red and blue lines are the median values of the outflow and inflow rates in each halo group, and shades represent the quartiles (25–75 percent).

Figure 6: Radial motion of outflowing gas particles in the most massive halo (panel (a), \(M_{\rm h}\sim 1.3\times 10^{14}\) M\({}_{\odot}\) at \(z=2.89\)) and Halo37 (panel (b), \(M_{\rm h}\sim 1.1\times 10^{12}\) M\({}_{\odot}\) at \(z=2.89\)) from \(z=4.79\) to \(2.89\), corresponding to a time interval of \(0.96\) Gyr. At the initial step (\(z=4.79\)), the outflow particles are categorized into two groups depending on their initial distances: \(0.05<r/R_{\rm vir}<0.1\) (black lines) and \(0.5<r/R_{\rm vir}<0.6\) (blue lines). Thick lines show the median values of all outflow particles in each category. Thin lines represent the trajectories of 10 particles in each group.
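A minimal sketch of this estimator follows; we assume Equation (14) sums over the particles in a thin shell of width \(\delta r\) at the measurement radius, with the sign of the radial velocity separating inflow from outflow (the shell width is our assumption, as the text does not specify it).

```python
import numpy as np

def metal_flow_rates(m, Z, r, v_r, R_meas, dr):
    """Shell estimator of the metal inflow/outflow rates, as in Eq. (14).

    m, Z, r, v_r : particle masses, metallicities, radii and radial
                   velocities (consistent units); R_meas, dr set the shell.
    Returns (mdot_in, mdot_out) in mass per unit time.
    """
    shell = np.abs(r - R_meas) < 0.5 * dr
    outflow = shell & (v_r > 0.0)
    inflow = shell & (v_r < 0.0)
    mdot_out = np.sum(m[outflow] * Z[outflow] * np.abs(v_r[outflow])) / dr
    mdot_in = np.sum(m[inflow] * Z[inflow] * np.abs(v_r[inflow])) / dr
    return mdot_in, mdot_out
```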
At \(z=3\), massive haloes drive metal outflows larger than \(1.0\) M\({}_{\odot}\) yr\({}^{-1}\), corresponding to a gas outflow of \(\dot{M}_{\rm out}\gtrsim 1000\) M\({}_{\odot}\) yr\({}^{-1}\) with metallicity \(Z\sim 0.1~Z_{\odot}\). The mass loading factors for the massive galaxies are of order unity at the virial radii (see Fig. 2). The metal loading factor is evaluated as \(\sim 10^{-3}\). Therefore, only a small fraction of the metals is ejected into the IGM in the case of massive galaxies. As expected from the results in the previous section, the metal outflow rate does not increase significantly at \(M_{\rm h}>10^{12}\) M\({}_{\odot}\). On the other hand, the metal inflow rate simply increases with the halo mass via merger processes and accretion of metal-enriched gas around galaxies. In particular, the IGM around massive haloes is metal-enriched earlier, which is likely due to the metal outflow from neighbouring low-mass haloes. Therefore, the metal inflow rate becomes high at \(M_{\rm h}\gtrsim 10^{12}\) M\({}_{\odot}\). The metal outflow rate exceeds the inflow rate in the case of low-mass haloes, and the trend is reversed for massive ones. Therefore, the process of metal enrichment within galaxies differs depending on the halo mass. For low-mass haloes, the metal content within the virial radius can be dominated by metals produced by in-situ star formation via supernovae. As the halo mass increases, the fraction of metals supplied from outside the halo via accretion and mergers can become dominant. Christensen et al. (2018) investigated galaxies in the mass range \(M_{\rm h}\sim 10^{9.5-12}\) M\({}_{\odot}\) at \(z=0\) and pointed out a similar mass trend. Fig. 9 shows net metal outflow rates, i.e., \(\dot{M}_{\rm out}^{\rm metal}-\dot{M}_{\rm in}^{\rm metal}\). We find that the net metal outflow changes from positive to negative as the halo mass increases. The critical halo mass increases as the redshift decreases: it is \(\sim 10^{13.0}\) M\({}_{\odot}\) (\(\sim 10^{11.2}\) M\({}_{\odot}\)) at \(z=3\) (\(z=6\)). Because of the deeper gravitational potentials at higher redshifts, the gas cannot escape from the haloes as efficiently as from low-redshift haloes of the same mass. The moderately massive galaxies with \(M_{\rm h}\sim 5\times 10^{12}\) M\({}_{\odot}\) have net metal outflow rates of \(\gtrsim 0.1\) M\({}_{\odot}\) yr\({}^{-1}\). Also, the net metal outflow rate at higher redshift is lower than at lower redshift. This is likely due to the lower metallicities of the IGM and ISM at higher redshifts. Galaxies with halo masses of \(\gtrsim 10^{12}\) M\({}_{\odot}\) show metallicities of \(\sim 0.1-0.2~Z_{\odot}\) at \(z=6\), increasing by a factor of \(\sim 1.5-2.0\) by \(z=3\). Thus, low-mass haloes at lower redshifts can be efficient metal-enrichment sources of the IGM. Fig. 10 presents the radial profiles of metallicities for haloes in the PCR0 region from \(z=5.0\) to \(3.0\). The metallicity monotonically increases with cosmic time, indicating the accumulation of metals in the CGM region, and also decreases as the radial distance increases. Metals are first produced near the centre of the galaxy and then carried to the CGM and IGM via stellar/AGN feedback (Boissier & Prantzos, 1999). We fit the profiles as \(\frac{Z}{Z_{\odot}}\propto\left(\frac{r}{R_{\rm vir}}\right)^{\alpha}\) for \(0.5<r/R_{\rm vir}<1\).
The slope becomes shallower with increasing halo mass: it is \(-1.06\) for \(10^{10}<M_{\rm h}/{\rm M}_{\odot}<10^{11}\), \(-0.85\) for \(10^{11}<M_{\rm h}/{\rm M}_{\odot}<10^{12}\), \(-0.77\) for \(10^{12}<M_{\rm h}/{\rm M}_{\odot}<10^{13}\), and \(-0.76\) for \(10^{13}<M_{\rm h}/{\rm M}_{\odot}\). The shallower slopes of massive haloes may reflect that metal outflows stall near the virial radii and accumulate in the halo gas, as suggested in the previous section. The distributions of massive haloes with \(M_{\rm h}>10^{12}\) M\({}_{\odot}\) become shallower as the redshift decreases, while those for lower halo masses do not change significantly with time. This reflects the different efficiencies of metal accumulation in the CGM depending on the nature of the outflowing gas, i.e., fountain flow or escape from the host haloes. We note that AGN-driven outflows would have the potential to affect the metal distribution. For instance, Taylor & Kobayashi (2015) argued that a powerful AGN-driven outflow can remove the gas and metals from galaxies and enhance the metallicity of the CGM compared to the result without AGN feedback. As we mentioned in §3.1, BHs grow rapidly and give feedback to the surrounding gas once the halo mass exceeds \(\sim 10^{12}\) M\({}_{\odot}\) in our simulations. Therefore, the radial profile would also become shallower if the AGN-driven outflow activates in low-redshift high-mass galaxies and reduces the metals in the central region, though the impact might strongly depend on the modelling of AGN feedback. The metallicity profiles are also related to the mass–metallicity relation (MZR) suggested by observations (e.g., Maiolino et al., 2008; Mannucci et al., 2009) and simulations (e.g., Brooks et al., 2007; Ma et al., 2016; Collacchioni et al., 2020). At \(r=R_{\rm vir}\) and \(z=3.0\), for example, in our results \(Z/Z_{\odot}\) increases from \(0.023\) at \(10^{10}<M_{\rm h}/{\rm M}_{\odot}<10^{11}\) (upper-left panel) to \(0.061\) at \(10^{12}<M_{\rm h}/{\rm M}_{\odot}<10^{13}\) (lower-left panel), but decreases to \(0.055\) in massive haloes with \(10^{13}<M_{\rm h}/{\rm M}_{\odot}\) (lower-right panel). Such a peaking behaviour in the MZR is also suggested for galaxies at \(z\lesssim 1\) (Collacchioni et al., 2020). The radial metal distribution can be investigated via observations of absorption lines in the SEDs of background objects. However, observational sight-lines are currently too sparse to probe the radial distributions because the background sources are frequently limited to bright QSOs due to the sensitivities of current observational facilities (e.g., Steidel et al., 2010; Lehner et al., 2014). Recently, Mendez-Hernandez et al. (2022) obtained metal distributions from stacked spectra using background galaxies. Future observational missions like the Subaru PFS will be able to detect metal absorption lines in the SEDs of many background galaxies and allow us to study the radial metal distributions. With the upcoming data, we will carry out comparison studies and investigate reasonable star formation and feedback models in future work. Note that there are dispersions of the metal distributions within the same mass ranges shown in the figure.

Figure 7: Maximum distances of outflow gas particles at \(3.0<z<4.0\) for 44 massive haloes. The halo mass is measured at \(z=4.0\). Only outflow particles with initial positions of \(0<r/R_{\rm vir}<0.1\) at \(z=4.0\) are chosen. Black filled circles and error bars represent the median values of the outflow particles and the quartiles (25–75 percent).
Therefore, statistical studies would be required.

## 5 Summary

In this paper, we investigate gas and metal outflows from massive galaxies in protocluster regions by using the results of the FOREVER22 simulation project, which follows the formation of protocluster regions at \(z\geq 2\). Our simulations contain very massive haloes of \(\gtrsim 10^{13}\) M\({}_{\odot}\) even at high redshifts \(z\gtrsim 2\). There are also starburst galaxies with \({\rm SFR}\gtrsim 1000\) M\({}_{\odot}\) yr\({}^{-1}\) and supermassive black holes with \({\rm M}_{\rm BH}\gtrsim 10^{8}\) M\({}_{\odot}\) in the protocluster regions at \(z\sim 3\). Here we study the relation between the outflows and the halo mass, the SFR, and the gas accretion rate onto BHs. The main conclusions are summarized as follows.

* The gas outflow rate increases with the halo mass and reaches \(\sim 1000\) M\({}_{\odot}\) yr\({}^{-1}\) for \(M_{\rm h}\gtrsim 10^{13}\) M\({}_{\odot}\). The mass loading factor decreases with the halo mass; it is \(\eta_{\rm M}\sim 1~(10)\) for \(M_{\rm h}\sim 10^{13}~(10^{11})\) M\({}_{\odot}\).
* Massive haloes with \(M_{\rm h}>10^{12.5}\) M\({}_{\odot}\) show radial profiles of the gas outflow rate that decrease significantly at the virial radii. Therefore, in the case of massive haloes, the outflow from star-forming regions contributes to the halo gas or to a large-scale fountain flow.
* The metal outflow depends on the halo mass and redshift. Considering the metal inflow and outflow, the total metal masses in massive galaxies of \(M_{\rm h}\gtrsim 1\times 10^{13}\) M\({}_{\odot}\) at \(z=3\) increase with time, while lower-mass haloes can lose metals via galactic winds. Thus, we suggest that the origin of the metal-enriched CGM depends sensitively on the halo mass.

Future observational missions with the James Webb Space Telescope or the Prime Focus Spectrograph on the Subaru telescope will be able to investigate the spatial distributions of gas and metals around high-redshift galaxies.

Figure 8: Metal outflow and inflow rates evaluated at the virial radii. Red and blue lines are the median values of the outflow and inflow rates in each halo mass range. Shades represent the quartiles (25–75 percent).

## Acknowledgments

The numerical simulations were performed on the computer cluster XC50 at NAOJ and Trinity at the Center for Computational Sciences, University of Tsukuba. This work is supported in part by MEXT/JSPS KAKENHI Grant Numbers 17H04827, 20H04724, 21H0489, and 22H00149, the JST FOREST Program, Grant Number JPMJFR202Z, and ABC project research, Grant Number AB041008 (HY).

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2303.17758
Commuter Count: Inferring Travel Patterns from Location Data
In this Working Paper we analyse computational strategies for using aggregated spatio-temporal population data acquired from telecommunications networks to infer travel and movement patterns between geographical regions. Specifically, we focus on hour-by-hour cellphone counts for the SA-2 geographical regions covering the whole of New Zealand. This Working Paper describes the implementation of the inference algorithms, their ability to produce models of travel patterns during the day, and lays out opportunities for future development.
Nathan Musoke, Emily Kendall, Mateja Gosenca, Lillian Guo, Lerh Feng Low, Angela Xue, Richard Easther
2023-03-31T01:01:06Z
http://arxiv.org/abs/2303.17758v1
# Commuter Count: Inferring Travel Patterns from Location Data

###### Abstract

In this Working Paper we analyse computational strategies for using aggregated spatio-temporal population data acquired from telecommunications networks to infer travel and movement patterns between geographical regions. Specifically, we focus on hour-by-hour cellphone counts for the SA-2 geographical regions covering the whole of New Zealand. This Working Paper describes the implementation of the inference algorithms, their ability to produce models of travel patterns during the day, and lays out opportunities for future development.

## 1 Introduction

The movement of people between geographical regions and their personal interactions are key determinants of the spread of pathogens such as Covid-19. While interpersonal connections occur on scales ranging from individual households to international travel, interactions between people in the course of their daily routines provide a key "meso" layer in any detailed analysis of pathogenic transmission. The accumulation and analysis of data on the daily activities of individuals has privacy implications and commercial sensitivities, creating (entirely legitimate) barriers to its use by modelers. However, while it is unlikely that detailed trajectories of individuals through the course of a day will be shared outside of tightly controlled environments, aggregated spatio-temporal data can often be made available. In this Working Paper we analyse strategies for using aggregated spatio-temporal population data acquired from telecommunications networks to infer travel and movement patterns within regions. Specifically, we focus on hour-by-hour cellphone counts for the SA-2 geographical regions covering the whole of New Zealand [12] and base our work on algorithms described by Akagi _et al._ [1]. This Working Paper describes the implementation of these algorithms, examines their ability to build models of travel patterns during the day from this data, and lays out opportunities for future development. Our testing data set consists of cellphone counts during January and February 2019 and 2020, where counts are given for individual New Zealand SA-2 geographical regions on an hourly basis. For reference, there are 2253 SA-2 regions in New Zealand. The regions vary in area, such that their typical population is on the order of a few thousand people. The Greater Auckland region contains approximately 600 SA-2s whereas in remote parts of the South Island a similar geographical area might correspond to just a handful of SA-2s.1

Footnote 1: There are several exceptions, including offshore islands, which are very thinly populated. This approach also implicitly assumes that cellphone counts are a good proxy for the locations of the majority of the population.

We focus on the two algorithms, developed by Akagi and colleagues, referred to as the 'exact' and 'approximate' methods. These algorithms use hour-by-hour population counts to estimate bidirectional flows between pairs of geographical regions. Long-distance travel over short time periods is penalised by a simple function of the physical separation between regions. Furthermore, a strict upper bound can be applied to the distance plausibly travelled between successive time steps, so that not all possible region pairings are viable. The algorithms adapt naturally to "real" geographies with complex shapes (rather than a grid-based geometry) and data in which the total number of individuals is not constant, due to phones being turned on or off or moving in and out of coverage areas.
The motivation for this work was to facilitate analyses of the spread of Covid-19 in New Zealand. However, the treatment here can be applied to any number of tasks requiring models of population flows. This work investigates these algorithms and extends our understanding of their properties and limitations. Having implemented both the exact and approximate algorithms, we test the consistency of their outputs and find limitations and sensitivities to input parameters which are common to both algorithms. We also identify and address issues that arise when the number of people leaving a given region is roughly similar to the number of destinations available to them, so that the expected number of people moving between many pairs is less than one. In addition we have developed a simulator which generates synthetic data that allows the algorithms to be explored without access to cellphone counts and the underlying individual trajectories, facilitating additional verification strategies. Our implementation of the exact algorithm is computationally efficient; far more so than originally reported. In particular, we can "solve" for the Greater Auckland and Waikato regions (encompassing the three Auckland area District Health catchments) in tens of seconds of walltime on a laptop. This Working Paper is structured as follows. Section 2 provides a quick quantitative survey of the cellphone count data utilised in the inference process. Section 3 discusses the construction of a likelihood function characterising the probabilities of transitions between regions and Section 4 summarises the exact Akagi algorithm to maximise this likelihood, describes its implementation in a computationally efficient Python code2, and identifies issues with its application to this problem. Section 5 introduces a simple data simulator used to validate the code, while Section 6 looks at the sensitivity of the results to key parameters in the model. Section 7 describes the alternative approximate algorithm proposed by Akagi _et al._ and contrasts its output to the exact algorithm. In Section 8 we sketch an approach to compare our results to external sources of commuter information. Finally, Section 9 provides a brief summary of our experiences and identifies areas for further exploration. We discuss possible improvements and extensions to the algorithms, and highlight issues with these algorithms that might limit their utility.

Footnote 2: Available at [https://github.com/auckland-cosmo/FlowStock](https://github.com/auckland-cosmo/FlowStock)

## 2 Cellphone Count Data

Our analysis uses aggregated cellphone count data gathered from New Zealand telecommunications companies. In particular, this data gives hourly counts of the number of cellphones active in each SA-2 region. Note that the term 'active' applies to all cell phones which are powered on and detectable by the cell network; a cell phone does not need to be in use to be considered active. Within this data, it is possible to clearly discern patterns in population flow, for example during weekends, holidays, or large gatherings.

Figure 1: Representative counts throughout a month. There are clearly large daily commutes in and out of the central-city SA-2 region Auckland-University, anti-correlated with flows to the residential area Balmoral. There is a discernible difference between workday and weekend counts. Inspecting data from Puketona–Waitangi, containing the Waitangi Treaty grounds, one can see a significant increase in the lead up to February 6th.
Figure 1 provides some representative examples.3

Footnote 3: February 6th is a public holiday in New Zealand, during which there is often a large gathering at Puketona-Waitangi to commemorate the signing of the Treaty of Waitangi.

It should be noted that each cell phone is counted in only one SA-2 region per hour. This is reflected by the conservation of the total cell phone count over time. Indeed, while a cell phone may be in range of multiple cell towers at any given moment, it will only use a single tower for communication at any one time, as determined by the relative signal strength. Hence, when the instantaneous count is performed on an hourly basis, each cell phone is associated with only one tower/SA-2 region. As the hourly data represents instantaneous counts, movements between SA-2 regions/cell towers on timescales smaller than one hour are not captured. While most of the adult population is likely to carry a cellphone with them on a day-to-day basis, there is no guarantee that cellphone counts map uniquely to individuals or that the mapping is unbiased. Indeed, we expect that certain demographics -- e.g. the very young or very old, and the economically deprived -- may be missed in this data. Furthermore, populations with 0 or multiple phones will be heavily biased in age, social class, and other areas that are correlated with infection risk factors. Unfortunately, the currently available data on cell phone access is not sufficiently detailed to incorporate into our modelling at this time. While some relevant 2018 census data is available [8], it only provides information on access to telecommunication systems at the household, rather than individual, level. Furthermore, the census data includes no information for 7.7% of households. While a detailed study of cell phone ownership is outside of the scope of this work, it is expected that data from future national surveys may improve our ability to correlate the movements of cell phones with the movements of individual persons. Finally, we also note that the data exhibits frequent discontinuities in regional counts at midnight, when cell tower data is apparently "rebaselined", as shown in Figure 2. However, since our focus will be on movements during the working day this is of limited relevance to our analysis.

Figure 2: Hourly differences in the count in the Auckland–University area during the 26th to 28th of February 2020. One can see a sharp morning rush hour and a less pronounced evening rush hour. There is an anomaly at midnight on the 28th. Such features are common at midnight and appear to be artifacts associated with the capture and processing of the data by the telecommunications providers.

## 3 Log-Likelihood and Likelihood Gradient

Following Akagi _et al._, we introduce a probability of transition between different regions,

\[P(\mathbf{M}|\mathbf{N},\boldsymbol{\theta})=\prod_{t=0}^{T-2}\prod_{i\in V}\left(\frac{N_{ti}!}{\prod_{j\in\Gamma_{i}}M_{tij}!}\prod_{j\in\Gamma_{i}}\theta_{ij}^{M_{tij}}\right). \tag{1}\]

Here \(N_{ti}\) denotes the observed number of people in region \(i\) at step \(t\) (the algorithms can consider multiple time slices), which is provided as input data. The number of transitions from region \(i\) to \(j\) at step \(t\) is represented by \(M_{tij}\). The \(M_{tij}\) are the quantities we seek to estimate.4
For each starting region \(i\), the set of possible destination regions is denoted by

\[\Gamma_{i}=\{j\in V|d_{ij}\leq K\} \tag{2}\]

where \(d_{ij}\) is the distance from region \(i\) to region \(j\); \(K\) is a cutoff distance beyond which people are assumed not to travel in a single time step.5

Footnote 4: Note that \(T\) represents the total number of time _slices_, such that there are \(T-1\) time _steps_ between the slices, labelled from \(t=0\) to \(t=T-2\).

Footnote 5: We assume that the distance metric \(d\) corresponds to the centroid-to-centroid distance between SA-2 regions. Centroid coordinates are available at [https://datafinder.stats.govt.nz/layer/93620-statistical-area-2-2018-centroid-true/](https://datafinder.stats.govt.nz/layer/93620-statistical-area-2-2018-centroid-true/).

The probability of a person in region \(i\) at time \(t\) moving to region \(j\) at time \(t+1\) is then \(\theta_{ij}\). In general, this probability will be dependent on the time of day. For example, commuter traffic into and out of central business districts tends to reverse from morning to evening. It is therefore important that the estimation algorithm be applied across time periods in which transition probabilities may be assumed to be roughly constant.

\begin{table} \begin{tabular}{l l} \(V\) & set of regions \\ \(n\) & number of regions, \(|V|\) \\ \(T\) & number of snapshots \\ \(\mathbf{N}\) & \(n\times T\) matrix of counts in regions at each snapshot \\ \(\mathbf{d}\) & \(n\times n\) matrix of distances \(d_{ij}\) from region \(i\) to \(j\) \\ \(K\) & distance cutoff \\ \(\Gamma_{i}\) & set of neighbours of region \(i\); \(\{j\in V|d_{ij}\leq K\}\) \\ \(M_{tij}\) & the number of people who move from \(i\) to \(j\) between \(t\) and \(t+1\); \(\mathbf{M}\) is a \((T-1)\times n\times n\) array \\ \(\pi_{i}\) & departure probability of region \(i\) \\ \(s_{i}\) & gathering score for region \(i\) \\ \(\beta\) & scalar distance weighting \\ \(\mathcal{C}(\mathbf{M};\mathbf{N})\) & cost function to enforce number conservation \\ \(\lambda\) & weighting of cost function \\ \(\mathcal{L}(\mathbf{M},\boldsymbol{\pi},\mathbf{s},\beta;\mathbf{N},\lambda,\mathbf{d},K)\) & likelihood of \(\mathbf{M}\), \(\boldsymbol{\pi}\), \(\mathbf{s}\), and \(\beta\) given data \(\mathbf{N}\) and assumptions \(\lambda\), \(\mathbf{d}\), \(K\) \\ \(\epsilon\) & convergence threshold for iterative optimisation \\ \end{tabular} \end{table} Table 1: Symbols used in the text. Bold symbols are non-scalar quantities.

The algorithm requires an assumption for the transition probability, which is taken to be

\[\theta_{ij}=\begin{cases}1-\pi_{i}&\text{if }i=j\\ \pi_{i}\frac{s_{j}\exp(-\beta d_{ij})}{\sum_{k\in\Gamma_{i}\setminus\{i\}}s_{k}\exp(-\beta d_{ik})}&\text{if }i\neq j\end{cases}, \tag{3}\]

where the \(\pi_{i}\) are components of the vector \(\boldsymbol{\pi}\) of length \(n\) which describes the probability of a person leaving their current region. Their possible destinations are weighted by another \(n\)-vector, \(\mathbf{s}\), where \(s_{j}\) describes the tendency for people to gather in region \(j\). For example, regions within the central business district would be expected to have a strong tendency to attract commuters during the morning rush hour. Following Akagi _et al._, we include an exponential penalty on long-distance travel, but note that other forms of penalty are possible.6 Finally, note that \(\mathbf{s}\) has an arbitrary overall normalisation.

Footnote 6: As we see below, \(\beta\) is one of the parameters we adjust to optimise the fit. In some cases the optimal value of \(\beta\) was negative, but often for unrealistically small regions — and there are also more possible pairings at greater distances. We experimented with the choice \(e^{-\beta_{1}d_{ij}+\beta_{2}d_{ij}}\) but did not pursue it in detail.

Akagi _et al._ obtain a log-likelihood function from Equation (1),

\[\mathcal{L}^{\prime}=\mathcal{L}_{0}+\mathcal{L}_{1}+\mathcal{L}_{2}-\frac{\lambda}{2}\mathcal{C}(\mathbf{M},\mathbf{N}), \tag{4}\]

where the individual components are given by:

\[\mathcal{L}_{0}=\sum_{t=0}^{T-2}\sum_{i}\log(1-\pi_{i})M_{tii}, \tag{5}\]

\[\mathcal{L}_{1}=\sum_{t}\sum_{i}\sum_{j\in\Gamma_{i}\setminus\{i\}}\left(\log(\pi_{i})+\log(s_{j})-\beta d_{ij}-\log\sum_{k\in\Gamma_{i}\setminus\{i\}}s_{k}e^{-\beta d_{ik}}\right)M_{tij}, \tag{6}\]

\[\mathcal{L}_{2}=\sum_{t}\sum_{i}\sum_{j\in\Gamma_{i}}(1-\log M_{tij})M_{tij}, \tag{7}\]

\[\mathcal{C}(\mathbf{M},\mathbf{N})=\sum_{t=0}^{T-2}\sum_{i}\left(N_{ti}-\sum_{j}M_{tij}\right)^{2}+\sum_{t=0}^{T-2}\sum_{i}\left(N_{t+1,i}-\sum_{j}M_{tji}\right)^{2}. \tag{8}\]

Stirling's approximation for factorials is used in the course of this derivation; we will revisit this choice in Section 6.1. The diagonal component of the \(t\)-th transition matrix \(M_{tii}\) corresponds to the population that does not leave block \(i\) at step \(t\). The cost function \(\mathcal{C}(\mathbf{M},\mathbf{N})\) is a soft enforcement of number conservation, and it is the only place where the overall size of the population, rather than dimensionless transition rates, enters the likelihood. The strength of the cost function is controlled by the parameter \(\lambda\). We estimate flows by maximizing \(\mathcal{L}\) with respect to the \(n^{2}\) components of \(\mathbf{M}\) (per time step), the \(n\) components of \(\boldsymbol{\pi}\) and \(\mathbf{s}\), and the scalar \(\beta\). The distance cutoff can fix some components of \(\mathbf{M}\) to zero but this will not always result in a meaningful simplification of the optimisation problem. For instance, the Auckland region directly couples in excess of 500 SA-2 blocks, creating approximately 250,000 pairs, each of which has a corresponding \(M_{tij}\). Consequently, the application of this algorithm to a realistic problem involves estimating values for \(10^{5}\) to \(10^{6}\) variables. We perform the optimisation with the SciPy [2, 14] implementation of the L-BFGS-B algorithm. By default, derivatives of the target function are evaluated via differencing, requiring multiple evaluations of the likelihood. Since the complexity of the likelihood and the number of free parameters both grow with the number of possible pairs, the optimisation quickly becomes numerically challenging. However, we can greatly improve performance by supplying analytic derivatives, as the \(M_{tij}\) do not appear in complicated combinations within the likelihood.
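For concreteness, the transition model of Equations (2) and (3) can be evaluated with a few lines of NumPy. The sketch below is an illustration under the stated assumptions, not an excerpt from the FlowStock code.

```python
import numpy as np

def transition_probabilities(pi, s, beta, d, K):
    """Movement probabilities theta_ij of Equation (3).

    pi   : (n,) departure probabilities
    s    : (n,) gathering scores
    beta : scalar distance weighting
    d    : (n, n) symmetric distance matrix
    K    : cutoff distance defining the neighbour sets Gamma_i, Eq. (2)
    """
    w = s[None, :] * np.exp(-beta * d)  # destination weights s_j exp(-beta d_ij)
    w[d > K] = 0.0                      # forbid travel beyond the cutoff
    np.fill_diagonal(w, 0.0)            # the i -> i case is handled separately
    theta = pi[:, None] * w / w.sum(axis=1, keepdims=True)
    np.fill_diagonal(theta, 1.0 - pi)   # probability of staying put
    return theta                        # each row sums to one
```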
Following Akagi _et al._, we include an exponential penalty on long-distance travel, but note that other forms of penalty are possible.6 Finally, note that \(\mathbf{s}\) has an arbitrary overall normalisation. Footnote 6: As we see below, \(\beta\) is one of the parameters we adjust to optimise the fit. In some cases the optimal value of \(\beta\) was negative, but often for unrealistically small regions — and there are also more possible pairings at greater distances. We experimented with the choice \(e^{-\beta_{1}d_{ij}+\beta_{2}d_{ij}}\) but did not pursue it in detail. Akagi _et al._ obtain a log-likelihood function from Equation (1), \[\mathcal{L}^{\prime}=\mathcal{L}_{0}+\mathcal{L}_{1}+\mathcal{L}_{2}-\frac{ \lambda}{2}\mathcal{C}(\mathbf{M},\mathbf{N}), \tag{4}\] where the individual components are given by: \[\mathcal{L}_{0}=\sum_{t=0}^{T-2}\sum_{i}\log(1-\pi_{i})M_{tii}, \tag{5}\] \[\mathcal{L}_{1}=\sum_{t}\sum_{i}\sum_{j\in\Gamma_{i}\setminus\{i\}}\left(\log (\pi_{i})+\log(s_{j})-\beta d_{ij}-\log\sum_{k\in\Gamma_{i}\setminus\{i\}}s_{ k}e^{-\beta d_{ik}}\right)M_{tij}, \tag{6}\] \[\mathcal{L}_{2}=\sum_{t}\sum_{i}\sum_{j\in\Gamma_{i}}(1-\log M_{tij})M_{tij}, \tag{7}\] \[\mathcal{C}(\mathbf{M},\mathbf{N})=\sum_{t=0}^{T-2}\sum_{i}\left(N_{ti}-\sum _{j}M_{tij}\right)^{2}+\sum_{t=0}^{T-2}\sum_{i}\left(N_{t+1,i}-\sum_{j}M_{tij} \right)^{2}. \tag{8}\] Stirling's approximation for factorials is used in the course of this derivation; we will revisit this choice in Section 6.1. The diagonal component of the \(t\)-th transition matrix \(M_{tii}\) corresponds to the population that does not leave block \(i\) at step \(t\). The cost function \(\mathcal{C}(\mathbf{M},\mathbf{N})\) is a soft enforcement of number conservation and this is the only place where the overall size of the population enters the likelihood, rather than dimensionless transition rates. The strength of the cost function is controlled by the parameter \(\lambda\). We estimate flows by maximizing \(\mathcal{L}\) with respect to the \(n^{2}\) components of \(\mathbf{M}\) (per time step), the \(n\) components of \(\boldsymbol{\pi}\) and \(\mathbf{s}\), and the scalar \(\beta\). The distance cutoff can fix some components of \(\mathbf{M}\) to zero but this will not always result in a meaningful simplification of the optimisation problem. For instance, the Auckland region directly couples in excess of 500 SA-2 blocks, creating approximately 250,000 pairs, each of which has a corresponding \(M_{tij}\). Consequently, the application of this algorithm to a realistic problem involves estimating values for \(10^{5}\) to \(10^{6}\) variables. We perform the optimisation with the SciPy [2, 14] implementation of the L-BFGS-B algorithm. By default, derivatives of the target function are evaluated via differencing, requiring multiple evaluations of the likelihood. Since the complexity of the likelihood and the number of free parameters both grow with the number of possible pairs the optimisation quickly becomes numerically challenging. However, we can greatly improve performance by supplying analytic derivatives as the \(M_{tij}\) do not appear in complicated combinations within the likelihood. 
After some calculation, we find that the derivatives of the terms in Equation (4) are \[\frac{\partial\mathcal{L}_{0}}{\partial M_{tij}}=\begin{cases}\log(1-\pi_{i})&i=j \\ 0&i\neq j\end{cases} \tag{9}\] \[\frac{\partial\mathcal{L}_{1}}{\partial M_{tij}}=\begin{cases}0&i=j\\ \log(\pi_{i})+\log(s_{j})-\beta d_{ij}-\log\sum_{k\in\Gamma_{i}\setminus i}s_ {k}e^{-\beta d_{ik}}&j\in\Gamma_{i}\setminus\{i\}\\ 0&j\notin\Gamma_{i}\end{cases} \tag{10}\] \[\frac{\partial\mathcal{L}_{2}}{\partial M_{tij}}=\begin{cases}-\log M_{tij}&j \in\Gamma_{i}\\ 0&j\notin\Gamma_{i}\end{cases} \tag{11}\] \[\frac{\partial\mathcal{C}}{\partial M_{tij}}=-2\left(N_{ti}-\sum_{l}M_{til} \right)-2\left(N_{t+1,j}-\sum_{l}M_{tlj}\right) \tag{12}\] While computing the likelihood requires summing \(\mathcal{O}(n^{2})\) terms, the derivative of the cost function requires summing \(\mathcal{O}(n)\) terms, and each of other terms in the derivative is a _single_ term from the sums in \(\mathcal{L}\). Consequently, evaluating the derivative of \(\mathcal{L}\) with respect to each of the \(n^{2}\) components of \(\mathbf{M}\) will involve \(\mathcal{O}(n\times n^{2})\) operations whereas approximating them numerically would involve \(n^{2}\) evaluations of the likelihood, for a total cost of \(\mathcal{O}(n^{2}\times n^{2})\). We may further improve computational efficiency in evaluating both \(\mathcal{L}\) and \(\partial\mathcal{L}/\partial M_{tij}\): when optimising with respect to \(\mathbf{M}\), the \(\log\) in Equation (5) and bracketed term in Equation (6) do not change, and can be precomputed, offering a significant improvement in efficiency over millions of evaluations. ## 4 "Exact" maximisation algorithm The "exact" maximisation algorithm described by Akagi et al. requires an iterative maximisation, looping over three separate maximisations until the relative difference in \(\mathcal{L}\) changes by less than \(\epsilon\), an adjustable parameter. We have implemented the exact algorithm as follows: 1. Initialise \(\mathbf{M}\), \(\boldsymbol{\pi}\), \(\mathbf{s}\), \(\beta\). This step is unspecified in [1], but the way it is done has a significant impact on the results of the algorithm. We discuss this further in Section 6.2. 2. Loop over steps below until the relative difference in \(\mathcal{L}\) changes by less than \(\epsilon\). 1. Maximise \(\mathcal{L}\) with respect to \(\mathbf{M}\) while keeping \(\boldsymbol{\pi}\), \(\mathbf{s}\) and \(\beta\) constant. 2. Maximise \(\mathcal{L}\) with respect to \(\boldsymbol{\pi}\) while keeping \(\mathbf{M}\), \(\mathbf{s}\), \(\beta\) constant, via the exact expression from Akagi _et al._ \[\pi_{i}=\frac{\sum_{t}\sum_{j\in\Gamma_{i}\setminus\{i\}}M_{tij}}{\sum_{t} \sum_{j\in\Gamma_{i}}M_{tij}}\,.\] (13) 3. Iteratively optimise \(\mathcal{L}\) with respect to \(\mathbf{s}\) and \(\beta\). In contrast with Akagai _et al._, who use the Minorisation-Maximisation algorithm for this step, we optimise the \(\mathbf{s}\) and \(\beta\) dependent part of \(\mathcal{L}\) directly. Our experience was that the former approach can become "stuck" during an evaluation. 
The only part of \(\mathcal{L}\) that depends on \(\mathbf{s}\) and \(\beta\) is \(\mathcal{L}_{1}\) and it can be rearranged into a target function \(f\) defined by \[f=\sum_{i\in V}\left(A_{i}\log(s_{i})-B_{i}\log\left(\sum_{k\in \Gamma_{i}\setminus\{i\}}s_{k}\exp(-\beta d_{ik})\right)\right)-\beta D \tag{14}\] \[A_{i}=\sum_{t}\sum_{j\in\Gamma_{i}\setminus\{i\}}M_{tji}\] (15) \[B_{i}=\sum_{t}\sum_{j\in\Gamma_{i}\setminus\{i\}}M_{tij}\] (16) \[D=\sum_{t}\sum_{i\in V}\sum_{j\in\Gamma_{i}\setminus\{i\}}d_{ij} M_{tij}\,. \tag{17}\] The derivation of \(A_{i}\) requires reordering the sum containing \(\mathbf{s}\). This resummation obscures the scale-independence of \(\mathbf{s}\) seen in Equations (1) and (4), and is only valid when the matrix \(\mathbf{d}\) of distances \(d_{ij}\) is symmetric.7 We proceed as follows: Footnote 7: The \(d_{ij}\) is effectively a cost function for travel between Block-\(i\) and Block-\(j\). We have assumed this is symmetrical and \(d_{ij}=d_{ji}\) but in principle this could be (for example) time-dependent and incorporate congestion related delays. We do not consider these possible asymmetries in \(d\). 1. Optimise \(f\) with respect to \(\mathbf{s}\). There is a closed form for this: \[s_{i}=\frac{A_{i}}{\sum_{k}C_{k}\exp(-\beta d_{k_{i}})}\] (18) 2. Normalise \(\mathbf{s}\) with \[\mathbf{s}\mapsto\frac{\mathbf{s}}{\max(\mathbf{s})}\,.\] (19) This is done to avoid numerical problems where \(|\mathbf{s}|\to 0\) otherwise. 3. Maximise \(f\) with respect to \(\beta\). This maximisation is done with the bounded Brent algorithm. We found that this optimisation of \(\mathbf{s}\) and \(\beta\) would occasionally enter a closed loop. When this happens, we terminate the optimisation of \(\mathbf{s}\) and \(\beta\) and return to the optimisation of \(\mathbf{M}\) and \(\boldsymbol{\pi}\) before trying again. We note the similarity of the procedure described here to the well-known Expectation Maximisation (EM) algorithm [3]. The EM algorithm is a method for performing maximum likelihood estimation in the presence of latent variables and has broad applicability to diverse fields from computational biology to machine learning [5, 4]. The EM algorithm works by iteratively improving parameter value estimates through alternating between an expectation step and a maximisation step. In the expectation step, a function for the expectation value of the log-likelihood is computed using the existing estimations for the parameter values. In the subsequent maximisation step, new parameter values are computed which maximise the value of the previously determined function. This process is then iterated until the desired convergence criteria are met. An adaptation of the EM algorithm which closely resembles our approach is known as Expectation Conditional Maximisation (ECM) [7]. In this case, each maximisation step is subdivided into a series of conditional maximization steps in which maximisation with respect to each parameter is undertaken individually, while all other parameters remain fixed. A detailed comparison between the efficacy of the algorithm implemented in this work and other variants of EM/ECM is out of scope here, but warrants further investigation going forward. The majority of the implementation of this "exact" maximisation algorithm is in idiomatic Python and Numpy [13]. Some calculations make use of Numba [6], but ultimately this was not a major performance gain over vanilla Numpy. 
Including the analytic functions in Equations (9) to (12) for the derivative of the likelihood improved performance by multiple orders of magnitude. Figure 3 shows wall clock computational time for a representative test problem based on synthetic data. While a plateau is observed in Figure 3 when the number of regions exceeds \(\sim 120\), we would of course expect further increases in run time for much larger data sets. The details of this will depend upon multiple factors such as NumPy's usage of multiple CPUs for certain operations, and the available memory. In practice, estimating movements within the approximately \(800\) SA2 blocks in the Auckland and Waikato regions took \(\sim 30\) seconds on a laptop; this is consistent with synthetic data. Consequently, the numerical performance of our implementation of the exact algorithm presents no significant challenges for any currently envisaged applications, and appears to improve significantly on that reported by Akagi _et al._, presumably thanks to the use of the analytic derivatives.

Figure 3: Plot of the run time against number of regions. The shaded bands represent the standard deviation across 18 runs with 3 random seeds and 2 noise amplitudes for the synthetic data, and 3 choices of \(\lambda\) in the solver. Clearly, demanding higher precision increases the run time but it remains manageable even as the number of regions grows beyond 100. All simulations were run on a consumer grade computer.

## 5 Synthetic data

We do not have the true values of \(\mathbf{M}\) for the cellphone data, whereas Akagi _et al._ had access to trajectory information. However, we can test against synthetic data that strictly matches the assumptions laid out in Section 3. We specify the number of regions we want to simulate, the distances \(d_{ij}\) between them, cutoff distance \(K\) and distance weighting \(\beta\). Then we stipulate vectors of gathering scores \(\mathbf{s}\) and departure probabilities \(\boldsymbol{\pi}\) corresponding to each region. From this, the simulator calculates the set of possible destinations \(\Gamma_{i}\) of each region and probabilities \(\theta_{ij}\) for moving to each from Equations (2) and (3). We specify an initial distribution of people in each region as a vector \(N_{0}\) and the number of time steps to take. Then for each time step \(t\) and region \(i\), the simulator loops over each of the people currently in region \(i\) and assigns them to be in region \(j\) at time \(t+1\) with probability \(\theta_{ij}\). This defines a "true" \(M_{tij}\). Optionally, the simulator also randomly adds or removes people before calculating where they go. This allows us to test against scenarios that do not conform exactly to the assumptions in Section 3.
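A minimal sketch of such a simulator is given below. It draws each region's departures from a multinomial distribution, which is equivalent to looping over individuals; the names and the construction of \(\theta\) (all \(j\neq i\) within the cutoff \(K\) allowed) are our own reading of Equations (2) and (3).

```python
import numpy as np

def simulate(N0, pi, s, d, K, beta, steps, rng=None):
    """Generate a 'true' M_tij and counts N_ti under the Section 3 model."""
    rng = rng or np.random.default_rng()
    n = len(N0)
    allowed = (d <= K) & ~np.eye(n, dtype=bool)       # Gamma_i \ {i}
    theta = np.where(allowed, s[None, :] * np.exp(-beta * d), 0.0)
    theta *= (pi / theta.sum(axis=1))[:, None]        # off-diagonal mass = pi_i
    theta[np.diag_indices(n)] = 1.0 - pi              # probability of staying
    N, M = [np.asarray(N0, dtype=int)], []
    for _ in range(steps):
        Mt = np.vstack([rng.multinomial(N[-1][i], theta[i]) for i in range(n)])
        M.append(Mt)
        N.append(Mt.sum(axis=0))                      # arrivals define N_{t+1}
    return np.array(M), np.array(N)
```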
Figure 4 shows a deliberately simple setup: an initially uniform population distributed among 9 regions arranged in a regular \(3\times 3\) grid. The distance between the centres of the outermost cells on each side of the grid is therefore 2 units. We set the distance threshold \(K=2\) and \(\beta=1\). Because the grid (expressed in terms of the centroids) has a width of 2, only corner to corner travel is disallowed with \(d=2\sqrt{2}>K\). The departure probabilities \(\boldsymbol{\pi}\) are sampled uniformly from \(0.01\)-\(0.02\), other than the central region which has a probability of \(0.1\). This higher departure probability is evident in the populations after one time step; the central region has a larger net loss of people than the outer regions. The gathering scores \(\mathbf{s}\) are set to 1 other than the 4 regions shown in the left panel of Figure 4. The gathering scores have the expected effect; regions with larger \(s\) have correspondingly larger net gains.

Figure 5 shows the results of applying our implementation of the exact algorithm to the data described above. It is able to get good estimates of the gathering scores and departure probabilities. However, the estimates of \(M_{tij}\) and \(\beta\) are poor. There are a number of transitions that are severely underestimated. In addition, \(\beta\) is estimated at 0.08 rather than the true value of 1.0. Poor estimates of \(\beta\) are a recurring theme in the following sections.

Figure 4: From left to right: true \(s\), true \(\pi\), initial counts, and final counts for simulated synthetic data. This data has 9 regions, each with an initial count of \(N_{0}=1,000,000\). People leave each region with probability \(\pi\) and each region has a gathering score \(s\). One can see that the region with higher \(\pi\) has a net loss in \(N_{1}\). The regions with larger \(s\) have correspondingly larger net gains.

Figure 5: Scatter plots of the true and estimated values of \(s_{i}\), \(\pi_{i}\) and \(M_{tij}\). The accuracy of the \(s\) and \(\pi\) estimates is relatively good, but a number of the transition matrix elements \(M\) are severely underestimated. The large elements of \(M_{tij}\sim 10^{6}\) are on the diagonals; these are people who did not move. Figures such as this are presented as 2-D histograms in the following sections, where there are too many points for a sensible scatter plot.

## 6 Implementation and Validation

We now consider the stability of the results against changes in free parameters in the algorithm, and specific issues that arise when we apply the Akagi _et al._ algorithms to the present use-case.

### 6.1 Scaling

As mentioned in Section 3, Equation (4) assumes that Stirling's approximation \(\log n!\approx n\log n-n\) applies to the elements of the transition matrix, or \(M_{tij}\gtrsim 1\). However, this assumption is violated by fairly typical real-world data. SA2 regions generally contain \(\mathcal{O}(1,000)\) people and if \(\mathcal{O}(100)\) people enter or leave in an hour with \(\mathcal{O}(100)\) allowed destinations and origins, some transitions will necessarily involve "fractional" numbers of people. These fractional numbers of people should be interpreted as probabilities. We have found that the inapplicability of Stirling's approximation can be ameliorated by scaling up the number of people to the point where all allowed transitions have \(M_{tij}\gtrsim 1\). The cost function, Equation (8), is quadratic in this scale factor, so one must simultaneously rescale \(\lambda\) to compensate. For sufficiently large scalings the results become scaling independent. We checked that this is true by comparing the results for the SA2s contained in the combined Auckland Region and Waikato on February 18th between 7am and 9am. There are 798 regions in total, but the small number with fewer than 100 people are dropped from the analysis of scaling by 1000 and 10,000, as shown in Figure 6. This strategy is not necessarily perfect -- the \(M_{tij}\log(M_{tij})\) term in \(\mathcal{L}_{2}\) is non-linear in \(M_{tij}\) and requires more detailed analysis -- but will be more robust than using unscaled populations.
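A sketch of this workaround, under our own naming; `solve` stands for any estimator mapping scaled counts and a penalty strength to an estimate of \(\mathbf{M}\). Dividing \(\lambda\) by the squared scale factor keeps the quadratic penalty term itself unchanged; since the likelihood grows roughly linearly with the scale, \(\lambda/\mathrm{scale}\) would be an equally defensible choice.

```python
import numpy as np

def solve_scaled(N, lam, solve, scale=1000):
    """Run the estimator on scaled-up counts so that Stirling's
    approximation holds for all allowed transitions, then undo the
    scaling on the returned transition matrix."""
    M = solve(np.asarray(N, dtype=float) * scale, lam / scale**2)
    return M / scale
```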
All other results shown in the following sections use a scaling large enough to ensure that the computed \(M_{tij}\gtrsim 1\) and these are then rescaled back to their original values.

Figure 6: Histograms comparing computed \(M_{tij}\) for different population scalings. The data is for transitions between the 798 unique SA-2s in the regional councils of Auckland Region and Waikato. The plots show the counts of \(M_{tij}\) for pairs of scalings; with perfect agreement all elements would lie on the diagonal, up to the scatter arising from the large number of near-degenerate solutions to the maximisation problem. The top panel compares the raw counts (a scaling of 1) with a scaling of 1000 (y-axis). The bottom panel compares a scaling of 1000 (x-axis) and 10,000 (y-axis).

### 6.2 Repeatability and Initial conditions

The solver needs initial conditions that do not represent pathological states.8 As an initial guess we make the following "static" choice Footnote 8: Note that 'initial conditions' does not refer to values at \(t=0\) but to the initial guesses for \(\beta\), \(\mathbf{s}\), \(\boldsymbol{\pi}\), and the entire \(\mathbf{M}\) matrix, prior to optimisation. \[\pi_{i}=0.02 \tag{20}\] \[s_{i}=0.02\] (21) \[\beta=\frac{50}{\max(d)}\] (22) \[M_{tij}=\begin{cases}N_{ti},&\text{for }i=j\\ 0,&\text{for }i\neq j\end{cases}, \tag{23}\] where the last line implies that no-one moves between blocks. This is not self-consistent, since if the off-diagonal \(M_{tij}=0\), the \(\pi_{i}\) and \(s_{i}\) should also vanish, but these initial conditions can yield plausible results. Varying the starting position causes the algorithm to converge on different maxima. We first test the sensitivity to initial conditions by adding a random scatter to the initial guess: \[M_{tij}=\begin{cases}N_{ti}+\delta_{tii},&\text{for }i=j\\ \delta_{tij},&\text{for }j\in\Gamma_{i}\setminus\{i\}\\ 0,&\text{for }j\notin\Gamma_{i}\end{cases} \tag{24}\] where \(\delta_{tij}\) is sampled uniformly from the range \([0,N_{ti})\). In Figure 7 we show that this scatter in the initial conditions does not have a drastic impact on the output by analysing data from SA2s contained in the combined Auckland Region and Waikato regions, on February 18th at 7am, 8am, and 9am. We quantify this sensitivity by computing the mean and standard deviation for the values of each of the \(M_{tij}\) from a sequence of 20 runs with different random perturbations to the original initial condition Equation (23). We find that the ratio of the standard deviation to the mean is small for the vast majority of \(M_{tij}\) for the cases we consider. We also consider a "moving" initial guess, \[M_{tij}=\begin{cases}N_{ti},&\text{for }i=j\\ \frac{|N_{ti}-N_{t+1,i}|}{|\Gamma_{i}\setminus\{i\}|},&\text{for }j\in\Gamma_{i} \setminus\{i\}\\ 0,&\text{for }j\notin\Gamma_{i}\end{cases}. \tag{25}\] This encodes an expectation that most people stay where they are and that the number of people moving out of a region is on the order of the change in its population (regardless of whether that change is a net inflow or outflow). In Figure 8 we compare the two initial-condition choices described above. We use data from the 63 most populated areas in Southland Region, on 11 February 2020 at 6am, 7am, 8am, 9am and 10am. There is a clear discrepancy when \(\epsilon=10^{-2}\) but moving to a more stringent \(\epsilon=10^{-4}\) eliminates much of this bias.
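The three initial guesses can be generated along these lines (a sketch; the array shapes and names are our own):

```python
import numpy as np

def initial_M(N, allowed, kind="static", rng=None):
    """Initial guesses for M from Equations (23)-(25).

    N       : (T+1, n) array of observed counts
    allowed : (n, n) boolean matrix marking Gamma_i \ {i}
    """
    rng = rng or np.random.default_rng()
    T, n = N.shape[0] - 1, N.shape[1]
    M = np.zeros((T, n, n))
    diag = np.eye(n, dtype=bool)
    for t in range(T):
        M[t][diag] = N[t]                          # everyone stays put
        if kind == "jitter":                       # Equation (24)
            delta = rng.uniform(0.0, N[t][:, None], size=(n, n))
            M[t] += np.where(allowed | diag, delta, 0.0)
        elif kind == "moving":                     # Equation (25)
            spread = np.abs(N[t] - N[t + 1]) / allowed.sum(axis=1)
            M[t] += np.where(allowed, spread[:, None], 0.0)
    return M
```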
### 6.3 Sensitivity to \(\epsilon\) and \(\lambda\)

The values \(\epsilon\) and \(\lambda\) have an impact on the output of the algorithm. We used the normalised absolute error (NAE) to quantify the error in these fits, which is defined to be \[\frac{\sum\limits_{t,i,j}\left|M^{*}_{tij}-M_{tij}\right|}{\sum\limits_{t,i,j} M^{*}_{tij}}, \tag{26}\] where \(M^{*}_{tij}\) represents the 'true' \(M\) values from the simulated data. We note that the NAE may be misleading in cases where there are a small number of regions for which the \(M_{tij}\) are highly inaccurate if these regions have relatively large populations. We examined the impact of \(\epsilon\) and \(\lambda\) by running the exact estimator on simulated data, as in Section 5. We assume \(15^{2}\) cells distributed on a regular grid covering a \(2\times 2\) square and a distance cutoff \(K=1.5\), so that each region has \(100\) to \(225\) possible destinations. The initial number of people, gathering scores and leaving probabilities in cell \(i\) were set to \[N_{0,i} =\nu\exp\left(-(\sqrt{x_{i}^{2}+y_{i}^{2}}-r_{0})^{2}\right) \tag{27}\] \[s_{i} =\exp(-4(x_{i}^{2}+y_{i}^{2}))\] (28) \[\pi_{i} =\frac{1}{10}\frac{N_{0,i}}{\max_{j}(N_{0j})}\] (29) \[\beta =1 \tag{30}\] where \(r_{0}=0.8\). The gathering score is high at the centre, and the departure probability is proportional to the initial number of people in a cell. There is a higher density of people in a ring of radius \(r_{0}\) around the centre. This is intended to be roughly analogous to people migrating from the outskirts of a city into the centre. We allowed a 10% error in the number of people at each location during each time step. The results are shown in Figure 9. The absolute variance in the NAE as a function of both \(\epsilon\) and \(\lambda\) is not large. Counterintuitively, we found that smaller values of \(\epsilon\) do not necessarily give more accurate results by this measure, but the differences are not significant. There is no obvious choice of \(\lambda\); large values heavily penalise solutions where summing the people going in and out of regions does not match the known data \(N\). There are also numerical issues introduced by large \(\lambda\); these seem much like the issues introduced with very small \(\epsilon\).

Figure 7: Top: Histogram of the normalised standard deviation \(\mathrm{std}(M_{tij})/\bar{M}_{tij}\) of \(M\) values. The mean is with respect to 20 runs, each with the initial conditions of Equation (24) and different seeds for the random jitter; in most cases \(\mathrm{std}(M_{tij})/\bar{M}_{tij}\) is significantly less than unity. Bottom: Two-dimensional histogram of the same data. When the standard deviation is below the red line it is less than the mean value; \(M_{tij}\) above this line have large scatter between runs. Note the logarithmic colour scale on this plot; the most common points and the largest \(M\) values are below the line. Data is for SA2s contained in the combined Auckland Region and Waikato, on February 18th at 7am, 8am, and 9am.

Figure 8: Histograms of inferred \(\mathbf{M}\) values with different initial conditions. The top panel has \(\epsilon=10^{-2}\) and the bottom has \(\epsilon=10^{-4}\). Static initial conditions (\(x\)-axis) start with diagonal transition matrices; i.e. no-one moves, as in Equation (24); the runs on the \(y\)-axis have initial conditions for which many people move, as in Equation (25). With a loose convergence parameter the final result reflects the initial choice; setting \(\epsilon=10^{-4}\) eliminates most sensitivity to initial conditions. Data is from the 63 most populated areas in Southland Region, on 11 February 2020 at 6am, 7am, 8am, 9am and 10am.
Small values of \(\lambda\) allow proposed solutions to have large violations of number conservation. Given that the real-world data is known to have imperfect number conservation, some deviation should be allowed and a middle ground should be found. The bottom panel in Figure 9 confirms this intuition.

## 7 Alternative Algorithm: Approximate Inference Method

Akagi _et al._ present an alternative method, which is billed as being less computationally expensive. This concern is less pressing, given the speedup applied to the exact algorithm. We have implemented a variation of this algorithm in Python, with a few key differences. In particular, Akagi _et al._ bin regions by their separations but we cannot adopt this approach, given the irregular spacing of the SA-2 centroids.

### 7.1 Summary of alternative algorithm

We begin by defining the following parameters: \[X_{tij}\equiv M_{tij},\quad Y_{ti}\equiv\sum_{j\neq i}M_{tij},\quad Z_{ti}\equiv M_{tii}. \tag{31}\] Using these parameters, \(\boldsymbol{\pi}\) and \(f(\mathbf{s},\beta)\) are given by: \[\pi_{i}=\frac{\sum\limits_{t}Y_{ti}}{\sum\limits_{t}(Y_{ti}+Z_{ti})}, \tag{32}\] \[f(\mathbf{s},\beta)=\sum_{t,i,j}(X_{tij}\log s_{i}-\beta d_{ij}X_{tij})-\sum_{t,i}Y_{ti}\log\Big{(}\sum_{k\neq i}s_{k}\exp(-\beta d_{ik})\Big{)}, \tag{33}\] where \(d_{ij}\) is the distance between centroids of SA-2 regions. We also define parameters \(\theta_{ij}\) and \(\mu_{ij}\) as follows: \[\theta_{ij}=\begin{cases}1-\pi_{i},&(i=j)\\ \pi_{i}\left(\frac{s_{j}\exp(-\beta d_{ij})}{\sum_{k\neq i}s_{k}\exp(-\beta d_{ik})}\right),&(i\neq j)\end{cases} \tag{34}\] \[\mu_{ij}=\sum_{t}N_{tj}\theta_{ji}. \tag{35}\] Following Akagi _et al._, the approximate log likelihood is given by \[\mathcal{L}_{\text{approx}}= \sum_{t,i,j}\big{(}X_{tij}\log(\mu_{ij})+X_{tij}-X_{tij}\log(X_{tij })\big{)}\] \[+\sum_{t,i}\big{(}Y_{ti}\log(N_{ti}\pi_{i})+Y_{ti}-Y_{ti}\log(Y_{ ti})\big{)}\] \[+\sum_{t,i}\big{(}Z_{ti}\log(N_{ti}(1-\pi_{i}))+Z_{ti}-Z_{ti}\log( Z_{ti})\big{)}, \tag{36}\]

Figure 9: The NAE with respect to the simulated data as a function of the estimator parameters \(\epsilon\) (top) and \(\lambda\) (bottom). The error bands come from aggregating over 8 instances of \(N\) and \(\lambda\) (top) and \(\epsilon\) (bottom). The blue solid line is the error when the simulated data conforms exactly to the assumptions of the likelihood in Section 3. The orange dashed line assumes that there is an error of up to 10% at each step. Interestingly, the estimator performs better on noisy data. Smaller values of \(\epsilon\) do not have a clear advantage when there is noise in the data. On the other hand, \(\lambda=10\) is better than 1 or 100.

with the associated constraint function, \[C(X,Y,Z)=\sum_{t,i}\Big{(}|N_{ti}-(Y_{ti}+Z_{ti})|^{2}+|N_{t+1,i}-\sum_{j}X_{tij}|^{2}\Big{)}. \tag{37}\] We then have a log likelihood function for the final calculation of \(M\) as follows: \[\mathcal{L}_{\text{final}}= \sum_{t,i}\log(1-\pi_{i})M_{tii}+\sum_{t,i,j}\big{(}M_{tij}-M_{tij} \log(M_{tij})\big{)}\] \[+\sum_{t,i,j\neq i}M_{tij}\bigg{(}\log(\pi_{i})+\log(s_{j})-\beta d_{ij} -\log\Big{(}\sum_{k\neq i}s_{k}\exp(-\beta d_{ik})\Big{)}\bigg{)}, \tag{38}\] with associated constraint function: \[C(M)=\sum_{t,i}\Big{(}|N_{ti}-\sum_{j}M_{tij}|^{2}+|N_{t+1,i}-\sum_{j}M_{tji}|^ {2}\Big{)}. \tag{39}\]
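The quantities \(\theta\) and \(\mu\) can be computed directly; the sketch below is our own reading of Equations (34) and (35), assuming every pair \(i\neq j\) is allowed.

```python
import numpy as np

def transition_probs(pi, s, beta, d):
    """theta_ij from Equation (34), assuming all pairs i != j are allowed."""
    n = len(pi)
    W = s[None, :] * np.exp(-beta * d)
    np.fill_diagonal(W, 0.0)                 # normalisation runs over k != i
    theta = pi[:, None] * W / W.sum(axis=1, keepdims=True)
    theta[np.diag_indices(n)] = 1.0 - pi     # theta_ii = 1 - pi_i
    return theta

def expected_mu(N, theta):
    """mu_ij from Equation (35): mu_ij = sum_t N_tj * theta_ji."""
    return N.sum(axis=0)[None, :] * theta.T
```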
The inference proceeds as follows: 1. Initialise parameters \(\mathbf{M}\), \(X\), \(Y\), \(Z\), \(\boldsymbol{\pi}\), \(\mathbf{s}\), and \(\beta\), 2. Maximise \(\mathcal{L}_{\text{approx}}\) - \(\frac{\lambda}{2}C(X,Y,Z)\), 3. Update \(\boldsymbol{\pi}\), 4. Update \(\mathbf{s}\) and \(\beta\) by maximising \(f(\mathbf{s},\beta)\), 5. Repeat steps 2 - 4 until the specified convergence criterion is reached for the value of the approximate log likelihood, 6. Calculate \(\mathbf{M}\) by maximising \(\mathcal{L}_{\text{final}}\) - \(\frac{\lambda}{2}C(\mathbf{M})\), using the final \(\boldsymbol{\pi}\), \(\mathbf{s}\), and \(\beta\) values calculated above.

When optimising \(\mathcal{L}_{\text{approx}}\) and \(\mathcal{L}_{\text{final}}\), it is useful to define their analytic Jacobians, along with those of the constraint functions, as computing approximate derivatives numerically is expensive. These are as follows: \[\frac{\partial\mathcal{L}_{\text{approx}}}{\partial X_{tij}} =\log(\mu_{ij})-\log(X_{tij}), \tag{40}\] \[\frac{\partial\mathcal{L}_{\text{approx}}}{\partial Y_{ti}} =\log(N_{ti}\pi_{i})-\log(Y_{ti}),\] (41) \[\frac{\partial\mathcal{L}_{\text{approx}}}{\partial Z_{ti}} =\log(N_{ti}(1-\pi_{i}))-\log(Z_{ti}), \tag{42}\] with constraint function derivatives: \[\frac{\partial\big{(}-\frac{\lambda}{2}C(X,Y,Z)\big{)}}{\partial X_{tij}} =\lambda\Big{(}N_{t+1,i}-\sum_{k}X_{tik}\Big{)}, \tag{43}\] \[\frac{\partial\big{(}-\frac{\lambda}{2}C(X,Y,Z)\big{)}}{\partial Y _{ti}} =\frac{\partial\big{(}-\frac{\lambda}{2}C(X,Y,Z)\big{)}}{\partial Z _{ti}}=\lambda\Big{(}N_{ti}-(Y_{ti}+Z_{ti})\Big{)}. \tag{44}\] For the final log likelihood, we have: \[\frac{\partial\mathcal{L}_{\text{final}}}{\partial M_{tii}} =\log(1-\pi_{i})-\log(M_{tii}), \tag{45}\] \[\frac{\partial\mathcal{L}_{\text{final}}}{\partial M_{tij\neq i}} =\log(\pi_{i})+\log(s_{j})-\beta d_{ij}-\log\Big{(}\sum_{k\neq i }s_{k}\exp(-\beta d_{ik})\Big{)}-\log(M_{tij}), \tag{46}\] with constraint function derivatives: \[\frac{\partial\big{(}\frac{-\lambda}{2}C(M)\big{)}}{\partial M_{tij}} =\lambda\Big{(}N_{ti}+N_{t+1,j}-\sum_{k}M_{tik}-\sum_{k}M_{tkj} \Big{)}. \tag{47}\]

### 7.2 Performance of alternative algorithm

We implemented this algorithm in both Python 2 and Python 3, noting that the former tends to outperform the latter, apparently due to bugs within the Numba compiler in Python 3. Our implementation was first tested using synthetic data. Using the NAE as a measure of the performance of the algorithm it was found that for large data sets, it is beneficial to nest the main loop, as described by steps 1 to 6 above, within an outer loop. This outer loop feeds the calculated \(\mathbf{M}\) values back as initial conditions in the subsequent evaluation. For testing purposes, the outer loop is terminated either when the NAE reaches a specified target value, or when successive loops result in no further decrease in the NAE. This is only possible when the true values are known, as in our test case. Hence, when applying this algorithm to real-world data, one may choose to terminate the outer loop when the successive change in \(M_{tij}\) values reaches a certain threshold. As an example, using simulated data with 225 regions over 3 time steps gave an NAE of 0.046 after three iterations through the outer loop, compared to 0.154 with only one iteration. By comparison, the "exact" algorithm achieved an NAE of 0.100, so that in this case the alternative algorithm appears to perform better, though it does take more computation time. In this case we chose \(\lambda=10\), a convergence criterion of \(0.001\%\) on the approximate log-likelihood, and a tolerance \(\mathtt{ftol}=10^{-4}\) within scipy.optimize.minimize for the \(\mathbf{M}\) calculation. The NAE computation itself is sketched below.
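A direct implementation of Equation (26), with the off-diagonal variant discussed next (the helper name is ours):

```python
import numpy as np

def nae(M_est, M_true, off_diagonal=False):
    """Normalised absolute error from Equation (26).

    M_est, M_true: arrays of shape (T, n, n). With off_diagonal=True the
    diagonal (people who stay put) is ignored.
    """
    M_est, M_true = np.asarray(M_est, float), np.asarray(M_true, float)
    if off_diagonal:
        mask = ~np.eye(M_true.shape[1], dtype=bool)
        M_est, M_true = M_est[:, mask], M_true[:, mask]
    return np.abs(M_est - M_true).sum() / M_true.sum()
```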
We can also calculate the off-diagonal NAE, as the large values on the diagonal can dominate the NAE, obscuring how well the algorithm is able to identify regions with high gathering scores. In this case, the off-diagonal NAE for the alternative algorithm was 0.279, compared with 0.558 for the "exact" algorithm, again indicating more accurate reproduction of the input data.9 Footnote 9: The synthetic data used in this test, along with the implementation used to analyse it, are available in the code base within the folder 'naE-comparison'.

### 7.3 Discussion of alternative algorithm

Testing indicates that initialising the \(\mathbf{M}\) arrays with the corresponding \(\mathbf{N}\) values on the diagonal and small random numbers on the off-diagonal provides the best outcomes.10 The alternative inference algorithm runs through the entire inference process multiple times, inputting the new \(\mathbf{M}\) arrays as initial conditions in each run. In some cases this leads to much improved results, but can also result in an 'overshoot', whereby the off-diagonal elements become too high. Footnote 10: The introduction of explicit randomness in the \(\mathbf{M}\) initialisation can make it difficult to compare successive runs. To overcome this one may fix the seed of the random number generator. The output is highly sensitive to the value of \(\lambda\), which controls the strength of the penalty terms. If \(\lambda\) is too small, the algorithm tends to overpopulate the off-diagonals. Conversely, if the value is too high, all off-diagonal elements tend to zero. The optimal value of \(\lambda\) varies on a case-by-case basis, making it difficult to guess a suitable value in advance. In addition to the algorithm's sensitivity to the \(\mathbf{M}\) initialisation, \(\lambda\) value, and number of complete inference loops, one must also consider the convergence criteria set for the approximate log-likelihood in the inner loop, and the tolerances set in the optimisation routines as well. Tighter convergence constraints may increase computation time to an unacceptable degree, or may preclude convergence entirely. The original Akagi _et al._ treatment introduced this algorithm for its greater efficiency, and it serves as a useful counterpoint to the "exact" version. However, given its relative fragility with respect to control parameters and the efficiency of our implementations it does not appear to offer a significant, generic advantage.

## 8 Validation Test Case: Southland

We are unable to test the performance of the algorithms against real-world trajectory information. However, one may gauge how well the algorithm captures the essential features of daily travel by comparing its output to census data, and we take a very cursory look at this problem here. We accessed the publicly available self-declared commuting trends from the 2013 census using the Stats NZ 'Commuter View' tool [9]. This tool presents the number of residents who commute out of each region for work and the number that commute in. We can then compare the trends in the census data to the sum of the off-diagonal \(\mathbf{M}\) matrix elements for outbound and inbound travel for each region on a standard weekday, assuming most workers travel to work in a time period from 6am to 10am.
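The quantities compared can be extracted from the inferred \(\mathbf{M}\) as simple sums (a sketch under our own naming):

```python
import numpy as np

def commuter_totals(M):
    """Total outbound and inbound movements per region, excluding the
    diagonal, for comparison with the census commuting counts."""
    M = np.asarray(M, dtype=float)
    off = M * ~np.eye(M.shape[1], dtype=bool)   # drop people who stay put
    outbound = off.sum(axis=(0, 2))             # sum over t and destinations j
    inbound = off.sum(axis=(0, 1))              # sum over t and origins i
    return outbound, inbound
```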
For a simple test case, we have singled out the SA-2 regions which belong to the wider Southland district, discarding regions with missing data at any of the times of interest. Of the 65 SA-2 regions within the Southland district, only 2 regions had incomplete data, both with populations of less than \(10\). This comparison is not exact. The subdivision of geographical regions within the census data does not match the SA-2 regions used in the telecommunications data so assumptions must be made when matching the data to our calculations. Furthermore, this method does not capture the varying work hours of the general populace, and is seven years out of date. We ran the approximate inference algorithm for the telecommunications data from the 11th of February, 2020 (Tuesday) from 6am to 10am. We then compare the total outbound and inbound commuters output by the algorithm with the census data. The results are displayed in Figure 10, including only those census regions which clearly correspond to one or more SA-2 regions. Moreover, cell coverage is not necessarily representative of an individual's exact location at any given moment, and data between neighbouring regions may be somewhat mixed.11 Footnote 11: The code used to generate the comparisons shown here, along with the corresponding initial conditions, is in the folder 'southland-comparison'. While the algorithm appears to capture the essential features of the inbound commuting trends, with gathering concentrated within the main metropolis, the outbound inference fares significantly less well, with outbound commuters significantly underrepresented when compared to regions within the census data with high traffic. This comparison is strictly qualitative and "one off", and self-declared travel within inner-city regions may correspond to a change in cell-coverage regions, and vice-versa. We also note that the 'Commuter View' tool has since been re-branded to 'Commuter Waka', and now also incorporates data from the 2018 census [10]. However, due to a particularly low response rate of 81.6% to the 2018 census [11], we choose to test our algorithm against the older data set - based upon the responses to the 2013 census only - which had a much higher response rate. It is hoped that better quality data will become available in future for more thorough verification testing.

## 9 Summary

This Working Paper describes and re-implements two possible approaches to using count data to impute meso-scale human movement patterns. We investigated the validity of the assumed likelihood and improved the minimisation method. At this point it can analyse data for large fractions of the country (e.g. \(\sim\)800 out of \(\sim\)2000 SA-2 regions in a single run) via a computationally efficient and publicly available code. The algorithm demonstrates qualitative agreement with simulated and real-world data. The actual numerical counts of people moving from one region to another come with some uncertainty, stemming from the fact that the problem is highly degenerate. In particular, we occasionally find estimated values of \(\mathbf{s}\), \(\boldsymbol{\pi}\), \(\mathbf{M}\) and \(\beta\) that have a higher likelihood than the "true" solution, but nevertheless differ from it. Moreover, the model used here computes large numbers of "point to point" journeys, but in real world scenarios residents of two well-separated blocks may be more likely to interact in some third block, to which they both travel during the course of the day.
That said, we can see a number of ways in which these approaches could be improved, and our implementations are publicly available. The algorithms could prove useful when averaged data is extracted from the outputs, such as estimates of mean distances travelled per day. Such averaged quantities may be more reliable than estimates of individual point-to-point journeys. The codes are therefore probably best regarded as providing order-of-magnitude estimates which may be of use in sensitivity testing complex infection propagation models, but should not be seen as yielding precise quantitative predictions.

Figure 10: Stats NZ Commuter View vs. movements estimated with the "exact" and "approximate" algorithms for Southland commuters. We used data from 11th February, for the five hours from 6am to 10am. SA-2 regions which do not correspond to a single commuter region are discarded in this analysis.

While improvement is still needed, our work may have important applications in many areas relating to disease outbreak and containment. Namely, identification of the areas with the highest gathering statistics could help to inform the most effective locations for lockdown boundaries, while a better understanding of common transit routes could help to identify high risk sub-regions outside of the most densely populated commercial and residential hubs. Finally, outputs from this algorithm may serve as useful inputs to more complex models of disease propagation. Specific directions for future work might include:

* Adding a more nuanced distance metric, including driving distance, rather than the centroid-to-centroid Euclidean distance.
* Considering a more complex penalty function, e.g. \(\exp(-\beta d-\alpha d^{2})\).
* Improving the quality of the data set. In particular, a count of cell phones in a block that were present in the previous hour would allow separate estimations of the \(s_{i}\) and \(\pi_{i}\) and would fix the diagonal elements of \(\mathbf{M}\), while likely raising few privacy concerns.
* Improving validation testing against census data, or traffic flow information for urban regions.
* Fitting \(\boldsymbol{\pi}\), \(\mathbf{s}\), and (possibly) \(\beta\) to census data rather than count data. One would have to justify the continued use of these values during periods of modified behaviour, such as when travel restrictions are in place.
* Developing an improved travel simulator to better test the model against a full realisation of movement patterns in areas of interest.
* Properly accounting for the fact that in most realistic datasets the transition probabilities will be time dependent, varying over the course of the (working) day.

Finally, we emphasise that this overall problem is essentially an imputation exercise. Any results obtained with these approaches are estimates, and any model that uses them as inputs should be interpreted accordingly.

## Biographical Note

This work was undertaken in response to the Covid-19 emergency and was largely completed while New Zealand was at lockdown levels 3 and 4. At the time the work was undertaken the authors were all members of the theoretical cosmology group at the University of Auckland.
2308.16734
Native vs Web Apps: Comparing the Energy Consumption and Performance of Android Apps and their Web Counterparts
Context. Many Internet content platforms, such as Spotify and YouTube, provide their services via both native and Web apps. Even though those apps provide similar features to the end user, using their native version or Web counterpart might lead to different levels of energy consumption and performance. Goal. The goal of this study is to empirically assess the energy consumption and performance of native and Web apps in the context of Internet content platforms on Android. Method. We select 10 Internet content platforms across 5 categories. Then, we measure them based on the energy consumption, network traffic volume, CPU load, memory load, and frame time of their native and Web versions; then, we statistically analyze the collected measures and report our results. Results. We confirm that native apps consume significantly less energy than their Web counterparts, with large effect size. Web apps use more CPU and memory, with statistically significant difference and large effect size. Therefore, we conclude that native apps tend to require fewer hardware resources than their corresponding Web versions. The network traffic volume exhibits statistically significant difference in favour of native apps, with small effect size. Our results do not allow us to draw any conclusion in terms of frame time. Conclusions. Based on our results, we advise users to access Internet contents using native apps over Web apps, when possible. Also, the results of this study motivate further research on the optimization of the usage of runtime resources of mobile Web apps and Android browsers.
Ruben Horn, Abdellah Lahnaoui, Edgardo Reinoso, Sicheng Peng, Vadim Isakov, Tanjina Islam, Ivano Malavolta
2023-08-31T13:51:56Z
http://arxiv.org/abs/2308.16734v1
Native vs Web Apps: Comparing the Energy Consumption and Performance of Android Apps and their Web Counterparts ###### Abstract _Context._ Many Internet content platforms, such as Spotify and YouTube, provide their services via both native and Web apps. Even though those apps provide similar features to the end user, using their native version or Web counterpart might lead to different levels of energy consumption and performance. _Goal._ The goal of this study is to empirically assess the energy consumption and performance of native and Web apps in the context of Internet content platforms on Android. _Method._ We select 10 Internet content platforms across 5 categories. Then, we measure them based on the energy consumption, network traffic volume, CPU load, memory load, and frame time of their native and Web versions; then, we statistically analyze the collected measures and report our results. _Results._ We confirm that native apps consume significantly less energy than their Web counterparts, with large effect size. Web apps use more CPU and memory, with statistically significant difference and large effect size. Therefore, we conclude that native apps tend to require fewer hardware resources than their corresponding Web versions. The network traffic volume exhibits statistically significant difference in favour of native apps, with small effect size. Our results do not allow us to draw any conclusion in terms of frame time. _Conclusions._ Based on our results, we advise users to access Internet contents using native apps over Web apps, when possible. Also, the results of this study motivate further research on the optimization of the usage of runtime resources of mobile Web apps and Android browsers.

## I Introduction

The market share of mobile devices has grown explosively in recent years, surpassing desktop computers in 2016 and accounting for around 60% of devices as of 2022 [1]. This makes mobile devices the primary target for Internet content platforms, which include any provider that offers information or services over the Internet. Many Internet content platforms, such as Spotify and YouTube, provide their services via both native and Web apps. Native apps are self-contained software packages that are installed on the target device and usually bundled with all static resources, while Web apps are composed of HTML documents, CSS style sheets, and JavaScript for client-side computation. Web apps are accessed through a Web browser and their resources are downloaded every time the Web app is opened unless cached. Dynamic content is loaded at runtime, typically over HTTP(S) from a remote endpoint. Content platforms have the incentive to provide their services via both native and Web apps, as seen in Fig. 1, with full or nearly full feature parity in order to capture as many users as possible. Even though those apps provide similar features to the end user, using their native version or Web counterpart might lead to different levels of energy consumption and performance. Users can be motivated to use either one due to various factors, such as their environmental awareness, improved battery life, and perceived responsiveness contributing to better usability. One should not, however, assume that this will be a universal decision, since noteworthy popular platforms vary greatly in the type and method of presenting their content.
For example, YouTube and TikTok offer almost exclusively video, while other social media platforms such as Facebook, Twitter, and Reddit consist of text, images, and video combined. A very significant part of using such platforms consists of consuming the content that they provide, which is possible either through a native app or the corresponding mobile Web app. The observation that video content requires more energy and processing than just audio playback is trivial. Users can infer this from the difference in battery life for the different activities. Some manufacturers, like Apple, also report this in the expected battery life for different activities in the device specifications [2]. Gauging the extent of the difference in performance and energy consumption between native apps and their Web app counterparts is a non-trivial research avenue. The answer to this could be useful in deciding which version of any given platform a user should consider. The **goal** of this study is to compare the energy consumption and performance of native Android apps and their Web counterparts. To this aim, we select 10 Internet content platforms across 5 categories. Then, we measure them based on the energy consumption, network traffic volume, CPU load, memory load, and frame time of their native and Web versions; then, we statistically analyze the collected measures and report our results. Based on our results, we advise users to access Internet content using native apps over Web apps, when possible, to optimize the battery life of their devices. The maintenance of two different versions of the same app (perhaps even more in the case of multiple native apps for different platforms) requires additional resources from the provider. Therefore, the results might be of interest to the developers to decide which type of app is best suited for their particular platform. Focusing only on one version may allow them to lower development costs. Additionally, the findings of this study will motivate further research on the optimization of the usage of runtime resources of mobile Web apps and Android browsers.

Fig. 1: Reddit native Android (left) vs Web app (right)

## II Experiment Definition

The goal of this study is to _analyze app types for the purpose of evaluation with respect to their energy consumption and performance from the point of view of a user in the context of Internet content platforms on Android_. From the goal, we derive the following two research questions which focus on energy and performance respectively: **RQ1**_How does energy consumption vary between native and Web versions of the same app?_ Between native and Web, technical differences such as the media codecs, content (pre-)fetching and caching strategy or user interface behavior can impact the energy consumption and performance of mobile apps. We suspect that in practice, there may be a statistically significant difference in energy consumption and performance. In order to answer the question quantitatively, the energy consumption of the apps is measured in Joules (J) over a certain period of time while interacting with the content in a typical fashion. The typical user interaction may vary between apps, depending on how much input is required to consume the content. To simulate continuous consumption of content, we provide a custom input sequence per content platform, for example, continuous scrolling to load new items in a news feed or no input when watching a video.
Since the quality of the content may differ between native and Web versions (compression, available resolution, protocol overhead), we also measure the network traffic volume. **RQ2**_How does performance vary between native and Web versions of the same app?_ Native Android apps are predominantly written in Java, while Web apps run within the HTML and JavaScript engine of the browser app. This could result in computational overhead; however, the performance of user interface components and their use by Web developers could even be more optimized compared to native apps. For this research question, we consider the utilization of the hardware resources (CPU, memory) and the achieved refresh rate of the user interface as well as the network traffic volume. CPU utilization is measured as a percentage of the respective maximum device capability and memory utilization in kilobytes (kB). The refresh rate is captured as the time between two consecutive rendered frames (ns) and the network traffic volume is the total amount of bytes (B) sent and received by the device. A lower frame rate could be an indication of insufficient hardware resource utilization or availability, while a higher network traffic volume at similar hardware utilization may indicate more efficient data processing. Both CPU and memory utilization are measured because some, but not all, content is strictly linear (video).

## III Experiment Planning

### _Subjects Selection_

For the selection of subjects in this research, we start by obtaining a list of the top 2000 most visited fully qualified domain names from the Tranco list [3], as well as a list of the top 2000 most downloaded native apps from the Google Play Store from a Kaggle dataset [4]. Next, we create a pairwise matching between the two lists using the domain name without the top-level domain and the app name based on exact lowercase string comparison per word and filter out all elements for which no match could be made. This is necessary because sometimes the app name contains additional words. Matching on the package name identifier did not yield correct results, because some companies changed their app name and domain used on the Web as part of a rebranding, but the package name cannot be changed. From this, we obtain a list of 170 domains without duplicates. The category of each platform is derived from the app category in the Google Play store and described in Table I. Other categories like utilities or games are excluded because they are not suitable for this study; due to the functional diversity of the apps within the respective category, the lack of dynamically loaded content from a remote source, or the lack of a Web app version. For each of our categories in Table I we randomly sample two items. This is done by generating a random permutation of the entire list and iterating over it. For each item, we do the following steps until we have selected 10 subjects:

1. Verify that comparable native and Web apps exist for Android
2. Determine the category according to Table I
3. Select the current item, if less than two items have been selected for the category of the current item (see Table I)

While the number of investigated apps is rather small, having more than one app per category may help to support the generalizability of any findings. We only consider native Android apps and Web apps running in the Android version
We only consider native Android apps and Web apps running in the Android version \begin{table} \begin{tabular}{l l} \hline **Category** & **Description** \\ \hline News & Online newspapers, magazines as well as specialized \\ & offerings such as in weather, finance, and sport \\ Social media & User driven platforms including messaging, (micro-)blogging and image boards \\ E-Commerce & Online retail, review, and trading platforms \\ Audio streaming & Playback of recordings or live streams \\ Video streaming & Playback of recordings or live streams \\ \hline \end{tabular} \end{table} TABLE I: Selected content platform categories of the Google Chrome browser to limit the scope of the study. Many default app settings vary between native and Web versions, such as automatically playing video previews and default resolution. To simplify the experiment, we consider the default app settings assuming that they are not changed by the majority of users and are consciously set by the developers. Further, only apps that can be used with a free account without providing a phone number and without regional restrictions are considered to simplify the experiment. False positive matches using similar names are manually removed. Using this method, we obtain the following study subjects listed in Table II. ### _Experimental Variables_ In the following paragraphs, we identify the independent and dependent variables for our two research questions. The dependent variables are shown in Table III. For our first research question concerning energy consumption, we only have one independent, nominal variable _APP TYPE_. The two possible treatments for the factor corresponding to this variable are 'native' and 'Web'. This determines if the user should interact with the native or Web app for the respective content platform. Thus, each run of a fixed length of 3 minutes selects the concrete app and version for simulated interaction based on the treatment for this factor. The single dependent, continuous variable for the first research question is the energy consumption (\(\mathbf{\epsilon}\)) of the device. For our second research question, we have the same independent variable _APP TYPE_ and possible corresponding values as for the first research question. Here multiple dependent variables indicate the performance. CPU load (\(\mathbf{c}\)) and frame time (\(\mathbf{f}\) ) are continuous while memory load (\(\mathbf{m}\)) and network traffic volume (\(\mathbf{n}\)) are discrete. These variables are captured using different plugins for the Android Runner experiment framework [5], namely the _batterystats_, _Android_ and _frametimes_ plugins provided by the framework itself. Network traffic volume is measured in bytes using a custom script that computes the difference of the overall traffic volume measured by the operating system using the _dumpsys_ utility [6]. ### _Experimental Hypotheses_ Let \(\mathbf{\mu}_{\mathbf{\sigma}_{\mathbf{t}}}\) denote the mean of the sample for a given dependent variable (\(\mathbf{d}\)) and _APP TYPE_ (\(\mathbf{t}\)). For our first research question, the null hypothesis is that the mean energy consumption for any native app is equal to its Web app counterpart. The alternative hypothesis is consequently that there is a difference between native and Web versions of the same app in terms of energy consumption. Since we do not know if native apps could consume more or less energy than Web apps, we use a two-sided statistical test. 
\[\mathcal{H}_{0}:\mu_{\epsilon,\text{native}}=\mu_{\epsilon,\text{Web}}\qquad \mathcal{H}_{a}:\mu_{\epsilon,\text{native}}\neq\mu_{\epsilon,\text{Web}} \tag{1}\]

For our second research question, we investigate the performance indicated by network traffic volume (\(n\)), CPU load (\(c\)), memory load (\(m\)), and frame time (\(f\)). The null-hypothesis is that there is no difference in means for any of these four dependent variables between native and Web versions of a subject. The alternative hypothesis states that there is at least one variable that is significantly different between native and Web versions. Once again, we do not make any prior assumption about the sign of the difference between native and Web, so a two-sided statistical test is required. \[\mathcal{H}_{0}:\mu_{d,\text{native}}=\mu_{d,\text{Web}}\quad\forall d\in\{n,c,m,f\}\qquad\mathcal{H}_{a}:\mu_{d,\text{native}}\neq\mu_{d,\text{Web}}\quad\exists d\in\{n,c,m,f\} \tag{2}\]

### _Experiment Design_

Our experiment only has a single factor, leading to a simple design. We consider 5 different categories, as described in Table I. Furthermore, we have 2 different app types: native and Web apps. Last, for each category, two subjects will be evaluated, for example, YouTube and Twitch in the category of video streaming platforms. As a result, the total number of trials for the experiment is 20. Moreover, the total run duration is chosen to be 3 minutes (180 seconds). This is a significant length that allows for capturing meaningful data for our measurements. In order to take into account the possible fluctuations of the collected energy measures and to reach higher statistical power [7], the number of repetitions for each subject is 25, resulting in a total of 500 runs over all subjects. This should help to average out our results, thereby reducing the impact of small variations between runs. The cooldown process ensures that the energy consumption, power state, and temperature of the device return to ambient levels after each run. In order to achieve this, we pause the experiment execution for 30 seconds after each run.
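As a quick sanity check of the stated total duration, and accounting also for the 60-second per-run framework overhead mentioned below, the arithmetic works out as follows:

```python
runs = 10 * 2 * 25               # 10 subjects x 2 app types x 25 repetitions
seconds_per_run = 180 + 30 + 60  # run + cooldown + framework overhead
print(runs * seconds_per_run / 3600)  # 37.5 hours
```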
\begin{table} \begin{tabular}{l l l} \hline **Native app** & **Web app** & **Category** \\ \hline ESPN & espn.com & News \\ The Weather Channel & weather.com & News \\ LinkedIn & linkedin.com & Social media \\ Pinterest & pinterest.com & Social media \\ Coupang & coupang.com & E-Commerce \\ Shopee & shopee.tw & E-Commerce \\ SoundCloud & soundcloud.com & Audio streaming \\ Spotify & spotify.com & Audio streaming \\ Twitch & twitch.tv & Video streaming \\ YouTube & youtube.com & Video streaming \\ \hline \end{tabular} \end{table} TABLE II: Subjects

\begin{table} \begin{tabular}{l l l} \hline **Variable** & **Description** & **RQ** \\ \hline Energy consumption (\(\epsilon\)) & Energy consumption is measured in Joules (J) as the energy consumed by the mobile device during the experiment run & RQ1 \\ Network traffic (\(n\)) & Amount of data in Bytes (B) sent and received by the mobile device during the experiment run & RQ2 \\ CPU load (\(c\)) & Mean relative (\%) device CPU utilization across all cores & RQ2 \\ Memory load (\(m\)) & Mean (kB) device memory utilization & RQ2 \\ Frame time (\(f\)) & Median time in nanoseconds (ns) between two successive frames (we use the median as an aggregation measure, since we expect extreme outliers due to apps blocking on the main thread during certain operations) & RQ2 \\ \hline \end{tabular} \end{table} TABLE III: Dependent variables

The Android Runner framework furthermore adds a framework overhead of 60 seconds to each run. Finally, the total duration for the execution of the experiment comes out to 37.5 hours, considering all the aforementioned factors.

### _Data Analysis_

Since zero or negative values are not possible for the selected metrics, any run that either contains such values or is missing any values indicates some failure in the execution. These runs are thus not included in the analysis to avoid inaccurate results. For each subject, we ensure that we have the same number of data points for the native and Web version. We discard additional runs by random selection. For all tests, we use a standard significance threshold \(\alpha=0.05\). In the context of our experiment, the population is the set of all pairs of corresponding native and Web apps available on Google Play and the Web. After obtaining the dependent variables for each run from the measurements, they are quantitatively analyzed.

_Data description and exploration._ Initial insight is obtained by visualizing the data per dependent variable and treatment using box-jitter mixed plots.

_Testing for normality._ Using the Shapiro-Wilk test, we determine if the data for each dependent variable follows a normal distribution over all values for _CATEGORY_. Visual inspection is performed by means of density and quantile-quantile (QQ) plots.

_Given normality._ We investigate the difference between native and Web versions for all categories by performing a paired t-test. Since we do not make any prior assumption about the sign of the residual, we perform a two-tailed test. We quantify the effect size using Cohen's d measure if a significant difference can be found and \(\mathcal{H}_{0}\) from hypotheses set 1 can be rejected. The obtained effect size is interpreted according to the suggestion of [8].

_Not given normality._ We use non-parametric tests to investigate the difference between native and Web versions. First, we do not consider the category and apply the Wilcoxon signed-rank test; a sketch of this analysis pipeline is given below.
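A minimal sketch of the pipeline (helper names are ours; Cliff's delta has no SciPy built-in, so it is computed directly from its definition):

```python
import numpy as np
from scipy.stats import shapiro, wilcoxon

def cliffs_delta(x, y):
    """Cliff's delta: P(X > Y) - P(X < Y) over all sample pairs."""
    diffs = np.asarray(x)[:, None] - np.asarray(y)[None, :]
    return np.sign(diffs).mean()

def compare(native, web, alpha=0.05):
    """Shapiro-Wilk normality check, two-sided paired test, effect size."""
    normal = min(shapiro(native).pvalue, shapiro(web).pvalue) > alpha
    # Our data turns out to be non-normal, so the Wilcoxon signed-rank
    # test is used; with normal data a paired t-test would apply instead.
    p = wilcoxon(native, web, alternative="two-sided").pvalue
    delta = cliffs_delta(native, web) if p < alpha else None
    return normal, p, delta
```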
If a statistically significant difference is found, we reject \(\mathcal{H}_{0}\) from hypotheses set 2. To quantify the effect, we use Cliff's delta. Our interpretation is informed by [9].

## IV Experiment Execution

A replication package of the code used for the experiment is made available through an anonymous repository [10].

### _Preparation_

The first step consists of downloading the list of native apps that were selected as subject of study in Section III. For this, we download the corresponding app from the Google Play Store and extract the APK file using the Android Debug Bridge (ADB) [11]. For convenience, a simple bash script is used to install or uninstall all the APK files prior to running the experiment. This guarantees that the correct app version (native app versions are available in the replication package: see Section IV) is used and saves time during the experiments since apps are not installed and uninstalled between runs. Furthermore, by first removing all subjects from the device, we guarantee that they will use their default configuration during the experiment. The Web apps are run in the Google Chrome browser app. This Web browser is chosen because it has the highest market share on mobile devices [12]. Some Internet content platforms require the user to authenticate in order to engage with their content. Since this is a hard and time-consuming process to automate, and it may further show dynamic user interface elements, part of the initial setup consists of the manual user authentication for native and Web apps. At this point, the experiment is prepared to be set up for the following steps in the execution.

### _Setup_

Next, we describe our experiment setup, which is visualized in Fig. 2. The experiment is carried out using the experiment framework Android Runner [5] with a Nokia 6.2 (model TA-1198) running Android (Go edition) 10. This entry-level smartphone from 2019 is equipped with a 1.8 GHz octa-core Snapdragon 636 CPU, 3 GB memory, and a 3500 mAh battery [13]. The Android device (hereafter just device) is connected through USB 3 to a computer with a GNU/Linux 5 system running the experiment framework. This computer is connected to 230V wall power and charges the device. Since the energy consumption of the device is estimated using the hardware activity and power profile of the device, the battery is not drained during the experiments. This also prevents the device from switching on any power saving measures that could impact the experiment or from unexpectedly shutting down. All measurements are taken on the device itself using the built-in debugging tools and transmitted to the computer through the ADB and the _systrace_ utility [14]. Using ADB, the device state is configured and input is simulated during the experiment. These features are provided by the Android Runner framework [5], which orchestrates the experiment following the corresponding configuration file. This configuration further specifies scripts and plugins to be run before and after each step in the experiment.

Fig. 2: Experiment setup (yellow indicates Android utilities, blue the subjects, green the components of the Android Runner framework, and red our experiment.)

It is not feasible to completely mock the remote resources for all subjects in our experiment. Thus, they are accessed regularly over the Internet using Wi-Fi. During the execution of the experiment, we limited fluctuations of the Wi-Fi network by having only one device connected to it, always at the same distance from the Wi-Fi router.
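The custom network-traffic measurement described in Section III could be realised roughly as below. This is a sketch under our own assumptions: it reads the stable /proc/net/dev counters over ADB rather than parsing _dumpsys_ output (whose format varies across Android versions), and the interface name is illustrative.

```python
import subprocess

def device_traffic_bytes(interface="wlan0"):
    """Cumulative rx+tx bytes of the Wi-Fi interface, read via ADB."""
    out = subprocess.run(["adb", "shell", "cat", "/proc/net/dev"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.strip().startswith(interface + ":"):
            fields = line.split(":")[1].split()
            return int(fields[0]) + int(fields[8])  # rx_bytes + tx_bytes
    raise RuntimeError(f"interface {interface!r} not found")

def traffic_during(run):
    """Network traffic volume of one run as a before/after difference."""
    before = device_traffic_bytes()
    run()
    return device_traffic_bytes() - before
```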
Before the experiment, the connection speed is measured to be 100Mbps using _speedtest_ [15]. Using built-in hooks of the Android Runner framework, we ensure that the device is in a pre-defined state for each run. At the beginning of the experiment, a custom script resets the state of the device, which includes the following steps: (1) Unlock the device, (2) Enable 'Stay awake', (3) Stop all running apps, (4) Enable airplane mode but keep Wi-Fi enabled, (5) Dim the screen and mute media playback. Before each new run, the app of the previous run is shut down using ADB. Since some selected applications require authentication and take measures against repeated automated logins, we log in using a test account before the experiment and do not clear the app data between runs. However, after each run, we automatically selectively clean the browser session including tabs and all cached data except cookies required to persist the authentication. For each run of a native app, we clear the corresponding app cache.

### _Measurement_

Android Runner is used to start each experiment. All parameters for measuring the dependent variables are specified in two configuration files, _config_native.json_ and _config_web.json_. These measurements are collected by the built-in _android_, _batterystats_, and _frametimes_ plugins as well as our own custom network plugin.

## V Results

_Network traffic volume._ Web apps incur higher bandwidth requirements, as they often have to be re-downloaded on startup. Appearing noticeably bimodal, the distribution is more spread out for native apps. The high network traffic volume for social networks may be explained by higher-resolution previews and automatic video playback.

_Mean CPU load._ The distribution of the mean CPU load of native apps shown in Fig. 5 ranges from roughly 10% to roughly 50% and looks like a mix between two highly overlapping bell shapes. For Web apps the mean is at slightly above 35% and the distribution has three narrow peaks. The news apps form a peak at the lower end, likely due to the fact that not much processing is required once the initial content is loaded and displayed. Audio and video streaming exhibit another peak above the mean, most likely due to the continuous decoding of the incoming media.

_Mean memory load._ The stark difference in the mean memory load of roughly 1.52 GB between native and Web apps is observable in Fig. 6. This is similar to the findings of [21] and [22]. A possible explanation for the memory overhead could be the fact that Web apps run on top of the Google Chrome browser app. Since the memory footprint of the browser contributes to the measurements, it is plausible that the measurements for the Web apps are higher than those of the native versions. For other browsers, which have a higher memory load, this effect may be even stronger as indicated by [23, Figure 3].

_Median frame time._ Looking at the density plot in Fig. 7, there does not seem to be a large difference between native and Web apps with respect to this dependent variable. The only consistent outliers are the Twitch Web app and both the native and Web app for SoundCloud. However, still falling below 50 ms, this is not sufficient evidence for degraded performance that would be noticeable by the user.

### _Testing for normality_

As already indicated by the density plots in Fig. 3, Fig. 4, Fig. 5, Fig. 6 and Fig. 7, the QQ plots available in the replication package (see Section IV) show that for all dependent variables, the data for native and Web apps is not normally distributed.
This is further confirmed by the Shapiro-Wilk tests in Table VII, which show non-normal distributions for all dependent variables for both native and Web. Thus, we use non-parametric tests for further analysis. The Wilcoxon signed-rank test indicates a statistically significant difference between the samples. Thus, we can reject the null-hypothesis in (2) that the populations are not different.

### _Interpretation of effect size_

Since we perform exclusively non-parametric tests, we use Cliff's delta, denoted \(d\), in order to quantify any statistically significant difference between native and Web apps. A 'negligible' difference is indicated by \(|d|<0.147\), a 'small' difference by \(0.147\leq|d|<0.33\), a 'medium' difference by \(0.33\leq|d|<0.474\), and a 'large' difference by \(0.474\leq|d|\). The sign indicates the direction of the difference, where negative means that the values for native are lower than those for Web and positive indicates the opposite. This interpretation is proposed by [9]. The results of these tests are also shown in Table IX.

Using the value obtained for Cliff's delta and the interpretation described above, we identify a large difference between native and Web with regard to energy consumption. For the network traffic volume (\(n\)), a logarithmic scale is used in Fig. 4 to better visualize the difference. While the median values for native and Web differ by an order of magnitude, the overall range of values in both samples spans 3 to 4 orders of magnitude and the values of the medians are rather large, sitting at either side of \(10^{7}\), so the difference is relatively small. This is confirmed using Cliff's delta, which indicates a small difference. For the CPU utilization (\(c\)) and memory utilization (\(m\)), the effect is determined to be large using this measure. The latter even has no overlap in the density plot in Fig. 6. Since the maximum absolute value of Cliff's delta is 1.0, the effect is too extreme to be meaningfully quantified by this metric. Because there is clearly a difference in means between native and Web, we also compute Cohen's d for memory utilization, which results in a value with a magnitude of 13.23714. This also indicates a large effect size according to [8]. For the frame time (\(f\)), we do not compute a value for Cliff's delta, since there is no statistically significant difference based on its \(p\)-value.

The shapes of all distributions are rather different; thus, we cannot make any assumptions about the difference of the population means based on the results of the non-parametric tests if the distributions overlap to a significant degree, such as in the case of network traffic volume. Nonetheless, we find a (far) more than negligible difference for the other dependent variables. Turning again to the box plots from above, we find a large effect size in those where the interquartile ranges do not overlap. Those are Fig. 3, Fig. 5 and Fig. 6. This intuitively confirms the effect sizes ascribed to them.

## VI Discussion

Based on the \(p\)-values of the Wilcoxon signed-rank tests and corresponding effect sizes obtained using Cliff's delta from the previous section, we proceed to answer our research questions. For **RQ1**, we conclude that native apps consume significantly less energy than their Web counterparts, with a large effect size.
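Before turning to the recommendations, we note that the statistical pipeline applied in this section (Shapiro-Wilk, Wilcoxon signed-rank, and Cliff's delta with the thresholds above) can be reproduced with a short script along the following lines. This is a sketch operating on synthetic stand-in data, not the code of the replication package.

```python
"""Sketch of the hypothesis-testing pipeline (synthetic stand-in data)."""
import numpy as np
from scipy.stats import shapiro, wilcoxon


def cliffs_delta(x, y):
    # Fraction of (x, y) pairs where x > y minus fraction where x < y;
    # negative values mean the native sample tends to be lower.
    x, y = np.asarray(x), np.asarray(y)
    return np.sign(x[:, None] - y[None, :]).mean()


def interpret(d):
    # Interpretation thresholds as given above (cf. [9]).
    a = abs(d)
    return ("negligible" if a < 0.147 else
            "small" if a < 0.33 else
            "medium" if a < 0.474 else "large")


rng = np.random.default_rng(0)          # stand-in for one dependent variable
native = rng.normal(10.0, 1.0, 250)     # e.g., energy (J) of the native runs
web = rng.normal(15.3, 1.5, 250)        # e.g., energy (J) of the Web runs

print("Shapiro-Wilk p:", shapiro(native).pvalue, shapiro(web).pvalue)
_, p = wilcoxon(native, web)            # paired, non-parametric
d = cliffs_delta(native, web)
print(f"Wilcoxon p = {p:.3g}; Cliff's delta = {d:.3f} ({interpret(d)})")
```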
We thus recommend end users access Internet content using native apps over Web apps to optimize the battery life of their Android devices.

TABLE IX: Results of the hypothesis tests per dependent variable (\(p\)-value, Cliff's delta, effect size, and the corresponding RQ).

However, since we did not consider app background activity such as (location) tracking and push notifications, which are mostly exclusive to native apps, exceptions to this rule may exist. Since there is a statistically significant difference in the energy consumption of native and Web apps, this may only occur in extreme cases.

For **RQ2**, the \(p\)-values are significantly smaller than the \(\alpha\) threshold for most dependent variables indicative of performance (network traffic volume, CPU utilization, and memory utilization). Web apps use more CPU and memory, with a statistically significant difference and large effect size. Therefore, we conclude that native apps tend to require fewer hardware resources than their corresponding Web versions. The network traffic volume exhibits a statistically significant difference in favour of native apps, with a small effect size. This is plausible because of the diversity of our subjects. As for the frame time, our results do not allow us to draw any conclusions. Thus, we also recommend users to use native apps over Web apps for accessing Internet content when considering their performance.

For developers, it may not be realistic to expect that the choice between native and Web apps can be made only based on their expected energy consumption and performance. However, they should not offer a Web app for purposes other than user acquisition. Thus, it may be acceptable to offer only limited functionality in the Web app while committing more to developing a native app and encouraging users to switch to it. Furthermore, researchers may consider the results of this empirical study to motivate their future research on the efficient usage of runtime resources of mobile Web apps and Android browsers.

## VII Related Work

Oliveira et al. [24] compared the energy efficiency of native Java apps and JavaScript apps based on the Apache Cordova framework (system WebView) running on Android 5 across 33 benchmarks using the same algorithms. In 26 cases, the Java implementation consumed on average 1.82 and 2.09 times as much energy as the JavaScript version across the two benchmark suites (Rosetta Code and The Computer Language Benchmarks Game), but in 5 cases it consumed 1.4 times as much energy and time. They noted that there seems to be no correlation between energy efficiency and execution time, and pointed out that only Java can utilize multiprocessing. JavaScript seems favorable in cases of multiple small computations. Furthermore, the authors hybridized two open-source apps by re-implementing a compute-intensive functionality using different invocation strategies. Their experiment shows that in certain cases this can yield a significant improvement (35.69 times less energy consumed) without seriously impacting the maintainability of the codebase. However, the applicability of this strategy depends on the amount of computation inside the app.
They later published a follow-up study, which also includes a comparison to C++ using the Android NDK [25], demonstrating performance increases of two orders of magnitude in an application rewritten using a combined Java and C++ approach. The results of our study favour native apps in terms of energy efficiency, which is different from the results obtained by Oliveira et al.; we conjecture that the difference in the results of these two studies lies in the fact that Oliveira et al. focused on the energy consumption of _computation_ and did not consider complete apps, including the user interface.

Ma et al. [26] compared native and Web versions of 328 popular services offered by 12 different providers on Android 4.2 and Chrome 40 to investigate if Web apps are generally less performant. Their experiments focused heavily on energy consumption and networking. Ma et al. pointed out that requests made by Web apps are usually larger and that they generally need to fetch more resources compared to native apps, which typically bundle static resources. Even if some resources in Web apps are cached, they expire rather quickly and need to be downloaded again. This might also be an explanation of the results we obtained in this study. Indeed, in the study by Ma et al., Web apps tended to perform more poorly than their native equivalents while consuming less energy. However, in 31% of their experiments, the Web app version provided better performance. The traffic volume was generally lower for Web versions, and caching had the highest impact on GET requests. The difference in utilization of TLS (handshake overhead and cryptography) and the amount of re-fetching per feature utilization between providers and variants caused some significant outliers. While the study by Ma et al. reports energy consumption and detailed Web requests, they do not include hardware utilization and do not group or rank their study subjects, unlike our study, which considers the different types of apps listed in Table I based on the content they offer. The Android version used by Ma et al. is also several generations older than the one we use in this study, and might not take advantage of recent optimizations.

Chan-Jong-Chu et al. [27] studied the correlation between performance scores and the energy consumption of 21 out of the 100 most-visited Websites on the Internet, according to the Alexa Rank. Their primary goal was to understand how the performance of Web apps could potentially impact the energy consumption of a mobile device. Their results indicate a significantly lower energy consumption by Web apps that have better performance scores. Our primary goal is to understand the difference in energy consumption and performance between the native and Web version of the same app, while the authors compared different Web apps with regard to their performance and energy consumption.

Corbalan et al. [21] compared the energy efficiency of different frameworks for cross-platform mobile development. The authors classified the frameworks into four main categories: _Web Approach, Hybrid Approach, Interpreted Approach, and Cross-Compiled Approach_. Their primary goal was to understand which of these frameworks consumed less energy based on certain tasks performed in mobile device experiments. The results show that the _Web_ and _Hybrid Approaches_ consume the most energy, with 3.31% and 3.03% on Android, whilst iOS devices consumed 2.48% and 2.5%. This is in line with the results of our study. Metri et al.
[22] compared native apps and Web apps belonging to the categories of browsers, social networking, as well as video and music streaming across Windows 8.1, iOS 7.0.6, and Android 4.3 tablets with respect to energy consumption and hardware utilization. In line with our results, the results obtained by Metri et al. show that native apps usually outperform their Web counterparts in terms of energy efficiency, which they attribute to the higher CPU utilization and lower memory utilization. They also observed on Windows that using multiple cores can speed up certain operations and allow them to sleep for longer periods afterward. Furthermore, they observed that the wake-up frequency of the hardware components negatively affects energy efficiency.

## VIII Threats To Validity

**Internal Validity**. An internal threat to validity may arise from the subject selection. Since we only select 10 subjects from a pool of over 2.6 million as of June 2022 [28], there is a risk of misrepresenting the general population. As a mitigation, we consider app categories, which were previously discussed in Section III, and select exactly 2 apps for each category. This ensures a level of diversity in the subjects. Another threat would be browser and native app caching, which falls under maturation internal threats since the effect occurs on subsequent runs. Caching could lead to inconsistent results, as the browser or native app may use saved data instead of downloading it on subsequent runs. To mitigate this potential threat, we clear the cache of the corresponding native app or browser after every run, thus making the app's behavior consistent across runs. Moreover, apps and their dynamic content may change greatly over time. This means that replications of the experiment may not arrive at the same results and that even the results within the experiment may be influenced by the time of execution. To mitigate this, we provide the exact build numbers of all native apps, which are not updated during the course of the study. This is not possible for Web apps and HTTP interfaces. We did not observe a change in the type of dynamic content or an update of the Web apps during the experiment.

**External Validity**. First, the experiment is run using only one browser, Google Chrome. Browsers can vary in design and implementation, and these differences could impact metrics like the energy consumption or the CPU and memory utilization measured in this experiment. However, since Google Chrome is the most widely used browser across Android devices (often coming preinstalled), with a market share of over 60% across all mobile operating systems over the last few years [12], no other browser on Android has a comparatively significant market share. Thus, the selection is highly representative. Second, similar to the browser threat, we are only running this experiment on a single device, as described in Section IV. The device could also be a factor in the experiment, which may influence the outcome and thus be important to consider for the generalizability of the results. Due to the limited scope of the experiment, we did not mitigate this threat. However, it is reasonable to assume that our results are a good indicator for the average contemporary Android-based smartphone, as represented by the Nokia 6.2 (model TA-1198) test device.
Running Android version 10, which accounts for roughly 20% of all Android phones as of September 2022 [29], the operating system version represents a significant portion of Android devices and can be considered the future lower bound for relevant versions. Future replications of this study might consider different devices under different configurations. An external threat to validity may arise from the simulated interactions with the app during the experiments. Our experiment is limited to performing a single static loop of basic interaction with each app. Such an approach may fail to represent how the app is used under real circumstances. This threat is not directly mitigated due to the limited scope of the experiment, as well as other practical reasons. It is not possible to include certain activities such as e-commerce checkout and posting content, since providers actively prevent this by legal and technical means. These activities would require the use of a 'man in the middle' to completely simulate each remote resource accurately, which is far from trivial. However, such activities are less common than those which only consist of consuming content provided through the app and which can thus be integrated into the experiment. We further took great care in covering the major use cases per app with the scripted interaction to minimize the potential impact of this threat.

**Construct Validity**. To mitigate a potentially inadequate operational explanation of constructs, we defined our constructs a priori, before the experiment execution. All the details related to the design of the experiment (_e.g.,_ the goal, research questions, variables, data analysis procedures) were defined before executing the experiment. We used the GQM approach to define our goal, which then guided us to derive the research questions of this study. The hypotheses, dependent and independent variables, and treatments were all defined during the planning phase of the experiment.

**Conclusion Validity**. First, since most of the native and Web apps from our selected subjects contain unpredictable interstitial advertisements, this could introduce some 'flaky' behavior when running the experiment. In order to achieve consistent behavior, we had to mitigate this issue by using an external service called dns.adguard.com for blocking advertisements in all native and Web apps based on domain name resolution. For some subjects, this did not remove all advertisements. For example, Pinterest does not use a separate domain to serve advertisements. However, this is ignored, since the advertisements displayed on Pinterest behave similarly to 'normal' posts, so the scripted interaction does not need to handle them separately. System-wide advertisement blocking requires technical awareness and expertise of the user and may thus be rather uncommon; while it has limited effectiveness, it does not influence the behavior or functionality of the apps in any significant form. Second, variations in device settings, such as the screen brightness or audio playback volume, could influence the measurements for the dependent variables. As a mitigation, we created a custom script that is executed during the experiment setup phase. This script is responsible for handling the state of the device, making sure that all settings are reset as described in detail in Section IV. Third, another potential threat to the reliability of measurements is the tools used to record the metrics, which are the basis of the dependent variables in our experiment.
Due to the limitations of available and compatible profilers, we could not address this issue. The Trepn profiler [30] seemed incompatible with the device used in the experiment. Thus, all measurements are obtained using the internal profiling tools of the Android device, as described in Table IV. Before analyzing the data obtained from the measurements, they are scrutinized and all invalid values are removed, as described in Section V. This only affected the metric network traffic volume; thus, this metric is not relied on to answer our research questions. Instead, the null hypothesis is rejected on the basis of the mean memory load, which paints a sufficiently clear picture. Moreover, investigating only 10 pairs of native and Web apps would result in a very small sample with low statistical power. Thus, we perform 25 repetitions, resulting in a total of 500 data points, to mitigate this threat. The number of repetitions has been decided based on the literature on measurement-based experiments on mobile (web) apps and the available resources. Due to the limited number of subjects and their potentially large heterogeneity, the dependent variables were not likely to be normally distributed. This results in parametric tests not being applicable. Furthermore, our non-parametric tests may not be adequate to reject the null hypotheses, which are concerned with the population mean. This could result in a Type I error. This is not trivial to address. We use non-parametric tests for non-normally distributed data instead of transforming it. Since the effect size is either very small or very large, we can confidently make a statement regarding the null hypotheses.

## IX Conclusions

In this paper, we conducted an empirical analysis of the energy consumption and performance of native Android apps and their Web counterparts. In our experiment, we selected 10 Internet content platforms across 5 categories, having 2 subjects per category. Then, we measured the energy consumption, network traffic volume, CPU load, memory load, and frame time of their native and Web versions. Our results show that Web apps consume about 53% more energy than their native counterparts. From our findings, we conclude that Web apps consume significantly more energy than their native counterparts, with a large effect size. In addition, Web apps use more CPU and memory, with a statistically significant difference and large effect size. The CPU utilization of Web apps is roughly 49% higher than that of their native versions, yet has a slightly lower standard deviation. The difference in memory utilization can most likely be attributed to the overhead posed by the Google Chrome browser. Our results do not allow us to draw any conclusion in terms of the frame time. Based on our findings, we suggest that users prefer native apps over their Web counterparts for accessing Internet content, when possible. Nevertheless, users should also consider other factors when deciding between Web and native apps, such as available storage space, convenience, and the availability of the installed native apps. Possible future work includes extending the experiment by increasing the number of subjects per category to investigate the degree of the observed variance in the measurements based on the app category. Since the use of web browsers may have an impact on energy consumption and performance, it will be interesting to repeat the experiment on other web browsers.
Furthermore, it would be interesting to replicate this study on iOS devices, as iOS holds a global market share of 26.98% [31]. Finally, it would be interesting to investigate more deeply the _root causes_ of the observed differences between native and Web mobile apps by investigating their source code, system API calls, used programming language (_e.g._, Kotlin vs Java in the case of native apps [32]), and other technical aspects.

## Acknowledgments

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 871342 "uDEVOPS".
2301.13347
Tight Data Access Bounds for Private Top-$k$ Selection
We study the top-$k$ selection problem under the differential privacy model: $m$ items are rated according to votes of a set of clients. We consider a setting in which algorithms can retrieve data via a sequence of accesses, each either a random access or a sorted access; the goal is to minimize the total number of data accesses. Our algorithm requires only $O(\sqrt{mk})$ expected accesses: to our knowledge, this is the first sublinear data-access upper bound for this problem. Our analysis also shows that the well-known exponential mechanism requires only $O(\sqrt{m})$ expected accesses. Accompanying this, we develop the first lower bounds for the problem, in three settings: only random accesses; only sorted accesses; a sequence of accesses of either kind. We show that, to avoid $\Omega(m)$ access cost, supporting *both* kinds of access is necessary, and that in this case our algorithm's access cost is optimal.
Hao Wu, Olga Ohrimenko, Anthony Wirth
2023-01-31T00:33:56Z
http://arxiv.org/abs/2301.13347v2
# Tight Data Access Bounds for Private Top-\(k\) Selection

###### Abstract

We study the top-\(k\) selection problem under the differential privacy model: \(m\) items are rated according to votes of a set of clients. We consider a setting in which algorithms can retrieve data via a sequence of accesses, each either a random access or a sorted access; the goal is to minimize the total number of data accesses. Our algorithm requires only \(O(\sqrt{mk})\) expected accesses: to our knowledge, this is the first sublinear data-access upper bound for this problem. Accompanying this, we develop the first lower bounds for the problem, in three settings: only random accesses; only sorted accesses; a sequence of accesses of either kind. We show that, to avoid \(\Omega(m)\) access cost, supporting _either_ kind of access, i.e. the freedom to mix, is necessary, and that in this case our algorithm's access cost is almost optimal.

## 1 Introduction

We consider the differentially private top-\(k\) selection problem; there are \(m\) items to be rated according to \(n\) clients' votes. Each client can either vote or not vote for each item, and can vote for an unlimited number of items. Since this data can be sensitive (e.g., visited websites, purchased items, or watched movies), the goal is to identify a set of \(k\) items with approximately the highest number of votes, while concealing the votes of individual clients. Private top-\(k\) selection is a fundamental primitive and underlies a wide range of differentially private machine learning and data analytics tasks, such as discovering frequent patterns from data (Bhaskar et al., 2010), training wide neural networks (Zhang et al., 2021), tracking data streams (Cardoso and Rogers, 2022), false discovery rate control in hypothesis testing (Qiao et al., 2021), etc.

In recent years, significant progress has been made towards understanding how accurate the algorithms for this problem can be. For example, (Bafna and Ullman, 2017; Steinke and Ullman, 2017) provide lower bounds for the problem in terms of sample complexity, which can be achieved by a number of existing algorithms (Durfee and Rogers, 2019; Qiao et al., 2021). Another line of research is devoted to improving the efficiency of the algorithms. Early works such as the peeling solution (Bhaskar et al., 2010) need to iterate \(k\) times over all items. The improved mechanisms (Durfee and Rogers, 2019; Qiao et al., 2021) iterate over each item only once. Since \(k\) can be much smaller than \(m\), the research community remains interested in the following question:

**Research Question:** Can we develop private top-\(k\) selection algorithms that access only a sublinear number of items?

Although it seems to be an unachievable target, it is possible to address this question by considering how items are accessed. For example, Durfee and Rogers (2019) consider the setting where the data has been pre-processed and resides in an existing data analytics system that can return the items in sorted order (which we refer to as _sorted access_ in our paper). Their top-\(k\) algorithm can make a sublinear number of accesses at the cost of potentially returning fewer than \(k\) items, while to guarantee that \(k\) items are returned, the number of retrieved items can be as large as \(m\). Since retrieving information from an existing system incurs corresponding query processing and communication costs, it is crucial to minimize the number of data accesses.
In this paper, we systematically investigate the minimum number of items an algorithm needs to evaluate (a.k.a. its _access cost_) in order to answer the private top-\(k\) selection problem. In addition to _sorted access_, we also consider another common way of accessing items' data, i.e., _random access_, in which an algorithm can actively request the data of an arbitrary item.1 Both types of accesses have been considered in previous literature for the non-private version of the top-\(k\) selection problem (see Ilyas et al. (2008) for a comprehensive survey).

Footnote 1: Here _random_ carries the sense of RAM, rather than the outcome of a random process.

_Example 1.1_.: Consider the example of a movie ranking database. It can present the movies in sorted order, according to their rating by the clients, or it can directly return the rating of a specific movie.

**Our Contributions.** Our results are threefold. On the upper bound side,

* If the system supports both sorted access and random access, we design an algorithm with expected access cost \(O(\sqrt{mk})\). To our knowledge, this is the first asymptotically sublinear bound on the access cost for the private top-\(k\) selection problem. Our algorithm builds on existing works (Durfee and Rogers, 2019; Qiao et al., 2021) and inherits their error bounds, which are known to be asymptotically optimal (Bafna and Ullman, 2017; Steinke and Ullman, 2017).

On the lower bound side,

* If the system supports either only sorted accesses or only random accesses, but not both, we show a lower bound of \(\Omega(m)\).
* If the system supports both sorted accesses and random accesses, we show a lower bound of \(\Omega(\sqrt{mk})\).

These statements are informal versions of Theorems 5.1, 5.3, and 5.4, which impose modest assumptions on the privacy guarantee, and relatively weak assumptions on the accuracy guarantee of the algorithms. They show that supporting sorted and random access to the items' data simultaneously is necessary to break the linear barrier, and that the access cost of our algorithm is essentially optimal.

**Organization.** Our paper is organized as follows. Section 2 introduces the problem formally. Section 3 discusses the preliminaries for our algorithm. Section 4 introduces our algorithm. Section 5 presents the lower bounds for the problem. Section 6 discusses the related works.

## 2 Model Description

Let \(\mathcal{C}\doteq\{1,\ldots,m\}\) be a set of \(m\) items, and \(\mathcal{U}\doteq\{1,\ldots,n\}\) be a set of \(n\) clients. Each client \(v\in\mathcal{U}\) can cast at most one vote for each item, and can vote for an unlimited number of items. Hence, client \(v\)'s votes, denoted by \(\vec{x}_{v}\), can be viewed as a vector in \(\mathcal{D}\doteq\{0,1\}^{m}\), such that for each \(i\in\mathcal{C}\), \(\vec{x}_{v}[i]=1\) if \(v\) votes for item \(i\), where \(\vec{x}_{v}[i]\) is the \(i^{(th)}\) entry of \(\vec{x}_{v}\). We regard the collection of voting vectors from all \(n\) clients as a dataset \(\mathcal{X}=\{\vec{x}_{1},\ldots,\vec{x}_{n}\}\in\mathcal{D}^{n}\). For each item \(i\in\mathcal{C}\), let its score \(\vec{h}[i]\doteq\sum_{v\in\mathcal{U}}\vec{x}_{v}[i]\) be the number of clients that vote for \(i\). The dataset \(\mathcal{X}\) can be described by its histogram \(\vec{h}\doteq(\vec{h}[1],\ldots,\vec{h}[m])\in\mathbb{N}^{m}\). We also define \(\pi:\mathcal{C}\rightarrow\mathcal{C}\) to be a permutation that puts the entries of \(\vec{h}\) in nonincreasing order2, s.t., \(\vec{h}[\pi(1)]\geq\cdots\geq\vec{h}[\pi(m)]\).
Footnote 2: We break ties arbitrarily.

Our goal is to design a differentially private algorithm that returns a set \(S\) of \(k\) items with (approximately) the largest scores, while minimizing its data access cost. In what follows, we formally discuss the privacy guarantee, the utility guarantee, and the data access model of an algorithm.

**Privacy Guarantee.** We call two datasets \(\mathcal{X},\mathcal{X}^{\prime}\) neighboring, denoted by \(\mathcal{X}\sim\mathcal{X}^{\prime}\), if they differ by the addition or deletion of one client vector, e.g., \(\mathcal{X}^{\prime}=\mathcal{X}\cup\{\vec{x}_{n+1}\}\) or \(\mathcal{X}^{\prime}=\mathcal{X}\setminus\{\vec{x}_{v}\}\) for some \(v\in[n]\). Let \(\vec{h}\) and \(\vec{h}^{\prime}\) be the histograms corresponding to \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\), respectively. It is easy to see that if \(\mathcal{X}\sim\mathcal{X}^{\prime}\), then the score of each item can differ by at most \(1\), i.e., \(\|\vec{h}-\vec{h}^{\prime}\|_{\infty}\leq 1\). Hence, for every \(\vec{h},\vec{h}^{\prime}\in\mathbb{N}^{m}\), we also call them neighboring histograms, written as \(\vec{h}\sim\vec{h}^{\prime}\), if \(\|\vec{h}-\vec{h}^{\prime}\|_{\infty}\leq 1\). Let \(\mathcal{A}\) be an algorithm for the top-\(k\) selection problem. To protect the voting information of individual clients, we would like its output distributions to be similar for neighboring inputs, defined as follows.

**Definition 2.1** (\((\varepsilon,\delta)\)-Private Algorithm (Dwork and Roth, 2014)).: Given \(\varepsilon,\delta>0\), a randomized algorithm \(\mathcal{A}:\mathbb{N}^{m}\rightarrow\mathcal{Z}\) is called \((\varepsilon,\delta)\)-differentially private (DP), if for every \(\vec{h},\vec{h}^{\prime}\in\mathbb{N}^{m}\) such that \(\vec{h}\sim\vec{h}^{\prime}\), and all (measurable) \(Z\subseteq\mathcal{Z}\),

\[\Pr\left[\mathcal{A}(\vec{h})\in Z\right]\leq e^{\varepsilon}\cdot\Pr[\mathcal{A}(\vec{h}^{\prime})\in Z]+\delta\,. \tag{1}\]

We call \(\varepsilon\) and \(\delta\) the _privacy parameters_. Typically, it is required that \(\delta\) be cryptographically negligible, i.e., \(\delta\leq 1/m^{\omega(1)}\) (Vadhan, 2017; Dwork and Roth, 2014). An algorithm \(\mathcal{A}\) is also called \(\varepsilon\)-DP for short, if it is \((\varepsilon,0)\)-DP.

**Utility Guarantee.** In line with previous research (Bafna and Ullman, 2017; Durfee and Rogers, 2019), we measure the error of an output \(S\) by the maximum amount by which \(\vec{h}[\pi(k)]\) exceeds the score of an item in \(S\), defined formally as follows.

**Definition 2.2** (\((\alpha,k)\)-Accuracy).: Given a vector \(\vec{h}\), parameters \(k\in\mathbb{N}^{+}\), and \(\alpha\in\mathbb{R}^{+}\), an output \(S\subseteq[m]\) of size \(k\) is called \((\alpha,k)\)-accurate, if for each \(i\in S\), \(\vec{h}[i]\geq\vec{h}[\pi(k)]-\alpha\).

**Data Access.** We assume that the histogram \(\vec{h}\) has been preprocessed by an existing data management system, and that an algorithm \(\mathcal{A}\) can access \(\vec{h}\) only through the system. We consider two access models that abstract common functionalities supported by a system: _sorted access_ and _random access_. Such access models have been widely accepted by the community for non-private top-\(k\) selection problems (see Ilyas et al. (2008) for a survey).

Sorted Access. Let \(\mathcal{C}_{s}\) be the set of items already returned by sorted access (initially, \(\mathcal{C}_{s}=\varnothing\)).
When a new sorted-access request is submitted, the system returns an item-score pair \((i,\vec{h}[i])\), where \(i\in\mathcal{C}\setminus\mathcal{C}_{s}\) has the largest score, i.e., \(i=\arg\max_{j\in\mathcal{C}\setminus\mathcal{C}_{s}}\vec{h}[j]\). An alternative view is that the system returns \(\big{(}\pi(1),\vec{h}[\pi(1)]\big{)},\big{(}\pi(2),\vec{h}[\pi(2)]\big{)},\ldots\) in order, one tuple at a time.

Random Access. A request of random access consists of a reference \(i\in\mathcal{C}\) to an item. In response, the system returns the corresponding item-score pair \((i,\vec{h}[i])\). We emphasize that a random access does not imply that \(i\) must be a randomly chosen item.

Access Cost. Given an algorithm \(\mathcal{A}\) and a histogram \(\vec{h}\), the _access cost_ of the algorithm on \(\vec{h}\), \(cost\left(\mathcal{A},\vec{h}\right)\), is the total number of accesses - either sorted or random - to the system. Note that this is an upper bound on the number of distinct entries \(\mathcal{A}\) learns from \(\vec{h}\), as a random access may retrieve a previously encountered item-score pair.

## 3 Preliminaries

In this section, we review two building blocks for constructing our algorithm: a technique for aggregating ranked data from multiple sources, and a framework underlying the existing one-shot algorithms for private top-\(k\) selection.

### Threshold Algorithm

The threshold algorithm (Fagin et al., 2003) is a top-\(k\) selection algorithm for the setting where the information of an item needs to be aggregated from multiple sources. In this scenario, there are \(m\) items, each associated with \(t\) attributes. W.L.O.G., assume that each attribute is a real number. Therefore, each item \(i\) can be represented by a vector \(\vec{y}_{i}\in\mathbb{R}^{t}\). The score of item \(i\) is computed by a function \(f:\mathbb{R}^{t}\rightarrow\mathbb{R}\), which is assumed to be _monotone_, s.t., for each \(\vec{y},\vec{y}^{\prime}\in\mathbb{R}^{t}\), if \(\vec{y}[j]\leq\vec{y}^{\prime}[j],\forall j\in[t]\), then \(f(\vec{y})\leq f(\vec{y}^{\prime})\). The vectors \(\vec{y}_{1},\ldots,\vec{y}_{m}\) do not reside in a single data management system, but are distributed across \(t\) systems \(L_{1},\ldots,L_{t}\), s.t., for each item \(i\), its \(j^{th}\) attribute \(\vec{y}_{i}[j]\) resides on \(L_{j}\). Each \(L_{j}\) allows for both _sorted access_ and _random access_. We can view it as an array of \(m\) tuples \(L_{j}[1],\ldots,L_{j}[m]\), each of the form \((i,\mathsf{val})\in[m]\times\mathbb{R}\), where \(\mathsf{val}\) equals \(\vec{y}_{i}[j]\). The tuples in \(L_{j}\) are sorted in descending order by their \(\mathsf{val}\)'s. Further, \(L_{j}\) is augmented with an inverted index \(\sigma_{j}:[m]\rightarrow[m]\) to support random access, such that, for each item \(i\in[m]\), \(L_{j}[\sigma_{j}(i)]\) contains the tuple \((i,\vec{y}_{i}[j])\). The aim is to identify the top-\(k\) items with the highest scores according to \(f\), while minimizing the access cost, i.e., the total number of data accesses performed by the algorithm to \(L_{1},\ldots,L_{t}\). The algorithm is described in Algorithm 1.

``` 1:Input: Sorted array \(L_{j}\) and inverted index \(\sigma_{j},\forall j\in[t]\). 2:\(S\leftarrow\emptyset\). 3:repeat 4: Do sorted access in parallel to each of the \(t\) arrays \(L_{j}\). As a tuple \((i,\mathsf{val})\) is seen under sorted access: 5: Retrieve from \(L_{1},\ldots,L_{t}\) by random access (with the help of \(\sigma_{1},\ldots,\sigma_{t}\)) all attributes of item \(i\), to compute \(f(\vec{y}_{i})\).
6: If \(f(\vec{y}_{i})\) is among the \(k\) highest scores seen so far, add \((i,f(\vec{y}_{i}))\) to \(S\); if \(|S|>k\), remove the tuple with the lowest score from \(S\). 7: For each \(L_{j}\), let \(\underline{y}_{j}\doteq\vec{y}_{i}[j]\), where \(i\) is the last item seen in \(L_{j}\) under sorted access. 8: Define the threshold \(\tau\doteq f(\underline{y}_{1},\ldots,\underline{y}_{m})\). 9:until there are \(k\) tuples in \(S\) with score at least \(\tau\). 10: Return the set of items contained in the tuples in \(S\). ```

**Algorithm 1** Threshold Algorithm \(\mathcal{A}_{\textit{TA}}\) (Fagin et al., 2003)

The correctness of the algorithm is immediate: when the algorithm stops, since \(f\) is monotone, the scores of all unseen items are at most \(\tau\), which is no greater than the score of each tuple in \(S\). For each tuple \((i,\mathsf{val})\) encountered during sorted access, \(\mathcal{A}_{\textit{TA}}\) retrieves all entries \(\vec{y}_{i}[j]\) of \(\vec{y}_{i}\) by random accesses. This seems to be redundant, as \(i\) might have been encountered previously, and therefore \(f(\vec{y}_{i})\) has already been evaluated. The algorithm \(\mathcal{A}_{\textit{TA}}\) does so to keep the buffer size (memory usage) strictly bounded by \(O(k)\). If this is not required, \(\mathcal{A}_{\textit{TA}}\) can be augmented with an additional data structure to keep track of previously evaluated \(i\)'s, to avoid repeated computation.

Access Cost. Fagin et al. (2003) did not provide an asymptotic bound for the access cost. Instead, they proved that \(\mathcal{A}_{\textit{TA}}\) is _instance optimal_. Informally, instance optimality implies that for every algorithm \(\mathcal{A}\) which solves the top-\(k\) selection problem correctly, and whose first access to an item must be a sorted access as opposed to a random access, the access cost of \(\mathcal{A}_{\textit{TA}}\) is at most the access cost of \(\mathcal{A}\) (up to some multiplicative constant). In Section 4, we apply a different technique to asymptotically bound the access cost of our algorithm.

### One-shot Private Top-\(k\) Algorithm

We review an existing framework for differentially private top-\(k\) selection algorithms (Durfee and Rogers, 2019; Qiao et al., 2021). The framework is described in Algorithm 2. The framework does not consider a specific data access model, and instead needs to learn all entries of \(\vec{h}\).
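For concreteness, here is a minimal Python sketch of this one-shot framework (Algorithm 2, shown below) under the noise distributions of Definition 3.1. It is an illustration, not the authors' implementation, and it indeed reads all \(m\) entries of \(\vec{h}\).

```python
"""Sketch of the one-shot mechanism: noisy scores, report the k largest."""
import numpy as np


def oneshot_topk(h, k, eps, noise="gumbel", rng=None):
    rng = rng or np.random.default_rng()
    h = np.asarray(h, dtype=float)
    if noise == "gumbel":
        z = rng.gumbel(scale=1.0 / eps, size=h.size)
    else:  # Laplace noise
        z = rng.laplace(scale=1.0 / eps, size=h.size)
    v = h + z
    # Indices of the k largest noisy scores, in decreasing noisy order
    # (with Gumbel noise the order itself may be released, cf. Remark 3.5).
    return np.argsort(v)[::-1][:k]


# Toy usage: a histogram over m = 1000 items, and a private top-10.
h = np.random.default_rng(1).integers(0, 500, size=1000)
print(oneshot_topk(h, k=10, eps=0.1))
```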
**Definition 3.1** (Noise Distributions).: Given parameter \(b\in\mathbb{R}\), the Laplace distribution, \(\mathbb{L}\mathrm{ap}\left(b\right)\), and the Gumbel distribution, \(\mathbb{G}\mathrm{umbel}\left(b\right)\), have probability density functions \(p(z)=\frac{1}{2b}\cdot\exp\left(-\frac{|z|}{b}\right)\), \(\forall z\in\mathbb{R}\), and \(p(z)=\frac{1}{b}\cdot\exp\left(-\big{(}\frac{z}{b}+\exp\left(-\frac{z}{b}\right)\big{)}\right)\), \(\forall z\in\mathbb{R}\), respectively.

Candidate noise distributions for \(Z_{i}\) in Algorithm 2 include \(\mathbb{Lap}\left(1/\varepsilon\right)\) (Qiao et al., 2021) and \(\mathbb{Gumbel}\left(1/\varepsilon\right)\) (Durfee and Rogers, 2019). The corresponding privacy guarantees are stated as follows.

_Fact 3.2_ ((Qiao et al., 2021)).: Assume that \(Z_{i}\sim\mathbb{Lap}\left(1/\varepsilon\right)\); then \(\mathcal{M}\) is \(2k\varepsilon\)-DP. Given \(\delta\in[0,0.05]\), if it holds that \(m\geq 2\) and \(8\varepsilon\sqrt{k\log\left(m/\delta\right)}\leq 0.2\), then \(\mathcal{M}\) also satisfies \(\left(8\varepsilon\sqrt{k\log\left(m/\delta\right)},\delta\right)\)-DP.

_Fact 3.3_ ((Durfee and Rogers, 2019)).: Assume that \(Z_{i}\sim\mathbb{Gumbel}\left(1/\varepsilon\right)\). For each \(\delta\in[0,1]\), \(\mathcal{M}\) is \((\varepsilon^{\prime\prime},\delta)\)-DP, where \(\varepsilon^{\prime\prime}\doteq\min\left\{k\varepsilon,k\varepsilon\Big{(}\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}\Big{)}+\varepsilon\sqrt{k\ln\frac{1}{\delta}}\right\}\).

Next, we discuss their utility guarantees.

_Fact 3.4_.: Given \(\beta\in(0,1)\), if \(Z_{i}\sim\mathbb{Lap}\left(1/\varepsilon\right)\) or \(\mathbb{Gumbel}\left(1/\varepsilon\right)\), then with probability at least \(1-\beta\), the solution returned by Algorithm 2 is \((\alpha,k)\)-accurate, for \(\alpha\in O\Big{(}\frac{\ln\left(m/\beta\right)}{\varepsilon}\Big{)}\).

_Remark 3.5_.: Compared to Laplace noise, Gumbel noise allows the algorithm to return a ranked list of indices, instead of a set, which contains no order information. For consistency of presentation, we assume that Algorithm 2 returns a set for both choices.

``` 1:Input: vector \(\vec{h}\) 2:for each item \(i\in[m]\)do 3:\(\vec{v}[i]\leftarrow\vec{h}[i]+Z_{i}\), where \(Z_{i}\) is an independent noise random variable; 4:endfor 5: Return a set \(S\) of \(k\) items that maximizes the \(\vec{v}[i]\)'s. ```

**Algorithm 2** Private Top-\(k\) Algorithm \(\mathcal{M}\)

## 4 Sublinear Access and Time Algorithm

In this section, we present an algorithm for the top-\(k\) selection problem which achieves an optimal privacy-utility trade-off and, with high probability, has an expected access cost of \(O(\sqrt{mk})\) and computation time \(O(\sqrt{mk}\log\log m)\). Our presentation follows two steps: we first present a strawman algorithm with sublinear access cost but (near-)linear computation time; next we show how to improve its time complexity to \(O(\sqrt{mk}\log\log m)\).

### A Strawman Approach

A natural idea is to combine the threshold algorithm \(\mathcal{A}_{\textit{TA}}\) with the one-shot private top-\(k\) algorithm. Each item \(i\in[m]\) now has two attributes, namely, \(\vec{h}[i]\) and \(Z_{i}\), where \(Z_{i}\) is sampled independently from \(\mathbb{Lap}\left(1/\varepsilon\right)\) or \(\mathbb{Gumbel}\left(1/\varepsilon\right)\).
Since the histogram \(\vec{h}\) is stored in a database management system which allows for two types of access, sorted access and random access, we can think of it as a sorted array \(L_{1}\) of \(m\) tuples, each of the form \((i,\mathsf{val})\in[m]\times\mathbb{R}\), where \(\mathsf{val}\) equals \(\vec{h}[i]\). The tuples in \(L_{1}\) are sorted in descending order by their \(\mathsf{val}\)'s. Further, \(L_{1}\) has an inverted index \(\sigma_{1}\) to support random access. Additionally, we can construct another sorted array \(L_{2}\) of \(m\) tuples, each of the form \((i,\mathsf{val})\in[m]\times\mathbb{R}\), where \(\mathsf{val}\) equals \(Z_{i}\). The tuples in \(L_{2}\) are also sorted in descending order by their \(\mathsf{val}\)'s. \(L_{2}\) also has an inverted index \(\sigma_{2}\) to support random access. Then we can run the algorithm \(\mathcal{A}_{\textit{TA}}\) with input \(L_{1},L_{2}\), \(\sigma_{1},\sigma_{2}\), and the aggregation function \(f(\vec{h}[i],Z_{i})\doteq\vec{h}[i]+Z_{i}\). It is easy to see that \(f\) is monotone. The pseudo-code is in Algorithm 3.

``` 1:Let \(I_{1}=1,I_{2}=2,\ldots,I_{m}=m\). Generate \(m\) tuples \((I_{1},Z_{1}),\ldots,(I_{m},Z_{m})\), where the \(Z_{i}\)'s are i.i.d. random variables; sort the tuples in descending order by the values of the \(Z_{i}\)'s, denote the sorted sequence by \(\big{(}I_{(1)},Z_{(1)}\big{)},\ldots,\big{(}I_{(m)},Z_{(m)}\big{)}\), and store this sequence in an array \(L_{2}\); construct \(\sigma_{2}:[m]\rightarrow[m]\), s.t., \(L_{2}[\sigma_{2}(i)]=(i,Z_{i})\) for each item \(i\in[m]\). 2: Run Algorithm 1 on input \(L_{1},L_{2},\sigma_{1},\sigma_{2}\), with the aggregation function \(f\big{(}(\vec{h}[i],Z_{i})\big{)}\doteq\vec{h}[i]+Z_{i}\); ```

**Algorithm 3** Private Threshold Algorithm \(\mathcal{A}_{\textit{PrivTA}}\)

_Privacy and Utility Guarantee._ The privacy and utility guarantees of the algorithm are inherited directly from Algorithm 2.

_Access and Time Complexity._ It is easier to first discuss the time complexity and then the access cost. Generating the random variables takes \(O(m)\) time, and sorting them takes \(O(m\log m)\) time. Hence the total running time is bounded by \(O(m\log m)\). It remains to discuss the number of accesses the algorithm performs on \(L_{1}\). Our analysis relies on the following important observation.

**Lemma 4.1**.: _The \(I_{(1)},\ldots,I_{(m)}\) are distributed uniformly over all possible permutations of \([m]\), and are independent of the random variables \(Z_{(1)},\ldots,Z_{(m)}\)._

Intuitively, the claim holds since each \(Z_{i}\) in Algorithm 3 follows the same distribution independently. The proof of the lemma is included in Appendix B.1.

**Theorem 4.2**.: _The expected access cost of Algorithm 3 on \(L_{1}\), \(\mathbb{E}\left[cost\;(\mathcal{A}_{\textit{PrivTA}},L_{1})\right]\), is bounded by \(O\Big{(}\sqrt{mk}\Big{)}\)._

The proof of the theorem is in Appendix B.1.

### An Online Sampling Approach

Pre-generating all \(m\) noise values may be excessive. For problems with small values of \(k\), e.g., \(k=10\), Algorithm 3 may need to know only a small subset of the tuples in \(L_{2}\). It is of interest whether we can also reduce the number of noise random variables generated to \(O(\sqrt{mk}\log\log m)\), by constructing \(L_{2}\) (and \(\sigma_{2}\)) on the fly. The main result of this section is stated as follows.
**Theorem 4.3**.: _There is an algorithm \(\mathcal{A}_{oracle}\) that:_

* _does not require pre-generating_ \(L_{2}\)_;_
* _answers sorted access and random access queries to_ \(L_{2}\) _in_ \(O(\log\log m)\) _expected time._

_Further, the tuples returned by \(\mathcal{A}_{oracle}\) have the same marginal distribution as those generated by Algorithm 3._

_Initialization_ ``` 1:\(\mathcal{J}\leftarrow\emptyset\), \(idx\gets 0\); 2:\(L_{2}[i]\gets nil,\sigma_{2}(i)\gets nil,\,\forall i\in[m]\) 3:Sorted Access 4:\(idx\gets idx+1\); 5:if\(idx\notin\mathcal{J}\)then 6: Sample \(I_{(idx)}\) uniformly from \([m]\setminus I_{(\mathcal{J})}\); 7: Invoke \(\mathcal{A}_{ord\_stat}\) to sample \(Z_{(idx)}\); 8:\(\sigma_{2}(I_{(idx)})\gets idx\); 9:\(L_{2}[idx]\leftarrow(I_{(idx)},Z_{(idx)})\); \(\mathcal{J}\leftarrow\mathcal{J}\cup\{idx\}\). 10:endif 11:return\(L_{2}[idx]\) 12:Random Access (Input: item \(i\in[m]\)) 13:if\(\sigma_{2}(i)=nil\)then 14: Sample \(j\) uniformly from \([m]\setminus\mathcal{J}\); 15: Invoke \(\mathcal{A}_{ord\_stat}\) to sample \(Z_{(j)}\); 16:\(\sigma_{2}(i)\gets j\); 17:\(L_{2}[j]\leftarrow(i,Z_{(j)})\); \(\mathcal{J}\leftarrow\mathcal{J}\cup\{j\}\). 18:endif 19:return\(L_{2}[\sigma_{2}(i)]\) ```

**Algorithm 4** Algorithm \(\mathcal{A}_{oracle}\)

There are two key ingredients for constructing \(\mathcal{A}_{oracle}\).

_Sampling the \(I_{(j)}\)'s._ The first ingredient is Lemma 4.1, which allows \(\mathcal{A}_{oracle}\) to sample the \(I_{(j)}\) and the \(Z_{(j)}\) independently according to their marginal distributions, without changing the joint distribution of the \(I_{(j)}\) and the \(Z_{(j)}\). The lemma states that the \(I_{(j)}\)'s are distributed uniformly over all possible permutations of \([m]\). It is not hard to sample an \(I_{(j)}\) on the fly: let \(\mathcal{J}\) be the set of indexes such that the values of \(I_{(j^{\prime})},j^{\prime}\in\mathcal{J}\) have been determined; if \(j\notin\mathcal{J}\), then \(I_{(j)}\) is distributed uniformly over the subset of unseen items, i.e., \([m]\setminus I_{(\mathcal{J})}\), where \(I_{(\mathcal{J})}\doteq\left\{I_{(j^{\prime})}:j^{\prime}\in\mathcal{J}\right\}\). Correspondingly, we can also construct the inverted index \(\sigma_{2}\) on the fly: given an item \(i\in[m]\), if it has not been encountered, then \(\sigma_{2}(i)\) should equal one of the undetermined indexes, namely one of \([m]\setminus\mathcal{J}\), uniformly at random.

_Sampling the \(Z_{(j)}\)'s._ The second ingredient is an algorithm \(\mathcal{A}_{ord\_stat}\) which generates the \(Z_{(j)}\)'s on the fly. Formally, for each \(\mathcal{J}\subseteq[m]\), define \(Z_{(\mathcal{J})}\doteq(Z_{(j)},j\in\mathcal{J})\), and let \(z_{(\mathcal{J})}\) refer to a vector \((z_{(j)},j\in\mathcal{J})\in\mathbb{R}^{|\mathcal{J}|}\). Denote by \(p_{Z_{(\mathcal{J})}}(\cdot)\) the marginal density of \(Z_{(\mathcal{J})}\), induced by the generating procedure of Algorithm 3. Call \(z_{(\mathcal{J})}\) a feasible realization of \(Z_{(\mathcal{J})}\), if \(p_{Z_{(\mathcal{J})}}(z_{(\mathcal{J})})>0\). Given such a feasible realization, let \(p_{Z_{(j)}|Z_{(\mathcal{J})}}(\cdot\mid z_{(\mathcal{J})})\) be the density function of \(Z_{(j)}\), conditioned on \(Z_{(\mathcal{J})}=z_{(\mathcal{J})}\). The property of \(\mathcal{A}_{ord\_stat}\) is stated as follows.
**Lemma 4.4**.: _For each \(\mathcal{J}\subseteq[m]\) s.t. \(\mathcal{J}\neq[m]\), each \(j\in[m]\setminus\mathcal{J}\), and each feasible realization \(z_{(\mathcal{J})}\) of \(Z_{(\mathcal{J})}\), \(\mathcal{A}_{ord\_stat}\) samples a random variable with the conditional density \(p_{Z_{(j)}|Z_{(\mathcal{J})}}(\cdot\mid z_{(\mathcal{J})})\) in \(O(\log\log m)\) expected time._

The proof of the lemma is discussed in Section 4.2.1. Now, we return to the construction of \(\mathcal{A}_{oracle}\). The algorithm is described in Algorithm 4.

_Initialization._ The algorithm creates an empty array \(L_{2}\) and an empty inverted index \(\sigma_{2}\). Further, it creates a set \(\mathcal{J}\), to record the positions of \(L_{2}\) which are already sampled, and a variable \(idx\), to record the last position visited by sorted access. In practice, \(L_{2}\) and \(\sigma_{2}\) need not be physically initialized, and can be implemented by hash sets with constant initialization time.

_Sorted Access._ It is straightforward to handle sorted access. We maintain the index, \(idx\in\mathbb{N}\), of the last tuple returned by sorted access. When a new request for sorted access arrives, we increase \(idx\) by \(1\). Let \(\mathcal{J}\) be the set of indexes of tuples in \(L_{2}\) that have already been generated. If \(idx\in\mathcal{J}\), then \(\mathcal{A}_{oracle}\) returns \(L_{2}[idx]\) directly; otherwise, it generates \(L_{2}[idx]\) before returning it.

_Random Access._ A random-access request comes with a reference to an item \(i\in[m]\). We need to identify the tuple \(L_{2}[j]=(I_{(j)},Z_{(j)})\) s.t. \(I_{(j)}=i\). There are two cases: if item \(i\) has been encountered previously (\(\sigma_{2}(i)\neq nil\)), then \(\mathcal{A}_{oracle}\) returns \(L_{2}[\sigma_{2}(i)]\) directly; otherwise, \(\mathcal{A}_{oracle}\) picks an index \(j\) from \([m]\setminus\mathcal{J}\) uniformly at random, sets \(I_{(j)}\gets i\) and \(\sigma_{2}(i)\gets j\), and calls \(\mathcal{A}_{ord\_stat}\) to generate \(Z_{(j)}\). Finally, it returns \(L_{2}[j]\).

#### 4.2.1 Sampling Ordered Noises

In this section, we prove Lemma 4.4. Determining the conditional distribution of the \(Z_{(j)}\)'s and sampling them directly from such a distribution can be non-trivial. As a result, we follow the three-step approach outlined below:

* First, we show that \(Z_{1},\ldots,Z_{m}\) can be obtained from a sequence of i.i.d. uniform random variables \(U_{1},\ldots,U_{m}\), and that the sorted sequence \(Z_{(1)},\ldots,Z_{(m)}\) of \(Z_{1},\ldots,Z_{m}\) can therefore be obtained from the sorted sequence \(U_{(1)},\ldots,U_{(m)}\) of \(U_{1},\ldots,U_{m}\).
* Next, to avoid generating the entire sequence of random variables, we study the distribution of \(U_{(j)}\), conditioned on a set of \(U_{(j^{\prime})}\) which have already been sampled.
* Finally, we show how to sample a \(U_{(j)}\) from such a distribution in \(O(\log\log m)\) expected time.

**Transform \(U_{(j)}\) to \(Z_{(j)}\).** Since all potential noise distributions (\(\mathbb{L}\mathrm{ap}\left(1/\varepsilon\right)\) or \(\mathbb{G}\mathrm{umbel}\left(1/\varepsilon\right)\)) of the \(Z_{i}\)'s (the noise random variables, before sorting) have continuous cumulative distribution functions, we can sample them indirectly via uniform random variables, based on the _inversion method_.
_Fact 4.5_ (Inversion Method (Devroye, 1986)).: Let \(F\) be a continuous cumulative distribution function on \(\mathbb{R}\) with inverse \(F^{-1}\) defined by

\[F^{-1}(u)\doteq\inf\left\{x:F(x)=u,0<u<1\right\}.\]

If \(U\) is a uniform \([0,1]\) random variable, then \(F^{-1}(U)\) has distribution function \(F\).

Let \(F\) be the cumulative distribution function of the \(Z_{i}\), and \(U_{1},\ldots,U_{m}\) be independent uniform random variables on \([0,1]\). We compare two post-processing procedures:

* By Fact 4.5, we can obtain \(Z_{1},\ldots,Z_{m}\) by computing \(F^{-1}(U_{1}),\ldots,F^{-1}(U_{m})\); then we obtain \(Z_{(1)},\ldots,Z_{(m)}\) by sorting this sequence in descending order.
* Alternatively, we first sort \(U_{1},\ldots,U_{m}\) in descending order, to obtain a sequence \(U_{(1)}\), \(\ldots\), \(U_{(m)}\); then we compute \(F^{-1}(U_{(1)}),\ldots,F^{-1}(U_{(m)})\).

**Lemma 4.6**.: _The two procedures output the same sequence._

The proof of the lemma is included in Appendix B.2. Based on the lemma, we can sample \(U_{(j)}\) first and then compute \(F^{-1}(U_{(j)})\) to generate \(Z_{(j)}\). The problem reduces to studying the distribution of \(U_{(j)}\) (conditioned on the subset of \(U_{(j^{\prime})}\)'s which have been sampled), and to an algorithm for sampling a random variable efficiently from such a distribution.

**Distribution of \(U_{(j)}\).** Recall that \(\mathcal{J}\) is the set of indexes which have already been queried. Denote by \(U_{(\mathcal{J})}\doteq\left\{U_{(j^{\prime})}:{j^{\prime}\in\mathcal{J}}\right\}\) a shorthand for the order statistics that have been sampled. Further, write \(u_{(\mathcal{J})}\doteq\left\{u_{(j^{\prime})}:{j^{\prime}\in\mathcal{J}}\right\}\in[0,1]^{|\mathcal{J}|}\) for a set of numbers within \([0,1]\), indexed by \(\mathcal{J}\). Call \(u_{(\mathcal{J})}\) a _feasible realization_ of \(U_{(\mathcal{J})}\), if for each \(j,j^{\prime}\in\mathcal{J}\) s.t. \(j\leq j^{\prime}\), it holds that \(u_{(j)}\geq u_{(j^{\prime})}\). Given a new query index \(j\in[m]\setminus\mathcal{J}\), we are interested in the conditional probability density, \(p_{U_{(j)}|U_{(\mathcal{J})}}(u_{(j)}\mid u_{(\mathcal{J})})\), of \(U_{(j)}\), given the occurrence of a feasible realization \(u_{(\mathcal{J})}\) of \(U_{(\mathcal{J})}\). For ease of reading, we omit the subscripts of the conditional probability densities whenever their meaning can be unambiguously determined from their parameters. Depending on the relative position of \(j\) w.r.t. the indexes in \(\mathcal{J}\), we consider the following three cases:

* \(\mathcal{J}\) is empty. This reduces to studying the unconditional probability density \(p(u_{(j)})\) of \(U_{(j)}\).
* \(\mathcal{J}\) is not empty, and \(j\) is greater than the largest index in \(\mathcal{J}\); in this case, \(j\) has a predecessor (the largest index that is smaller than \(j\)), denoted by \(\ell\), in \(\mathcal{J}\).
* \(\mathcal{J}\) is not empty, and \(j\) is smaller than the largest index in \(\mathcal{J}\); in this case, \(j\) has both a predecessor, denoted by \(\ell\), and a successor (the smallest index that is larger than \(j\)), denoted by \(r\), in \(\mathcal{J}\).

Hereafter, if \(\mathcal{J}\neq\emptyset\), we consider only feasible realizations \(u_{(\mathcal{J})}\) of \(U_{(\mathcal{J})}\).

**Theorem 4.7**.: (1) _If \(\mathcal{J}\) is empty, then the density \(p(u_{(j)})\) of \(U_{(j)}\) is given by: \(\forall\,u_{(j)}\in[0,1]\),_

\[p(u_{(j)})=\tfrac{m!}{(j-1)!(m-j)!}\cdot(1-u_{(j)})^{j-1}(u_{(j)})^{m-j}.
\tag{2}\]

(2) _If \(\mathcal{J}\) is not empty, and \(j\) is greater than the largest index in \(\mathcal{J}\), then given \(U_{(\ell)}=u_{(\ell)}\), \(U_{(j)}\) is independent of all other random variables \(U_{(j^{\prime})}\) for \(j^{\prime}\in\mathcal{J}\setminus\{\ell\}\), i.e., \(p\big{(}u_{(j)}\mid u_{(\mathcal{J})}\big{)}=p\big{(}u_{(j)}\mid u_{(\ell)}\big{)}\); further, for each \(u_{(j)}\in[0,u_{(\ell)}]\),_

\[p\big{(}u_{(j)}\mid u_{(\ell)}\big{)}=\tfrac{(m-\ell)!}{(j-\ell-1)!(m-j)!}\Big{(}\tfrac{u_{(\ell)}-u_{(j)}}{u_{(\ell)}}\Big{)}^{j-\ell-1}\Big{(}\tfrac{u_{(j)}}{u_{(\ell)}}\Big{)}^{m-j}\tfrac{1}{u_{(\ell)}}. \tag{3}\]

(3) _If \(\mathcal{J}\) is not empty, and \(j\) is smaller than the largest index in \(\mathcal{J}\), then given \(U_{(\ell)}=u_{(\ell)}\) and \(U_{(r)}=u_{(r)}\), \(U_{(j)}\) is independent of all other random variables \(U_{(j^{\prime})}\) for \(j^{\prime}\in\mathcal{J}\setminus\{\ell,r\}\), i.e., \(p\big{(}u_{(j)}\mid u_{(\mathcal{J})}\big{)}=p\big{(}u_{(j)}\mid u_{(\ell)},u_{(r)}\big{)}\); further, for each \(u_{(j)}\in[u_{(r)},u_{(\ell)}]\),_

\[p\big{(}u_{(j)}\mid u_{(\ell)},u_{(r)}\big{)}=\tfrac{(r-\ell-1)!}{(j-\ell-1)!(r-j-1)!}\Big{(}\tfrac{u_{(\ell)}-u_{(j)}}{u_{(\ell)}-u_{(r)}}\Big{)}^{j-\ell-1}\Big{(}\tfrac{u_{(j)}-u_{(r)}}{u_{(\ell)}-u_{(r)}}\Big{)}^{r-j-1}\tfrac{1}{u_{(\ell)}-u_{(r)}}. \tag{4}\]

The theorem removes the dependency of \(U_{(j)}\) on all but at most two variables in \(U_{(\mathcal{J})}\). The detailed proof is in Appendix B.3.

Figure 1: A pictorial illustration for Lemma 4.6.

We provide an informal but intuitive explanation of the conditional probability densities. Take Equation (4) for example. Conditioned on \(U_{(r)}=u_{(r)}\) and \(U_{(\ell)}=u_{(\ell)}\), \(r-\ell-1\) uniform random variables fall into the interval \([u_{(r)},u_{(\ell)}]\). Of these \(r-\ell-1\) random variables, \(j-\ell-1\) of them are \(\geq u_{(j)}\), and \(r-j-1\) of them are \(<u_{(j)}\). The number of possible combinations is given by \(\frac{(r-\ell-1)!}{(j-\ell-1)!(r-j-1)!}\). For a fixed combination, the former happens with probability \(\left(\frac{u_{(\ell)}-u_{(j)}}{u_{(\ell)}-u_{(r)}}\right)^{j-\ell-1}\), the latter happens with probability \(\left(\frac{u_{(j)}-u_{(r)}}{u_{(\ell)}-u_{(r)}}\right)^{r-j-1}\), and the probability density of \(U_{(j)}=u_{(j)}\) is \(\frac{1}{u_{(\ell)}-u_{(r)}}\).

**Sampling \(U_{(j)}\).** We now discuss how to sample the \(U_{(j)}\) efficiently from their conditional distributions. First, note that determining the conditional distributions may require finding the index \(j\)'s predecessor or successor in \(\mathcal{J}\). This can be done by a van Emde Boas tree (van Emde Boas, 1975) in \(O(\log\log m)\) time. Next, we show that sampling from such conditional distributions takes \(O(1)\) expected time. Specifically, we will sample random variables with _Beta distributions_, then convert them into ones which follow the desired conditional distributions.

**Definition 4.8** (Beta Distribution (Ross, 2018)).: The beta distribution \(\mathcal{B}\mathrm{eta}\left(\alpha,\beta\right)\) is a distribution defined on \([0,1]\) whose density is given by

\[f(x)=\tfrac{x^{\alpha-1}(1-x)^{\beta-1}}{\mathrm{B}(\alpha,\beta)},\qquad\forall x\in[0,1], \tag{5}\]

where \(\alpha,\beta>0\) are _shape parameters_, and \(\mathrm{B}(\alpha,\beta)\doteq\int_{0}^{1}x^{\alpha-1}(1-x)^{\beta-1}\,dx\) is a normalisation constant.
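To make the construction concrete, the following sketch (anticipating Theorem 4.9 below) draws a single conditional order statistic \(U_{(j)}\) via a Beta sample and maps it to the Gumbel order statistic \(Z_{(j)}=F^{-1}(U_{(j)})\) by the inversion method. It is an illustration under the stated assumptions, not the paper's code; note the shift by \(u_{(r)}\) in the two-sided case, which places the draw on its support \([u_{(r)},u_{(\ell)}]\).

```python
"""Sketch: conditional sampling of U_(j), then Z_(j) = F^{-1}(U_(j))."""
import math
import numpy as np

rng = np.random.default_rng()


def sample_u(m, j, left=None, right=None):
    # left = (l, u_l): fixed predecessor; right = (r, u_r): fixed successor.
    if left is None:                     # case (1): unconditional
        return rng.beta(m - j + 1, j)
    l, u_l = left
    if right is None:                    # case (2): predecessor only
        return u_l * rng.beta(m - j + 1, j - l)
    r, u_r = right                       # case (3): both neighbours fixed
    # Shift by u_r so the draw lands in [u_r, u_l].
    return u_r + (u_l - u_r) * rng.beta(r - j, j - l)


def gumbel_inv_cdf(u, b):
    # F(z) = exp(-exp(-z/b))  =>  F^{-1}(u) = -b * ln(-ln(u)).
    return -b * math.log(-math.log(u))


m, eps = 1000, 0.1
u5 = sample_u(m, 5)                               # U_(5)
u9 = sample_u(m, 9, left=(5, u5))                 # U_(9) | U_(5)
u7 = sample_u(m, 7, left=(5, u5), right=(9, u9))  # U_(7) | U_(5), U_(9)
z7 = gumbel_inv_cdf(u7, 1.0 / eps)                # Z_(7) via inversion
print(u5, u9, u7, z7)
```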
**Theorem 4.9**.: _Assume that \(\ell<j<r\leq m\), and \(1\geq u_{(\ell)}>u_{(r)}\geq 0\). Then_ 1. _If_ \(X\sim\mathcal{B}\mathrm{eta}\left(m-j+1,j\right)\)_, then the density function of_ \(X\) _is the same as Equation (_2_)._ 2. _If_ \(X\sim\mathcal{B}\mathrm{eta}\left(m-j+1,j-\ell\right)\)_, then the density function of_ \(Y\doteq u_{(\ell)}\cdot X\) _is the same as Equation (_3_)._ 3. _If_ \(X\sim\mathcal{B}\mathrm{eta}\left(r-j,j-\ell\right)\)_, then the density function of_ \(Y\doteq u_{(r)}+(u_{(\ell)}-u_{(r)})\cdot X\) _is the same as Equation (_4_)._ The proof of the Theorem is included in Appendix B.2. ## 5 Lower Bounds In this section, we establish lower bounds for the problem. Following the setting in Section 2, since \(\vec{h}\) is the sum of voting vectors of \(n\) clients, we have \(||\vec{h}||_{\infty}\leq n\). It follows that an arbitrary subset \(S\subseteq[m]\) of size \(k\) is \((n,k)\)-accurate. All lower bounds in this section hold for algorithms that are \((n-O(1),k)\)-accurate, which is just slightly better than the trivial error guarantee. ### Random Access **Theorem 5.1**.: _Assume that \(0\leq\beta<0.1\). Let \(\mathcal{A}\) be an algorithm that has only random access to \(\vec{h}\), does not return items which it has not seen, and for each input, with probability at least \(1-\beta\), returns a solution that is \((n-1,k)\)-accurate. Then there exist a family of histograms \(\mathcal{H}\) and a distribution \(\mu\) on \(\mathcal{H}\) such that, if \(\vec{h}\) is sampled from \(\mathcal{H}\) according to \(\mu\), it holds that_ \[\operatorname*{\mathbb{E}}_{\vec{h}}\left[cost\Big{(}\mathcal{A},\vec{h}\Big{)}\right]\in\Omega(m). \tag{6}\] Note that the theorem does not even require \(\mathcal{A}\) to be an \((\varepsilon,\delta)\)-DP algorithm. The proof of the theorem is in Appendix C.1. At a high level, our construction focuses on a family of histograms whose entries take value either \(n\) or \(0\). Further, if \(\vec{h}\) is sampled from \(\mathcal{H}\), there are roughly \(2k\) entries of \(\vec{h}\) that have value \(n\), and those entries appear at random positions of \(\vec{h}\), so that \(\mathcal{A}\) is unlikely to identify more than \(k\) of them before learning the values of \(\Omega(m)\) entries. ### Sorted Access The lower bound for the sorted access case relies on the following lemma. **Lemma 5.2**.: _Let \(\mathcal{A}\) be an \((\varepsilon,\delta)\)-DP algorithm, which for each input histogram, with probability at least \(1-\beta\), returns a solution that is \((n-2,k)\)-accurate. Let \(S\subseteq[m]\), and \(\vec{h}_{S}\) be a histogram, s.t._ \[\vec{h}_{S}[i]\doteq\begin{cases}n-1,&\forall\,i\in S,\\ 0,&\forall\,i\in\bar{S},\end{cases} \tag{7}\] _where \(\bar{S}\doteq[m]\setminus S\)._
_Let \(S_{L}\) be a subset of \(S\) of size \(|S|/k\) sampled uniformly at random (without replacement), \(S_{H}\doteq S\setminus S_{L}\), and \(\vec{h}_{S_{H},S_{L}}\) be a histogram neighboring to \(\vec{h}_{S}\) s.t._ \[\vec{h}_{S_{H},S_{L}}[i]\doteq\begin{cases}n,&\forall\,i\in S_{H},\\ n-1,&\forall\,i\in S_{L},\\ 0,&\forall\,i\notin S.\end{cases} \tag{8}\] _Then,_ \[\Pr_{S_{L},\mathcal{A}}\Big{[}\mathcal{A}(\vec{h}_{S_{H},S_{L}})\cap S_{L}\neq\emptyset\Big{]}\geq\tfrac{1-\beta-\delta-e^{-1}}{e^{\varepsilon}}\,, \tag{9}\] _where the randomness is first over the choice of \(S_{L}\) then over the output of \(\mathcal{A}\)._ The proof of the Lemma is included in Appendix C.2. Note that, for each \(i\in S_{L}\), \(\vec{h}_{S_{H},S_{L}}[i]\) is among the \(|S|/k\) smallest entries of the \(|S|\) largest entries in \(\vec{h}_{S_{H},S_{L}}\). Informally, the lemma states that the probability that the output of \(\mathcal{A}\) contains some item \(i\in S_{L}\) is not too "small". **Theorem 5.3**.: _Let \(\varepsilon,\delta,\beta\) be non-negative parameters, s.t., \(\varepsilon\in O(1),\delta+\beta\leq 0.6\). Let \(\mathcal{A}\) be an \((\varepsilon,\delta)\)-DP algorithm that has only sorted access, does not return items which it has not seen, and for each input histogram, with probability at least \(1-\beta\), returns a solution that is \((n-2,k)\)-accurate. Then there exists a family of histograms \(\mathcal{H}\) so that, if \(\vec{h}\) is sampled uniformly at random from \(\mathcal{H}\), it holds that_ \[\operatorname*{\mathbb{E}}_{\vec{h}}\Big{[}cost\Big{(}\mathcal{A},\vec{h}\Big{)}\Big{]}\in\Omega(m). \tag{10}\] Proof.: Let \(S=[m/2]\), \(S_{L}\) be a uniform random subset of \(S\) of size \(|S|/k\), and \(\vec{h}_{S_{H},S_{L}}\) be a histogram built as outlined in Equation (8). Let \(\mathcal{H}\) be the collection of all possible outcomes of \(\vec{h}_{S_{H},S_{L}}\). Then by Lemma 5.2, \[\Pr_{S_{L},\mathcal{A}}\Big{[}\mathcal{A}(\vec{h}_{S_{H},S_{L}})\cap S_{L}\neq\emptyset\Big{]}\geq\tfrac{1-\beta-\delta-e^{-1}}{e^{\varepsilon}}.\] But for each \(i\in S_{L}\), \(\vec{h}_{S_{H},S_{L}}[i]\) is among the \((|S|-|S|/k+1)\)-th to \(|S|\)-th largest numbers in \(\vec{h}_{S_{H},S_{L}}\). Since \(\mathcal{A}\) has only sorted access to \(\vec{h}_{S_{H},S_{L}}\) and does not return an item which it has not seen, if it returns an item in \(S_{L}\), it needs to invoke at least \(|S|-|S|/k\in\Omega(m)\) sorted accesses. It follows that the expected access cost of \(\mathcal{A}\) is at least \(\tfrac{1-\beta-\delta-e^{-1}}{e^{\varepsilon}}\cdot\Omega(m)\). Inequality (10) follows from the assumption that \(\varepsilon\in O(1)\) and \(\beta+\delta\leq 0.6\). ### Random and Sorted Access **Theorem 5.4**.: _Let \(\varepsilon,\delta,\beta\) be non-negative parameters, s.t., \(\varepsilon\leq 1,\delta+\beta\leq 0.05\). Let \(\mathcal{A}\) be an algorithm that has both sorted access and random access to \(\vec{h}\), does not return items which it has not seen, and for each input, with probability at least \(1-\beta\), returns a solution that is \((n-2,k)\)-accurate. Then there exists a family of histograms \(\mathcal{H}\) such that, if \(\vec{h}\) is sampled uniformly at random from \(\mathcal{H}\), it holds that_ \[\operatorname*{\mathbb{E}}_{\vec{h}}\Big{[}cost\Big{(}\mathcal{A},\vec{h}\Big{)}\Big{]}\in\Omega\Big{(}\sqrt{mk}\Big{)}. \tag{11}\]
Proof.: Let \(\mathcal{H}\) be the collection of all possible \(\vec{h}_{S_{H},S_{L}}\) generated as follows: * First, sample a subset \(S\subseteq[m]\) of size \(\tau\doteq\sqrt{mk}\) without replacement uniformly at random. * Then we sample a uniform random subset \(S_{L}\) of \(S\) of size \(|S|/k\), and construct a histogram \(\vec{h}_{S_{H},S_{L}}\) as described by Equation (8). Since Lemma 5.2 holds for each \(S\subseteq[m]\), we have \[\Pr_{S,S_{L},\mathcal{A}}\Big{[}\mathcal{A}\Big{(}\vec{h}_{S_{H},S_{L}}\Big{)}\cap S_{L}\neq\emptyset\Big{]} \geq\tfrac{1}{e^{\varepsilon}}\big{(}1-\beta-\delta-\tfrac{1}{e}\big{)}\] \[\stackrel{{(a)}}{{\geq}}e^{-1}\big{(}1-0.05-e^{-1}\big{)} \geq 0.21,\] where the randomness is first over the choice of \(S\), then over the choice of \(S_{L}\), and finally over the output of \(\mathcal{A}\), and inequality \((a)\) follows from the assumption that \(\varepsilon\leq 1\) and \(\beta+\delta\leq 0.05\). In what follows, we omit the subscripts \(S,S_{L},\mathcal{A}\) from the probability notations, when the source of randomness is clear from the context. Consider the event \(\mathcal{E}:\mathcal{A}\) accesses some item \(i\in S_{L}\). As \(\mathcal{A}\) does not return an item which it has not seen, the event \(\mathcal{E}\) is a necessary condition for \(\mathcal{A}\big{(}\vec{h}_{S_{H},S_{L}}\big{)}\cap S_{L}\neq\emptyset\). Hence, \[\Pr\left[\mathcal{E}\right]\geq\Pr\left[\mathcal{A}\big{(}\vec{h}_{S_{H},S_{L}}\big{)}\cap S_{L}\neq\emptyset\right].\] Let \(\eta\doteq\tau/20\). We decompose \(\mathcal{E}\) into two mutually exclusive events: \(\mathcal{E}_{1}:\mathcal{A}\) accesses some item \(i\in S_{L}\) for the first time within \(\eta\) access operations; \(\mathcal{E}_{2}:\mathcal{A}\) accesses some item \(i\in S_{L}\) for the first time after \(\eta\) access operations. Then \(\Pr\left[\mathcal{E}\right]=\Pr\left[\mathcal{E}_{1}\right]+\Pr\left[\mathcal{E}_{2}\right]\). **Lemma 5.5**.: _The probability that \(\mathcal{A}\) accesses some item \(i\in S_{L}\) for the first time within \(\eta\) access operations, denoted by \(\Pr\left[\mathcal{E}_{1}\right]\), is upper bounded by \(\Pr\left[\mathcal{E}_{1}\right]\leq 0.19\)._ The proof of Lemma 5.5 is omitted here, and is included in Appendix C.3. Intuitively, the lemma holds since: 1) \(\mathcal{A}\) cannot access any \(i\in S_{L}\) within \(\eta\) sorted accesses; 2) because of the way it is generated, \(S_{L}\) is a random subset of \([m]\) of size \(\sqrt{mk}/k\), hence it is also unlikely for \(\mathcal{A}\) to come across some \(i\in S_{L}\) with at most \(\eta=\sqrt{mk}/20\) random accesses. To conclude the justification of (11), and hence prove the Theorem, we apply Lemma 5.5. \[\begin{array}{rl}\Pr\left[\mathcal{E}_{2}\right]&=\Pr\left[\mathcal{E}\right]-\Pr\left[\mathcal{E}_{1}\right]\\ &\geq\Pr\left[\mathcal{A}\Big{(}\vec{h}_{S_{H},S_{L}}\Big{)}\cap S_{L}\neq\emptyset\right]-\Pr\left[\mathcal{E}_{1}\right]\\ &\geq 0.21-0.19\in\Omega(1).\end{array}\] But when \(\mathcal{E}_{2}\) happens, the access cost is \(\Omega(\eta)\). Therefore, the expected access cost of \(\mathcal{A}\) is lower bounded by \(\Omega(\eta)=\Omega(\sqrt{mk})\). ## 6 Related Work Private Selection. The private top-\(1\) selection problem is a special case of the private top-\(k\) problem. The latter has been studied extensively, e.g., the exponential mechanism (McSherry and Talwar, 2007), report noisy max (Dwork and Roth, 2014), permute-and-flip (McKenna and Sheldon, 2020; Ding et al., 2021).
Of interest is the permute-and-flip mechanism: when the largest score of the items is known a priori, the mechanism can potentially terminate without iterating over all \(m\) items. However, in this scenario, an asymptotic upper bound for the number of items evaluated remains unresolved. Private Top-\(k\) Mechanisms. Bhaskar et al. (2010) were the first to apply the "peeling exponential mechanism", which iteratively invokes the exponential mechanism to select the item with the highest score, then removes it. They also proposed a oneshot Laplace mechanism for private top-\(k\) selection. Bhaskar et al. (2010) analyzed the pure differential privacy guarantees of both algorithms. Subsequently, Durfee and Rogers (2019) showed that the peeling exponential mechanism has an equivalent oneshot implementation (i.e., Algorithm 2 with Gumbel noise), and studied its approximate privacy guarantee. Qiao et al. (2021) provided the approximate privacy guarantee for the oneshot Laplace mechanism, without the help of the composition theorem. Both Bhaskar et al. (2010) and Durfee and Rogers (2019) have proposed private algorithms which estimate the top-\(k\) based on the true top \(\bar{k}\) items for some \(\bar{k}\geq k\). Given an integer \(k\), both algorithms may need to set \(\bar{k}=m\) in order to return \(k\) items. _Accuracy Lower Bound._ Bafna and Ullman (2017) and Steinke and Ullman (2017) show that, for approximate private algorithms, the error guarantees of existing algorithms (McSherry and Talwar, 2007; Dwork and Roth, 2014; Durfee and Rogers, 2019; Qiao et al., 2021) are essentially optimal.
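As an aside, the oneshot viewpoint discussed above is easy to state in code. The following sketch is ours: the noise scale `b` is deliberately left uncalibrated, since setting it to meet a formal \((\varepsilon,\delta)\)-DP guarantee follows Durfee and Rogers (2019) and Qiao et al. (2021) and is outside the scope of this sketch. It adds i.i.d. noise to all \(m\) entries and reports the \(k\) largest noisy ones; per the discussion above, Gumbel noise corresponds to the oneshot implementation of the peeling exponential mechanism, and Laplace noise to the oneshot Laplace mechanism:

```python
import numpy as np

def oneshot_topk(h, k, b, noise="gumbel", rng=None):
    """Add i.i.d. noise of scale b to every histogram entry and return the
    indices of the k largest noisy entries. Note that this reads all m
    entries, which is exactly the linear access cost that the algorithms
    in this paper are designed to avoid."""
    rng = rng or np.random.default_rng()
    h = np.asarray(h, dtype=float)
    draw = (rng.gumbel(0.0, b, h.size) if noise == "gumbel"
            else rng.laplace(0.0, b, h.size))
    return np.argsort(h + draw)[::-1][:k]
```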
2309.17065
Minimality of the inner automorphism group
By [6], a minimal group $G$ is called $z$-minimal if $G/Z(G)$ is minimal. In this paper, we present the $z$-Minimality Criterion for dense subgroups with some applications to topological matrix groups. For a locally compact group $G$, let $\operatorname{Inn}(G)$ be the group of all inner automorphisms of $G,$ endowed with the Birkhoff topology. Using a theorem by Goto [14], we obtain our main result which asserts that if $G$ is a connected Lie group and $H\in\{G/Z(G), \operatorname{Inn}(G)\},$ then $H$ is minimal if and only if it is centre-free and topologically isomorphic to $\operatorname{Inn}(G/Z(G)).$ In particular, if $G$ is a connected Lie group with discrete centre, then $\operatorname{Inn}(G)$ is minimal. We prove that a connected locally compact nilpotent group is $z$-minimal if and only if it is compact abelian. In contrast, we show that there exists a connected metabelian $z$-minimal Lie group that is neither compact nor abelian.
Dekui Peng, Menachem Shlossberg
2023-09-29T08:55:09Z
http://arxiv.org/abs/2309.17065v2
# More on \(z\)-minimality of topological groups ###### Abstract. The authors of [4] introduced the notion of \(z\)-minimality. A minimal group \(G\) is \(z\)_-minimal_ if \(G/Z(G)\) is minimal. Answering a question of G. Lukacs, we present, in this paper, the \(z\)_-Minimality Criterion_ for dense subgroups. This criterion implies that if \(\mathbb{F}\) is a subfield of a local field of characteristic distinct from \(2\), then the special linear group \(\operatorname{SL}(n,\mathbb{F})\) is \(z\)-minimal precisely when its subgroup \(\operatorname{ST}^{+}(n,\mathbb{F})\), the special upper triangular group, is \(z\)-minimal. Some applications to Number Theory are provided. We prove that the following conditions are equivalent for a positive integer \(n\): 1. \(n\) is not square-free; 2. there exists a subfield \(\mathbb{F}\) of \(\mathbb{C}\) such that \(\operatorname{SL}(n,\mathbb{F})\) is minimal but not \(z\)-minimal; 3. there exists a subfield \(\mathbb{F}\) of \(\mathbb{C}\) such that \(\operatorname{ST}^{+}(n,\mathbb{F})\) is minimal but not \(z\)-minimal. We also answer [18, Question 6] in the positive, showing that both topological products \(G=\prod_{p\in\mathcal{P}}\operatorname{SL}(p+1,(\mathbb{Q},\tau_{p}))\) and \(H=\prod_{p\in\mathcal{P}}\operatorname{ST}^{+}(p+1,(\mathbb{Q},\tau_{p}))\) are minimal, where \(\mathcal{P}\) is the set of all primes and \((\mathbb{Q},\tau_{p})\) is the field of rationals equipped with the \(p\)-adic topology. In another direction, it is proved that if \(G\) is a locally compact \(z\)-minimal group, then the quotient group \(G/Z(G)\) is topologically isomorphic to \(\operatorname{Inn}(G)\), the group of all inner automorphisms of \(G\), endowed with the Birkhoff topology. Key words and phrases: minimal group, \(z\)-minimal group, \(z\)-Minimality Criterion, square-free integer, local field, special linear group, Birkhoff topology Recall that for every \(x\in G,\) the centralizer \(C_{G}(x)=\{y\in G|\ xy=yx\}\) of \(x\) in \(G\) is always unconditionally closed. This implies that for every subgroup \(H\) of \(G,\) the centralizer \[C_{G}(H)=\{y\in G|\ xy=yx,\forall x\in H\}=\bigcap_{x\in H}C_{G}(x)\] of \(H\) in \(G\) is also unconditionally closed. A particular example is that for groups \(X\) and \(Y\), \(Z(X)\times Y\) is unconditionally closed in \(X\times Y\) because it coincides with the centralizer of \(X\times\{e_{Y}\},\) see [13, Lemma 1.11]. It is also known that the \(n\)-centres are unconditionally closed. ### Some preliminary results **Lemma 1.3**.: _A topological group \(G\) is \(z\)-minimal if and only if both \(Z(G)\) and \(G/Z(G)\) are minimal._ Proof.: First we assume that \(G\) is \(z\)-minimal. Since minimality is preserved by taking closed central subgroups (see, for example, [7, Proposition 7.2.5]), \(Z(G)\) is minimal. The minimality of \(G/Z(G)\) follows from the definition of \(z\)-minimal groups. For the other direction, it suffices to show that \(G\) is minimal under the assumption that \(Z(G)\) and \(G/Z(G)\) are minimal. This is an immediate corollary of Merson's Lemma (Fact 1.2), as \(Z(G)\) is unconditionally closed. Recall that minimal abelian groups are necessarily precompact by a theorem of Prodanov-Stoyanov [16]. This implies the following result. **Corollary 1.4**.: _If \(G\) is a \(z\)-minimal group, then the second centre \(Z_{2}(G)\) is precompact. If \(G\) is also complete, then \(Z_{2}(G)\) is compact._ Proof.: Both \(Z(G)\) and \(Z(G/Z(G))=Z_{2}(G)/Z(G)\) are precompact, being minimal abelian.
As precompactness is a three-space property (see [3] for example), it follows that \(Z_{2}(G)\) is precompact. If \(G\) is also complete, then its unconditionally closed subgroup \(Z_{2}(G)\) must also be complete, hence compact. Let \(G\) be a locally compact group and \(\operatorname{Aut}(G)\) be the group of all topological automorphisms of \(G\) endowed with the _Birkhoff topology_ (see [7, p. 260]). This group topology has a local base at the identity formed by the sets \[\mathcal{B}(C,U):=\{f\in\operatorname{Aut}(G)|\ f(g)\in Ug,\ f^{-1}(g)\in Ug\ \forall g\in C\},\] where \(C\) runs over compact subsets and \(U\) runs over neighborhoods of the identity in \(G.\) **Theorem 1.5**.: _Let \(G\) be a locally compact group and \(\operatorname{Inn}(G)\) be the group of all inner automorphisms of \(G\) endowed with the Birkhoff topology \(\tau_{B}.\) If the quotient group \(G/Z(G)\) is minimal, then it is topologically isomorphic to \((\operatorname{Inn}(G),\tau_{B}).\) In particular, if \(G\) is a locally compact \(z\)-minimal group, then \((\operatorname{Inn}(G),\tau_{B})\) is locally compact minimal._ Proof.: It is known that algebraically \(G/Z(G)\) is isomorphic to \(\operatorname{Inn}(G).\) Clearly the action \(\alpha:(G/Z(G),\tau_{q})\times G\to G\) defined by \(\alpha(gZ(G),h)=ghg^{-1}\) is continuous, where \(\tau_{q}\) is the quotient topology on \(G/Z(G)\). By [13, Remark 1.9], if \(P\) is a topological subgroup of \(\operatorname{Aut}(G),\) then the Birkhoff topology is the coarsest group topology on \(P\) which makes the natural action of \(P\) on \(G\) continuous. This implies that \(\tau_{B}\subseteq\tau_{q}.\) Now use the minimality of \((G/Z(G),\tau_{q}).\) ### The \(z\)-Minimality Criterion **Definition 1.6**.: _Let \(H\) be a subgroup of a topological group \(G\). Then \(H\) is essential in \(G\) if \(H\cap L\neq\{e\}\) for every non-trivial closed normal subgroup \(L\) of \(G\)._ The following Minimality Criterion for dense subgroups is well-known (for compact \(G\) see also [15, 19]). **Fact 1.7**.: _[_2_, Minimality Criterion]_ _Let \(H\) be a dense subgroup of a topological group \(G.\) Then \(H\) is minimal if and only if \(G\) is minimal and \(H\) is essential in \(G\)._ During the "Algebra, Topology and Their Interactions 2023 Conference", G. Lukacs asked whether a similar criterion for \(z\)-minimality exists. The following theorem provides the answer. **Theorem 1.8** (\(z\)-Minimality Criterion).: _Let \(H\) be a dense subgroup of a topological group \(G\). Then \(H\) is \(z\)-minimal if and only if \(G\) is \(z\)-minimal and_ 1. \(Z(H)\) _is a dense essential subgroup of_ \(Z(G)\)_; and_ 2. _every closed normal subgroup of_ \(G\) _properly containing_ \(Z(G)\) _contains a non-central element of_ \(H\)_._ Proof.: We first assume that \(H\) is \(z\)-minimal. Then, in particular, \(H\) is minimal; hence so is \(G\). By denseness of \(H\) in \(G\), one has that \(Z(H)=Z(G)\cap H\). Let \(q:G\to G/Z(G)\) be the canonical mapping. Then the kernel of \(q\!\upharpoonright_{H}\) is \(Z(G)\cap H=Z(H)\). The minimality of \(H/Z(H)\) implies that the restriction mapping \(q\!\upharpoonright_{H}:H\to q(H)\) is open; in particular, \(q(H)\) is topologically isomorphic to \(H/Z(H)\). According to [7, 4.3.2 Lemma], \(Z(H)\) must be dense in \(Z(G)\). Since \(q(H)\cong H/Z(H)\) is a dense minimal subgroup of \(G/Z(G)\), the Minimality Criterion implies that \(G/Z(G)\) is minimal (so \(G\) is \(z\)-minimal) and \(q(H)\) is essential in \(G/Z(G)\).
The latter implies (b) since for every closed normal subgroup \(N\) of \(G\) properly containing \(Z(G)\), \(q(N)\) is non-trivial and hence \(q(N)\cap q(H)\neq\{e\}\), or equivalently, \(N\cap H\nsubseteq Z(G)\). It remains to note that \(Z(H)\subseteq Z(G)\), so \(N\cap H\nsubseteq Z(H)\). To see (a), note that minimality is preserved by taking centres (so \(Z(H)\) is minimal) and we have already proved that \(Z(H)\) is dense in \(Z(G)\), thus the Minimality Criterion applies. Now we assume that \(G\) is \(z\)-minimal and (a) and (b) are satisfied. Again the centre \(Z(G)\) of the minimal group \(G\) is minimal. By (a) and the Minimality Criterion, \(Z(H)\) is minimal. Moreover, as \(Z(H)\) is dense in \(Z(G)\), the image \(q(H)\) of \(H\) under the canonical mapping \(q:G\to G/Z(G)\) is topologically isomorphic to \(H/(H\cap Z(G))=H/Z(H)\). It remains to see that \(q(H)\) is a minimal group. By the Minimality Criterion, it suffices to show that \(q(H)\) is essential in \(G/Z(G)\). Let \(M\) be a closed normal subgroup of \(G/Z(G)\) distinct from the trivial group. Then \(q^{-1}(M)\) is a closed normal subgroup of \(G\) properly containing \(Z(G)\). By (b), there exists \(x\in q^{-1}(M)\cap H\) such that \(x\notin Z(H)\). As \(Z(H)=Z(G)\cap H\) and \(x\in H\), this implies that \(x\) does not belong to \(Z(G)\). Thus, \(e\neq q(x)\in M\cap q(H)\). In other words, \(q(H)\) is essential in \(G/Z(G)\) and \(H/Z(H)\) is minimal. Now apply Lemma 1.3. ## 2. \(z\)-Minimality of \(\operatorname{ST}^{+}(n,\mathbb{F})\) and \(\operatorname{SL}(n,\mathbb{F})\) Let \(\mathbb{F}\) be a topological field. Denote by \(\operatorname{SL}(n,\mathbb{F})\) the special linear group over \(\mathbb{F}\) of degree \(n\) equipped with the pointwise topology inherited from \(\mathbb{F}^{n^{2}}\), and by \(\operatorname{ST}^{+}(n,\mathbb{F})\) its topological subgroup consisting of upper triangular matrices. Let \(\mathsf{N}:=\operatorname{UT}(n,\mathbb{F})\) and \(\mathsf{A}\) be the subgroups of \(\operatorname{ST}^{+}(n,\mathbb{F})\) consisting of upper unitriangular matrices and diagonal matrices, respectively. Note that \(\mathsf{N}\) is normal in \(\operatorname{ST}^{+}(n,\mathbb{F})\) and \(\operatorname{ST}^{+}(n,\mathbb{F})\cong\mathsf{N}\rtimes_{\alpha}\mathsf{A}\), where \(\alpha\) is the action by conjugations. It is known that \(\mathsf{N}\) is the derived subgroup of \(\operatorname{ST}^{+}(n,\mathbb{F}).\) Recall also that \(\operatorname{SL}(n,\mathbb{F})\) has finite centre (e.g., see [17, 3.2.6] or [20, page 78]) \[Z=Z(\operatorname{SL}(n,\mathbb{F}))=\{\lambda I_{n}:\lambda\in\mu_{n}\},\] where \(\mu_{n}\) is a finite cyclic group consisting of the \(n\)-th roots of unity in \(\mathbb{F}\) and \(I_{n}\) is the identity matrix of size \(n.\) Moreover, \(Z(\operatorname{ST}^{+}(n,\mathbb{F}))=Z\) and \(\operatorname{ST}^{+}(n,\mathbb{F})/Z\) is centre-free (see [18, Lemma 1]). _Remark 2.1_.: If \(\mathbb{F}\) is a subfield of a local field \(P,\) then its completion \(\widehat{\mathbb{F}}\) is a topological field that can be identified with the closure of \(\mathbb{F}\) in \(P\). In case \(\mathbb{F}\) is infinite, \(\widehat{\mathbb{F}}\) is also a local field, as the local field \(P\) contains no infinite discrete subfields (see [11, p. 27]). Clearly, if \(G\) is totally minimal, then it is \(z\)-minimal. The following lemma provides a case in which the converse is also true. **Lemma 2.2**.: _Let \(\mathbb{F}\) be a topological field.
Then \(\operatorname{SL}(n,\mathbb{F})\) is totally minimal if and (only if) it is \(z\)-minimal._ Proof.: Being algebraically simple (see [17, 3.2.9]), the projective special linear group \[\operatorname{PSL}(n,\mathbb{F})=\operatorname{SL}(n,\mathbb{F})/Z\] is totally minimal if \(\operatorname{SL}(n,\mathbb{F})\) is \(z\)-minimal. Since \(Z\) is a finite normal subgroup of \(\operatorname{SL}(n,\mathbb{F})\), it follows from [7, Theorem 7.3.1] that \(\operatorname{SL}(n,\mathbb{F})\) is totally minimal. The following result is an immediate corollary of Theorem 3.17, Theorem 3.19 and Theorem 4.3 of [14]. **Corollary 2.3**.: _Let \(\mathbb{F}\) be a local field and \(n\in\mathbb{N}.\) Then_ 1. \(\operatorname{SL}(n,\mathbb{F})\) _is_ \(z\)_-minimal (equivalently, totally minimal);_ 2. _if_ \(\mathbb{F}\) _has characteristic distinct from_ \(2\)_, then_ \(\operatorname{ST}^{+}(n,\mathbb{F})\) _is_ \(z\)_-minimal._ **Theorem 2.4**.: _Let \(\mathbb{F}\) be a subfield of a local field of characteristic distinct from \(2.\) Then \(\operatorname{SL}(n,\mathbb{F})\) is \(z\)-minimal if and only if \(\operatorname{ST}^{+}(n,\mathbb{F})\) is \(z\)-minimal._ Proof.: Clearly, we may assume that \(\mathbb{F}\) is infinite and \(n\geq 2.\) Assume first that \(\operatorname{ST}^{+}(n,\mathbb{F})\) is \(z\)-minimal and let us show that \(\operatorname{SL}(n,\mathbb{F})\) is totally minimal. Note that to prove this implication we do not need to assume anything about the characteristic of \(\mathbb{F}.\) By [14, Theorem 4.7], it suffices to show that \(Z(\operatorname{SL}(n,\mathbb{F}))=Z(\operatorname{SL}(n,\widehat{\mathbb{F}})).\) In view of the \(z\)-minimality of \(\operatorname{ST}^{+}(n,\mathbb{F})\) and item (a) of Theorem 1.8, we deduce that \(Z(\operatorname{ST}^{+}(n,\mathbb{F}))=Z(\operatorname{ST}^{+}(n,\widehat{\mathbb{F}}))\), since these centres are finite. Moreover, we always have \(Z(\operatorname{ST}^{+}(n,\mathbb{F}))=Z(\operatorname{SL}(n,\mathbb{F}))\) and \(Z(\operatorname{ST}^{+}(n,\widehat{\mathbb{F}}))=Z(\operatorname{SL}(n,\widehat{\mathbb{F}}))\), so it follows that \(Z(\operatorname{SL}(n,\mathbb{F}))=Z(\operatorname{SL}(n,\widehat{\mathbb{F}}))\), as needed. To prove the converse implication, we assume, in addition, that the characteristic of \(\mathbb{F}\) is distinct from \(2.\) By Corollary 2.3, \(\operatorname{ST}^{+}(n,\widehat{\mathbb{F}})\) is \(z\)-minimal. By Lemma 2.2, \(\operatorname{SL}(n,\mathbb{F})\) is totally minimal. Then \(Z=Z(\operatorname{SL}(n,\mathbb{F}))=Z(\operatorname{SL}(n,\widehat{\mathbb{F}}))\) in view of [14, Theorem 4.7]. So, we also have \(Z=Z(\operatorname{ST}^{+}(n,\mathbb{F}))=Z(\operatorname{ST}^{+}(n,\widehat{\mathbb{F}})).\) Let us assume first that \(n\geq 3.\) Observe that by the Minimality Criterion, it suffices to prove that the dense subgroup \(\mathrm{ST}^{+}(n,\mathbb{F})/Z\) is essential in the minimal group \(\mathrm{ST}^{+}(n,\widehat{\mathbb{F}})/Z\). To this aim, let \(L\) be a non-trivial closed normal subgroup of \(\mathrm{ST}^{+}(n,\widehat{\mathbb{F}})/Z.\) Clearly, this means that \(q^{-1}(L)\) is a normal subgroup of \(\mathrm{ST}^{+}(n,\widehat{\mathbb{F}})\) which is not central, where \[q:\mathrm{ST}^{+}(n,\widehat{\mathbb{F}})\to\mathrm{ST}^{+}(n,\widehat{\mathbb{F}})/Z\] is the quotient map. Then, since \(n\geq 3\), the normal subgroup \(q^{-1}(L)\) must non-trivially intersect \(\mathrm{UT}(n,\widehat{\mathbb{F}})\), the derived subgroup of \(\mathrm{ST}^{+}(n,\widehat{\mathbb{F}})\), in view of [21, Lemma 2.3].
By [18, Lemma 2], \(q^{-1}(L)\) intersects \(\mathrm{UT}(n,\mathbb{F})\) non-trivially. This implies that \(L\) intersects \(\mathrm{ST}^{+}(n,\mathbb{F})/Z\) non-trivially. Now we prove the case \(n=2\). By Corollary 2.3(2), \(G:=\mathrm{ST}^{+}(2,\widehat{\mathbb{F}})\) is \(z\)-minimal. Hence, to see that \(H:=\mathrm{ST}^{+}(2,\mathbb{F})\) is \(z\)-minimal, we need to verify the validity of conditions (a) and (b) of Theorem 1.8. Condition (a) is satisfied as \(Z(H)=Z(G)=\{\pm I\}.\) According to the argument in the proof of [14, Theorem 3.4], every normal subgroup \(L\) of \(G\) properly containing \(Z(G)\) contains the element \(\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right);\) thus (b) is satisfied. Recall that \(n\in\mathbb{N}\) is _square-free_ if \(n\) is divisible by no square number other than \(1\). **Proposition 2.5**.: _Let \(n\in\mathbb{N}\) and let \(\mathbb{F}\) be a subfield of a local field. If \(n\) is square-free, then \(\mathrm{SL}(n,\mathbb{F})\) is minimal if and only if it is totally minimal. If, in addition, \(\mathbb{F}\) has characteristic distinct from \(2,\) then \(\mathrm{ST}^{+}(n,\mathbb{F})\) is minimal if and only if it is \(z\)-minimal._ Proof.: By [14, Proposition 5.1], if \(H:=\mathrm{SL}(n,\mathbb{F})\) is minimal, then \(Z(H)\) meets non-trivially any non-trivial central subgroup of \(G:=\mathrm{SL}(n,\widehat{\mathbb{F}})\). On the other hand, \(Z(G)\) is isomorphic to the group of solutions of the equation \(x^{n}=1\) in \(\widehat{\mathbb{F}}\). Since \(n\) is square-free, this centre is a cyclic group whose order is a product of distinct primes. It follows that any subgroup of \(Z(G)\) with the announced property can only be \(Z(G)\) itself. Apply [14, Theorem 4.7] to deduce that \(H\) is totally minimal. By [18, Theorem 2] and Theorem 2.4, if \(\mathbb{F}\) has characteristic distinct from \(2\), then \(\mathrm{SL}(n,\mathbb{F})\) is minimal (\(z\)-minimal) if and only if \(\mathrm{ST}^{+}(n,\mathbb{F})\) is minimal (resp., \(z\)-minimal). This proves the last assertion. **Theorem 2.6**.: _Let \(n\) be a positive integer. Then the following conditions are equivalent:_ * (a) \(n\) _is not square-free;_ * (b) _there exists a subfield_ \(\mathbb{F}\) _of_ \(\mathbb{C}\) _such that_ \(\mathrm{SL}(n,\mathbb{F})\) _is minimal but not totally minimal;_ * (c) _there exists a subfield_ \(\mathbb{F}\) _of_ \(\mathbb{C}\) _such that_ \(\mathrm{ST}^{+}(n,\mathbb{F})\) _is minimal but not_ \(z\)_-minimal._ Proof.: \((b)\Rightarrow(a)\) Use Proposition 2.5. \((a)\Rightarrow(b)\) Now assume that there exists a prime number \(p\) with \(p^{2}\mid n\). Let \(n=pm\), \(\alpha=e^{\frac{2\pi i}{n}}\) and \(N=\langle\alpha\rangle\). Consider the subfield \(\mathbb{F}=\mathbb{Q}(\alpha^{p})\) of \(\mathbb{C}\). Then \(N^{\prime}:=\mathbb{F}\cap N=\langle\alpha^{p}\rangle\). Since the subgroup \(N^{\prime}\) contains the socle of the finite cyclic group \(N\), it must non-trivially meet any non-trivial subgroup of \(N\). By [18, Proposition 2], \(\mathrm{SL}(n,\mathbb{F})\) is minimal but not totally minimal. \((b)\Leftrightarrow(c)\) Use [18, Theorem 2], Theorem 2.4 and Lemma 2.2, taking into account that the field \(\mathbb{C}\) has zero characteristic. The next proposition answers [18, Question 6] in the positive. **Proposition 2.7**.: _Let \(\mathcal{P}\) be the set of all primes.
Then \(G=\prod_{p\in\mathcal{P}}\operatorname{SL}(p+1,(\mathbb{Q},\tau_{p}))\) and \(H=\prod_{p\in\mathcal{P}}\operatorname{ST}^{+}(p+1,(\mathbb{Q},\tau_{p}))\) are both minimal._ Proof.: For every \(p\in\mathcal{P}\) we have \(Z(\operatorname{SL}(p+1,\mathbb{Q}_{p}))=Z(\operatorname{ST}^{+}(p+1,\mathbb{Q}_{p}))\subseteq\{I,-I\}.\) By [14, Corollary 4.8], \(\operatorname{SL}(p+1,(\mathbb{Q},\tau_{p}))\) is totally minimal. Since \(Z(G)\) is compact, we deduce from [4, Theorem 4.8] that \(G\) is minimal. By Theorem 2.4, \(\operatorname{ST}^{+}(p+1,(\mathbb{Q},\tau_{p}))\) is \(z\)-minimal for every prime \(p.\) Using [4, Theorem 4.8] again and the fact that \(Z(H)=Z(G),\) we establish the minimality of \(H\). ## 3. Open questions and concluding remarks **Question 3.1**.: _Is a product of locally compact \(z\)-minimal groups \(z\)-minimal?_ In view of Theorem 1.5 it is also natural to ask: **Question 3.2**.: _Let \(G\) be a locally compact group for which \((\operatorname{Inn}(G),\tau_{B})\) is locally compact minimal. Is \(\operatorname{Inn}(G)^{\kappa}\) minimal for every \(\kappa\)?_ Recall that a complete minimal abelian group must be compact, while there exist locally compact non-minimal nilpotent groups (even of class \(2\)). **Question 3.3**.: _Let \(G\) be a two-step nilpotent locally compact minimal group. Is \(G^{\kappa}\) minimal for every \(\kappa\)?_ It is still open whether the Fermat and Mersenne numbers are square-free (see [9]). As a corollary of Proposition 2.5, we obtain the following result. **Corollary 3.4**.: _Let \(\mathbb{F}\) be a subfield of a local field, \(F_{n}=2^{2^{n}}+1\) and \(M_{p}=2^{p}-1\) be a Fermat number and a Mersenne number, respectively, where \(n\in\mathbb{N}\) and \(p\) is a prime number._ 1. _If_ \(\operatorname{SL}(F_{n},\mathbb{F})\) _is minimal but not totally minimal, then_ \(F_{n}\) _is not square-free._ 2. _If_ \(\operatorname{SL}(M_{p},\mathbb{F})\) _is minimal but not totally minimal, then_ \(M_{p}\) _is not square-free._ **Definition 3.5**.: _A minimal group \(G\) is \(z_{n}\)-minimal if \(G/Z_{n}(G)\) is minimal._ **Lemma 3.6**.: _Let \(X\) and \(Y\) be groups. Then \(X\times Z_{n}(Y)\) is unconditionally closed in \(X\times Y\) for every \(n\in\mathbb{N}.\)_ Proof.: We prove the assertion by induction. For \(n=1\) use [13, Lemma 1.11]. Let \(\sigma\) be a Hausdorff group topology on \(X\times Y\) and \(n\in\mathbb{N}.\) By the induction hypothesis, \(X\times Z_{n-1}(Y)\) is \(\sigma\)-closed in \(X\times Y.\) So, \[(X\times Y)/(X\times Z_{n-1}(Y))\] is Hausdorff. Being the centre of the latter group, \((X\times Z_{n}(Y))/(X\times Z_{n-1}(Y))\) is closed in \((X\times Y)/(X\times Z_{n-1}(Y)).\) By the _third isomorphism theorem for topological groups_ (which is easy to verify for all topological groups; see [1, Theorem 1.5.18] and [10, Proposition 3.6]), we have \[(X\times Y)/(X\times Z_{n}(Y))\cong((X\times Y)/(X\times Z_{n-1}(Y)))/((X\times Z_{n}(Y))/(X\times Z_{n-1}(Y))).\] This implies that \(X\times Z_{n}(Y)\) is \(\sigma\)-closed in \(X\times Y.\) Mimicking the proof of [4, Theorem 4.8] we can show the following. **Proposition 3.7**.: _Let \(\{G_{i}|\ i\in I\}\) be a family of \(z_{n}\)-minimal groups for some \(n\in\mathbb{N}.\) If \(\Pi_{i\in I}Z_{n}(G_{i})\) is minimal, then \(\Pi_{i\in I}G_{i}\) is minimal._
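We conclude with a worked instance of the construction in the proof of Theorem 2.6; the specific numbers below are ours, but the recipe is exactly the one given in that proof. Take \(n=8\) and \(p=2\), so \(p^{2}\mid n\). Let \(\alpha=e^{\frac{2\pi i}{8}}\), \(N=\langle\alpha\rangle\) and \(\mathbb{F}=\mathbb{Q}(\alpha^{2})=\mathbb{Q}(i)\). Since \(\alpha=\frac{1+i}{\sqrt{2}}\notin\mathbb{Q}(i)\), we get \(N^{\prime}=\mathbb{F}\cap N=\langle i\rangle\), a proper subgroup of order \(4\) in the cyclic group \(N\) of order \(8\). As \(N^{\prime}\) contains the socle \(\{\pm 1\}\) of \(N\), it meets every non-trivial subgroup of \(N\) non-trivially. Hence, by [18, Proposition 2], \(\operatorname{SL}(8,\mathbb{Q}(i))\) is minimal but not totally minimal, and then, by [18, Theorem 2] together with Theorem 2.4 and Lemma 2.2, \(\operatorname{ST}^{+}(8,\mathbb{Q}(i))\) is minimal but not \(z\)-minimal.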
2309.07276
Language-Conditioned Observation Models for Visual Object Search
Object search is a challenging task because when given complex language descriptions (e.g., "find the white cup on the table"), the robot must move its camera through the environment and recognize the described object. Previous works map language descriptions to a set of fixed object detectors with predetermined noise models, but these approaches are challenging to scale because new detectors need to be made for each object. In this work, we bridge the gap in realistic object search by posing the search problem as a partially observable Markov decision process (POMDP) where the object detector and visual sensor noise in the observation model is determined by a single Deep Neural Network conditioned on complex language descriptions. We incorporate the neural network's outputs into our language-conditioned observation model (LCOM) to represent dynamically changing sensor noise. With an LCOM, any language description of an object can be used to generate an appropriate object detector and noise model, and training an LCOM only requires readily available supervised image-caption datasets. We empirically evaluate our method by comparing against a state-of-the-art object search algorithm in simulation, and demonstrate that planning with our observation model yields a significantly higher average task completion rate (from 0.46 to 0.66) and more efficient and quicker object search than with a fixed-noise model. We demonstrate our method on a Boston Dynamics Spot robot, enabling it to handle complex natural language object descriptions and efficiently find objects in a room-scale environment.
Thao Nguyen, Vladislav Hrosinkov, Eric Rosen, Stefanie Tellex
2023-09-13T19:30:53Z
http://arxiv.org/abs/2309.07276v1
# Language-Conditioned Observation Models for Visual Object Search ###### Abstract Object search is a challenging task because when given complex language descriptions (_e.g._, "find the white cup on the table"), the robot must move its camera through the environment and recognize the described object. Previous works map language descriptions to a set of fixed object detectors with predetermined noise models, but these approaches are challenging to scale because new detectors need to be made for each object. In this work, we bridge the gap in realistic object search by posing the search problem as a partially observable Markov decision process (POMDP) where the object detector and visual sensor noise in the observation model is determined by a single Deep Neural Network conditioned on complex language descriptions. We incorporate the neural network's outputs into our language-conditioned observation model (LCOM) to represent dynamically changing sensor noise. With an LCOM, any language description of an object can be used to generate an appropriate object detector and noise model, and training an LCOM only requires readily available supervised image-caption datasets. We empirically evaluate our method by comparing against a state-of-the-art object search algorithm in simulation, and demonstrate that planning with our observation model yields a significantly higher average task completion rate (from 0.46 to 0.66) and more efficient and quicker object search than with a fixed-noise model. We demonstrate our method on a Boston Dynamics Spot robot, enabling it to handle complex natural language object descriptions and efficiently find objects in a room-scale environment. ## I Introduction Object search is a challenging task because the robot has incomplete knowledge of the environment, limited field of view, and noisy sensors. When asked to find an object in an environment, the robot must first infer the desired object from the language instruction, then efficiently move around the space to look for the object. Most images captured by the robot in this process will not contain the target object. Furthermore, even when the object is in the robot's field of view, it might not be detected due to occlusion, sensor noise, the viewing angle, the object's distance from the robot, etc. There have been many previous works on improving the efficiency and accuracy of robot object search by using prior semantic knowledge [1], active visual search [2, 3], object manipulation [4, 5, 6], and belief factorization for multi-object search [7, 8]. However, these works have often assumed that the target objects are specified with very simple language (such as "cup" for the object's class), and thus cannot fully utilize more complex language descriptions (such as "white cup") to avoid exhaustively searching over similar object instances in the environment. Furthermore, it is usually assumed either that the robot has a fixed-accuracy object detector [1, 3, 7, 8] or that detection noise comes only from occlusion [4, 5]. This is challenging to scale as new occlusion models and detectors have to be made for new objects. Additionally, modeling the detector as having a fixed accuracy prevents the robot from dynamically reasoning about observations it gets from the detector. This means the robot is unable to decide to gather more data instead of trusting a low-confidence detection, or to trust a high-confidence detection more readily, potentially leading to reduced object search success rates and efficiencies.
The computer vision community has developed deep learning models that can detect objects with high accuracy [9, 10], even given complex language descriptions of the desired object such as "white cup on the left" [11, 12, 13]. However, these models often assume that the object is somewhere in the input image and must be localized within that image. In contrast, when searching for objects, most images captured by the robot will not contain the object being searched for (Figure 1). Our work addresses these problems by embedding a deep-learned object detector within a Partially Observable Markov Decision Process (POMDP) [14]. Our approach takes as input a language description of an object, and uses it to condition a camera-based observation model that is used to plan object-search actions and update the agent's belief about the object's pose. To achieve this, we modify the training process for the detector from Hu et al. [11] to handle the large number of images which do not contain the target object, and incorporate the detector's confidence scores into the POMDP belief updates. This allows us to handle complex language descriptions and search for objects more efficiently in household environments by reasoning dynamically. Our contributions are five-fold: **1)** An experimental analysis on confidence scores outputted from language-conditioned visual segmentation models as a proxy for different sources of observation noise. **2)** A novel class of visual observation models (Language-Conditioned Observation Models - LCOMs) whose detections and parameters are conditioned on natural language. **3)** A novel decision making framework for object search that leverages LCOMs to use natural language to account for scene-dependent detection accuracy when estimating state uncertainty for planning. **4)** A set of experiments on simulated robot hardware that compare the performance of planning models using LCOMs against fixed-noise sensor models on the object search task. **5)** A demonstration of our method on a Boston Dynamics Spot robot [15], which enables Spot to handle complex natural language object descriptions and efficiently find objects in a room-scale environment, without using fiducial markers. Fig. 1: Our system takes as input a natural language description of the target object, and constructs a detector for that object based on the description. It addresses the problem that most images captured by a robot when searching for an object do not contain that object by incorporating a modified training process and using the confidence score of the detector in a POMDP model for object search. ## II Related Work Related work for robot object search generally falls into one of two categories: _model-based_ and _end-to-end policy learning_. Model-based approaches separate state estimation and planning to leverage probabilistic inference, whereas model-free approaches leverage deep neural networks to learn a policy end-to-end. There is a collection of works that employ deep learning for end-to-end visual and object search [16, 17, 18, 19] or modular differentiable components [20, 21]. Our work differs from these in that we perform model-based planning to leverage our known dynamics models. Model-based planning has the potential to generalize better to new environments and systems with less training data because we encode a model of the robot's sensor and actuation capabilities, and only use deep learning for visual processing.
POMDPs [22] are a framework for sequential decision making under uncertainty frequently used for object search problems. Li et al. [4] and Xiao et al. [5] treat object search in clutter as a POMDP that can be efficiently solved using approximate online planners, by constraining planning samples based on spatial constraints and by conditioning action selection on the current belief state, respectively. However, their observation models are only based on occlusion statistics calculated from object region overlap. Our proposed observation model can instead account for errors not solely derived from occlusion by conditioning on complex language. Danielczuk et al. [6] train a deep learning model to segment colored masks for objects in a pile from RGB-D images and score each mask on whether it belongs to the target object. They, however, use a fixed object priority policy for action selection and assume a fixed sensor pose, while we focus on planning how to explore a space for the purpose of object search by leveraging an active sensor. Aydemir et al. [3] frame the object search problem as active visual search and calculate candidate viewpoints based on a probability distribution over the search region, which is informed by prior knowledge of correspondences between objects and semantic room categories. However, they do not account for sensor error and assume the object to be detected if it is in the robot's field of view. Atanasov et al. [2] plan a sequence of sensor views to effectively estimate an object's orientation. These approaches are similar to our work in that they account for realistic sensor model errors, but unlike our work they do not use a general camera-based object detector. Wandzel et al. [7] introduce Object-Oriented POMDP (OO-POMDP) to factorize the robot's belief into independent object distributions, enabling the size of the belief to scale linearly in the number of objects, and employ it for efficient multi-object search. Zheng et al. [8] extend OO-POMDP for efficient multi-object search in 3D space. Both of these works assume simple language inputs and fixed-accuracy object detectors. Our work builds on these frameworks but instead explores using a deep-learned detector that takes a natural language phrase and a camera image as input, yielding an object detector that models varying levels of accuracy. The computer vision community has developed deep learning models trained on object segmentation datasets that can detect objects with high accuracy [9, 10, 11, 12, 13]. These models learned, in a self-supervised manner, to output confidence scores along with their detection results. The output confidence scores closely track the models' detection accuracy, which dynamically changes depending on the input images. We build on the model developed by Hu et al. [11] for our object detector because it is trained to handle referring expressions--complex noun phrases to describe objects. ## III Preliminaries POMDPs are a framework for modeling sequential decision making problems where the environment is not fully observable. Formally, a POMDP can be defined as a tuple \(<S,A,\Omega,T,O,R>\), where \(S,A,\Omega\) denote the state, action, and observation spaces of the problem, respectively. After the agent takes action \(a\in A\), the environment state transitions from \(s\in S\) to \(s^{\prime}\in S\) following the transitional probability distribution \(T(s,a,s^{\prime})=p(s^{\prime}|s,a)\).
As a result of its action and the state transition, the agent receives an observation \(z\in\Omega\) following the observational probability distribution \(O(s^{\prime},a,z)=p(z|s^{\prime},a)\), and a real-valued reward \(R(s,a)\). Because the environment is partially observable, the agent does not have full knowledge of the current state \(s\) and instead maintains a _belief state_ \(b\) which is a probability distribution over the states in \(S\). The agent starts with an initial belief \(b_{0}\) and updates its belief after taking an action and receiving an observation (where \(\eta\) is the normalizing constant): \(b^{\prime}(s^{\prime})=\eta O(s^{\prime},a,z)\sum_{s\in S}T(s,a,s^{\prime})b(s)\). A policy \(\pi\) is a mapping from belief states to actions. The agent's task is to find a policy that maximizes the expected sum of discounted rewards given an initial belief: \(V^{\pi}(b_{0})=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})\middle|a_{t}=\pi(b_{t})\right]\) where the discount factor \(\gamma\) determines the impact of future rewards on current decision making. While many problems can be modeled by POMDPs, they are typically computationally intractable for exact planning in large domains [23]. To address the planning complexity, we use the sampling-based planner PO-UCT [24], which uses Monte-Carlo tree search with upper confidence bounds to select an action by estimating the best state-action values using rollouts conditioned on states sampled from the current belief state, and then performs an exact belief update each time step based on the incoming observation and performed action. PO-UCT has successfully been used in robotic object-search settings [7, 8]. However, we note that our contribution is not dependent on the specific method of planning and state estimation, and LCOMs would be useful in any approach that requires a model of the observation probability distribution. ## IV Object Search Formulation We model object search as a POMDP with an observation model corresponding to a deep-learned object detector. ### _Planning Framework_ To model the object search problem, we assume access to an occupancy-grid map, \(M\), which is an \(m\times n\) grid that marks locations as either empty or occupied, and is used for defining the space of positions and directions in the environment. We assume an object is completely contained within one of the grid cells in the map. Our main contribution is the novel language-conditioned observation model (LCOM), which modifies the observation model dynamically based on the results of the deep-learned object detector, and which we describe in detail in Section IV-B. Formally, we define the object search POMDP problem as a 10-tuple: \(<o_{d},L,S,A,T,R,\gamma,\Omega,h_{L},O>\) 1. \(o_{d}\): is a desired object that exists in the environment (not including the robot). The desired object \(o_{d}\) has a 2D position attribute \((x_{o_{d}},y_{o_{d}})=o_{d}\), representing its discrete position in the occupancy-grid map \(M\). The desired object is used to define the reward function. 2. \(L\): is a string of words representing the natural language command given by the human, such as "The white cup on the table." \(L\) is only used to condition the visual observation model and transform raw images into our fan-shaped sensor model. We defer more details to Section IV-B. In this work we assume \(L\) to be given at the start and remain constant throughout the interaction, and defer handling dynamically changing language to future work. 3.
\(S\): is a set of states, where each state \(s\in S\) is a \(2\) dimensional vector \((o_{d},r)=s\), where \(r=(r_{x},r_{y},r_{o})\) is a 2D position and discrete orientation (NORTH, EAST, SOUTH, WEST) for the robot in \(M\). We assume \(r\) is fully observable and \(o_{d}\) is only partially observable, yielding a mixed-observable state. This assumption is equivalent to assuming our robot is equipped with a LIDAR sensor and has previously run SLAM [25] and localized itself within that map, but does not know where the desired object is currently located. 4. \(A\): is a set of actions the robot can execute to move around the map, observe object locations, and declare the desired object as found. Specifically, we have three types of parameterized actions: 1. \(Move\)(_DIR_): points the robot in direction _DIR_ and moves it one grid in that direction, with _DIR_ being either NORTH, EAST, SOUTH, WEST. 2. \(Look\): has the robot execute a look action from its current position and orientation \((r_{x},r_{y},r_{o})\). 3. \(Find(X,Y)\): has the robot attempt to find the desired object \(o_{d}\) at grid cell \((X,Y)\). If \(o_{d}\) is at \((X,Y)\), the action will mark the object as found and terminate the episode. 5. \(T\): is a deterministic transition function, where \(Move\) actions transition the robot to different states by changing its position and orientation \((r_{x},r_{y},r_{o})\). \(Find\) actions can transition the robot to a terminal state after finding the desired object. 6. \(R\): is a reward function, where all \(Move\) actions receive \(-2\) reward each, \(Look\) receives \(-1\) reward, and \(Find(X,Y)\) receives a \(1000\) reward when done at the location of the desired object \((X,Y)==(x_{o_{d}},y_{o_{d}})\) and \(-1000\) otherwise. 7. \(\gamma\): is the discount factor, which we set to \(0.9\). 8. \(\Omega\): is the set of observations from our sensor, where each \(\omega\in\Omega\) is a pair of RGB and depth images. 9. \(h_{L}\): \(\Omega\to z^{s},c^{s}\) is a language-conditioned observation-mapping function that transforms raw images into observations from the same fan-shaped sensor model described in Wandzel et al. [7] and confidence scores for each object detection. If the desired object \(o_{d}\) is not detected by the sensor, \(z^{s}=NULL\). Otherwise, \(z^{s}\) is the location \((X,Y)\) where \(o_{d}\) is detected in the discretized fan-shaped region \(V\). \(c^{s}\) represents the object-specific observation confidence score. We describe how \(h_{L}\) is used for LCOMs in Section IV-B, and our particular instantiation of \(h_{L}\) for our experiments in Section IV-C. 10. \(O\): is the Language-Conditioned Observation Model (LCOM), which assigns probabilities to observations \(z_{t}\) based on the current state \(s_{t}\), action \(a_{t}\), and natural language command \(L\). The \(Move\) actions always produce the \(NULL\) observation, the \(Look\) action produces noisy fan-shaped measurements conditioned on the language, and the \(Find(X,Y)\) action always produces the \(NULL\) observation except when \((X,Y)==(x_{o_{d}},y_{o_{d}})\). We discuss the observation model in more detail in the following subsection. ### _Language-Conditioned Observation Model (LCOM)_ Figure 2 presents an overview of LCOM. When we receive an RGB-D sensor observation, \(\omega\), we can transform it into our fan-shaped sensor observation \(z^{s}\) and associated confidence scores \(c^{s}\) by using the language-conditioned observation mapper \(h_{L}(\omega)=z^{s},c^{s}\). 
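As a minimal illustration of how such an observation feeds the exact belief update from Section III, consider the following sketch for the static-object case (all function and variable names here are ours for illustration, not the authors' implementation; the LCOM instantiation of the `obs_prob` callback is developed in the remainder of this subsection):

```python
import numpy as np

def belief_update(belief, obs_prob, z_s, c_s, robot_pose, action):
    """One exact Bayes-filter step, b'(s') = eta * O(s', a, z) * sum_s T * b(s),
    over a discretized belief about the object's grid cell.

    belief: (m, n) array over object cells, summing to 1. Since the target
    object is static, the transition term reduces to the identity for the
    object state. obs_prob stands in for the LCOM O, conditioned on the
    detection z_s and its confidence score c_s.
    """
    rows, cols = belief.shape
    new_belief = np.zeros_like(belief)
    for x in range(rows):
        for y in range(cols):
            new_belief[x, y] = belief[x, y] * obs_prob(
                z_s, c_s, obj_cell=(x, y),
                robot_pose=robot_pose, action=action)
    total = new_belief.sum()
    # eta normalization; keep the prior if the observation has zero support
    return new_belief / total if total > 0 else belief
```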
LCOMs are independent of any particular instantiation of \(h_{L}\) as long as they satisfy the functional definition described in Section IV-A, and for the rest of this section's discussion we treat \(h_{L}\) as a black box function. In our experiments, we instantiate \(h_{L}\) using a deep neural network. We treat \(z^{s}\) as having a probability of being drawn from three mutually exclusive and exhaustive events: a true positive (\(A\)), a false positive (\(B\)), or a true or false negative (\(C\)). More formally, let \(A\) be when \(z^{s}\) is from the desired object \(o_{d}\) and \(z^{s}\in V\), \(B\) be when \(z^{s}\in V\) but \(z^{s}\) comes from other sources besides \(o_{d}\), and \(C\) be when \(z^{s}=NULL\). We assume the \(Find(X,Y)\) action always gives perfect information about the potential object at location \((X,Y)\) (_i.e.,_ observations resulting from \(Find\) are not language-conditioned). In simulation this is reflected by knowing the ground truth state, and in real life this can be reflected by asking a human to verify the selected location. For the \(Look\) action, we parameterize the probability of each of the events and the noise model for the observation conditioned on that event based on the associated confidence score \(c^{s}\), and decompose the observation model \(p(z^{s}|s,a)\) into: \[p(z^{s}|s,a,c^{s})=\sum_{e\in\{A,B,C\}}p(z^{s}|e,s,a,c^{s})p(e|s,a,c^{s}) \tag{1}\] If event \(A\) occurs, the observation is distributed normally with \(\mu\) being the true object position: \(p(z^{s}|A,s,a,c^{s})=\eta^{\prime}Norm(z^{s}|\mu,\Sigma)\). \(\eta^{\prime}\) is the normalization factor, and the covariance matrix is defined by \(\Sigma=\sigma\textbf{I}^{2\times 2}\). If event \(B\) occurs, the observation is distributed uniformly within the sensor region: \(p(z^{s}|B,s,a,c^{s})=\frac{1}{|V|}\). If event \(C\) occurs, the null observation has nearly 1 probability while any other observation has nearly 0 probability, which we implement with additive smoothing. Similar to Wandzel et al. [7], we define the probability of the events as \(p(A|s,a)=\alpha\), \(p(B|s,a)=\beta\), \(p(C|s,a)=\gamma\), where \(\alpha+\beta+\gamma=1\). The probabilities of these events are conditioned on whether or not the desired object \(o_{d}\) is in the fan-shaped sensing region \(V\), which is defined as: \[(\alpha,\beta,\gamma)=\begin{cases}(\epsilon_{TPR},0,1-\epsilon_{TPR})&\text{if $o_{d}$ is in $V$}\\ (0,1-\epsilon_{TNR},\epsilon_{TNR})&\text{if $o_{d}$ is not in $V$}\end{cases} \tag{2}\] where \(\epsilon_{TPR}\) represents the sensor's true positive rate, and \(\epsilon_{TNR}\) represents its true negative rate. Together \(\sigma,\epsilon_{TPR}\), and \(\epsilon_{TNR}\) define the sensor's overall accuracy. To implement the function \(g_{L}\), which transforms the confidence scores into the sensor noise in the observation model, we map the continuous value of \(c^{s}\) to a discrete range of hyper-parameter values that represent high-confidence and low-confidence for each setting, respectively. In our experiments, we map \(\epsilon_{TPR}\) to \(0.7\) and \(\sigma\) to \(0.6\) when \(c^{s}\geq 1\), and \(\epsilon_{TPR}\) to \(0.5\) and \(\sigma\) to \(1\) otherwise. These numbers reflect that when the detector's confidence is high, the true positive rate should be high and the uncertainty over the observed position of the object should be low.
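To make the \(Look\)-action model concrete, here is a minimal sketch combining Equations (1) and (2) with the \(g_{L}\) mapping just described. This is our illustrative reading rather than the authors' released code: the fan-shaped geometry is abstracted into a precomputed cell set \(V\) (`fan_cells`), the Gaussian of event \(A\) is normalized over \(V\), and the high/low-confidence parameter values are the ones quoted above.

```python
import numpy as np

EPS_TNR = 0.9  # fixed true-negative rate (illustrative value; kept static)

def g_L(c_s):
    """Map detector confidence to (eps_TPR, sigma), following the text:
    high confidence (c_s >= 1) -> (0.7, 0.6); low confidence -> (0.5, 1.0)."""
    return (0.7, 0.6) if c_s >= 1 else (0.5, 1.0)

def lcom_look_prob(z_s, c_s, obj_cell, fan_cells, smoothing=1e-9):
    """P(z^s | s', Look, c^s) for one hypothesized object cell, per Eqs. (1)-(2).

    fan_cells: the set V of grid cells inside the current fan-shaped view.
    z_s: None for a NULL observation, otherwise a detected (X, Y) cell."""
    eps_tpr, sigma = g_L(c_s)
    in_view = obj_cell in fan_cells
    # Event weights (alpha, beta, gamma) from Equation (2).
    alpha, beta, gamma = ((eps_tpr, 0.0, 1.0 - eps_tpr) if in_view
                          else (0.0, 1.0 - EPS_TNR, EPS_TNR))
    if z_s is None:  # event C dominates: true or false negative
        return gamma * (1.0 - smoothing) + (alpha + beta) * smoothing

    def gauss(cell):  # unnormalized isotropic Gaussian centered on obj_cell
        d = np.array(cell, dtype=float) - np.array(obj_cell, dtype=float)
        return np.exp(-0.5 * (d @ d) / sigma**2)

    norm = sum(gauss(c) for c in fan_cells)       # eta' normalization over V
    p_A = gauss(z_s) / norm if norm > 0 else 0.0  # event A: true positive
    p_B = 1.0 / len(fan_cells)                    # event B: uniform in V
    return alpha * p_A + beta * p_B + gamma * smoothing
```

Wrapped to match the `obs_prob` signature of the earlier belief-update sketch (e.g., via a small closure that computes `fan_cells` from `robot_pose`), this makes high-confidence detections sharpen the posterior faster than low-confidence ones, which is precisely the dynamic reasoning LCOMs are meant to enable.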
We note that LCOMs depend on visual input to detect potential objects in the image and report confidence scores that are used to define the sensor noise in the observation model. During state estimation with real-robot hardware, acquiring visual input is straightforwardly done by capturing images with the robot's camera. During planning, however, acquiring visual input may be challenging because it requires synthesizing novel images based on the pose of the robot and potential location of the target object. For computational efficiency, when performing visual object search in our experiments, we only use LCOMs for updating the agent's belief during state estimation, and use a fixed observation model similar to Wandzel et al. [7] during planning based on the 2D geometries of the known occupancy-grid map \(M\). Integrating different 3D scene representations into the planning module is straightforward but orthogonal to our contribution, so we defer this investigation to future work. Fig. 2: **LCOM Overview**: The robot receives an RGB-D image and language description of the object. RGB-D and language go into \(h_{L}\), which produces language-conditioned confidence scores \(c^{s}\) for our fan-shaped detected observations \(z^{s}\). The confidence scores are then transformed by \(g_{L}\) into a noise model for the detected observations, which is used to update the belief about the object's location via state estimation. Ovals are algorithms, and rectangles are data. The shaded oval is learned. ### _Object Detector_ We build upon the model developed by Hu et al. [11] for our object detector as it can handle complex noun phrases to describe objects. The model takes in a referring expression and RGB image and outputs scores for every pixel in the image, which are then binarized and returned as the predicted segmentation mask for the image region described by the language. The loss function used for training is the average pixelwise loss: \(Loss=\frac{1}{WH}\sum_{i=1}^{W}\sum_{j=1}^{H}L(v_{ij},M_{ij})\). \(W\) and \(H\) are image width and height, \(v_{ij}\) is the pixel's score, and \(M_{ij}\) is the binary ground-truth label at pixel \((i,j)\). The original model by Hu et al. [11] was trained on the ReferIt dataset [26] which mostly contains outdoor images, whereas we are interested in detecting indoor household objects. We, therefore, additionally trained the model on the RefCOCO dataset [26] which contains referring expression annotations for segmented objects from the COCO dataset of common objects in context [27]. Furthermore, the original model was primarily trained on positive examples such that most images contained the target object, and the model only had to learn to identify where the object was in the image. In contrast, when using a model like this for object search, most images fed to the model will not contain the referenced object. Thus filtering a large number of true negatives without missing the rare true positive is key to good performance in search tasks. We augmented the model's training data with negative examples where the object described by the referring expression does not appear in the image, and thus the model should return an empty segmentation mask. Our model, trained on the augmented data with a learning rate of 0.01, achieved a true negative rate of **0.918**, a significant improvement over the original model's true negative rate of **0.124**. We now describe our instantiation of \(h_{L}\) for our experiments based on the deep learning segmentation model.
We now describe our instantiation of \(h_{L}\) for our experiments based on the deep learning segmentation model. The model takes in the RGB image and language \(L\) and outputs a segmentation mask--a binary image which identifies pixels that are part of the target object described by \(L\). If the mask is empty, the model did not detect the object and \(z^{s}=NULL\). Otherwise, we take the average of the depth value at each pixel in the mask as well as the coordinates of the mask's center point and project it into a location \((X,Y)\) in the robot's fan-shaped sensing region (_i.e.,_ fan-shaped projection) and return \((X,Y)\) as \(z^{s}\). We also retain the model's original output score for each pixel, which we average over all pixels in the mask and use as the confidence value \(c^{s}\) for the detection. We note that the scores were not specifically trained for this task.

## V Experiments and Results

Our aim is to test the hypothesis that language-conditioned observation models combined with POMDPs can increase a robot's speed and accuracy in finding objects in complex environments. We evaluated our system both in a variety of simulation environments and on a real physical robot.

### _Simulation Results_

We use scenes from the AI2-THOR simulator [28] to conduct our experiments. AI2-THOR consists of 120 near photo-realistic 3D scenes spanning four different categories: kitchen, living room, bedroom, and bathroom. We select a subset of 15 scenes with 30 target objects (for an average of 2 objects/scene) for our experiments. Figure 3 shows images of the scenes used in our experiments. The average size of a scene is \(4\times 4\) meters, which we discretize into a \(16\times 16\) cell grid map with each cell being \(0.25\times 0.25\) meters. We build upon the POMDP implementation by Zheng and Tellex [29] in the pomdp_py library. We modeled the POMDP as having no prior knowledge of the target object's location, so it had a uniform initial belief state over all possible object locations. We used a planning depth of \(3\), an exploration constant of \(10000\), a planning time of \(10\) seconds for each action, and gave the agent a maximum time of \(5\) minutes and \(10\) \(Find\) actions to complete each object search task. We generated simple natural language descriptions of the objects in our experiments as input to the agent. Results appear in Figure 4. We present both the task completion rate--the percentage of time the robot successfully finds the object--and success weighted by normalized inverse path length (SPL) [30]. SPL is calculated as \(\frac{1}{N}\sum_{i=1}^{N}S_{i}\frac{l_{i}}{max(p_{i},l_{i})}\), where \(N\) is the total number of tasks, \(l_{i}\) is the shortest path from the agent to the goal for task \(i\), \(p_{i}\) is the path the agent actually took for the task, and \(S_{i}\) is a binary indicator of success in the task. For our experiments, \(p_{i}\) is the number of actions the agent actually took to search for the object, and \(l_{i}\) is the lowest number of actions needed to find the object. If the agent achieves a higher task completion rate but takes more steps overall to find the objects, it will see a smaller increase in its SPL. We collected \(l_{i}\) by performing planning with a perfect sensor with no noise. The perfect sensor was able to find all 30 objects at an average of 7.8 actions per object search task. Each different version of our model was tested 3 times and we report the average and standard error of their performance. We present results for fixed optimal values of the sensor parameters computed from the scenes in our dataset.
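For reference, the SPL metric defined above is straightforward to compute. This sketch uses our own variable names and, as in the text, takes action counts as path lengths.

```python
def spl(successes, shortest, actual):
    """Success weighted by normalized inverse path length (SPL).

    successes: list of 0/1 task outcomes S_i
    shortest:  list of optimal action counts l_i (perfect-sensor runs)
    actual:    list of action counts p_i the agent actually took
    """
    n = len(successes)
    return sum(
        s * (l / max(p, l)) for s, l, p in zip(successes, shortest, actual)
    ) / n
```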
Our deep learning model achieved a true positive rate (TPR) of 0.581, a true negative rate (TNR) of 0.918, and a covariance of 0.827 for the normal distribution over the desired object's position. We then show results with \(\sigma\), \(\epsilon_{TPR}\), and both \(\sigma\) & \(\epsilon_{TPR}\) values set dynamically based on the deep-learned detector's output confidence score. As the sensor's TNR is already high, we decide to keep \(\epsilon_{TNR}\) fixed. Lastly, we show the performance with a simulated sensor whose noise model perfectly matches the model used for planning by the POMDP. As expected, the performance for the simulated sensor is the best. This is because the sensor observations are being generated from ground truth with exact noise models. This provides an upper bound on our system's performance, and also indicates that if we used a more realistic sensor model, our system has the potential to perform even better. In particular, the simulated sensor will sample multiple images with the same viewpoint independently, which is not true for the deep-learned detector. All versions of our system with a dynamic observation model outperform the static version. In addition, the version with dynamic \(\sigma\) & \(\epsilon_{TPR}\) achieved a significantly higher average task completion rate and SPL than the static version (from **0.46** to **0.66**, and from **0.30** to **0.45**, respectively). On average, this version took **10.1** actions and **104** seconds to find each object, compared to the **11.5** actions and **118** seconds taken by the static version. Overall, these results demonstrate that using a dynamic observation model significantly improves the ability of our system to find objects quickly and efficiently in realistic environments.

Fig. 3: **Simulated Scenes**: example images of the AI2-THOR scenes used in our experiments. The scene categories are: kitchen _(top left)_, living room _(top right)_, bedroom _(bottom left)_, and bathroom _(bottom right)_.

Fig. 4: **Simulation Results**: Task completion rates and success weighted by path lengths for a simulated sensor and deep-learned sensor with static/dynamic observation models.

### _Real-World Demonstration_

We provide a real-world demonstration on the Boston Dynamics Spot robot. The robot takes as input an occupancy-grid map of the environment and a typed natural language phrase describing the desired object. RGB and depth images are taken from two separate cameras in the robot's hand, and pixel correspondence between the two images is computed using both cameras' intrinsic and extrinsic matrices. Spot moves through the environment by taking steps that are \(0.6\) meters (one grid cell) in length, and all decisions are driven by the POMDP until it finds the object. Scenes from our demonstration and the corresponding belief updates from using LCOMs with real robot hardware are shown in Figure 5. Full video footage of the robot executing the task, the incoming sensor data, and the LCOM outputs is available at [https://youtu.be/3Z4XQUQXCsY](https://youtu.be/3Z4XQUQXCsY). The robot was asked to find "the green mug on the left" and successfully did so in \(8\) actions, where the planning and execution of each action took \(10\) seconds. This demonstration shows our system runs on a real-world platform in a realistically sized environment, computes a policy and observations efficiently, and enables a robot to efficiently search for and find objects.

Fig. 5: **Real Robot Demonstration**: sample images from our real robot experiments with the Spot using LCOMs to find an object. _top left_: the robot is turned on and tasked with finding "the green mug on the left." _top right:_ the robot's uniform initial belief about the target object's location. _bottom left:_ the robot moves and looks at a part of the room where the object is not located, and updates its belief that the object is most likely somewhere else. _bottom right:_ the robot moves and looks where the object is actually located, and after updating its belief has maximum likelihood estimate at the target object's true location.
### _ViLD Experiments_

Given the recent success in open-vocabulary image classification/object detection powered by CLIP [31], we swapped out our trained object detector with ViLD [12], an object detector trained via vision and language knowledge distillation, for our object search experiments. ViLD takes in natural language expressions and an RGB image, proposes regions of interest within the image, computes visual embeddings for the regions, and calculates the dot product (score) between the visual embeddings and text embeddings generated by CLIP. It then returns the highest scoring image regions that correspond to the input natural language. For each object in our experiment, we pass a simple natural language description of the object into ViLD along with the RGB image taken by the robot, and take as output the segmentation mask associated with the highest scoring image region. The mask has the same size as the input RGB image, with value 1 for every pixel within the proposed region and 0 otherwise. If ViLD does not find an image region corresponding to the input language, the segmentation mask is empty and the observation \(z^{s}=NULL\). Otherwise, we take the average of the depth value at each pixel in the mask as well as the coordinates of the mask's center point and project it into a location \((X,Y)\) in the robot's fan-shaped sensing region and return \((X,Y)\) as \(z^{s}\). We also retain the image region's score as the confidence score \(c^{s}\). Without fine-tuning, ViLD achieved a true positive rate (TPR) of 0.976, a true negative rate (TNR) of 0.118, and a covariance of 1.825 for the normal distribution over the desired object's position on our AI2-THOR dataset. Similar to other object segmentation methods, ViLD tends to return a non-empty segmentation mask even when the queried object is not in the input image. Experiment results are shown in Figure 6. The settings are the same as those described in Section V-A. We present results for fixed values of the ViLD sensor parameters. Next, we show results with \(\sigma\), \(\epsilon_{TNR}\), and both \(\sigma\) & \(\epsilon_{TNR}\) values set dynamically based on the output confidence score \(c^{s}\). We map \(\epsilon_{TNR}\) to \(0.1\) and \(\sigma\) to \(1.0\) when \(c^{s}\geq 0.25\), and \(\epsilon_{TNR}\) to \(0.3\) and \(\sigma\) to \(2.0\) otherwise. As ViLD's TPR is already high, we decide to keep \(\epsilon_{TPR}\) fixed. Each different version was tested 3 times and we report the average and standard error of their performance. The simulated sensor's performance is still the upper bound on the object search task. ViLD's performance trails behind our object detector, which is fine-tuned for the task.

Fig. 6: **ViLD Simulation Results**: Task completion rates and success weighted by path lengths for a simulated sensor and ViLD with static/dynamic observation models.
However, all versions of our system with a dynamic observation model still significantly outperform the static version. The version with dynamical \(\sigma\) & \(\epsilon_{TNR}\) achieved an average task completion rate of **0.578** and SPL of **0.34**, compared to the **0.3** and **0.21** achieved by the static version. These results demonstrate that our system works seamlessly with different object detectors and using dynamic observation models improves object search performance. ## VI Conclusion Our contribution is a novel observation model that uses the detector's confidence score to better model the detection accuracy. This enables us to handle complex language descriptions of objects and perform object search with a real object detector in realistic environments. In addition, our method can be easily adapted to new environments without having to relearn the observation model's parameters. Our model only considers 2D space. In future work, we plan to extend to 3D models, building on Zheng et al. [32] and Fang et al. [33] to model the 3D structure of objects. This extension will enable the robot to reason about different 3D viewpoints and predict the structure of a partially observed object to gather more views to identify and localize it. We also plan to specifically train the detector's output confidence scores to represent its detection accuracy. Additionally, our model cannot reason about the likelihood of different views of the same object to improve its detection/localization of that object. Our current observation model assumes that each observation is independent, so if the robot observes the same scene from the same viewpoint, it will become more and more certain whether the object is present or not. However, in practice, when viewing an image from the same viewpoint, a deep-learned detector will give the same results; the observations are not independent samples. In the future, we could address this problem by creating a new observation model based on inverse graphics and an expected 3D model of the object appearance, enabling the robot to predict the next best view to maximally reduce its uncertainty about the object's location. Furthermore, we focus on language descriptions of the desired object to generate the object detector and observations. More complex language instructions that provide information about the location of the object such as "look in the kitchen" or "the object is to your right" can be incorporated by directly updating the agent's belief about the object's pose. Overall we see object search as a central problem for human-robot interaction, as finding, localizing, and then grasping an object is a first step for almost anything a person wants the robot to do in the physical world. Embedding object search as a sub-component of a more sophisticated dialog system can enable the robot to engage in collaborative dialog with a human partner to interpret complex natural language commands, find and manipulate objects being referenced, and fluidly collaborate with a person to meet their needs. ## Acknowledgments The authors would like to thank Nick DeMarinis for all his support and help. This work was supported by NSF under grant awards IIS-1652561 and CNS-2038897, AFOSR under grant award FA9550-21-1-0214, and Echo Labs.
2309.11414
EDMP: Ensemble-of-costs-guided Diffusion for Motion Planning
Classical motion planning for robotic manipulation includes a set of general algorithms that aim to minimize a scene-specific cost of executing a given plan. This approach offers remarkable adaptability, as they can be directly used off-the-shelf for any new scene without needing specific training datasets. However, without a prior understanding of what diverse valid trajectories are and without specially designed cost functions for a given scene, the overall solutions tend to have low success rates. While deep-learning-based algorithms tremendously improve success rates, they are much harder to adopt without specialized training datasets. We propose EDMP, an Ensemble-of-costs-guided Diffusion for Motion Planning that aims to combine the strengths of classical and deep-learning-based motion planning. Our diffusion-based network is trained on a set of diverse kinematically valid trajectories. Like classical planning, for any new scene at the time of inference, we compute scene-specific costs such as "collision cost" and guide the diffusion to generate valid trajectories that satisfy the scene-specific constraints. Further, instead of a single cost function that may be insufficient in capturing diversity across scenes, we use an ensemble of costs to guide the diffusion process, significantly improving the success rate compared to classical planners. EDMP performs comparably with SOTA deep-learning-based methods while retaining the generalization capabilities primarily associated with classical planners.
Kallol Saha, Vishal Mandadi, Jayaram Reddy, Ajit Srikanth, Aditya Agarwal, Bipasha Sen, Arun Singh, Madhava Krishna
2023-09-20T15:40:32Z
http://arxiv.org/abs/2309.11414v1
# EDMP: Ensemble-of-costs-guided Diffusion for Motion Planning

###### Abstract

Classical motion planning for robotic manipulation includes a set of general algorithms that aim to minimize a _scene-specific cost_ of executing a given plan. This approach offers remarkable adaptability, as they can be directly used off-the-shelf for any new scene without needing specific training datasets. However, without a prior understanding of what diverse valid trajectories are and without specially designed cost functions for a given scene, the overall solutions tend to have low success rates. While deep-learning-based algorithms tremendously improve success rates, they are much harder to adopt without specialized training datasets. We propose EDMP, an Ensemble-of-costs-guided Diffusion for Motion Planning that aims to combine the strengths of classical and deep-learning-based motion planning. Our diffusion-based network is trained on a set of diverse kinematically valid trajectories. Like classical planning, for any new scene at the time of inference, we compute scene-specific costs such as "collision cost" and guide the diffusion to generate valid trajectories that satisfy the scene-specific constraints. Further, instead of a single cost function that may be insufficient in capturing diversity across scenes, we use an _ensemble_ of costs to guide the diffusion process, significantly improving the success rate compared to classical planners. EDMP performs comparably with SOTA deep-learning-based methods while retaining the generalization capabilities primarily associated with classical planners.

## I Introduction

Planning a trajectory from a start to a goal position while avoiding collisions (self-collisions and collisions with the objects around the robot) is a fundamental challenge in robotic manipulation [1]. Over the years, many approaches have been introduced to tackle this challenge, including classical planning algorithms [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] and more recent deep-learning-based algorithms like MPNets [15] and M\(\pi\)Nets [16]. While the latter approaches tremendously improved the success rate in manipulation tasks (determined by the number of times the manipulator reaches the goal while avoiding collisions), classical planners greatly reduce the dependency on specialized training datasets and generalize to novel scenes, and are therefore still widely the go-to off-the-shelf choice for motion planning. In a classical planning approach, a motion plan for a new scene is generated on the go by minimizing a _cost function_ computed on the given scene, for example, the collision cost, path length, and distance to the goal. However, designing this cost function with multiple constraints requires extensive trial and error and careful fine-tuning, and the result still may not capture the diversity _across_ scene structures. Deep-learning-based methods [17] improve upon this by adopting data-driven techniques that learn a mapping from a given context (or the scene) to a solution using a neural network via behavior cloning [18]. This allows the network to gain an overall understanding of different scene structures and map the structures to a viable solution. Such methods are significantly faster and more accurate. Despite the gains, they falter in generalization: performance on out-of-distribution scenes is significantly impacted.
They also require a large number of high-quality demonstrations and are shown to be inefficient when fed with multimodal datasets [19], for example, data collected using teleoperation [20]. In this work, we aim to bridge the gap between the two approaches by first learning a prior (as in deep-learning-based methods) over kinematically _valid_ trajectories for _any_ scene and then incorporating a _scene-specific cost_, such as a collision cost, directly at the time of inference (as in classical planners). As shown in the experimental section, this builds a powerful motion planner that generalizes to diverse scenes. Recently, diffusion models [21, 22] have gained tremendous popularity for generating diverse sets of modalities such as vision [23, 24, 25], language [26], and more recently motion plans [27, 28, 29, 30]. Unlike generative models like GANs [31] and VAEs [32], diffusion models generate datapoints over multiple timesteps. In the backward pass (generation step), a diffusion process generates the output by iteratively removing "noise" to eventually denoise a given noisy input; for example, generating a kinematically valid trajectory from a noisy trajectory. As shown in [27, 33], this allows for a unique opportunity to "guide" the generation process by incorporating "scene-specific cues" (such as a collision cost), as in classical planners, at each of the intermediate steps, thereby producing a kinematically valid trajectory that also satisfies the scene-specific constraints. Taking inspiration from this, we propose **EDMP**, an **E**nsemble-of-costs-guided **D**iffusion for **M**otion **P**lanning, which first learns a prior over kinematically valid trajectories and is then guided at the time of inference to satisfy scene-specific constraints while retaining kinematic validity. Further, instead of relying on a single cost function, we incorporate \(N\) different cost functions (ensemble-of-costs) to guide the diffusion process, thereby generating trajectories that can solve diverse challenging scenes (as shown in Fig 1). Interestingly, based on the concepts of diffusion, EDMP can generate multimodal trajectories (multiple ways of getting from point A to point B). The ensemble-of-costs further improves this multimodal diversity, and trajectories can then be handpicked based on different parameters such as path length or smoothness (see Figure 9). Our key contributions are:

1. We propose **EDMP**, an **E**nsemble-of-costs-guided **D**iffusion for **M**otion **P**lanning, which combines the strengths of classical and deep-learning-based motion planning by first learning a prior over kinematically valid trajectories and then using multiple cost functions to capture diverse scene-specific cost guidance directly at the time of inference.
2. EDMP generalizes to diverse novel and even out-of-distribution scenes (such as planning for a manipulator holding an arbitrary object) while performing comparably with SOTA deep-learning-based methods on specific datasets.
3. Based on diffusion, EDMP generates multimodal trajectories. Further, the ensemble-of-costs improves the diversity of multimodality. This can be exploited by downstream planners to choose a trajectory from the set of solutions that evaluates to an optimal objective.
## II Problem Formulation

Consider a robotic manipulator with \(m\) joints corresponding to a given joint state \(\mathbf{s}_{i}\in\mathbb{R}^{m}\) for the \(i^{\text{th}}\) trajectory-time-step, where a trajectory, \(\mathbf{\tau}\), is a sequence of joint states over a horizon \(h\) between a start and goal configuration, \(\mathbf{s}_{0}\) and \(\mathbf{s}_{h-1}\), respectively, given as \(\mathbf{\tau}=[\mathbf{s}_{0},\mathbf{s}_{1},...\ \mathbf{s}_{h-1}]^{\top}\in\mathbb{R}^{m\times h}\). We are interested in solving the following optimization problem for obtaining a trajectory for a given task, \[\min_{\mathbf{\tau}}J(\mathbf{\tau};\mathbf{o},\mathbf{E}) \tag{1}\] The vector \(\mathbf{o}\) represents the hyperparameters of the cost function \(J\), and the variable \(\mathbf{E}\) represents the scene. The cost function has two distinct parts. The first part models aspects that depend purely on the manipulator's kinematics, such as smoothness and feasibility (e.g. self-collision) of the joint trajectory. The second part represents the scene-specific cost that couples the manipulator motion with the scene.

### _Diffusion Based Optimal Solution_

Diffusion models [21, 22] are a category of generative models where a datapoint is converted into isotropic Gaussian noise by iteratively adding Gaussian noise through a fixed forward diffusion process \(q(\mathbf{\tau}_{t}|\mathbf{\tau}_{t-1})=\mathcal{N}(\mathbf{\tau}_{t};\sqrt{1-\mathbf{\beta}_{t}}\mathbf{\tau}_{t-1},\mathbf{\beta}_{t}\text{I})\), where \(\mathbf{\beta}_{t}\) is the variance schedule and \(t\) is the diffusion timestep. A forward diffusion process of \(T\) timesteps can be reversed by sampling \(\mathbf{\tau}_{T}\sim\mathcal{N}(0,\text{I})\) from a standard normal distribution and iteratively removing Gaussian noise using the trainable reverse process \(p_{\theta}(\mathbf{\tau}_{t-1}|\mathbf{\tau}_{t})=\mathcal{N}(\mathbf{\tau}_{t-1};\mu_{\theta}(\mathbf{\tau}_{t},t),\mathbf{\Sigma}_{t})\) parametrized by \(\theta\). We train a diffusion model to learn a prior \(p_{\theta}\) over a set of smooth, kinematically valid trajectories collected from a large-scale dataset [16]. For a given trajectory in the dataset, forward diffusion first corrupts it by gradually adding noise, and then the reverse diffusion process aims to reconstruct the original trajectory from the noise. This is shown in Figure 3 (Diffusion Model).

**Conditioning on start and goal:** As we train the diffusion prior on the set of trajectories from the dataset, we repeatedly fix \(\mathbf{s}_{0}\) and \(\mathbf{s}_{h-1}\) at each diffusion-time-step to correspond to the original \(\mathbf{s}_{0}\) and \(\mathbf{s}_{h-1}\) at the first diffusion-time-step for each trajectory. As shown in Section V, this creates a conditioning effect on the start-goal pair, encouraging the network to generate a smooth trajectory between the two.
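To illustrate the start-goal conditioning, here is a minimal sketch of a reverse (sampling) loop that clamps the endpoints at every diffusion-time-step; the denoiser interface and the simplified variance schedule are assumptions for exposition, not the paper's exact implementation (the same clamping is applied while training the prior).

```python
import torch

@torch.no_grad()
def sample_conditioned(denoiser, s0, sg, m, h, T, betas):
    """Reverse diffusion from tau_T ~ N(0, I) with clamped endpoints.

    denoiser(tau, t) is assumed to return the posterior mean
    mu_theta(tau_t, t); s0 and sg are (m,) start/goal joint states.
    """
    tau = torch.randn(m, h)
    for t in reversed(range(T)):
        tau[:, 0], tau[:, -1] = s0, sg      # re-impose start and goal
        tau = denoiser(tau, t)
        if t > 0:                           # add noise except at the last step
            tau = tau + betas[t].sqrt() * torch.randn_like(tau)
    tau[:, 0], tau[:, -1] = s0, sg
    return tau
```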
### _Guidance Process_

For a specific scene, we also want to incorporate a _scene-specific cost_ function that can guide the diffusion process toward a trajectory that satisfies the scene-specific constraints, such as a collision cost, denoted by \(J(\mathbf{\tau};\mathbf{o},\mathbf{E})\). Specifically, at each denoising-time-step, we modify the intermediate trajectory \(\mathbf{\tau}_{t}\) predicted by the denoiser at the \(t^{\text{th}}\) diffusion-time-step by adding gradients from the scene-specific collision cost function before passing it to the next step, as shown in Figure 3. This is given by \[\mathbf{\tau}_{t}^{*}=\mathbf{\tau}_{t}-\alpha\nabla J(\mathbf{\tau};\mathbf{o},\mathbf{E}) \tag{2}\] where \(\alpha\) is a hyperparameter. This form of conditioning is analogous to classifier-based guidance in [34], where a trained classifier guides the diffusion towards a goal. In our case, it guides the trajectory to collision-free regions. Although adding cost-conditioning directly at the time of inference results in a cost-guided posterior, denoising from the cost-guided posterior is equivalent to denoising from a Gaussian distribution, as shown in [33].

## III Ensemble-of-Costs guided Diffusion

Equation 2 uses gradients from a scene-specific cost function to modify the trajectory obtained from the diffusion prior. Thus, the nature of the gradient dictates the efficacy of the process, which in turn depends on many factors, some of which we summarize below:

* **Algebraic form of the cost model:** Collision cost is highly scene-specific. For example, the signed-distance-field-based collision cost [2] is known to struggle in scenes with narrow racks between the start and the goal positions. Similarly, the performance of collision costs modeled around convex collision checking [4] depends on whether we penalize each independent waypoint's intersection volume with the environment (intersection volume) or the swept volume between each pair of adjacent waypoints (swept volume), as shown in Figure 5.
* **Gradient and Cost Hyperparameters:** Each collision cost model has several hyperparameters that also affect the gradient descent step in Equation 2. For example, the gradient weight schedule, whether adaptive or independent, is an important parameter in gradient-based approaches. Similarly, we show that in some cases, choosing to expand the obstacle over the initial period of optimization greatly helps in bringing out the retraction behavior, which otherwise seemed less likely, as shown in Figure 4 (retraction prevents the manipulator from colliding with the shelf).

Fig. 4: **Effect of Obstacle Expansion.** (a) Scene without obstacle expansion applied. (b) The change in the scene after applying obstacle expansion. Obstacle expansion widens the thinner dimension by an amount \(O_{e}\), thus altering the gradient direction and enabling the manipulator to find a configuration for retracting around the shelf.

Fig. 5: **Intersection v/s Swept Volume.** (a) Gradient (red arrows) of link-obstacle intersection volume moves a link away from an obstacle. (b) Gradient (red arrows) of swept volume between consecutive link poses prevents collision between trajectory waypoints.

The above discussion makes the case for guided diffusion with an adaptive learning rate and a collision cost whose algebraic form and hyper-parameters are adaptive to the environment in which the manipulator is operating. However, to the best of our knowledge, there exists no general framework towards this end. We propose an ensemble-of-collision-costs where we run a batch of guided diffusion processes in parallel, as shown in Figure 3, where each sub-batch uses a different cost function. Moreover, we embed the hyperparameter tuning of the cost functions as a part of the diffusion process. Algorithm 1 summarizes our proposed improvements. We assume that we have \(l\) different choices of the cost function stemming from different collision models, each having their own hyperparameters. We also consider \(r\) different schedules to adapt the hyperparameters of each of the \(l\) costs. Thus, in Algorithm 1, we have a triple loop that runs \(l\times r\) parallel diffusion processes. In practice, however, the outer two loops are decoupled from each other and run in batch fashion over GPUs.

Fig. 3: **Architecture.** EDMP leverages a diffusion model alongside an ensemble of \(l\) cost functions. The diffusion model denoises a batch of trajectories \(\mathbf{\tau}\) from \(t=T\) to \(0\), while each cost in the ensemble guides a specific sub-batch. We calculate the gradient \(\nabla J\) of each collision cost (intersection or swept volume) in the ensemble from robot and environment bounding boxes, using differentiable forward kinematics. After denoising is over, the trajectory with minimum swept volume is chosen as the solution.
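To make the batched guidance concrete, the following is a minimal sketch of one ensemble-guided denoising step; the per-guide Python loop (rather than a fully vectorized batch over GPUs) and the function names are our own simplifications of Eq. (2) and Algorithm 1, not the released implementation.

```python
import torch

def ensemble_guided_step(tau_batch, t, denoiser, cost_fns, alpha):
    """One reverse-diffusion step with ensemble-of-costs guidance.

    tau_batch: (l*r, m, h) trajectories, one sub-batch slice per guide
    cost_fns:  list of l*r differentiable scene-specific costs J_k(tau)
    alpha:     guidance step size from Eq. (2)
    """
    guided = []
    for tau, J in zip(tau_batch, cost_fns):
        tau = tau.detach().requires_grad_(True)
        J(tau).backward()                       # scene-specific cost gradient
        with torch.no_grad():
            # tau* = tau - alpha * grad J, Eq. (2)
            guided.append(tau - alpha * tau.grad)
    tau_batch = torch.stack(guided)
    with torch.no_grad():
        return denoiser(tau_batch, t)           # continue reverse diffusion
```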
### _Collision Cost Guidance_

To detect and compute the collision cost between any two rigid bodies, we take inspiration from the Gilbert-Johnson-Keerthi (GJK) algorithm [35] and the Expanding Polytope Algorithm (EPA) [36]. We compute the collision cost between two bodies \(A\) and \(B\) as a function of the overlap between them along the three axes. We refer to this as the penetration depth. To achieve this, we enclose each object in a bounding-box cuboid and compute the intersection volume \(V\) as: \[\begin{split}\text{V}(A,B)&=\text{prod}(|\max(\min(A),\ \min(B))\\ &\quad-\min(\max(A),\ \max(B))|)\end{split} \tag{3}\] To leverage this in manipulator collision avoidance, we first approximate each object on the table as a cuboid corresponding to a 3D bounding box that encloses the full object. Similarly, we represent each manipulator link as a 3D cuboid, which acts as a bounding box enclosing that link. We compute the poses of each of these bounding boxes for a given configuration \(\mathbf{s}_{k}\) using differentiable forward kinematics \(\text{FK}(\mathbf{s}_{k}[i])\), where \(\mathbf{s}_{k}[i]\) denotes the configuration of the \(i^{th}\) link. If there are \(n\) obstacles in the scene, we can approximate the scene as a list of \(n\) bounding boxes \(\mathbf{E}\in\mathbb{R}^{n\times 4\times 4}\), where each bounding box is represented as a \(4\times 4\) transformation matrix in the SE(3) Lie group. We then compute the collision cost from the intersection volume, which we call the "intersection volume cost" \(J_{inter}\), of each link state in \(\mathbf{\tau}\) with respect to each obstacle in \(\mathbf{E}\), given as: \[J_{inter}(\mathbf{\tau},\mathbf{E})=\sum_{k=0}^{h-1}\sum_{i=0}^{m-1}\text{V}(\text{FK}(\mathbf{s}_{k}[i]),\mathbf{E}) \tag{4}\] The gradient of the intersection volume cost, \(\nabla J_{inter}\), gives the penetration depth along each dimension. Averaging these values produces an average movement direction for a link along the penetration depth, to displace it out of collision, as shown in Figure 5. We show empirically that this simple collision cost enables collision avoidance even in complex scenes. We define a swept volume cost function from the swept volume \(SV\) of a link between adjacent waypoints, inspired by [35], as: \[\begin{split} SV(\mathbf{s}_{k}[i],\mathbf{s}_{k+1}[i])=\\ \text{prod}(|\max(\text{FK}(\mathbf{s}_{k}[i]),\text{FK}(\mathbf{s}_{k+1}[i]))\\ -\min(\text{FK}(\mathbf{s}_{k}[i]),\text{FK}(\mathbf{s}_{k+1}[i]))|)\end{split} \tag{5}\] The swept volume approximates the volume swept out by a link between two adjacent waypoints. Such a cost function helps account for collisions that may happen between consecutive joint states in a trajectory, as in Figure 5. For a given trajectory \(\mathbf{\tau}\) and environment \(\mathbf{E}\), we define the swept-volume cost as the sum of the volumes swept out by the links between each pair of adjacent waypoints. As the swept volume is also modelled as a cuboid, we can plug it into Equation 4 to get the swept volume cost: \[J_{swept}(\mathbf{\tau},\mathbf{E})=\sum_{k=0}^{h-2}\sum_{i=0}^{m-1}\text{V}(\text{SV}(\mathbf{s}_{k}[i],\mathbf{s}_{k+1}[i]),\mathbf{E}) \tag{6}\] Our architecture is also outlined in Figure 3.
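A minimal sketch of these cost functions on axis-aligned boxes is given below; the corner (min/max) representation of the boxes and the clamp that zeroes the volume (and its gradient) for non-overlapping boxes are our own simplifications of Eqs. (3)-(6), not the released code.

```python
import torch

def intersection_volume(a_min, a_max, b_min, b_max):
    """Overlap volume of two axis-aligned boxes, in the spirit of Eq. (3).

    The boxes are given by (..., 3) min/max corner tensors. The clamp keeps
    the volume (and hence the gradient) zero for non-overlapping boxes.
    """
    overlap = torch.minimum(a_max, b_max) - torch.maximum(a_min, b_min)
    return overlap.clamp(min=0).prod(dim=-1)

def swept_box(min_k, max_k, min_k1, max_k1):
    """Box enclosing a link at two adjacent waypoints k and k+1 (Eq. 5)."""
    return torch.minimum(min_k, min_k1), torch.maximum(max_k, max_k1)

def collision_cost(link_boxes, obstacle_boxes, swept=False):
    """J_inter (Eq. 4) or J_swept (Eq. 6) over all links and obstacles.

    link_boxes: list over waypoints of (min, max) pairs of (m, 3) tensors,
    obtained from differentiable forward kinematics; obstacle_boxes is a
    list of (min, max) pairs for the n scene bounding boxes.
    """
    boxes = link_boxes
    if swept:  # enclose each pair of adjacent waypoints instead
        boxes = [swept_box(*a, *b)
                 for a, b in zip(link_boxes[:-1], link_boxes[1:])]
    cost = 0.0
    for l_min, l_max in boxes:
        for o_min, o_max in obstacle_boxes:
            cost = cost + intersection_volume(l_min, l_max, o_min, o_max).sum()
    return cost
```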
## IV Experiments

All our experiments are conducted using the PyBullet simulator [37], employing the Franka Panda robot.

**Dataset:** We benchmark on the M\(\pi\)Nets [16] dataset. The **training** dataset consists of 6.54 million collision-free trajectories. These trajectories are generated on various scenes (such as tabletop, dresser, and cubby) using two different classical planning pipelines: a **Global** Planner (3.7 million trajectories) based on AIT\({}^{*}\)[38], and a **Hybrid** Planner (3.7 million trajectories) that combined AIT\({}^{*}\)[38] for planning in the end-effector space with Geometric Fabrics [39] for producing a geometrically consistent motion conditioned on the generated end-effector waypoints. The **test** dataset consists of three sets: (1) Global Solvable Problems (_global_): 1800 scenes that could be solved only by the Global Planner, that is, scenes for which the Global Planner could generate a valid collision-free trajectory; (2) Hybrid Solvable Problems (_hybrid_): 1800 scenes solvable only by the Hybrid Planner; (3) Both Solvable Problems (_both_): a set of 1800 scenes that could be solved by both the Global and Hybrid Planners. More information regarding data collection can be found in [16].

**Training and architectural details:** Similar to M\(\pi\)Nets, we train two different planners: one on _global_ and another on _hybrid_ data. Each denoiser is modeled as a temporal UNet similar to [27], and is trained for 20k steps on a 2080 GeForce GTX GPU for approximately 9 hours. Each **scene** consists of obstacle configurations, a start position (in joint configuration), and a goal position (in joint configuration during training and in end-effector space during testing). We use inverse kinematics to compute the joint configuration from the end-effector position. Each **trajectory** consists of 50 waypoints, including the initial and final joint configurations. For our experiments, we use an ensemble of \(12\) cost functions, \(5\) with the intersection volume cost and \(7\) with the swept volume cost. However, one can define their own number and type of cost functions.

**Baselines:** We compare our framework against different types of SOTA planners: two behavior-cloning-based planners (M\(\pi\)Nets [16], MPNets [15]), two stochastic optimization-based planners (G. Fabrics [39], STORM [3]), one optimization-based planner with quintic-spline initialization (CHOMP [2]), and a set of sampling-based planners through OMPL [40], on the _global_, _hybrid_, and _both_ test sets.

**Metrics:** We define the success rate (SR \(\%\)) as the percentage of problems that are successfully solved by the given planner. We say that a problem is successfully solved if the planner can generate a trajectory that avoids all collisions and reaches the goal. M\(\pi\)Nets does not release the full global data, so those metrics are taken directly from the paper.

### _Performance on M\(\pi\)Nets Dataset_

Tables I and II present the success rates of EDMP against the baselines.
As shown in Table I, EDMP significantly outperforms all of the classical planners, as well as MPNets (Table II). Although M\(\pi\)Nets is better than EDMP in many of the settings, it is important to note that our method is agnostic to variations in the dataset, whereas the performance of methods like M\(\pi\)Nets depends on the dataset it is behaviorally cloned on. This is because our main performance improvement comes from the ensemble of cost functions that is applied directly at the time of inference and is common across all of the settings (_global_, _hybrid_, and _both_). Moreover, the prior encodes kinematic constraints, which can be learned even from much simpler training datasets like _global_ instead of _hybrid_. Furthermore, unlike M\(\pi\)Nets, EDMP generalizes to out-of-distribution scenes (Section IV-B) and generates multimodal trajectories (Section IV-C).

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline Test & CHOMP & OMPL & G. Fabrics & STORM & EDMP \\ \hline Global & 26.67 & 37.27 & 38.44 & 50.22 & **75.93** \\ Hybrid & 31.61 & 40.37 & 59.33 & 74.5 & **86.13** \\ Both & 32.2 & 42.6 & 60.06 & 76 & **85.06** \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of EDMP (trained on the Hybrid [16] dataset) against classical motion planners in terms of success rate (%, \(\uparrow\)).

\begin{table} \begin{tabular}{c|c c|c c c} \hline \hline & \multicolumn{2}{c|}{Global-trained Planners} & \multicolumn{3}{c}{Hybrid-trained Planners} \\ Test & M\(\pi\)Nets & EDMP & MPNets & M\(\pi\)Nets & EDMP \\ \hline Global & **75.06** & 71.67 & 41.33 & 75.78 & **75.93** \\ Hybrid & 80.39 & **82.84** & 65.28 & **95.33** & 86.13 \\ Both & 82.78 & **82.79** & 67.67 & **95.06** & 85.06 \\ \hline \hline \end{tabular} \end{table} TABLE II: Comparison against deep-learning-based methods in terms of success rate (%, \(\uparrow\)). EDMP is only marginally influenced by the training dataset, because the major performance gain comes from the ensemble-of-costs guidance applied directly at inference, which is common across all the settings.

### _Generalization to Out-of-distribution Scenes_

We test EDMP's performance on scenes outside of the training distribution. First, we make the manipulator hold arbitrary objects and see if EDMP can find a _valid collision-free_ trajectory for the manipulator along with the object. As can be seen in Figure 6, EDMP works out of the box for such scenes. Deep-learning-based approaches like M\(\pi\)Nets and MPNets require retraining or specialized training datasets to take objects into account. In our case, we treat the object as just another link (as explained in Section III-A) with fixed joints and add an object-environment collision cost to the overall cost function (Equation 5). We also test EDMP by generating a scene with collision spheres (Figure 6). Such a scene is very different from the types of scenes the training trajectories were generated on. Even in this highly challenging setup, EDMP succeeds in 8 out of 10 evaluated scenes with spheres generated at random spatial positions, indicating that EDMP has learned a generic prior that is highly adaptable to diverse scenes.

Fig. 6: **Qualitative Results**: From top to bottom: M\(\pi\)Nets scene (dresser) and out-of-distribution scenes: merged cubby with handheld cuboid and a collision spheres scene.

### _Multimodality_

Multimodal trajectories can enable one to achieve the same goal in multiple different ways. This is much closer to how humans plan: out of the many different ways to execute a task, we often pick the one that is most optimal for the _current_ situation. Based on the concept of diffusion, we inherently support such multimodal trajectories. Moreover, our ensemble-of-costs further improves the diversity in the generated trajectories. We assess trajectory diversity for 100 random scenes from the M\(\pi\)Nets validation dataset. We use the Average Cosine Similarity Mean (ACSM) metric, which calculates the mean of the cosine similarities among all pairs of trajectories within a batch, to measure the diversity in trajectories. Lower ACSM suggests greater diversity.
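For clarity, the ACSM metric can be sketched as follows; flattening each trajectory before computing the pairwise cosine similarity is our own reading of the metric, as the text does not spell out the vectorization.

```python
import torch
import torch.nn.functional as F

def acsm(trajectories):
    """Average Cosine Similarity Mean over all trajectory pairs in a batch.

    trajectories: (B, m, h) tensor; each trajectory is flattened before
    computing pairwise cosine similarity. Lower values mean more diversity.
    """
    flat = trajectories.reshape(trajectories.shape[0], -1)
    sims = F.cosine_similarity(flat[None, :, :], flat[:, None, :], dim=-1)
    b = flat.shape[0]
    off_diag = sims.sum() - sims.diagonal().sum()  # exclude self-similarity
    return off_diag / (b * (b - 1))
```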
As shown in Table III, our ensemble-of-costs guidance outperforms the average of the 12 cost functions measured individually, demonstrating the advantage of multiple-cost guidance over a single cost.

## V Ablation Studies

**Ensemble-of-costs.** In this section we show the effect of the ensemble-of-costs in Figures 7 and 8. As can be seen in Figure 7, as the number of guides increases, the overall success rate increases. This is because diverse cost functions capture different parameters and aspects across scenes, as explained in Section III. Although these are specially designed cost functions, the overall idea of combining multiple cost functions to solve diverse scenes can be applied in any domain using custom cost functions. Figure 8 further emphasizes the contribution of each guide to the overall success rate.

**Conditioning with start and goal.** One primary requirement when generating a trajectory is avoiding any abrupt manipulator motion that could cause serious damage in the real world. We evaluate the effect of conditioning the trajectory on the start and goal. We define Roughness as the L2 norm between adjacent waypoints. To evaluate this, we consider four metrics - Average Roughness (AR), Max Roughness Excluding Start and Goal (MRESG), Roughness between first two waypoints (RF2W), and Roughness between last two waypoints (RL2W). As shown in Table IV, conditional training generates significantly smoother trajectories.

## VI Conclusion & Future Work

In this paper, we propose diffusion models as a mechanism to learn a prior over kinematically valid trajectories, complemented by scene-specific guidance through gradient-based cost functions directly at inference. We introduce an ensemble of cost functions that captures scene-specific nuances across diverse scenes and improves the generated trajectories significantly. Our empirical results concretely point towards our approach's ability to effectively exploit the strengths of classical motion planners and the recent deep-learning-based planners, by showcasing remarkable generalization capability across diverse scenes, such as object manipulation, without any object-specific retraining. Moreover, EDMP is capable of generating diverse multimodal trajectories for a given scene, facilitating optimal trajectory selection based on specific downstream parameters. We believe that our research provides significant strides towards more generalizable planning with broad applications to robotic manipulation.

**Future work.** Despite showcasing impressive performance and generalization, EDMP still relies on a set of handcrafted cost functions to capture diversity across scenes. Automating the design of cost functions based on scene diversity could be an interesting direction for future work.
## Acknowledgment

We would like to thank Adam Fishman for assisting with M\(\pi\)Nets and providing valuable insights into collision checking and benchmarking.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Condition & AR(\(\downarrow\)) & MRESG(\(\downarrow\)) & RF2W(\(\downarrow\)) & RL2W(\(\downarrow\)) \\ \hline w/ cond & **0.0589** & **0.0821** & **0.0266** & **0.0266** \\ w/o cond & 0.1121 & 0.3171 & 0.6459 & 0.4960 \\ \hline \hline \end{tabular} \end{table} TABLE IV: **Ablation on start-goal conditioned trajectory**: As we generate the trajectory through the denoising process conditioned on the start and goal, the trajectories are smoother. The metrics are defined in Section V.

\begin{table} \begin{tabular}{c c c} \hline \hline **Method** & **SR**(\(\uparrow\)) & **ACSM**(\(\downarrow\)) \\ \hline Avg. of single guides & 79.25 & 0.981 \\ EDMP & **89.30** & **0.892** \\ \hline \hline \end{tabular} \end{table} TABLE III: Comparison of EDMP against the individual guides on 100 M\(\pi\)Net validation scenes.

Fig. 7: **Effect of adding guides.** There is an asymptotic rise in the overall success rate with the number of guides, demonstrated across diverse scenes - tabletop, cubby, merged cubby, and dresser.

Fig. 8: **Contribution of each guide.** x-axis and y-axis denote the guide index and contribution (%) of each guide to the overall success rate. The contribution of a guide is proportional to the likelihood that a successful path was generated by it.

Fig. 9: **Multimodal trajectory predictions in** (a) tabletop scene, (b) merged cubby scene, and (c) dresser scene.
2309.05461
Topological nonsymmorphic insulator versus Dirac semimetal in KZnBi
KZnBi was discovered recently as a new three-dimensional Dirac semimetal with a pair of bulk Dirac fermions in contrast to the $\mathbb{Z}_2$ trivial insulator reported earlier. In order to address this discrepancy, we have performed electronic structure and topological state analysis of KZnBi using the local, semilocal, and hybrid exchange-correlation (XC) functionals within the density functional theory framework. We find that various XC functionals, including the SCAN meta-GGA and hybrid functional with 25\% Hartree-Fock (HF) exchange (HSE06), resolve a topological nonsymmorphic insulator state with the glide-mirror protected hourglass surface Dirac fermions. By carefully tuning the XC strength in modified Becke-Johnson (mBJ) potential, we recover the correct orbital ordering and Dirac semimetal state of KZnBi. We further show that increasing the default HF exchange in hybrid functional ($> 40\%$) can also capture the desired Dirac semimetal state with the correct orbital ordering of KZnBi. The calculated energy dispersion and carrier velocities of Dirac states are found to be in excellent agreement with the available experimental results. Our results demonstrate that KZnBi is a unique topological material where large XC effects are crucial to producing the Dirac semimetal state.
Rahul Verma, Bikash Patra, Bahadur Singh
2023-09-11T13:59:52Z
http://arxiv.org/abs/2309.05461v2
# Topological nonsymmorphic insulator versus Dirac semimetal in KZnBi ###### Abstract KZnBi was discovered recently as a new three-dimensional (3D) Dirac semimetal with a pair of bulk Dirac fermions in contrast to the \(\mathbb{Z}_{2}\) trivial insulator reported earlier. In order to address this discrepancy, we have performed electronic structure and topological state analysis of KZnBi using the local, semilocal, and hybrid exchange-correlation (XC) functionals within the density functional theory framework. We find that various XC functionals, including the SCAN meta-GGA and hybrid functionals with 25% Hartree-Fock (HF) exchange, resolve a topological nonsymmorphic insulator state with the glide-mirror protected hourglass surface Dirac fermions. By carefully tuning the modified Becke-Johnson (mBJ) potential parameters, we recover the correct orbital ordering and Dirac semimetal state of KZnBi. We further show that increasing the default HF exchange in hybrid functionals (\(>40\%\)) can also capture the desired Dirac semimetal state with the correct orbital ordering of KZnBi. The calculated energy dispersion and carrier velocities of Dirac states are found to be in excellent agreement with the available experimental results. Our results demonstrate that KZnBi is a unique topological material where large electron correlations are crucial to realize the Dirac semimetal state. ## I Introduction Interest in symmetry-protected nontrivial states of topological materials has driven search and discovery efforts for finding materials with nontrivial electronic properties useful for fundamental studies and device applications [1; 2; 3]. Many topological materials are now predicted in high-throughput materials searches ranging from insulators to metals and semimetals, where topological states are protected not only by the free-fermion symmetries such as time-reversal or particle-hole symmetries but also by crystalline symmetries [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Among various topological materials, three-dimensional (3D) Dirac semimetals support four-fold band crossings described by pseudo-relativistic Dirac fermions with linear energy dispersion in their electronic structure. These band crossings are protected by crystalline symmetries such as rotational or nonsymmorphic symmetries other than time-reversal and inversion symmetries [11; 12; 13; 14; 15; 16]. Systematically reducing these symmetries can transition a topological Dirac semimetal into a nontrivial insulator or Weyl semimetal [17; 18; 19; 20; 12]. The nonsymmorphic crystalline symmetries that accompany a symmetry operation with a fractional translation can protect topological states in nontrivial insulators. Such topological nonsymmorphic insulators can realize special hourglass-like Dirac fermions over the material's surface that respect nonsymmorphic symmetries [21; 22; 23; 24; 25; 26; 27]. Among various families of topological materials, the AB\(X\)-type hexagonal honeycomb materials attract significant attention due to their greater materials and topological state tunability [28; 29; 30; 31; 32]. Specifically, these materials can realize \(\mathbb{Z}_{2}\) topological insulators, topological nonsymmorphic crystalline insulators, topological Dirac semimetals, and three-dimensional quantum spin-Hall insulators as well as provide an opportunity to realize topological phase transitions from a nontrivial insulator state to a Dirac or Weyl semimetal state by pressure or systematically lowering the crystalline symmetries. 
For example, KHg\(X\) (\(X=\) As, Sb, or Bi) materials with Hg\(X\) honeycomb layer host topological nonsymmorphic crystalline insulator with hourglass-like surface Dirac fermions and undergo a phase transition to topological Dirac semimetal state under external pressure [23; 24; 25; 33]. Due to diverse chemical bondings, the topological state in these materials is highly-sensitive to the materials' parameters and electron interactions [30]. The distinction between the topological Dirac semimetal or topological insulating state is thus not very obvious in these materials. KZnBi, which is of particular interest to this study, was predicted theoretically as a \(\mathbb{Z}_{2}\) trivial insulator [30] or a metal [32]. A recent experimental work, however, reported it as a 3D topological Dirac semimetal with a pair of bulk Dirac fermions in the bulk Brillouin zone (BZ) [34]. The angle-dependent magneto-transport measurements further reveal a phase transition to realize chiral fermion states with anomalous Hall effect under an applied magnetic field [35]. The nontrivial features in KZnBi can be tuned by varying the direction and flux of the Berry curvature field [35; 36]. These results indicate that KZnBi realizes a topological Dirac semimetal state in disagreement with \(\mathbb{Z}_{2}\) trivial insulator or a metallic state reported in the first-principles studies. In this context, the density functional theory (DFT) based first-principles calculations, in principle, can reproduce the ground state of materials accurately provided the exact exchange-correlation (XC) functionals. Proper inclusion of exchange and correlation effects may thus be crucial to describe the topological Dirac semimetal state of KZnBi. In this work, we provide a comprehensive investigation of the electronic structure and topological proper ties of KZnBi based on first-principles calculations. By carefully considering the different XC functionals including the strongly constrained and appropriately normed (SCAN) and hybrid (with 25% Hartree-Fock (HF) exchange) functionals, we delineate the electronic and topological state of KZnBi. Our systematic analysis shows that SCAN density functional reproduces the structural parameters in agreement with the experimental results. However, it produces a topological nonsymmorphic insulator state in disagreement with the experiments. By tuning the modified Becke-Johnson (mBJ) potential parameters, we recover the correct orbital ordering and Dirac semimetal state of KZnBi. We further show that the Dirac semimetal state can also be captured using the hybrid functionals with 42% Hartree-Fock (HF) exchange. Our results demonstrate the importance of XC functionals in the first-principles calculations to reproduce the topological Dirac semimetal state of KZnBi in agreement with the experiments. ## II Methodology Electronic structure calculations are performed within the Kohn-Sham DFT (KSDFT) framework [37] using the projector augmented-wave (PAW) method to model the ionic potentials as implemented in Vienna _ab-initio_ simulation package (VASP) [38; 39]. The accuracy of the KS-DFT calculations depends solely on the types of the XC density functionals, which can be arranged along different rungs of Jacob's ladder [40; 41]. The XC functionals are broadly categorized into semilocal, hybrid, and post-hybrid functionals. 
The semilocal XC functionals can be written as \[E_{\rm xc}[\rho_{\uparrow},\rho_{\downarrow}]=\int d^{3}r\rho(\mathbf{r})\epsilon_{\rm xc}(\rho_{\uparrow},\rho_{\downarrow},\nabla\rho_{\uparrow},\nabla\rho_{\downarrow},\tau_{\uparrow},\tau_{\downarrow}) \tag{1}\] where \(\rho_{\uparrow}(\mathbf{r})\) and \(\rho_{\downarrow}(\mathbf{r})\) are the electron spin densities; retaining only these densities constitutes the local density approximation (LDA) [42; 43]. Apart from the electron spin densities, spin density gradients (\(\nabla\rho_{\uparrow}(\mathbf{r})\) and \(\nabla\rho_{\downarrow}(\mathbf{r})\)) are included in generalized gradient approximations (GGA) [44]. In the meta-GGAs, additional kinetic energy densities (\(\tau_{\sigma=\uparrow,\downarrow}=\sum_{i=1}^{\rm occ}\frac{1}{2}|\nabla\psi_{i,\sigma}|^{2}\)) are added [45]. Notably, semilocal XC functionals can explain the equilibrium geometry and symmetries of materials, although they can fail to reproduce the location of the highest occupied and lowest unoccupied states due to the inherent self-interaction error [42]. The hybrid functionals are designed to overcome this issue by suitably mixing the semilocal part with the exact HF exchange energy [46; 47]. The range-separated hybrid functional can be written as \[E_{\rm xc}^{\rm HSE}=\alpha E_{\rm x}^{\rm SR,HF}(\omega)+(1-\alpha)E_{\rm x}^{\rm SR,PBE}(\omega)+E_{\rm x}^{\rm LR,PBE}+E_{\rm c}^{\rm PBE} \tag{2}\] where \(\alpha\) defines the fraction of HF exchange mixed at short range and \(\omega\) determines the range-separation parameter. Another way to correct the band gap at the semilocal level is to directly approximate the exchange potential [48]. In this spirit, the modified Becke-Johnson (mBJ) exchange potential was proposed with the following form \[v_{\rm x,\sigma}^{\rm TB-mBJ}(\mathbf{r})=cv_{\rm x,\sigma}^{\rm BR}(\mathbf{r})+(3c-2)\frac{1}{\pi}\sqrt{\frac{5}{12}}\sqrt{\frac{2\tau_{\sigma}(\mathbf{r})}{\rho_{\sigma}(\mathbf{r})}} \tag{3}\] where \(v_{x,\sigma}^{BR}(\mathbf{r})\) is the Becke-Roussel potential [49]. The value of the parameter \(c\) (\(c_{mBJ}\), henceforth) can be determined from the average of \(\nabla\rho/\rho\) over the unit cell [50]. The first term in Eq. 3 denotes the average HF potential, whereas the second term gives the screening potential. Equation 3 can thus be regarded as a hybrid potential in which the parameter \(c_{mBJ}\) controls how much the HF and the screening components are mixed. We use the aforementioned XC functionals, lying on different rungs of Jacob's ladder [40], to calculate the electronic and topological state of KZnBi (see Table 1). A kinetic energy cut-off of 420 eV for the plane-wave basis set and Gaussian smearing with a smearing width of 50 meV are used. We consider the van der Waals corrections within the DFT-D3 method of Grimme with the zero damping function [51]. The spin-orbit coupling (SOC) is added self-consistently to include the relativistic effects [44]. The optimization of the lattice geometry is performed until the forces on each atom are less than \(10^{-2}\) eV\(\AA^{-1}\). The tolerance for electronic energy minimization is \(10^{-6}\) eV, and a \(\Gamma\)-centred \(11\times 11\times 9\) \(k\)-mesh is used for BZ sampling. The topological properties are calculated using a materials-specific tight-binding model generated via the vasp2wannier interface. The atom-centred Wannier functions are generated using the Zn-\(s\) and Bi-\(p\) orbitals [52]. We calculate the surface states using the iterative Green's function method with the WannierTools package [53; 54].
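For readers reproducing this workflow, the settings above can be collected as a small dictionary of standard VASP INCAR tags. This is a sketch based on the values quoted in the text and on stock VASP conventions, not the authors' input files; in particular, the \(c_{mBJ}\) value shown is purely illustrative, since the tuned value is discussed later in the text.

```python
# Sketch of the calculation settings described above, as standard VASP
# INCAR tags. Values come from the text; tag names assume stock VASP
# conventions, and CMBJ is an illustrative placeholder, not the tuned value.
incar_common = {
    "ENCUT": 420,      # plane-wave kinetic energy cutoff (eV)
    "ISMEAR": 0,       # Gaussian smearing
    "SIGMA": 0.05,     # smearing width (eV)
    "LSORBIT": True,   # self-consistent spin-orbit coupling
    "IVDW": 11,        # Grimme DFT-D3 with zero damping
    "EDIFF": 1e-6,     # electronic convergence (eV)
    "EDIFFG": -0.01,   # relax until forces < 1e-2 eV/Angstrom
}
incar_scan = {"METAGGA": "SCAN"}                    # SCAN(+vdW) relaxations
incar_mbj = {"METAGGA": "MBJ", "CMBJ": 1.3}         # c_mBJ placeholder value
incar_hse = {"LHFCALC": True, "HFSCREEN": 0.2, "AEXX": 0.42}  # 42% HF exchange
# BZ sampling: Gamma-centred 11 x 11 x 9 k-mesh (KPOINTS file).
```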
## III Results

### Crystal structure and symmetries

KZnBi belongs to the non-symmorphic space group P\(6_{3}/mmc\) (No. 194; \(D_{6h}^{4}\)) and consists of ZnBi planar honeycomb layers with K atoms lying between them [34]. The \(\{-A-B-\}_{n}\) stacking of the ZnBi honeycomb layers, with weak interlayer coupling mediated by K atoms in the \(\hat{z}\) direction, forms the KZnBi crystal structure (see Fig. 1(a)). The K, Zn, and Bi atoms occupy Wyckoff positions \(2a\) (\(0,0,0\)), \(2d\) (\(\frac{1}{3},\frac{2}{3},\frac{3}{4}\)), and \(2c\) (\(\frac{1}{3},\frac{2}{3},\frac{1}{4}\)), respectively, in the lattice. Upon considering the valence electron configurations of the K, Zn, and Bi atoms, we find that the K atom forms a monovalent cationic state (K\({}^{+}\)) with one electron transferred to the honeycomb lattice ([ZnBi]\({}^{-}\)). Such a charge transfer satisfies the octet rule of electron filling and suggests that KZnBi may realize a small-gap insulator or band-overlap semimetal [55]. The crystal symmetries of KZnBi include an inversion \(\hat{\mathcal{I}}\) centred at the K atom, a six-fold screw rotation \(\tilde{C}_{6z}\): \((x,y,z)\rightarrow(x\ cos(\theta)-y\ sin(\theta),x\ sin(\theta)+ycos(\theta),z+c/2)\) with \(\theta=2\pi/6\), a glide reflection \(\tilde{M}_{x}\): \((x,y,z)\rightarrow(-x,y,z+c/2)\), \(M_{y}\): \((x,y,z)\rightarrow(x,-y,z)\), and \(\tilde{M}_{z}\): \((x,y,z)\rightarrow(x,y,-z+1/2)\). Note that here \(\tilde{M}_{x}\) and \(\tilde{C}_{6z}\) are non-symmorphic symmetries that involve a lattice translation which cannot be removed by shifting the origin of the unit cell. The top view of the crystal structure is shown in Fig. 1(b) with a transformed orthorhombic unit cell. The (100) and (010) planes (surface normals are given in the Cartesian coordinate system) represent the truncated ZnBi honeycomb lattice in the zigzag and armchair directions. The bulk BZ and the associated (010) and (100) surface BZs of KZnBi are shown in Fig. 1(c). The grey plane centred at \(k_{z}=0\) shows the \(\tilde{M}_{z}\) mirror plane. A detailed discussion of surface symmetries and their associated electronic features is given below. We present the optimized lattice parameters of KZnBi obtained with different XC density functionals in Table 1. The parameters obtained with the LDA are smaller than the corresponding experimental values. The calculated bond lengths of the planar Zn-Bi bond (\(d_{Zn-Bi}\)) and the out-of-plane K-[ZnBi] bond (\(d_{K-ZnBi}\)) are smaller than their associated experimental values, as expected, since the LDA is known to overestimate bond strengths in many cases. The lattice parameters obtained with GGA are overestimated by almost the same amount as the LDA underestimated them. Since KZnBi has a layered structure, the van der Waals (vdW) interaction plays a significant role in its structural parameters. The mismatch in lattice parameters with the experimental values decreases when vdW [51] interactions are included during geometry optimization with GGA. Among the meta-GGAs, the SCAN and R2SCAN density functionals provide lattice parameters close to their experimental values. Importantly, SCAN+vdW provides an excellent match with the corresponding experimental parameters. We thus describe the electronic properties of KZnBi using the SCAN+vdW optimized parameters in our later discussion.
### Topological non-symmorphic crystalline insulator state The bulk band structure of KZnBi using the fully relaxed structural parameters is shown in Fig. 1(d). It realizes an insulating state with an indirect bulk gap of 445 meV. The bands near the Fermi level consist of strongly hybridized Bi-\(p\) and Zn-\(s\) states with a band inversion at the \(\Gamma\) and \(A\) points. The presence of both time-reversal and inversion symmetries in KZnBi enforces a two-fold Kramers degeneracy in all the bands. The non-symmorphic symmetries enforce additional degeneracies in the valence and conduction bands at the BZ boundaries. Such band degeneracies are evident along the \(L-A\) symmetry line in Fig. 1(d). To discuss the topological state, we present the irreducible representations of the space group characterizing the band symmetries along the \(\Gamma-A\) line in Fig. 2(a). The bands at the \(\Gamma\) and \(A\) points are two- and four-fold degenerate, respectively. At the \(A\) point, there are two types of representations, \(A_{6}\) and \(A_{4}+A_{5}\) (\(A_{4/5}\), in short). As one moves away from \(A\) towards \(\Gamma\), at any intermediate point \(\Delta\) (0, 0, \(\delta k_{z}\)) the four-fold bands split into two two-fold bands, \(A_{6}\rightarrow\Delta_{8}+\Delta_{9}\) and \(A_{4}+A_{5}\rightarrow\Delta_{7}+\Delta_{7}\). At the \(\Gamma\) point, the three \(\Delta\)-point representations evolve to \(\Gamma_{i}^{\pm}\) (\(i=7-12\)), where \(\pm\) represents the parity eigenvalue of the states. Importantly, the \(A_{6}\) and \(\Gamma_{11,12}^{-}\) bands are inverted and lie in the valence region without any band crossings along the \(\Gamma-A\) direction. \begin{table} \begin{tabular}{l c c c c} & \(a\) (Å) & \(c\) (Å) & \(d_{Zn-Bi}\) (Å) & \(d_{K-ZnBi}\) (Å) \\ \hline LDA & 4.591 & 10.262 & 2.650 & 2.565 \\ GGA & 4.743 & 10.913 & 2.738 & 2.728 \\ GGA+vdW & 4.711 & 10.692 & 2.720 & 2.673 \\ SCAN & 4.652 & 10.706 & 2.685 & 2.676 \\ R2SCAN & 4.678 & 10.725 & 2.700 & 2.681 \\ SCAN+vdW & 4.636 & 10.601 & 2.676 & 2.650 \\ \hline Exp. [34] & 4.676 & 10.597 & 2.699 & 2.649 \\ \end{tabular} \end{table} Table 1: Calculated lattice parameters for KZnBi in the P6\({}_{3}/mmc\) (No. 194; \(D_{6h}^{4}\)) space group using different XC density functionals. \(a\) and \(c\) are the hexagonal in-plane and out-of-plane lattice constants. \(d_{Zn-Bi}\) and \(d_{K-ZnBi}\) are the bond lengths of the in-plane Zn-Bi and out-of-plane K-[ZnBi] bonds. Figure 1: **Crystal lattice, symmetries, and band structure of KZnBi.** (a) Bulk crystal structure of KZnBi with the space group P6\({}_{3}/mmc\) (left). The ZnBi honeycomb layers mediated by K layers are stacked along the hexagonal \(\hat{z}\) axis. The minimal unit cell arrangement and symmetries are shown in the right panel. The maroon dotted plane and solid grey line denote the \(\tilde{M}_{x}\) and \(\tilde{M}_{z}\) mirror planes. The dark yellow line shows the three-fold rotational axis. (b) Top view of the crystal structure with the truncated zigzag and armchair directions. (c) Bulk Brillouin zone (BZ) and associated (100) (blue color) and (001) (green color) plane-projected surface BZs of KZnBi. The high-symmetry points are marked. The grey plane inside the bulk BZ denotes the \(\tilde{M}_{z}\) mirror plane. (d) Calculated band structure of KZnBi with the SCAN+vdW density functional using fully optimized structural parameters. Red and sky-blue colors indicate the contributions from the Bi-\(p\) and Zn-\(s\) states.
The parity eigenvalue analysis gives \(\mathbb{Z}_{2}=0\), revealing a band-inverted, \(\mathbb{Z}_{2}\)-trivial insulator state of KZnBi. To characterize the exact topological state of KZnBi, we present the Wannier charge center (WCC) evolution of the occupied states along the \(X-U-Z-\Gamma-X\) direction in Fig. 2(b). The nontrivial connection in the WCC spectrum is clearly seen. Note that the glide reflection \(\tilde{M}_{x}\) combined with the time-reversal \(\hat{\mathcal{T}}\) symmetry defines a \(\mathbb{Z}_{4}\) invariant (\(\chi\)), while the mirror operator \(\tilde{M}_{z}\) ensures a well-defined mirror Chern number (MCN) \(C_{m}\) on the \(k_{z}=0\) and \(\pi/c\) planes. The relation between the WCC spectrum and the topological invariants is given as [33; 56] \[\chi=2(n_{XU}^{+}+n_{Z\Gamma}^{+})+n_{\Gamma X}\ mod\ 4 \tag{4}\] and \[C_{m}=(n_{-X\Gamma X}^{+i}-n_{-X\Gamma X}^{-i})/2 \tag{5}\] The numbers \(n\) in Eqs. 4 and 5 can be evaluated from the WCC spectrum as follows. If we choose the \(\tilde{M}_{x}\) glide subspace with eigenvalue \(+ie^{-ik_{z}c/2}\) along the \(X\to U\) and \(Z\rightarrow\Gamma\) directions, \(n_{XU}^{+}\) and \(n_{Z\Gamma}^{+}\) are defined as the difference between the number of times an arbitrary horizontal reference line crosses the bands with positive and negative slopes. \(n_{\Gamma X}\) is the difference between the number of times the reference line crosses positive- and negative-slope bands in the \(\Gamma\to X\) direction. Similarly, in the \(\tilde{M}_{z}\) mirror subspace along the \(-X\rightarrow\Gamma\to X\) direction, \(n_{-X\Gamma X}^{+i}\) and \(n_{-X\Gamma X}^{-i}\) give the difference between the number of times this line crosses the positive- and negative-slope bands. Since the system is time-reversal symmetric, \(n_{-X\Gamma X}^{+i}=-n_{-X\Gamma X}^{-i}\). Note that while taking this difference, one should always count the number of positive-slope crossings minus the number of negative-slope crossings. Following this convention, the values deduced from the WCC spectrum are \(n_{XU}^{+}=0\), \(n_{Z\Gamma}^{+}=0\), \(n_{\Gamma X}=2\) and \(n_{-X\Gamma X}^{+i}=-2\). The obtained topological invariants are \(\chi=2\) and \(C_{m}=-2\). This shows that KZnBi is a topological non-symmorphic crystalline insulator similar to KHgX [23; 24; 25; 33]. The robustness of this topological state is further verified by calculating the electronic structure and WCC spectrum using different XC density functionals (LDA, GGA, R2SCAN), including the hybrid density functional with 25% HF exchange (results not shown for brevity). The nontrivial surface states in topological crystalline insulators (non-symmorphic and symmorphic) appear only on the crystal surfaces that respect the crystalline symmetries protecting the topological state. To showcase these nontrivial states, we calculate the surface band structures of the (010) and (100) surfaces of KZnBi (Figs. 2(c) and (e)). On the (010) surface, the glide-mirror \(\tilde{M}_{x}\) and mirror \(\tilde{M}_{z}\) are preserved. The time-reversal \(\hat{\mathcal{T}}\) and \(\tilde{M}_{x}\) symmetries commute, i.e. \([\hat{\mathcal{T}},\tilde{M}_{x}]=0\), so that \((\hat{\mathcal{T}}\tilde{M}_{x})^{2}=\hat{\mathcal{T}}^{2}\tilde{M}_{x}^{2}=t(c\hat{z})=e^{-ik_{z}c}\). At \(k_{z}=\pi/c\), \((\hat{\mathcal{T}}\tilde{M}_{x})^{2}=-1\) and thus a Kramers-like double degeneracy is enforced at every wave vector (**k**) on the \(\overline{Z}-\overline{U}\) line, as seen in Fig. 2(c).
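Both the invariant arithmetic and the eigenvalue bookkeeping above are simple enough to verify numerically. A minimal sketch using only the winding numbers and symmetry relations quoted in the text:

```python
# Eqs. (4)-(5): topological invariants from the WCC winding numbers, plus the
# k_z dependence of the glide eigenvalues and of (T M_x)^2 = e^{-i k_z c}.
import numpy as np

# Winding numbers read off the WCC spectrum of Fig. 2(b)
n_XU, n_ZG, n_GX = 0, 0, 2
n_plus = -2
n_minus = -n_plus                  # enforced by time-reversal symmetry

chi = (2 * (n_XU + n_ZG) + n_GX) % 4
C_m = (n_plus - n_minus) // 2
print(chi, C_m)                    # -> 2 -2

c = 10.601                         # out-of-plane lattice constant (Angstrom)
for kz in (0.0, np.pi / c):
    glide = 1j * np.exp(-1j * kz * c / 2)   # +i at kz = 0, +1 at kz = pi/c
    tmx_sq = np.exp(-1j * kz * c)           # +1 at kz = 0, -1 at kz = pi/c
    print(np.round(glide, 6), np.round(tmx_sq, 6))
```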
We observe Möbius-twisted bands along the \(\tilde{M}_{x}\)-invariant \(\overline{X}-\overline{U}\) and \(\overline{Z}-\overline{\Gamma}\) paths. The eigenvalue of the \(\tilde{M}_{x}\) operator at any wave vector along these lines is \(\pm ie^{-ik_{z}c/2}\). When \(k_{z}=0\), the eigenvalues are \(\pm i\), and when \(k_{z}=\pi/c\), the eigenvalues are \(\pm 1\). This eigenvalue switching forces the bands to come in groups of four, giving rise to hourglass surface fermions in KZnBi. The hourglass band dispersion is clearly revealed in the zoomed-in view of the bands along \(\overline{Z}-\overline{\Gamma}\) in Fig. 2(c). The evolution of the hourglass surface states in the 2D surface BZ is shown in Fig. 2(d). Figure 2: **Topological non-symmorphic crystalline insulator state of KZnBi.** (a) Closeup of the bulk bands along the \(\Gamma-A\) line with irreducible representations. The Bi-\(p\) and Zn-\(s\) orbital characters are given in red and sky-blue, respectively. (b) Calculated Wannier charge center (WCC) spectrum along various high-symmetry lines. The dashed red line identifies an arbitrary reference line used to calculate the topological invariants (see text for details). Closeups of the spectrum in the light red, yellow, and green boxes are given in the insets. (c) Band structure of the (010) surface of KZnBi. The inset shows a zoomed-in view of the surface hourglass fermion. (d) Constant-energy surface band contours at various energies. (e) Band structure of the (100) surface of KZnBi. (f) Surface band contour at 100 meV (yellow line \(e\)) with the associated spin texture. The four counter-propagating spin-polarized states represent the double quantum spin-Hall states. (g) Evolution of the WCCs on the \(k_{z}=0\) plane with nonzero mirror Chern number \(C_{m}=-2\). To resolve the \(\tilde{M}_{z}\) mirror-symmetry-protected surface states, we present the calculated band structure of the (100) surface in Fig. 2(e). On the \(k_{z}=0\) plane (grey plane in Fig. 1(c)), the bulk bands split into the \(+i\) and \(-i\) subspaces of \(\tilde{M}_{z}\). By decomposing the Hamiltonian \(H(k)=H^{+i}(k)+H^{-i}(k)\), where \(H^{\pm i}(k)\) is spanned by the mirror eigenstates \(\left|\psi_{k,n}^{\pm i}\right\rangle\) satisfying \(\tilde{M}_{z}\left|\psi_{k,n}^{\pm i}\right\rangle=\pm i\left|\psi_{k,n}^{\pm i}\right\rangle\), and calculating the associated Chern numbers, we obtain the mirror Chern number \(C_{m}=-2\) (Fig. 2(g)). Due to this nontrivial \(C_{m}\) on the \(k_{z}=0\) plane, topologically nontrivial Dirac cone states are seen. However, these bands are almost non-dispersive along the \(k_{z}\) direction (\(\overline{\Gamma}-\overline{Z}\) line), resulting in a highly anisotropic band dispersion with nearly flat bands along \(\overline{\Gamma}-\overline{Z}\). We show the constant energy contours at 100 meV above the Fermi level on the \(k_{y}-k_{z}\) plane in Fig. 2(f). The two pairs of spin-polarized, counter-propagating surface states that are well separated in momentum space are clearly resolved. These states reflect the double quantum spin-Hall insulator state of KZnBi [23]. ### Electronic state tuning and topological Dirac semimetal state The preceding analysis demonstrates that KZnBi realizes a topological non-symmorphic crystalline insulator state for all the considered XC functionals in their default settings. This is in disagreement with the recent ARPES experiments, in which a topological Dirac semimetal state is reported [34].
To resolve this issue, we now present the bulk bands obtained by changing the parameter \(c_{mBJ}\) of the mBJ potential in Fig. 3. This parameter mixes the average HF and screening potentials and can be tuned manually to obtain the correct band ordering [50; 57]. Figure 3(a) presents the evolution of the bulk bands with \(c_{mBJ}\) along the \(A-\Gamma-A\) line. In the atomic limit of KZnBi, the space group representations of the valence bands are \(\Delta_{7}\) and \(\Delta_{7}\), whereas those of the conduction bands are \(\Delta_{8}\) and \(\Delta_{9}\), at an intermediate point \(\Delta\) on this line. The associated \(\tilde{C}_{6z}\) rotational eigenvalues are \[\begin{split}\Delta_{7}&=\{e^{i\pi/2},e^{-i\pi/2}\} \\ \Delta_{8}&=\{e^{i5\pi/6},e^{-i5\pi/6}\}\\ \Delta_{9}&=\{e^{i\pi/6},e^{-i\pi/6}\}\end{split} \tag{6}\] Here, we omit the phase factor \(e^{-ik_{z}c/2}\) arising from the lattice translation along the \(c\) direction. Due to the inversion and time-reversal symmetries, these eigenvalues are degenerate in pairs. Bands belonging to different \(\Delta_{i}\) representations will not hybridize, and their crossings can therefore be symmetry protected. At a \(c_{mBJ}\) value of 1.15, the \(\Delta_{8}\) and \(\Delta_{9}\) bands lie in the valence region, developing double band inversions at both the \(\Gamma\) and \(A\) points. With an increase of \(c_{mBJ}\) to 1.20, the \(\Delta_{7}\) and \(\Delta_{8}\) bands cross at the Fermi level to form stable band crossings. With a further increase of \(c_{mBJ}\) to 1.26, the \(\Delta_{7}\) band crosses the \(\Delta_{9}\) band, generating stable band crossings at the Fermi level. When \(c_{mBJ}\) is increased to 1.35, both \(\Delta_{7}\) bands move to the valence region and a trivial insulator state is generated. To clarify the distinct orderings of the crossing bands, we define the energy gaps \(\delta_{i}\) (\(i=1-5\)) at the \(\Gamma\) and \(A\) points as \[\begin{split}\delta_{1}&=\Gamma_{9}^{+}-\Gamma_{10} ^{-}\\ \delta_{2}&=\Gamma_{11}^{-}-\Gamma_{7}^{+}\\ \delta_{3}&=\Gamma_{9}^{+}-\Gamma_{7}^{+}\\ \delta_{4}&=\Gamma_{11}^{-}-\Gamma_{10}^{-}\\ \delta_{5}&=A_{6}-A_{4/5}\end{split} \tag{7}\] The evolution of these \(\delta_{i}\)'s as a function of \(c_{mBJ}\) is shown in Fig. 3(b). All \(\delta_{i}\)'s are negative at lower \(c_{mBJ}\) values (\(c_{mBJ}<1.17\)), and whenever a \(\delta_{i}\) changes sign, a band inversion is lifted. The critical values of \(c_{mBJ}\) at the phase transition points are marked by arrows in Fig. 3(b). At lower \(c_{mBJ}\) values, a topological non-symmorphic crystalline insulator with \(\chi=2\) and \(C_{m}=-2\) is formed. At intermediate values of \(c_{mBJ}\), a topological Dirac semimetal state with \(C_{m}=-1\) is realized. However, the ordering of the crossing bands that form the Dirac states changes with \(c_{mBJ}\). At larger values, \(c_{mBJ}>1.32\), a trivial insulator is realized. On comparing with the experimental results, we find that the topological Dirac semimetal phase realized for \(1.26<c_{mBJ}<1.32\) best reflects the experimental situation.
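A quick way to see why these crossings are protected is to check that the eigenvalue pairs of Eq. 6 are mutually disjoint; a small sketch:

```python
# Screw-rotation eigenvalue pairs of Eq. (6): bands carrying disjoint
# eigenvalue sets cannot hybridize, so their crossings are protected.
import numpy as np

eigs = {
    'Delta_7': {np.round(np.exp(s * 1j * np.pi / 2), 6) for s in (1, -1)},
    'Delta_8': {np.round(np.exp(s * 1j * 5 * np.pi / 6), 6) for s in (1, -1)},
    'Delta_9': {np.round(np.exp(s * 1j * np.pi / 6), 6) for s in (1, -1)},
}
for a in eigs:
    for b in eigs:
        if a < b:
            print(a, b, 'protected crossing:', eigs[a].isdisjoint(eigs[b]))
```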
Figure 3: **Evolution of band crossings and topological phase tuning in KZnBi.** (a) Calculated band structure along the \(A-\Gamma-A\) line for various values of \(c_{mBJ}\). The space group representations are noted on the bands. (b) Band gaps at the \(\Gamma\) (\(\delta_{i,\,\,i=1-4}\)) and \(A\) points as a function of \(c_{mBJ}\) (see text for details). The horizontal grey line at \(\delta=0\) marks the reference line separating the normal and inverted states. The solid markers represent the calculated values, whereas the solid lines are guides to the eye. TDSM1, TDSM2, and TDSM3 represent topological Dirac semimetal phases with different orderings of the crossing bands. TNCI represents the topological non-symmorphic crystalline insulator. (c) Band structure obtained with \(c_{mBJ}=1.265\) and the hybrid density functional with \(\omega=0.2\) Å\({}^{-1}\) and \(\alpha=42\%\) (see Eq. 2). A closeup of the bulk bands along \(\Gamma-A\) is shown in the right panel. Such a Dirac semimetal phase and band orderings are reproduced by increasing the default HF exchange to 42% with \(\omega=0.2\) Å\({}^{-1}\) in the hybrid density functional, as shown in Fig. 3(c). These results clearly reveal that enhanced electron correlations are essential to restore the correct orbital ordering and the topological Dirac semimetal state of KZnBi. We discuss the topological properties of the Dirac semimetal state in Fig. 4 and compare them with the experimental results. The two symmetry-protected, four-fold degenerate Dirac nodes lie on the \(A-\Gamma-A\) rotational axis at \((0,0,\pm 0.143)\) Å\({}^{-1}\) (Fig. 4(a)). The energy dispersion around the Dirac nodes is highly anisotropic between the in-plane and out-of-plane directions, as resolved in Fig. 4(b). Since the energy dispersion on the \(k_{z}=0\) plane is gapped, we can define the \(\mathbb{Z}_{2}\) number similarly to insulators. The calculated WCC spectrum reveals a nontrivial \(\mathbb{Z}_{2}=1\) on this plane. This demonstrates that the Dirac semimetal state of KZnBi is topologically protected, similar to the Na\({}_{3}\)Bi and Cd\({}_{3}\)As\({}_{2}\) Dirac semimetals [14; 15]. Figures 4(d) and 4(e) show the surface band structures of the (001) and (010) surfaces of KZnBi. The two Dirac nodes project onto the \(\overline{\Gamma}\) point on the (001) plane. On the (010) surface, the Dirac nodes project onto the \(\Gamma-Z\) line. The topological surface states emanating from a projected Dirac node are seen in Fig. 4(e). The associated constant-energy spectrum at various energies in Fig. 4(f) reveals the double Fermi arc states that emanate from and terminate at the projected Dirac nodes at the Fermi level. The Fermi arc states change topology as one moves away from the Fermi level. We present the local energy dispersion of the Dirac bands along the \(k_{x}\), \(k_{y}\), and \(k_{z}\) directions in Figs. 5(a)-(c) and the associated Fermi velocities \(v_{f}=\partial E/\partial k\) in Figs. 5(d)-(f). The Fermi velocities of the bands at the Dirac nodes are \(9.09\times 10^{5}\) \(m/s\), \(8.78\times 10^{5}\) \(m/s\), and \(1.31\times 10^{5}\) \(m/s\) along the \(k_{x}\), \(k_{y}\) and \(k_{z}\) directions, respectively. The energy dispersion and Fermi velocity reveal the anisotropic nature of the Dirac carriers. The calculated location, nature, and carrier velocities of the Dirac states are in excellent agreement with the corresponding experimental results [34].
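For orientation on the units, the conversion from a band slope to a Fermi velocity, \(v_{f}=(1/\hbar)\,\partial E/\partial k\), is a one-liner. The slope below is an illustrative assumption chosen to land near the quoted in-plane value, not a number taken from the paper:

```python
# Converting a band slope into a Fermi velocity, v_F = (1/hbar) dE/dk.
hbar = 1.054571817e-34           # J s
eV, angstrom = 1.602176634e-19, 1e-10

slope = 5.98                     # hypothetical dE/dk in eV*Angstrom
v_f = slope * eV * angstrom / hbar
print(f"{v_f:.2e} m/s")          # ~9.1e5 m/s, cf. the quoted 9.09e5 m/s along k_x
```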
## IV Conclusion We have investigated the electronic structure of KZnBi within the framework of density functional theory, with a focus on delineating its exact topological state. Our structural optimization based on various levels of XC density functionals shows that the lattice parameters obtained with SCAN+vdW are in excellent agreement with the experimental parameters. The associated electronic state is an insulating state with inverted band ordering in the bulk BZ. The band symmetries and WCC spectrum show that the system realizes a topological non-symmorphic crystalline insulator with topological invariants \(\chi=2\) and \(C_{m}=-2\). This calculated electronic and topological state is found to be distinct from the recent experiments, which report KZnBi as a topological Dirac semimetal. We address this discrepancy by showing that enhanced XC effects are essential to reproduce the topological Dirac semimetal state with the correct orbital ordering in KZnBi. Specifically, a higher value of the \(c_{mBJ}\) parameter, which mixes the HF exchange and screening potentials, reproduces the correct orbital ordering and the Dirac semimetal state. Our calculated Dirac nodes, their location, and the associated carrier velocities are found to be in excellent agreement with the available experimental results. Our study thus demonstrates that enhanced XC effects are essential to reproduce the correct orbital ordering and topological Dirac semimetal state of KZnBi. Figure 4: **Topological Dirac semimetal state of KZnBi.** (a) Location of the four-fold Dirac nodes in the hexagonal Brillouin zone on the rotational axis (red sphere). (b) Energy dispersion of the Dirac cones along the out-of-plane (left) and in-plane (right) directions. (c) Evolution of Wannier charge centers (WCCs) on the \(k_{z}=0\) plane along the \(\Gamma-X\) line. The WCCs cross the red reference line once, revealing a nontrivial \(\mathbb{Z}_{2}=1\). (d)-(e) Calculated band structures of the (001) and (010) surfaces of KZnBi. Topological surface states emanating from the projected Dirac nodes are resolved. (f) Evolution of surface band contours as a function of energy. Double Fermi arc surface states connecting the projected bulk Dirac nodes are resolved at the Fermi energy. Figure 5: **Energy dispersion and Fermi velocity of the Dirac states.** Calculated energy dispersion of the Dirac bands along the (a) \(k_{x}\), (b) \(k_{y}\) and (c) \(k_{z}\) directions. The center of the axis is located at a Dirac node. Calculated carrier velocity along the (d) \(k_{x}\), (e) \(k_{y}\) and (f) \(k_{z}\) directions. ## Acknowledgements This work is supported by the Department of Atomic Energy of the Government of India under Project No. 12-R&D-TFR-5.10-0100 and benefited from the computational resources of TIFR Mumbai.
2309.13217
Machine Learning-Aided First-Principles Calculations of Redox Potentials
Redox potentials of electron transfer reactions are of fundamental importance for the performance and description of electrochemical devices. Despite decades of research, accurate computational predictions for the redox potential of even simple metals remain very challenging. Here we use a combination of first principles calculations and machine learning to predict the redox potentials of three redox couples, $\mathrm{Fe}^{2+}$/$\mathrm{Fe}^{3+}$, $\mathrm{Cu}^{+}$/$\mathrm{Cu}^{2+}$ and $\mathrm{Ag}^{+}$/$\mathrm{Ag}^{2+}$. Using a hybrid functional with a fraction of 25\% exact exchange (PBE0), the predicted values are 0.92, 0.26 and 1.99 V, in good agreement with the best experimental estimates (0.77, 0.15, 1.98 V). We explain in detail how we combine machine learning, thermodynamic integration from machine learning to semi-local functionals, as well as a combination of thermodynamic perturbation theory and $\Delta$-machine learning, to determine the redox potentials for computationally expensive hybrid functionals. The combination of these approaches allows one to obtain statistically accurate results.
Ryosuke Jinnouchi, Ferenc Karsai, Georg Kresse
2023-09-22T23:45:16Z
http://arxiv.org/abs/2309.13217v2
# Machine Learning-Aided First-Principles Calculations of Redox Potentials ###### Abstract Redox potentials of electron transfer reactions are of fundamental importance for the performance and description of electrochemical devices. Despite decades of research, accurate computational predictions for the redox potential of even simple metals remain very challenging. Here we use a combination of first-principles calculations and machine learning to predict the redox potentials of three redox couples, Fe\({}^{2+}\)/Fe\({}^{3+}\), Cu\({}^{+}\)/Cu\({}^{2+}\) and Ag\({}^{+}\)/Ag\({}^{2+}\). Using a hybrid functional with a fraction of 25% exact exchange (PBE0), the predicted values are 0.92, 0.26 and 1.99 V, in good agreement with the best experimental estimates (0.77, 0.15, 1.98 V). We explain in detail how we combine machine learning, thermodynamic integration from machine learning to semi-local functionals, as well as a combination of thermodynamic perturbation theory and \(\Delta\)-machine learning, to determine the redox potentials for computationally expensive hybrid functionals. The combination of these approaches allows one to obtain statistically accurate results. + Footnote †: Also at VASP Software GmbH. + Footnote †: Also at VASP Software GmbH. ## I Introduction Green energy and a circular economy are key paradigms that our society needs to realize in the next few decades. This implies that we need to give up on the combustion of fossil fuels. A key element in achieving this paradigm shift is the use of electrochemistry, be it for batteries and fuel cells, to convert electrical energy to hydrogen or other valuable chemicals, or to convert hydrogen back to energy without direct combustion in air. The redox potential of electron transfer (ET), Ox + \(n\)e\({}^{-}\)\(\rightarrow\) Red, in liquids is an essential property for a variety of electrochemical energy conversion devices, such as batteries, fuel cells, and electrochemical fuel synthesis. It determines the alignment of redox levels relative to the Fermi level of a metal, or to the valence band maximum (VBM) and conduction band minimum (CBM) of semiconductor and insulator electrodes. It also determines the potential windows of ions and molecules in solutions, that is, the range of voltages within which a specific ion or molecule can undergo electrochemical reactions. This information is vital for designing redox species and solvent molecules, such as redox couples for redox-flow batteries [1], solvents and additives for Li-ion batteries [2; 3; 4], radical scavengers for fuel cells [5] and electrocatalysts for fuel synthesis [6; 7]. Unfortunately, to date, accurate first-principles (FP) predictions of this key property are difficult to come by, with typical prediction errors of 0.5 V. For instance, Sprik and co-workers [8; 9] found a large spread of values for two metal ion couples, with values between \(-\)1.13 and \(-\)0.20 V for Cu\({}^{+}\)/Cu\({}^{2+}\) (exp. 0.16 V) and 0.90 and 1.72 V for Ag\({}^{+}\)/Ag\({}^{2+}\) (exp. 1.98 V), the variations being related to differences in the density functional and pseudopotential, but also in the code base (CPMD versus CP2K). Although the "best" values using hybrid functionals and highly accurate pseudopotentials are reasonably close to experiment (\(-\)0.20 V for Cu, and 1.72 V for Ag), the agreement is still far from quantitative. The main goals of the present work are three-fold: First, we want to accurately calculate the redox potential of metal ions in water for three prototypical cases: Ag, Cu, and Fe. Ag\({}^{2+}\) ions are among the most aggressive oxidants with a large redox potential, whereas the redox potential of Cu\({}^{2+}\) ions is fairly shallow, and the Fe\({}^{2+}\)/Fe\({}^{3+}\) reaction lies in between. The first two redox reactions involve large changes in the ion-water coordination, which makes the calculation challenging, whereas the redox reaction of Fe is a so-called simple outer-sphere ET reaction and has been the subject of numerous experimental and theoretical studies [13]. The Fe ions are conceived to be particularly challenging for density functional theory. Second, we want to establish a computationally feasible \begin{table} \begin{tabular}{l l l l l} \hline \hline XC functional & Fe & Cu & Ag & RMSE \\ \hline RPBE+D3 & 0.80 & 0.66 & 1.88 & 0.29 \\ PBE0 (0.25) & **0.92** & **0.26** & **1.99** & 0.11 \\ PBE0 (0.50) & 0.79 & \(-\)0.34 & 2.12 & 0.30 \\ PBE0+D3 (0.25) & **0.94** & **0.24** & **2.02** & 0.11 \\ PBE0+D3 (0.50) & 0.83 & \(-\)0.38 & 2.13 & 0.32 \\ Exp.\({}^{[\mathrm{a}]}\) & 0.77 & 0.15 & 1.98 & \\ \hline \end{tabular} \end{table} Table 1: Redox potentials \(U_{\mathrm{redox}}\) of three transition metal cations calculated by RPBE+D3, PBE0 (0.25), PBE0 (0.50), PBE0+D3 (0.25) and PBE0+D3 (0.50) using the MLFF and \(\Delta\)-ML. Here, the values in parentheses give the fraction of exact exchange. Results for the systems with 64 water molecules are tabulated. The absolute potential of the SHE is set to 4.44 V [10]. The root mean square errors (RMSE) with respect to the experimental redox potentials are also shown.
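As a quick sanity check on Table 1, the RMSE column follows directly from the three deviations against the experimental row; a minimal sketch:

```python
# Recomputing the RMSE column of Table 1 from the tabulated potentials.
import numpy as np

exp = np.array([0.77, 0.15, 1.98])                 # Fe, Cu, Ag (V)
calc = {'RPBE+D3':     [0.80,  0.66, 1.88],
        'PBE0 (0.25)': [0.92,  0.26, 1.99],
        'PBE0 (0.50)': [0.79, -0.34, 2.12]}
for name, u in calc.items():
    rmse = np.sqrt(np.mean((np.array(u) - exp) ** 2))
    # Matches Table 1 to within rounding of the tabulated potentials.
    print(f"{name}: RMSE = {rmse:.2f} V")
```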
The main goals of the present work are three-fold: First, we want to accurately calculate the redox potential of metal ions in water for three prototypical cases: Ag, Cu, and Fe. Ag\({}^{2+}\) ions are among the most aggressive oxidants with a large redox potential, whereas the redox potential of Cu\({}^{2+}\) ions is fairly shallow, and the Fe\({}^{2+}\)/Fe\({}^{3+}\) reaction lies in between. The first two redox reactions involve large changes in the ion water coordination, which makes the calculation challenging, whereas the redox reaction of Fe is a so called simple outer sphere ET reaction and has been the subject of numerous experimental and theoretical studies [13]. The Fe ions are conceived to be particularly challenging for density functional theory. Second, we want to establish a computationally feasi \begin{table} \begin{tabular}{l l l l l} \hline \hline XC functional & Fe & Cu & Ag & RMSE \\ \hline RPBE+D3 & 0.80 & 0.66 & 1.88 & 0.29 \\ PBE0 (0.25) & **0.92** & **0.26** & **1.99** & 0.11 \\ PBE0 (0.50) & 0.79 & \(-\)0.34 & 2.12 & 0.30 \\ PBE0+D3 (0.25) & **0.94** & **0.24** & **2.02** & 0.11 \\ PBE0+D3 (0.50) & 0.83 & \(-\)0.38 & 2.13 & 0.32 \\ Exp.\({}^{[\mathrm{a}]}\) & 0.77 & 0.15 & 1.98 & \\ \hline \end{tabular} \end{table} Table 1: Redox potentials \(U_{\mathrm{ redox}}\) of three transition metal cations calculated by RPBE+D3, PBE0 (0.25), PBE0 (0.50), PBE0+D3 (0.25) and PBE0+D3 (0.50) using MLFF and \(\Delta\)-ML. Here, values in the parenthesis are the fraction of the exact exchange. The results for 64 water molecular systems are tabulated. The absolute potential of SHE is set to 4.44 V [10]. The root means square errors (RMSE) compared to the experimental redox potentials are also shown. ble pathway that yields statistically accurate results. Last but not least, we want to systematically explore different density functionals to set a guideline for future studies. ## II Overview of modelling We start with an overview of the used theoretical modelling. The Nernst equation implies that the redox potential \(U_{\text{redox}}\) is determined by the free energy difference \(\Delta F\) between the reduced and oxidized states as \[U_{\text{redox}}=-\frac{\Delta F}{en}, \tag{1}\] where \(e\) is the elementary charge and \(n\) is the number of electrons involved in the reaction. The free energy difference \(\Delta F\) can be exactly determined by thermodynamic integration (TI)[14; 15] \[\Delta F=\int_{0}^{1}\left\langle\frac{\partial H}{\partial\lambda}\right\rangle _{\lambda}d\lambda. \tag{2}\] Here, \(\left\langle A\right\rangle_{\lambda}\) denotes the expectation value of \(A\) for an ensemble created by the Hamiltonian at coupling \(\lambda\). The integral seamlessly connects the oxidized state (\(\lambda=0\)) to the reduced state (\(\lambda=1\)) along a coupling path [16; 17]. If the structural changes are significant from the oxidized to the reduced species -- recall that this is the case for Ag and Cu -- many integration steps are required to accurately determine the energy difference. The application of this approach entails two difficulties. (i) Clearly, it implies huge computational cost if applied directly to hybrid functionals; if 100.000 timesteps are required, several 10 mio core hours are necessary to obtain good statistical accuracy. (ii) Second, during the reaction one electron needs to be transferred to a reservoir, characterizing the chemical potential of the electrons. 
In experiment, the redox potential is usually measured with respect to the vacuum level, a quantity not directly accessible during a simulation using periodic boundary conditions. Chemical potential of electrons: We will address the second point (ii) first. Jiao and co-workers [18] suggested to use the average electrostatic potential as a suitable reference point, and Leung [19] calculated the position of the average electrostatic potential with respect to the vacuum level in a second, independent calculation involving a water slab. We refine this approach in a conceptually simple way that simultaneously reduces finite-size errors. During the redox reactions, we use the 1s level of oxygen atoms far away from the reactant as a conceptual reference. The first-principles code used in the present work (VASP) conveniently calculates this reference point for each atom in the simulation cell. This means that the energy contribution \(\frac{\partial H}{\partial\lambda}\) is replaced by the more appropriate \(\frac{\partial H}{\partial\lambda}-\mu\frac{\partial N_{\text{e}}}{\partial\lambda}\), where \(N_{\text{e}}\) is the number of electrons in the system and \(\mu\) is the chemical potential, which we fix to the energy of the O 1s level. In practice, the average position of the O 1s level varies slowly and smoothly along the coupling path, so we can replace the integral \(\int\left\langle\mu\right\rangle d\lambda\) by the average of the O 1s levels in the fully reduced and oxidized states (trapezoidal rule). In the second step, we conceptually transfer the electron from the O 1s level to the vacuum level. This implies that we calculate the electrostatic potential in the middle of the vacuum (vacuum level) and subtract from it the average O 1s level of water molecules in the middle of a water slab. The scheme is illustrated in Fig. 1. One key advantage of using machine-learned (ML) force fields for this step is that one can create many statistically independent configurations for the water slab. We do this by learning an H\({}_{2}\)O force field on the fly, first for the bulk and then for the surface, and finally performing extensive million-step ML molecular dynamics for the surface. From this simulation we draw 3000 statistically independent snapshots. Only for these snapshots are first-principles calculations performed to determine the average O 1s level with respect to the vacuum level. This greatly accelerates the calculations while maintaining excellent statistical accuracy. We finally note that using atoms far away from the "defect" as a reference also helps to reduce finite-size effects [20; 21]. Figure 1: Conceptual steps to oxidize the reduced state by transferring an electron from the metal ion to the vacuum: the transfer from the redox level to an oxygen 1s level far from the reactant in the bulk in the same supercell (Step I) and the electron transfer from an oxygen 1s level in the slab to the vacuum (Step II). Small white spheres, medium red spheres and large spheres of other colors are H, O and metal atoms, respectively. The inset shows a snapshot of the water slab and the computed local potential profile across the water slab. The graphics showing the bulk and interfacial models are made with VESTA [12]. Thermodynamic integration: Determining the free energy difference, i.e. point (i), is somewhat more difficult to address. Naively, one could just perform the required thermodynamic integration using ML surrogate models. As we will show below, this yields only acceptable accuracy.
Hence, one still needs to perform thermodynamic integration from the ML surrogate model to the first-principles calculations, for both the oxidized and the reduced state. We adopt this strategy for semi-local functionals. This involves three calculations: thermodynamic integration from the oxidized to the reduced species using an ML surrogate model, and then, for each oxidation state, integration from the MLFF to the first-principles Hamiltonian. There is one final obstacle though: performing thermodynamic integration to a hybrid functional that includes non-local exchange is still exceedingly demanding and challenging. In this specific case, we have therefore decided to replace the integration by thermodynamic perturbation theory (TPT), \[\Delta F=F_{1}-F_{0}=-\frac{1}{\beta}\ln\left\langle e^{-\beta\Delta U} \right\rangle_{0}=\frac{1}{\beta}\ln\left\langle e^{\beta\Delta U}\right\rangle _{1}, \tag{3}\] where the symbol \(\Delta U\) denotes the potential energy difference between the two end points. Although Eq. (3) is in principle exact, the potential energy difference might need to be evaluated for thousands or even many tens of thousands of configurations if the ensembles generated by the two potentials are too distinct. However, TPT has the distinct advantage that the configurations can be generated using the cheap Hamiltonian. In our case, we use Eq. (3) to determine the free energy difference between a cheap semi-local functional and a hybrid functional. A further key advance is to learn the _difference_ \(\Delta U\) between the semi-local functional and the non-local hybrid functional; that is, we adopt \(\Delta\)-machine learning (\(\Delta\)-ML) [31; 39; 40; 41; 42; 43; 44; 45]. Since the energy difference between the semi-local and the non-local functional is very smooth, a few tens of hybrid functional calculations are sufficient to learn a highly accurate ML representation of \(\Delta U\). The computational cost is ultimately only limited by generating sufficient configurations using the semi-local functional. ## III Results and discussion We now detail our results and will show that the adopted procedure yields statistically highly accurate results. The calculations were performed using VASP [22; 23] and the projector augmented wave (PAW) method [24; 25]. For the ML force fields (MLFFs), the implementation detailed in previous publications is used [26; 27; 28]. Similar to the pioneering ML approaches [29; 30; 31], the potential energy in our MLFF method is approximated as a summation of local energies [see Eq. (17)]. The local energy is approximated as a weighted sum of kernel basis functions [see Eq. (18)]. A Bayesian formulation allows one to efficiently predict energies, forces and stress tensor components as well as their uncertainties. The predicted uncertainty enables the reliable sampling of the reference structures on the fly during the FPMD simulation. Figure 2: Schematic of the ML-aided TI and TPT used to compute the free energy difference \(\Delta F_{\text{bulk}}\). See the Methods section for details of the equations. Figure 3: Metal-oxygen radial distribution functions (\(g_{\text{X--O}}\)) and running integration numbers (\(n_{\text{X--O}}\)) provided by 100 ps MLFF-MD and 10 ps FPMD simulations. Gray and black lines are for the reduced and oxidized states, respectively. Solid and dashed lines are the results obtained by the MLFF and FP\({}_{\text{sl}}\), respectively. Graphics in the insets show the first solvation structures in the reduced and oxidized states.
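The practical content of Eq. (3) combined with \(\Delta\)-ML is that one only needs samples of the energy difference on configurations from the cheap ensemble. A minimal sketch, with synthetic Gaussian samples standing in for \(\Delta\)-ML predictions:

```python
# Thermodynamic perturbation theory, Eq. (3): free energy difference between a
# cheap and an expensive Hamiltonian from cheap-ensemble samples of Delta U.
import numpy as np

kT = 0.02569                                  # eV at 298 K
rng = np.random.default_rng(0)
dU = rng.normal(0.35, 0.02, size=1400)        # hypothetical U_nl - U_sl samples (eV)

dF = -kT * np.log(np.mean(np.exp(-dU / kT)))
print(dF)   # ~0.342 eV: <dU> minus the fluctuation term var(dU)/(2 kT)
```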
Details of the equations, parameters and training conditions are summarized in the Methods section and Section S3 in the Supplementary Information (SI).
The dispersion correction is small for all redox couples. This implies that changes of the electronic properties (such as the water valence band maximum and conduction band minimum) are most relevant, whereas all the functionals give a similar and good account of the solvation structure. In summary, our approach enables the efficient statistical sampling that is indispensable for accurate computations of the free energies of aqueous systems. The TI and TPT calculations allow one to improve the accuracy from the ML model to the semi-local functional and from the semi-local functional to the hybrid functional step by step. Combining TPT and \(\Delta\)-machine learning is particularly promising, since this allows one to obtain statistically highly accurate results even for expensive functionals in very little compute time. For instance, it is well conceivable that one could also use methods beyond density functional theory for the final step. Our final results reproduce the redox potentials of the three transition metal cations with excellent accuracy using a standard hybrid functional. The integration pathways chosen here are generalizable to a wide variety of electron transfer reactions. We believe that the scheme will pave the way to first-principles electrochemistry, predicting the key property of redox reactions in energy conversion devices. ## IV Methods ### TI and TPT The TI and TPT shown in Fig. 2 in the main text are conducted by using the equations listed below. * \(\mathbf{FP_{nl}}(\mathbf{Ox})\rightarrow\mathbf{FP_{sl}}(\mathbf{Ox})\) \[-\Delta F_{0}^{\mathrm{FP_{nl}-FP_{sl}}} \simeq-\left\langle\Delta U_{0}^{\Delta\mathrm{ML}}\right\rangle_{ \mathrm{FP_{sl}}}+\frac{\beta}{2}\left\langle\left(\Delta U_{0}^{\Delta\mathrm{ML}}- \left\langle\Delta U_{0}^{\Delta\mathrm{ML}}\right\rangle_{\mathrm{FP_{sl}}} \right)^{2}\right\rangle_{\mathrm{FP_{sl}}}.\] (4) * \(\mathbf{FP_{sl}}(\mathbf{Ox})\rightarrow\mathbf{ML}(\mathbf{Ox})\) \[-\Delta F_{0}^{\mathrm{FP_{sl}-ML}} =-\int_{0}^{1}\left\langle\frac{\partial H_{0}^{\mathrm{FP_{sl}- ML}}}{\partial\eta}\right\rangle_{\eta}d\eta,\] (5) \[H_{0}^{\mathrm{FP_{sl}-ML}} =\sum_{i=1}^{N_{\mathrm{a}}}\frac{\left|\mathbf{p}_{i}\right|^{2 }}{2m_{i}}+\eta U_{0}^{\mathrm{FP_{sl}}}+\left(1-\eta\right)U_{0}^{\mathrm{ ML}}.\] (6) * \(\mathbf{ML}(\mathbf{Ox})\rightarrow\mathbf{ML}(\mathbf{Red})\) \[\Delta F^{\mathrm{ML}} =\int_{0}^{1}\left\langle\frac{\partial H^{\mathrm{ML}}}{ \partial\lambda}\right\rangle_{\lambda}d\lambda,\] (7) \[H^{\mathrm{ML}} =\sum_{i=1}^{N_{\mathrm{a}}}\frac{\left|\mathbf{p}_{i}\right|^{2 }}{2m_{i}}+\lambda U_{1}^{\mathrm{ML}}+\left(1-\lambda\right)U_{0}^{\mathrm{ ML}}.\] (8) * \(\mathbf{ML}(\mathbf{Red})\rightarrow\mathbf{FP_{sl}}(\mathbf{Red})\) \[\Delta F_{1}^{\mathrm{FP_{sl}-ML}} =\int_{0}^{1}\left\langle\frac{\partial H_{1}^{\mathrm{FP_{sl}- ML}}}{\partial\eta}\right\rangle_{\eta}d\eta,\] (9) \[H_{1}^{\mathrm{FP_{sl}-ML}} =\sum_{i=1}^{N_{\mathrm{a}}}\frac{\left|\mathbf{p}_{i}\right|^{2 }}{2m_{i}}+\eta U_{1}^{\mathrm{FP_{sl}}}+\left(1-\eta\right)U_{1}^{\mathrm{ ML}}.\] (10) * \(\mathbf{FP_{sl}}(\mathbf{Red})\rightarrow\mathbf{FP_{nl}}(\mathbf{Red})\) \[\Delta F_{1}^{\mathrm{FP_{nl}-FP_{sl}}} \simeq\left\langle\Delta U_{1}^{\Delta\mathrm{ML}}\right\rangle_{ \mathrm{FP_{sl}}}-\frac{\beta}{2}\left\langle\left(\Delta U_{1}^{\Delta\mathrm{ML}}- \left\langle\Delta U_{1}^{\Delta\mathrm{ML}}\right\rangle_{\mathrm{FP_{sl}}}\right) ^{2}\right\rangle_{\mathrm{FP_{sl}}}.\] (11)
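The second-order cumulant form of Eqs. (4) and (11) is easy to check against the exact exponential average when \(\Delta U\) is Gaussian; a small synthetic-data sketch:

```python
# Second-order cumulant expansion vs. exact TPT average on Gaussian samples.
import numpy as np

beta = 1.0 / 0.02569                      # 1/eV at 298 K
rng = np.random.default_rng(1)
dU = rng.normal(0.30, 0.03, size=1400)    # hypothetical Delta-ML energy gaps (eV)

exact = -np.log(np.mean(np.exp(-beta * dU))) / beta
cumulant = np.mean(dU) - 0.5 * beta * np.var(dU)
print(exact, cumulant)   # nearly identical, since dU is Gaussian here
```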
Figure 4: Computed and experimental redox potentials. ML denotes the results obtained by the MLFF trained on FP\({}_{\rm sl}\) (RPBE+D3) without any correction. The small letters ‘w/ ML’ under FP\({}_{\rm sl}\) mean that the ML result was corrected by the scheme shown in Fig. 2. The letters ‘w/o ML’ mean the result calculated by FP\({}_{\rm sl}\) using Eqs. (15) and (16) without the MLFF. Here, FP\({}_{\rm nl}\) means the PBE0+D3 (0.25) result obtained by the scheme shown in Fig. 2. Experimental values are taken from Ref. [11]. Here, \(N_{\rm a}\) is the number of atoms, and \(m_{i}\) and \(\mathbf{p}_{i}\) are the mass and momentum vector of the \(i\)-th atom. The symbols \(U_{\mu}^{\rm FP_{nl}}\), \(U_{\mu}^{\rm FP_{sl}}\) and \(U_{\mu}^{\rm ML}\) are the potential energies for the oxidized (\(\mu=0\)) and reduced (\(\mu=1\)) states calculated by the non-local functional, the semi-local functional and the MLFF trained on the semi-local functional, respectively. The symbol \(\Delta U_{\mu}^{\Delta\rm ML}\) denotes the potential energy difference calculated by the \(\Delta\)-ML model trained on the potential energy difference \(U_{\mu}^{\rm FP_{nl}}-U_{\mu}^{\rm FP_{sl}}\) between the non-local and semi-local functionals. In Eqs. (4) and (11), the second-order cumulant expansion is employed. The expansion is exact if the probability distribution of \(\Delta U_{\mu}^{\Delta\rm ML}\) is Gaussian. This condition is reasonably well satisfied, as shown later in this subsection. The free energy differences \(\Delta F_{\rm bulk}\) of the ML, FP\({}_{\rm sl}\) and FP\({}_{\rm nl}\) methods are obtained as \[\Delta F_{\rm bulk}^{\rm ML} = \Delta F^{\rm ML}, \tag{12}\] \[\Delta F_{\rm bulk}^{\rm FP_{sl}} = \Delta F_{\rm bulk}^{\rm ML}+\Delta F_{1}^{\rm FP_{sl}-ML}- \Delta F_{0}^{\rm FP_{sl}-ML},\tag{13}\] \[\Delta F_{\rm bulk}^{\rm FP_{nl}} = \Delta F_{\rm bulk}^{\rm FP_{sl}}+\Delta F_{1}^{\rm FP_{nl}-FP_{sl}}-\Delta F_{0}^{\rm FP_{nl}-FP_{sl}}. \tag{14}\] To validate the MLFF-aided computations of the free energy difference \(\Delta F_{\rm bulk}^{\rm FP_{sl}}\), the same property was also computed by TI without using the ML method: \[\Delta F^{\rm FP_{sl}} = \int_{0}^{1}\left\langle\frac{\partial H^{\rm FP_{sl}}}{\partial \lambda}\right\rangle_{\lambda}d\lambda, \tag{15}\] \[H^{\rm FP_{sl}} = \sum_{i=1}^{N_{\rm a}}\frac{\left|\mathbf{p}_{i}\right|^{2}}{2m_{ i}}+\lambda U_{1}^{\rm FP_{sl}}+\left(1-\lambda\right)U_{0}^{\rm FP_{sl}}. \tag{16}\] For the TI calculation in Eq. (7), the trapezoidal rule with five equidistant points was used, following the previous FP study by Blumberger and co-workers. For each point, the ensemble average was taken over an 80-ps NVT-ensemble MD simulation at 298 K after a 20 ps equilibration. Similar to the MLFF calculations, the trapezoidal rule with five equidistant points was used for the TI calculation in Eq. (15). For each grid point, the ensemble average was taken over a 20-ps MD simulation starting from the final structure of the TI calculation using the MLFF at the same grid point. Each initial structure of the MD simulations was prepared by annealing the system from 400 K to 298 K in a 100-ps NVT-ensemble MD simulation using the MLFF, after annealing the same system from 1000 K to 400 K in a 1-ns NVT-ensemble MD simulation using the polymer consistent force field (PCFF) implemented in a homemade MD program.
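To make the bookkeeping of Eqs. (12)-(14) explicit, here is a toy composition of the ladder (all numbers are hypothetical placeholders, and the vacuum-level alignment of Fig. 1 is represented only schematically via the 4.44 V SHE reference of Table 1):

```python
# Composing the bulk free energy ladder, Eqs. (12)-(14), and converting it to a
# potential via Eq. (1). Placeholder numbers, for illustration only.
dF_ML = -5.10                              # Eq. (7): ML(Ox) -> ML(Red), eV
dF_sl_ML = {'ox': 0.04, 'red': 0.06}       # Eqs. (5) and (9): ML -> FP_sl
dF_nl_sl = {'ox': 0.20, 'red': 0.31}       # Eqs. (4) and (11): FP_sl -> FP_nl

dF_sl = dF_ML + dF_sl_ML['red'] - dF_sl_ML['ox']   # Eq. (13)
dF_nl = dF_sl + dF_nl_sl['red'] - dF_nl_sl['ox']   # Eq. (14)
U_vs_SHE = -dF_nl / 1.0 - 4.44             # Eq. (1), n = 1, absolute SHE 4.44 V
print(dF_nl, U_vs_SHE)
```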
Figures S1 and S2 in the SI show the integrands of Eqs. (7) and (15), respectively, as functions of the coupling parameter \(\lambda\). In the same figures, the probability distributions of \(\Delta U^{\rm ML}=U_{1}^{\rm ML}-U_{0}^{\rm ML}\) and \(\Delta U^{\rm FP_{sl}}=U_{1}^{\rm FP_{sl}}-U_{0}^{\rm FP_{sl}}\) at each \(\lambda\) are also shown. For all redox couples, the variance of the distribution varies with changing \(\lambda\), and thus the integrand is non-linear with respect to \(\lambda\) [see Eq. (S6) in the SI]. Hence, the second-order cumulant expansion Eq. (S4) is not applicable to the whole integration from the oxidized state to the reduced state. The TI calculations in Eqs. (5) and (9) were conducted using the trapezoidal rule with three equidistant points. At each point, a 10-ps NVT-ensemble MD simulation at 298 K was performed. The integrands shown in Fig. S3 are smaller than the ones shown in Figs. S1 and S2, respectively. They are also nearly proportional to the coupling parameter \(\eta\). In the TPT calculations using the \(\Delta\)-ML models, the ensemble average in each of Eqs. (4) and (11) was taken over 1400 configurations selected randomly from 70-ps NVT-ensemble FPMD simulations using the FP\({}_{\rm sl}\) method. Although these FPMD simulations are expensive, the overall computational time is still much smaller than for full FP simulations. To justify the application of the second-order cumulant expansion, we show the probability distributions of the energy difference \(\Delta U_{\mu}^{\Delta\rm ML}\) in Fig. S4. The distributions are well fitted by Gaussian functions, indicating that Eqs. (4) and (11) are reasonable approximations. The MD simulations were performed using a Langevin thermostat [57]. For efficient sampling, the mass of hydrogen and the time step were set to 2 a.u. and 1 fs. ### MLFF and \(\Delta\)-ML Similar to previous machine-learning approaches [29; 30; 31], the potential energy \(U\) of a structure with \(N_{\rm a}\) atoms in our MLFF method is approximated as a summation of local energies \(U_{i}\), \[U=\sum_{i=1}^{N_{\rm a}}U_{i}. \tag{17}\] Following the Gaussian approximation potential pioneered by Bartók and co-workers, the local energy \(U_{i}\) is approximated as a weighted sum of functions \(K\left(\mathbf{x}_{i},\mathbf{x}_{i_{\rm B}}\right)\) centered at reference points \(\{\mathbf{x}_{i_{\rm B}}|i_{\rm B}=1,...,N_{\rm B}\}\), \[U_{i}=\sum_{i_{\rm B}=1}^{N_{\rm B}}w_{i_{\rm B}}K\left(\mathbf{x}_{i},\mathbf{ x}_{i_{\rm B}}\right). \tag{18}\] The coefficients \(\{w_{i_{\rm B}}|i_{\rm B}=1,...,N_{\rm B}\}\) are optimized to best reproduce the FP energies, forces, and stress tensor components obtained by the FPMD simulations. The descriptor \(\mathbf{x}_{i}\) used in this study is a vector containing two- and three-body contributions: \[\mathbf{x}_{i}^{\rm T}\rightarrow\left(\sqrt{\beta^{(2)}}\mathbf{x}_{i}^{(2){ \rm T}},\sqrt{\beta^{(3)}}\mathbf{x}_{i}^{(3){\rm T}}\right). \tag{19}\] Here, \(\beta^{(2)}\) and \(\beta^{(3)}(=1-\beta^{(2)})\) are the weights on the two- and three-body descriptors, \(\mathbf{x}_{i}^{(2)}\) and \(\mathbf{x}_{i}^{(3)}\), respectively.
The vectors \(\mathbf{x}_{i}^{(2)}\) and \(\mathbf{x}_{i}^{(3)}\) collect the expansion coefficients of the two- and three-body distribution functions with respect to orthonormal radial and angular basis sets: \[\rho_{i}^{(2)}\left(r\right) = \frac{1}{\sqrt{4\pi}}\sum_{n=1}^{N_{\rm R}^{\rm max}}c_{n}^{i} \chi_{n0}\left(r\right), \tag{20}\] \[\rho_{i}^{(3)}\left(r,s,\theta\right) = \sum_{l=0}^{L_{\rm max}}\sum_{n=1}^{N_{\rm R}^{\rm max}}\sum_{\nu=1}^{ N_{\rm R}^{\rm max}}\sqrt{\frac{2l+1}{2}}\,p_{n\nu l}^{i}\,\chi_{nl}\left(r\right)\chi_{ \nu l}\left(s\right)P_{l}\left(\cos\theta\right). \tag{21}\] The two- and three-body distribution functions \(\rho_{i}^{(2)}\) and \(\rho_{i}^{(3)}\) are defined as: \[\rho_{i}^{(2)}\left(r\right) =\frac{1}{4\pi}\int d\hat{\mathbf{r}}\,\rho_{i}\left(r\hat{\mathbf{r}}\right), \tag{22}\] \[\rho_{i}^{(3)}\left(r,s,\theta\right) =\iint d\hat{\mathbf{r}}d\hat{\mathbf{s}}\ \delta\left(\hat{\mathbf{r}}\cdot\hat{\mathbf{s}}-\cos\theta\right)\rho_{i}\left(r\hat{\mathbf{r}}\right)\rho_{i}\left(s\hat{ \mathbf{s}}\right), \tag{23}\] \[\rho_{i}\left(\mathbf{r}\right) =\sum_{j=1}^{N_{\rm a}}f_{\text{cut}}\left(\left|\mathbf{r}_{ j}-\mathbf{r}_{i}\right|\right)g\left(\mathbf{r}-\left(\mathbf{r}_{j}-\mathbf{r}_{i} \right)\right). \tag{24}\] The function \(g\) is a smoothed \(\delta\)-function, and \(f_{\text{cut}}\) is a cutoff function that smoothly eliminates the contribution from atoms outside a given cutoff radius \(R_{\text{cut}}\). For the radial basis \(\chi_{nl}\) and the angular basis \(P_{l}\), normalized spherical Bessel functions \(\chi_{nl}(r)=j_{l}(q_{n}r)\) and Legendre polynomials of order \(l\) are used in this work, respectively. For the kernel basis functions, the smooth overlap of atomic positions (SOAP) kernel [31] is employed, \[K\left(\mathbf{x}_{i},\mathbf{x}_{i_{\text{B}}}\right)=\left(\hat{\mathbf{x} }_{i}\cdot\hat{\mathbf{x}}_{i_{\text{B}}}\right)^{\zeta}. \tag{25}\] The hat symbol \(\hat{\mathbf{x}}_{i}\) denotes the normalized vector of \(\mathbf{x}_{i}\). The normalization and exponentiation in Eq. (25) introduce non-linear terms that mix two- and three-body contributions. The same formulation is used for the \(\Delta\)-ML method. In the \(\Delta\)-ML method, the differences of the potential energies and forces between two FP methods, the semi-local and non-local functionals in this study, are used as the training data. Parameter sets of the descriptors and kernel basis functions from previous publications were employed in this study [26, 34, 28]. The parameters are tabulated in Table S1. Bulk solutions were modeled by systems as shown in Fig. 1 in the main text. The number of water molecules was set to 32, 64, and 96. Three different model sizes were examined to clarify the system-size effect. The sizes of the unit cells were set to obtain a water density of 0.99 g cm\({}^{-3}\), as found by MLFF calculations on a water slab (see Section S4 in the SI). The size of the unit cell with 32 water molecules is the same as the one used in previous FPMD studies [53, 9, 16, 8]. For each of the reduced and oxidized states, MLFF and \(\Delta\)-ML models were constructed. All MLFF models were generated on the fly during a 100-ps NVT-MD simulation at 400 K by using the active-learning algorithm developed in our previous study [26]. The temperature for the training runs was set to a value higher than the target of 298 K used for the production runs, to ensure that the training data and kernel basis functions sample a wider region of phase space.
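A compact way to see how Eqs. (17), (18) and (25) fit together is a toy implementation; the descriptor vectors below are random stand-ins for the two-/three-body expansions (an illustration, not the production code):

```python
# Local energies as weighted sums of a normalized polynomial ("SOAP-like")
# kernel over reference descriptors, Eqs. (17), (18) and (25).
import numpy as np

def kernel(x, x_refs, zeta=4):
    """Normalized polynomial kernel of Eq. (25) against all references."""
    xh = x / np.linalg.norm(x)
    xrh = x_refs / np.linalg.norm(x_refs, axis=1, keepdims=True)
    return (xrh @ xh) ** zeta

rng = np.random.default_rng(2)
x_ref = rng.normal(size=(50, 8))    # N_B reference descriptors x_{i_B}
w = rng.normal(size=50)             # fitted weights w_{i_B}
x_atoms = rng.normal(size=(12, 8))  # descriptors of the atoms in one structure

U = sum(w @ kernel(x, x_ref) for x in x_atoms)  # Eq. (17) over local energies
print(U)
```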
A Langevin thermostat [57] was used to control the temperature. The exchange-correlation interactions between electrons were described by the semi-local RPBE functional [35] with Grimme's D3 dispersion corrections [36, 37]. The root mean square errors (RMSEs) of the constructed MLFFs for energies, forces, and stress tensor components on the test data are tabulated in Table S2. The RMSEs are similar to those of the MLFFs used in previous studies [32, 33, 34, 26, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. After examining the system-size effect using the semi-local functional, \(\Delta\)-ML models were constructed on systems containing 64 water molecules. Each \(\Delta\)-ML model was trained on the FP energies and forces of 40 structures selected randomly from the trajectory of a 20-ps NVT-ensemble FPMD simulation at 298 K. The FPMD simulation was performed using the RPBE+D3 functional. The differences in energies and forces between the non-local and semi-local functionals for these 40 structures were used as the training data. PBE0 [38], with and without Grimme's D3 dispersion correction [36, 37], was employed as the non-local functional because this functional is known to accurately predict the properties of water [58]. The fraction of exact exchange was set to 0.25 and 0.50 to determine how it influences the redox potentials. The RMSEs of the \(\Delta\)-ML models on test structures are tabulated in Table S3. They are one to two orders of magnitude smaller than the errors of the RPBE+D3 MLFFs tabulated in Table S2. Thanks to the remarkable accuracy of the \(\Delta\)-ML models in describing the energy difference between various FP methods, it is possible to obtain exceedingly accurate free energy differences between different FP methods using thermodynamic perturbation theory, as shown in Fig. S5. This is one of the key advances of the present work. The vacuum-water interface for the production run was modeled by a water slab composed of 128 water molecules per unit cell. Following the previous study [19], a rectangular cell of \(12.5\times 12.5\times 50\) Å\({}^{3}\) was employed. Similar to the MLFFs for the bulk systems, the MLFF for the interface was also generated using the active-learning scheme. The systems used for the training were a water bulk composed of 64 water molecules in a \(12.4\times 12.4\times 12.4\) Å\({}^{3}\) cubic cell and a slab composed of 64 water molecules in an \(8.8\times 8.8\times 40.8\) Å\({}^{3}\) rectangular cell. Training simulations for both the bulk and the slab were performed by NVT-ensemble MD simulations at 300, 400, 600 and 800 K. As shown in Table S2, the constructed MLFF achieves small RMSEs on test data taken from a 100-ps MD simulation of a water slab composed of 128 water molecules at 298 K. The annealing procedure used for the production runs, explained in the previous subsection, was also used to prepare the initial structures for the training runs. All FP calculations were performed using VASP [22; 23]. A \(2\times 2\times 2\) **k**-point mesh was used for the bulk systems containing 32 water molecules. For the other systems, the \(\Gamma\)-point was used. The plane-wave cutoff energy was set to 520 eV. The PAW atomic reference configurations were 1s\({}^{1}\) for H, 2s\({}^{2}\)2p\({}^{4}\) for O, 3d\({}^{7}\)4s\({}^{1}\) for Fe, 3d\({}^{10}\)4p\({}^{1}\) for Cu, and 4d\({}^{10}\)5s\({}^{1}\) for Ag. The parameters for the MD simulations are the same as those described in the previous subsection. ## Acknowledgements We thank Carla Verdi for helpful discussions.
## Code availability The VASP code is distributed by the VASP Software GmbH. The machine learning modules will be included in the release of vasp.6.3. Prerelease versions are available from G.K. upon reasonable request. ## Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.17071
Nuclear Modification Factor in Pb-Pb and p-Pb collisions at $\sqrt{s_{NN}}$=5.02 TeV at LHC energies using Boltzmann Transport Equation with Tsallis Blast Wave Description
In this article, we have studied the nuclear modification factor measured in Pb-Pb collisions ($R_{PbPb}$) for $\pi^{\pm}$, $K^{\pm}$, $p+\bar{p}$, $K^{*0} + \bar{K^{*0}}$, $\phi$ and in p-Pb collisions ($R_{pPb}$) for $\pi^{\pm}$, $K^{\pm}$, $p+\bar{p}$ at the Large Hadron Collider (LHC) energy of $\sqrt{s_{NN}}$ = 5.02 TeV for the most central and peripheral collisions. We have also analysed the experimental data on the transverse momentum ($p_T$) spectra of these identified hadrons at the LHC for Pb-Pb as well as p-Pb collisions. We have used the Boltzmann transport equation (BTE) in the relaxation time approximation (RTA) for this analysis. The Tsallis statistics is used as the initial distribution function, and the Tsallis blast wave (TBW) model is employed as the equilibrium distribution in the BTE. The present model fits the measured transverse momentum spectra, $R_{PbPb}$, and $R_{pPb}$ successfully up to $p_T$ = 8 GeV with a reasonable $\chi^2/ndf$ for all the considered hadrons at various centralities. The experimental data for $R_{pPb}$ are generated using the particle yields in p-Pb and pp collisions, where the number of binary collisions is taken from Glauber model calculations. We find that the average transverse flow velocity ($<\beta_r>$) follows mass and centrality ordering, decreasing with mass as well as when one moves from central to peripheral collisions. These findings are in line with the results of hydrodynamical calculations.
Aditya Kumar Singh, Aviral Akhil, Swatantra Kumar Tiwari, Pooja Pareek
2023-09-29T09:08:56Z
http://arxiv.org/abs/2309.17071v1
Nuclear Modification Factor in Pb-Pb and p-Pb collisions at \(\sqrt{s_{NN}}\)=5.02 TeV at LHC energies using Boltzmann Transport Equation with Tsallis Blast Wave Description

###### Abstract

In this article, we have studied the nuclear modification factor measured in Pb-Pb collisions (\(R_{PbPb}\)) for \(\pi^{\pm}\), \(K^{\pm}\), \(p+\bar{p}\), \(K^{*0}+\bar{K^{*0}}\), \(\phi\) and in p-Pb collisions (\(R_{pPb}\)) for \(\pi^{\pm}\), \(K^{\pm}\), \(p+\bar{p}\) at the Large Hadron Collider (LHC) energy of \(\sqrt{s_{NN}}=5.02\) TeV for the most central and peripheral collisions. We have also analysed the experimental data of transverse momentum (\(p_{T}\)) spectra for these identified hadrons at the LHC for Pb-Pb as well as for p-Pb collisions. We have used the Boltzmann transport equation (BTE) in the relaxation time approximation (RTA) for this analysis. Tsallis statistics is used as the initial distribution function and the Tsallis blast wave (TBW) model is employed as the equilibrium distribution in the BTE. The present model fits the measured transverse momentum spectra, \(R_{PbPb}\), and \(R_{pPb}\) successfully up to \(p_{T}=8\) GeV/c with a reasonable \(\chi^{2}/ndf\) for all the considered hadrons at various centralities. The experimental data for \(R_{pPb}\) are generated using the particle yields in p-Pb and pp collisions, where the number of binary collisions is taken from Glauber model calculations. We find that the average transverse flow velocity (\(<\beta_{r}>\)) follows mass and centrality ordering: it decreases with mass as well as when one moves from central to peripheral collisions. These findings are in line with the results of hydrodynamical calculations.

pacs: 25.75.-q, 25.75.Nq, 25.75.Gz, 25.75.Dw, 12.38.Mh, 24.85.+p

## I Introduction

The ultra-relativistic heavy-ion collision experiments at the Relativistic Heavy-Ion Collider (RHIC) at BNL, the Large Hadron Collider (LHC) at CERN, the Facility for Antiproton and Ion Research (FAIR) at Darmstadt, and the Nuclotron-based Ion Collider Facility (NICA) at Dubna aim to create the deconfined state of quarks and gluons at extreme conditions of temperature and/or energy density, called the quark-gluon plasma (QGP) [1; 2; 3; 4]. The QGP formed in high-energy heavy-ion collisions exists for a brief time; it rapidly transforms into a state resembling a gas of hadrons, which are captured by detectors. To gather information about this transient state of matter, it is necessary to examine its remnants, the hadrons. During the course of the evolution, the system engages in a series of interactions involving multiple partons (quarks and gluons), causing energy dissipation of the emitted particles, which results in the modification of the particle yields [5; 6] depending on the centrality and energy of the colliding nuclei. When both elastic and inelastic interactions eventually cease, the measurement of the invariant particle yield in the final state provides information about the characteristics of the quark-gluon plasma. The invariant transverse momentum spectra are essential for testing hydrodynamic models that describe the collective behaviour of particles and the thermalisation properties of the QGP [7; 8; 9]. The final particle yield is modified to some extent; the factor by which it is suppressed or enhanced is known as the nuclear modification factor (\(R_{AA}\)).
This is evaluated by dividing the actual yield of particles observed in \(A-A\) and \(p-A\) collisions by the yield observed in a scaled proton-proton collision, taking into account the number of binary collisions (\(N_{coll}\)) in the heavy-ion collision. This ratio helps us to understand the modifications that occur as particles pass through the extremely hot and dense medium created in relativistic nuclear collisions. It is a key observable in the study of heavy-ion collisions and is given as [10]: \[R_{AA}=\frac{1}{N_{coll}}\frac{d^{2}N_{AA}/dydp_{T}}{d^{2}N_{pp}/dydp_{T}}, \tag{1}\] where \(N_{AA}\) and \(N_{pp}\) are the particle yields in nucleus-nucleus and proton-proton collisions, respectively, and \(N_{coll}\) is the number of binary nucleon-nucleon collisions. In various papers [11; 12; 13], fitting of \(p_{T}\) spectra is done using the Boltzmann-Gibbs blast wave (BGBW) model. These studies have demonstrated that the BGBW model effectively describes the transverse momentum spectra within a specific \(p_{T}\) range. This model draws inspiration from hydrodynamics and posits that particles reach a state of local thermal equilibrium at the kinetic freeze-out temperature. Additionally, they have assumed that the particles have a common transverse flow velocity as they propagate [12]. After reaching local equilibrium, the system undergoes hydrodynamic evolution, assuming all \(p-p\) events to be the same [14]. In studies [15; 16], particle emission does not exclusively happen from a precisely defined freeze-out hypersurface. Instead, it occurs continuously from various regions within the expanding system, each at different temperatures and points in time. The conventional equilibrium description becomes inadequate when dealing with high transverse momentum (\(p_{T}\)) phenomena. In these cases, particle production tends to be predominantly governed by non-equilibrium or hard processes, and it often displays a distinct power-law tail pattern [17]. Further, significant fluctuations that vary from one event to another are anticipated in these cases [18]. To consider the influence of these fluctuations, many authors [19; 14; 20] have taken the Tsallis Blast Wave model to fit the transverse momentum spectra. In a recent study [14], a comprehensive examination of particle spectra has been undertaken specifically employing the TBW model to accurately depict spectra up to \(p_{T}=3\) GeV/c in the context of Pb-Pb, p-Pb, and Xe-Xe collisions. In references [21; 22], the authors have employed the Boltzmann transport equation within the framework of the Relaxation Time Approximation. They have utilized the Tsallis distribution as the initial distribution function and the BGBW distribution as the equilibrium distribution function. The approach was effectively used to examine the evolution of particle distributions and skillfully reproduce spectra up to \(p_{T}=5\) GeV/c. In the present research work, our approach involves employing the Tsallis distribution as the initial distribution function within the Boltzmann Transport Equation (BTE) framework and, instead of utilizing the BGBW model, we adopt the TBW model as the equilibrium distribution function. Significantly, any departure of the non-extensive parameter in \(A-A\) collisions (\(q_{AA}\)) from 1 indicates that the system does not achieve a state of full equilibrium, contradicting the assumption made by the BGBW model.
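To make Eq. (1) concrete, the following minimal sketch (in Python; not part of the original analysis) divides a binned heavy-ion yield by the \(N_{coll}\)-scaled pp yield. All yields and the \(N_{coll}\) value below are made-up placeholders; in a real analysis they would come from the measured spectra and a Glauber calculation.

```python
import numpy as np

def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    """Bin-by-bin R_AA of Eq. (1): (1/N_coll) * (d^2N_AA/dy dpT) / (d^2N_pp/dy dpT)."""
    return np.asarray(yield_aa, dtype=float) / (n_coll * np.asarray(yield_pp, dtype=float))

# Hypothetical pT binning and invariant yields (placeholder numbers only)
pt_edges = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # GeV/c
d2n_aa = np.array([800.0, 300.0, 40.0, 2.0])     # A-A yields per bin
d2n_pp = np.array([0.9, 0.35, 0.06, 0.004])      # pp yields per bin
n_coll = 1762.0                                  # illustrative N_coll for a central class

r_aa = nuclear_modification_factor(d2n_aa, d2n_pp, n_coll)
for pt, r in zip(0.5 * (pt_edges[:-1] + pt_edges[1:]), r_aa):
    print(f"pT = {pt:.2f} GeV/c : R_AA = {r:.3f}")
```

The same bin-by-bin division, with \(N_{coll}\) taken from a Glauber calculation, is what is used in Section III to generate \(R_{pPb}\) from the measured p-Pb and pp spectra.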
The manuscript is organized as follows: In Section II, we outline the process of deriving the transverse momentum spectra and the nuclear modification factor using the Boltzmann Transport Equation within the relaxation time approximation. Section III is dedicated to an in-depth exploration and analysis of the obtained results. Finally, in Section IV, we offer a concise recapitulation of the findings of this study, accompanied by potential avenues for future research.

## II The nuclear modification factor in the BTE within the context of RTA

In the domain of scientific investigation, numerous challenges revolve around understanding the temporal evolution of a statistical ensemble. Here, we narrow our focus to a particle system described by the probability distribution \(f(r,p,t)\), which depends on position (\(r\)), momentum (\(p\)), and time (\(t\)). In general, when dealing with a dynamically evolving physical system operating irreversibly away from thermodynamic equilibrium, \(f(r,p,t)\) typically differs from that of a Boltzmannian ensemble. Researchers often investigate its temporal evolution by employing the Boltzmann Transport Equation (BTE), whose general form is given as [23], \[\frac{df(x,p,t)}{dt}=\frac{\partial f}{\partial t}+\vec{v}\cdot\nabla_{x}f+\vec{F}\cdot\nabla_{p}f=C[f]. \tag{2}\] Here, \(\vec{v}\) represents the velocity, while \(\vec{F}\) stands for the external force. The notations \(\nabla_{x}\) and \(\nabla_{p}\) denote partial derivatives with respect to position and momentum, respectively. The term \(C[f]\) stands for the collision term. The Boltzmann Transport Equation (BTE) within the framework of the Relaxation Time Approximation (RTA) has previously found applications in investigating a range of phenomena. These applications encompass the study of temperature fluctuations in non-equilibrium systems over time [24], the analysis of anisotropic flow of identified hadrons [25], and the evaluation of \(R_{AA}\) for various light and heavy flavors, particularly at energies relevant to the Large Hadron Collider (LHC) [26]. In cases where the considered system is homogeneous (\(\nabla_{x}f=0\)) and devoid of external forces (i.e., \(\vec{F}=0\)), Eq. (2) simplifies to the following form, \[\frac{df(x,p,t)}{dt}=\frac{\partial f}{\partial t}=C[f]. \tag{3}\] Due to the flexibility in choosing the functional form of \(C[f]\), the equation remains highly versatile, accommodating a wide range of circumstances. Nevertheless, in many practical applications, it is customary to simplify it further into what is known as the relaxation time approximation (RTA), which adopts a streamlined version of the collision term given as [27; 28; 29; 23], \[C[f]=-\frac{f-f_{eq}}{\tau}, \tag{4}\] where \(f_{eq}\) is the Boltzmann local equilibrium distribution characterized by a temperature \(T\). The parameter \(\tau\) signifies the relaxation time, representing the duration required for a non-equilibrium system to transition into equilibrium. By incorporating Eq. (4), Eq. (3) transforms as follows, \[\frac{\partial f}{\partial t}=-\frac{f-f_{eq}}{\tau}. \tag{5}\] Solving the above equation with the boundary conditions, _i.e._, \(f=f_{in}\) at \(t=0\) and \(f=f_{fin}\) at \(t=t_{f}\), we get, \[f_{fin}=f_{eq}+(f_{in}-f_{eq})e^{-\frac{t_{f}}{\tau}}, \tag{6}\] where \(t_{f}\) is the freeze-out time and \(\tau\) is the relaxation time.
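As a quick numerical sanity check on Eq. (6), the sketch below (Python/SciPy; the values of \(f_{in}\), \(f_{eq}\), \(\tau\), and \(t_{f}\) are arbitrary illustrative numbers) integrates the RTA equation (5) with an ODE solver and compares the result with the closed-form exponential interpolation.

```python
import numpy as np
from scipy.integrate import solve_ivp

f_in, f_eq, tau, t_f = 2.0, 0.5, 1.5, 4.0   # arbitrary illustrative values

# RTA collision kernel, Eq. (5): df/dt = -(f - f_eq)/tau
sol = solve_ivp(lambda t, f: -(f - f_eq) / tau, (0.0, t_f), [f_in],
                rtol=1e-10, atol=1e-12)
f_numeric = sol.y[0, -1]

# Closed-form solution, Eq. (6)
f_closed = f_eq + (f_in - f_eq) * np.exp(-t_f / tau)

print(f_numeric, f_closed)   # the two agree to solver tolerance
```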
Using Eq. (6), the nuclear modification factor can be given as, \[R_{AA}=\frac{f_{fin}}{f_{in}}=\frac{f_{eq}}{f_{in}}+\Big{(}1-\frac{f_{eq}}{f_{in}}\Big{)}e^{\frac{-t_{f}}{\tau}}. \tag{7}\] Equation (7) represents the nuclear modification factor obtained by incorporating the relaxation time approximation, and it serves as the foundation for the analysis in the present paper. In this context, we have selected the Tsallis blast wave as the equilibrium function \(f_{eq}\), which can be written as [14], \[f_{eq}=\frac{d^{3}N}{p_{T}dp_{T}dyd\phi_{p}}=D\int_{\sum_{f}}dy_{s}rdrd\phi_{b}\cosh(y_{s}-y)\Big{[}1+\frac{q_{AA}-1}{T}[m_{T}\cosh\rho\cosh(y_{s}-y)-p_{T}\sinh\rho\cos(\phi_{p}-\phi_{b})]\Big{]}^{\frac{-1}{q_{AA}-1}}, \tag{8}\] where \(D=\frac{g\,m_{T}}{(2\pi)^{3}}\) and \(\sum_{f}\) is the decoupling hypersurface. Here the parameter \(g\) corresponds to the degeneracy factor, while the temperature \(T\) represents the kinetic freeze-out temperature. The variables \(y\) and \(m_{T}\) denote the rapidity and transverse mass of the identified particles, respectively. Additionally, \(y_{s}\) represents the rapidity of the emitting source, while \(\phi_{p}\) and \(\phi_{b}\) are the azimuthal angles of the emitted particle velocity and the flow velocity relative to the x-axis within the reaction plane. The parameter \(q_{AA}\) encapsulates the non-extensivity, providing a quantitative measure of the extent of deviation from equilibrium; a departure from unity in this parameter signifies the non-equilibrium nature of the system. Additionally, the parameter \(\rho\), known as the transverse expansion rapidity [14], is expressed as \(\rho=\tanh^{-1}\beta_{r}\)[30], where \(\beta_{r}\) is defined as \(\beta_{s}(\xi)^{n}\). Here, \(\beta_{s}\) denotes the maximum surface velocity, and \(\xi\) is defined as \(r/R\), with \(r\) representing the radial distance and \(R\) the maximum radius of the expanding source at freeze-out (\(0<\xi<1\)). The parameter \(n\) characterizes the flow profile. It is noteworthy that within the Tsallis blast-wave (TBW) model, particles located closer to the center of the fireball exhibit slower motion compared to those at the periphery. The average transverse velocity is calculated according to [31], \[<\beta_{r}>=\frac{\int\beta_{s}\xi^{n}\xi\;d\xi}{\int\xi\;d\xi}=\Big{(}\frac{2}{2+n}\Big{)}\beta_{s}. \tag{9}\] In our calculations, we have varied the parameter \(n\) to explore a range of flow profiles within the Tsallis Blast-Wave model. We adopt Bjorken's longitudinal expansion assumption, implying that the measured particle yield remains constant across rapidity, as we integrate over the entire rapidity range of the source [32]. This approximation is reasonably valid for mid-rapidity measurements at both RHIC and LHC energies [2]. While we simplify our analysis by assuming isotropic azimuthal emission within each local source, it's essential to acknowledge that, in reality, the source's distribution may exhibit variations or dependencies in the azimuthal direction [33]. We also assume that the emission source maintains uniformity in both density and degree of non-equilibrium at the moment of kinetic freeze-out. However, it's important to note that this assumption doesn't hold for high-\(p_{T}\) particles (jets), as they often exhibit emission patterns concentrated near the surface, deviating from the assumed uniformity [34, 35].
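Eq. (9) is simply the radial average of the flow profile \(\beta_{s}\xi^{n}\) weighted by \(\xi\,d\xi\); the short sketch below (Python/SciPy, with hypothetical values of \(\beta_{s}\) and \(n\)) evaluates the integrals numerically and verifies the closed form \(2\beta_{s}/(2+n)\).

```python
import numpy as np
from scipy.integrate import quad

beta_s, n = 0.64, 2.4   # hypothetical surface velocity and flow-profile exponent

num, _ = quad(lambda xi: beta_s * xi**n * xi, 0.0, 1.0)   # numerator of Eq. (9)
den, _ = quad(lambda xi: xi, 0.0, 1.0)                    # denominator of Eq. (9)

print(num / den, 2.0 * beta_s / (2.0 + n))   # both give <beta_r>
```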
We have omitted the contributions from resonance decay when examining the \(p_{T}\) spectra of stable particles, as their impact becomes significant primarily at extremely low \(p_{T}\). A thorough investigation of resonance decay kinematics and its influence on the spectra can be found in references [32] and [13]. With the mentioned assumptions and by defining \(\phi_{p}-\phi_{b}\) as \(\phi\), equation (8) is expressed as [20] \[\frac{d^{3}N}{2\pi p_{T}dp_{T}}=D\int_{-Y}^{+Y}\cosh(y)dy\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}rdr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)\cosh(y)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}. \tag{10}\] Here, we have used the Jacobian for the transformation of the coordinates and integrated over \(d\phi_{p}\). \(Y\) is the rapidity of the emitting beam. At mid-rapidity, i.e., \(y\simeq 0\), the above equation becomes, \[f_{eq}=\frac{d^{3}N}{2\pi p_{T}dp_{T}dy}=D\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}rdr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}. \tag{11}\] Following the details in reference [36], the system formed right after high-energy collisions tends to exist far from thermal equilibrium, and its temperature exhibits variations from event to event. This circumstance is characterized by the application of non-extensive statistics, specifically the Tsallis statistics [37]. Therefore, as done in ref. [22], we set the initial distribution as the Tsallis distribution given as, \[f_{in}=D^{\prime}\Big{[}1+(q_{pp}-1)\,\frac{m_{T}}{T_{ts}}\Big{]}^{\frac{-q_{pp}}{q_{pp}-1}}. \tag{12}\] Here, \(D^{\prime}=\frac{gVm_{T}}{(2\pi)^{2}}\), where \(V\) is the volume of the fireball formed in the heavy-ion collisions. Consequently, we have employed the Tsallis distribution to derive both the final particle distribution and the nuclear modification factor (\(R_{AA}\)). Using equations (11) and (12) in equation (6), the final distribution can be expressed as, \[f_{fin}=D\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}rdr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}+\Bigg{(}D^{\prime}\Big{[}1+(q_{pp}-1)\,\frac{m_{T}}{T_{ts}}\Big{]}^{\frac{-q_{pp}}{q_{pp}-1}}-D\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}rdr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}\Bigg{)}e^{-t_{f}/\tau}. \tag{13}\] Using Eq. (12) and Eq. (13) in Eq. (7), the nuclear modification factor can be expressed as, \[R_{AA}=\frac{f_{fin}}{f_{in}}=\frac{D\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}rdr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}}{D^{\prime}\Big{[}1+(q_{pp}-1)\frac{m_{T}}{T_{ts}}\Big{]}^{\frac{-q_{pp}}{q_{pp}-1}}}+\Bigg{(}1-\frac{D\int_{-\pi}^{+\pi}d\phi\int_{0}^{R}rdr\Big{[}1+\frac{(q_{AA}-1)}{T}[m_{T}\cosh(\rho)-p_{T}\sinh(\rho)\cos(\phi)]\Big{]}^{\frac{-1}{q_{AA}-1}}}{D^{\prime}\Big{[}1+(q_{pp}-1)\frac{m_{T}}{T_{ts}}\Big{]}^{\frac{-q_{pp}}{q_{pp}-1}}}\Bigg{)}e^{\frac{-t_{f}}{\tau}}. \tag{14}\] Equations (13) and (14) are used to fit the experimental results, as discussed in the following section.
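To illustrate how Eqs. (11), (12), and (7) fit together numerically, the sketch below (Python/SciPy) evaluates the mid-rapidity TBW integral and forms the RTA interpolation for \(R_{AA}\). All parameter values are hypothetical, the radius is rescaled so that \(\xi=r\) with \(R=1\), and the normalizations \(D\) and \(D^{\prime}\) are folded into single constants; this is a schematic of the structure of the calculation, not the fitting code used in this work.

```python
import numpy as np
from scipy.integrate import dblquad

# Hypothetical pion parameters (GeV units); not the fitted values of Table 1
m, T, q_AA = 0.140, 0.110, 1.03   # mass, kinetic freeze-out T, non-extensivity in A-A
beta_s, n_flow = 0.64, 2.4        # maximum surface velocity and flow-profile exponent
q_pp, T_ts = 1.10, 0.087          # Tsallis parameters of the initial distribution
tf_over_tau = 2.2                 # freeze-out time over relaxation time
D, Dp = 1.0, 1.0                  # overall normalizations, treated as constants here

def f_eq(pT):
    """Mid-rapidity TBW spectrum of Eq. (11), up to the constant D."""
    mT = np.sqrt(pT**2 + m**2)
    def integrand(r, phi):
        rho = np.arctanh(beta_s * r**n_flow)   # transverse expansion rapidity
        base = 1.0 + (q_AA - 1.0) / T * (mT * np.cosh(rho)
                                         - pT * np.sinh(rho) * np.cos(phi))
        return r * base ** (-1.0 / (q_AA - 1.0))
    val, _ = dblquad(integrand, -np.pi, np.pi, 0.0, 1.0)   # phi outer, r inner
    return D * mT * val

def f_in(pT):
    """Tsallis initial distribution of Eq. (12), up to the constant D'."""
    mT = np.sqrt(pT**2 + m**2)
    return Dp * mT * (1.0 + (q_pp - 1.0) * mT / T_ts) ** (-q_pp / (q_pp - 1.0))

def R_AA(pT):
    """Eq. (7): interpolation between the equilibrium and initial spectra."""
    ratio = f_eq(pT) / f_in(pT)
    return ratio + (1.0 - ratio) * np.exp(-tf_over_tau)

for pT in (0.5, 2.0, 5.0, 8.0):
    print(f"pT = {pT} GeV/c : R_AA = {R_AA(pT):.3f}")
```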
## III Results and Discussions

In this section, we present the results of the transverse momentum (\(p_{T}\)) spectra, \(R_{PbPb}\) for \(\pi^{\pm}\), \(K^{\pm}\), \(p+\bar{p}\), \((K^{*0}+\bar{K}^{*0})/2\), \(\phi\) and \(R_{pPb}\) for \(\pi^{\pm}\), \(K^{\pm}\), \(p+\bar{p}\) at \(\sqrt{s_{NN}}=5.02\) TeV for the most central and peripheral collisions. We have fitted the \(p_{T}\) spectra, \(R_{PbPb}\) and \(R_{pPb}\) and find remarkably favourable \(\chi^{2}/ndf\) values. We have performed the fittings for identified particles in both the most central and peripheral Pb-Pb as well as p-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV. It is noted that we have kept the parameters \(\beta_{s}\), \(n\), \(t_{f}/\tau\), \(q_{pp}\), \(q_{AA}\) and \(T_{ts}\) as free parameters throughout this analysis. To fit the experimental data, we have employed the TF2 class provided by the ROOT library [40] to obtain a convergent solution. This fitting process involved minimization of the chi-squared (\(\chi^{2}\)) value, a technique commonly used for obtaining a consistent and well-fitting solution. In this analysis, the kinetic freeze-out temperature (\(T\)) is regarded as a fixed parameter. It has been noted that \(T\) decreases as one transitions from highly central to more peripheral collisions [41; 20]. This temperature is observed to be 110 MeV for the most central Pb-Pb collisions and decreases to 96 MeV for the peripheral ones. Similarly, for p-Pb collisions, \(T\) is observed to be 120 MeV for the most central collisions and 100 MeV for the peripheral ones [14]. In Figures 1 and 2, we have shown the \(p_{T}\) spectra of light flavors and resonances for Pb-Pb collisions and of light flavors for p-Pb collisions, respectively, for the most central and peripheral ones at a center-of-mass energy of \(\sqrt{s_{NN}}=5.02\) TeV at mid-rapidity. Transverse momentum spectra are indispensable for unraveling the intricate dynamics associated with the Quark-Gluon Plasma (QGP) and the transitions between quarks and hadrons [42]. Several theoretical models [20; 21; 32] have been developed specifically for the characterization of these \(p_{T}\) spectra, describing them well up to a limited \(p_{T}\) range. Nonetheless, the formulation we have put forth, as indicated by equation (13), provides an effective and comprehensive description of the transverse momentum spectra for all identified hadrons, offering a well-matched fit up to a transverse momentum of \(p_{T}=8\) GeV/c. The parameters derived from the analysis of these spectra consistently align with the hydrodynamic behaviour manifested within the system. This concordance lends significant weight to the credibility of our theoretical formulation. Pions, being among the lightest hadrons, manifest distinct resonance effects due to their relatively small mass [32; 43]. Importantly, we opted not to include the contribution of pion yields originating from resonance decays in our analysis. These resonance decays have a significant impact on the spectral shape at very low momentum values. To address the influence of pions resulting from resonance decays in the low \(p_{T}\) range, we have introduced a lower boundary for the pion spectrum, setting it at \(p_{T}=0.5\) GeV/c [14]. Figure 3 shows the nuclear modification factor (\(R_{AA}\)) spectra as a function of \(p_{T}\) of \((\pi^{+}+\pi^{-})\), \((K^{+}+K^{-})\), \((p+\bar{p})\), \((K^{*0}+\bar{K}^{*0})/2\) and \(\phi\) mesons in Pb-Pb for the most central and peripheral collisions at \(\sqrt{s_{NN}}=5.02\) TeV.
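The fits above were performed with the TF2 class of ROOT; as a language-neutral illustration of the same \(\chi^{2}\)-minimization idea, the sketch below (Python/SciPy) fits a deliberately simplified two-parameter stand-in for the full model of Eq. (14) to made-up \(R_{AA}\) points and reports \(\chi^{2}/ndf\). The data arrays and the toy model are placeholders, not the measurements or the model actually fitted in this work.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up R_AA points with uncertainties (placeholders only)
pt    = np.array([0.6, 1.0, 2.0, 3.0, 5.0, 8.0])
data  = np.array([0.45, 0.40, 0.30, 0.25, 0.22, 0.28])
sigma = np.array([0.03, 0.03, 0.02, 0.02, 0.02, 0.04])

def model(pt, params):
    """Toy two-parameter shape; a full implementation would evaluate Eq. (14)
    with params = (beta_s, n, q_pp, T_ts, q_AA, t_f/tau)."""
    a, b = params
    return a + b * np.log(pt)

def chi2(params):
    return np.sum(((data - model(pt, params)) / sigma) ** 2)

res = minimize(chi2, x0=[0.4, 0.0], method="Nelder-Mead")
ndf = len(pt) - len(res.x)
print("best-fit parameters:", res.x)
print("chi2/ndf:", res.fun / ndf)
```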
Proton-nucleus (\(pA\)) collisions occupy an intermediate position between proton-proton (\(pp\)) and nucleus-nucleus (\(AA\)) collisions, both in terms of system size and the quantity of produced particles. A common approach in research involves comparing particle production in \(pp\), \(pA\), and \(AA\) reactions to distinguish between initial-state effects, associated with the use of nuclear beams or targets, and final-state effects, associated with the existence of high-temperature and high-density matter [45]. The ALICE collaboration has not yet officially released \(R_{pPb}\) data for identified hadrons. However, in reference [44], the ALICE collaboration has released the transverse momentum spectra for pions, kaons, and protons for p-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV. Additionally, they have presented the \(p_{T}\) spectra of these particles in pp collisions at the same energy. Consequently, we have generated the nuclear modification factor \(R_{pPb}\) from the yields in p-Pb and pp collisions using equation (1) at \(\sqrt{s_{NN}}=5.02\) TeV, considering the number of binary collisions (\(N_{coll}\)). The values for \(N_{coll}\) are taken from the Glauber model calculations [46]. Figure 4 shows the generated \(R_{pPb}\) spectra of \((\pi^{+}+\pi^{-})\), \((K^{+}+K^{-})\) and \((p+\bar{p})\) for both the most central and the peripheral p-Pb collisions. The parameters extracted from the fitting of \(R_{PbPb}\) and \(R_{pPb}\) are tabulated in Table 1. The proposed model explains the \(R_{AA}\) spectra up to \(p_{T}=8\) GeV/c. It is observed that the average transverse flow velocity \(<\beta_{r}>\) decreases with the particle mass as well as with the centrality, progressing from the most central to the peripheral collisions [20; 47], as shown in Figure 5. This trend is consistent in both the Pb-Pb and p-Pb collisions. In central collisions, a substantially higher energy density is generated within the region where the two colliding nuclei overlap. This elevated energy density results in intensified particle interactions and a more pronounced transverse expansion when compared to the peripheral collisions.

Figure 2: Transverse momentum spectra of pions, kaons, and protons for p-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV.

It is also noticed that Pb-Pb collisions involve two heavy lead nuclei colliding at high energies, creating a much hotter and denser medium compared to p-Pb collisions, resulting in a higher multiplicity and thus a higher transverse flow [14]. In p-Pb collisions, the transverse flow is suggested to be primarily driven by cold nuclear matter effects [48]. As shown in Table 1, it is evident that the non-extensive parameter \(q_{AA}\) exhibits a minor departure from the value of 1. This observation unequivocally indicates that the system is not in a state of complete equilibrium, a critical consideration in the Boltzmann transport equation under the relaxation time approximation. This enhances our model, rendering it more adept at describing data across a wide range of transverse momentum. Furthermore, it is observed that the Tsallis parameters \(q_{pp}\) and \(q_{AA}\) demonstrate a significant rise as the transition is made from central to peripheral collisions.
This observation implies that central collisions create conditions characterized by a greater degree of particle interactions and higher energy density, thereby fostering a system behaviour that more closely resembles equilibrium, in contrast to the behaviour observed in peripheral collisions. Notably, this result aligns with the patterns previously reported in references [20; 41] for different energies. In contrast, the Tsallis temperature, denoted as \(T_{ts}\), exhibits an opposite trend to that of \(q_{pp}\), as it decreases progressively from central to peripheral collisions [14]. We have noticed that the ratio \(t_{f}/\tau\) increases with the mass of the particles in both the Pb-Pb and p-Pb collisions. This may be because lighter particles tend to have higher transverse flow velocities, which leads to a higher transverse energy (or transverse mass) compared to heavier particles. This higher transverse energy is associated with a more rapid expansion of the fireball created in heavy-ion collisions. Due to their higher transverse momenta, lighter particles gain kinetic energy more quickly and move outward more rapidly, which can lead to an earlier freeze-out or hadronization of these particles in the collision process. It is also seen that \(t_{f}/\tau\) does not exhibit any centrality-dependent behaviour. This observation warrants further investigation, and we plan to delve into this aspect in our future research endeavours. We have conducted an extensive study by examining a broad range of flow profiles, each characterized by a different value of the parameter \(n\). The flow-profile exponent \(n\) is seen to decrease with the particle mass and shows no trend with centrality. In reference [50], the authors employed a fitting approach to analyze the nuclear modification factors \(R_{PbPb}\) and \(R_{pPb}\) at \(\sqrt{s_{NN}}=2.76\) TeV and 5.02 TeV, using a combination of the Tsallis distribution as the initial distribution and the Boltzmann-Gibbs Blast-Wave (BGBW) distribution as the equilibrium distribution within the Boltzmann transport equation with RTA.
Their results showed that this formulation yielded a good fit to the data, as indicated by favourable \(\chi^{2}/ndf\) values, and was consistent with hydrodynamics up to transverse momenta of \(p_{T}=3\) GeV/c. In our earlier work [22], the same formulation was employed to successfully fit the nuclear modification factor \(R_{PbPb}\) at \(\sqrt{s_{NN}}=2.76\) TeV up to \(p_{T}=8\) GeV/c. However, the parameter \(t_{f}/\tau\) did not show any mass ordering, and the value of \(<\beta_{r}>\) for the proton and \(K^{*0}\) did not follow the hydrodynamic behaviour. In contrast, our model distinguishes itself by incorporating a departure from the local equilibrium distribution. Specifically, we employ the TBW model within the Boltzmann transport equation, which differentiates our approach from the BGBW model. As a result, when we fit the experimental data for the nuclear modification factor across a broader \(p_{T}\) range, from 0 to 8 GeV/c, our model achieves an impressive agreement with hydrodynamic behaviour.
Our approach demonstrates that the transverse velocity follows a clear ordering based on mass and centrality, while the Tsallis parameters \(q_{pp}\), \(q_{AA}\) and \(T_{ts}\) exhibit a well-defined centrality-dependent pattern [14; 20]. This comprehensive consideration of the system's dynamics, including the departure from equilibrium, contributes significantly to the improved agreement with hydrodynamics in our approach.

Figure 4: \(R_{pPb}\) spectra of pions, kaons and protons for p-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV.

Table 1: The parameters obtained by fitting the nuclear modification spectra for Pb-Pb and p-Pb collisions at LHC energies for the most central and peripheral collisions.

| System | Centrality | Particle | \(\beta_{r}\) | \(n\) | \(q_{pp}\) | \(T_{ts}\) | \(q_{AA}\) | \(t_{f}/\tau\) | \(\chi^{2}/ndf\) |
|---|---|---|---|---|---|---|---|---|---|
| Pb-Pb | (0-5)% | \(\pi^{+}+\pi^{-}\) | 0.639 | 2.446 | 1.100 | 0.087 | 1.030 | 2.155 | 0.253 |
| Pb-Pb | (0-5)% | \(K^{+}+K^{-}\) | 0.584 | 0.601 | 1.139 | 0.129 | 1.029 | 2.178 | 0.533 |
| Pb-Pb | (0-5)% | \(p+\bar{p}\) | 0.536 | 0.400 | 1.108 | 0.130 | 1.037 | 3.45 | 0.714 |
| Pb-Pb | (0-10)% | \((K^{*0}+\bar{K}^{*0})/2\) | 0.546 | 0.587 | 1.138 | 0.120 | 1.053 | 3.00 | 1.365 |
| Pb-Pb | (0-10)% | \(\phi\) | 0.423 | 0.399 | 1.135 | 0.119 | 1.062 | 4.00 | 1.055 |
| Pb-Pb | (70-80)% | \(\pi^{+}+\pi^{-}\) | 0.481 | 1.077 | 1.150 | 0.033 | 1.047 | 1.70 | 0.940 |
| Pb-Pb | (70-80)% | \(K^{+}+K^{-}\) | 0.469 | 1.000 | 1.146 | 0.120 | 1.100 | 1.80 | 1.22 |
| Pb-Pb | (70-80)% | \(p+\bar{p}\) | 0.351 | 0.400 | 1.156 | 0.097 | 1.099 | 7.41 | 0.831 |
| Pb-Pb | (70-80)% | \((K^{*0}+\bar{K}^{*0})/2\) | 0.372 | 0.727 | 1.140 | 0.105 | 1.099 | 2.10 | 0.509 |
| Pb-Pb | (70-80)% | \(\phi\) | 0.268 | 0.300 | 1.141 | 0.110 | 1.100 | 8.00 | 0.823 |
| p-Pb | (0-5)% | \(\pi^{+}+\pi^{-}\) | 0.536 | 3.00 | 1.110 | 0.107 | 1.080 | 1.50 | 0.170 |
| p-Pb | (0-5)% | \(K^{+}+K^{-}\) | 0.501 | 1.57 | 1.109 | 0.126 | 1.075 | 2.92 | 0.458 |
| p-Pb | (0-5)% | \(p+\bar{p}\) | 0.447 | 0.84 | 1.100 | 0.127 | 1.065 | 6.21 | 0.134 |
| p-Pb | peripheral | \(\pi^{+}+\pi^{-}\) | 0.415 | 1.49 | 1.21 | 0.098 | 1.097 | 1.98 | 0.579 |
| p-Pb | peripheral | \(K^{+}+K^{-}\) | 0.325 | 1.38 | 1.113 | 0.119 | 1.098 | 2.27 | 0.524 |
| p-Pb | peripheral | \(p+\bar{p}\) | 0.313 | 1.12 | 1.110 | 0.119 | 1.092 | 3.49 | 5.114 |

## IV Summary and Outlook

In summary, we have presented the variation of the transverse momentum spectra and the nuclear modification factor with respect to transverse momentum in Pb-Pb and p-Pb collisions for the most central and peripheral collisions at the LHC energy of \(\sqrt{s_{NN}}=5.02\) TeV. We fitted the experimental data using the present formulation, in which the Boltzmann transport equation with the relaxation time approximation is used. Here, we have considered Tsallis statistics as the initial distribution, while the Tsallis blast wave model is used as the equilibrium distribution function, and we find a reasonable \(\chi^{2}/ndf\) for all particle species in both Pb-Pb and p-Pb collisions for the most central and peripheral collisions. The outcomes of this exercise are succinctly summarized as follows:

1. \(<\beta_{r}>\) is consistently observed to decrease with increasing particle mass and with the collision centrality transitioning from central to peripheral in both Pb-Pb and p-Pb collisions at a center-of-mass energy of \(\sqrt{s_{NN}}=5.02\) TeV. This may be because the elevated energy density in central collisions leads to heightened particle interactions and a more conspicuous transverse expansion when contrasted with collisions occurring in the peripheral regime.

2. The parameter \(t_{f}/\tau\) displays a notable trend as it increases with particle mass in both Pb-Pb and p-Pb collisions, while it does not exhibit any trend with respect to collision centrality. This may be because lighter particles possess higher transverse momenta; they acquire kinetic energy at a faster rate and undergo a more rapid outward motion. Consequently, this accelerated motion can lead to an earlier freeze-out or hadronization process for these particles within the collision dynamics.

3. The Tsallis parameters \(q_{pp}\) and \(q_{AA}\) exhibit an upward trend when transitioning from the most central to peripheral collisions, whereas \(T_{ts}\) displays a contrasting pattern, decreasing under the same conditions. This phenomenon could be attributed to the notion that central collisions tend to approach a state of closer equilibrium compared to peripheral collisions.

4. We have examined a wide spectrum of flow profiles, each with a distinct value of the parameter \(n\). Notably, the velocity profile demonstrates a decreasing trend as a function of particle mass, while it exhibits no correlation with the centrality of the collisions under scrutiny.

## Acknowledgements

SKT acknowledges the financial support of the seed money grant provided by the University of Allahabad, Prayagraj.